Jan 20 01:06:24.559910 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:14:52 -00 2026
Jan 20 01:06:24.559952 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 01:06:24.559969 kernel: BIOS-provided physical RAM map:
Jan 20 01:06:24.559979 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 01:06:24.559987 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 01:06:24.559995 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 01:06:24.560004 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 20 01:06:24.560012 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 20 01:06:24.560023 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 01:06:24.560031 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 20 01:06:24.560039 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 01:06:24.560052 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 01:06:24.560060 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 01:06:24.560071 kernel: NX (Execute Disable) protection: active
Jan 20 01:06:24.560084 kernel: APIC: Static calls initialized
Jan 20 01:06:24.560093 kernel: SMBIOS 2.8 present.
Jan 20 01:06:24.560198 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 20 01:06:24.560292 kernel: DMI: Memory slots populated: 1/1
Jan 20 01:06:24.560454 kernel: Hypervisor detected: KVM
Jan 20 01:06:24.560464 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 01:06:24.560475 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 01:06:24.560484 kernel: kvm-clock: using sched offset of 48376679069 cycles
Jan 20 01:06:24.560494 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 01:06:24.560503 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 01:06:24.560513 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 01:06:24.560525 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 01:06:24.560539 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 01:06:24.560640 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 01:06:24.560652 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 01:06:24.560663 kernel: Using GB pages for direct mapping
Jan 20 01:06:24.560675 kernel: ACPI: Early table checksum verification disabled
Jan 20 01:06:24.560684 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 20 01:06:24.560693 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:06:24.560702 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:06:24.560711 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:06:24.560725 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 20 01:06:24.560734 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:06:24.560747 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:06:24.560758 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:06:24.560768 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:06:24.560783 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 20 01:06:24.560796 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 20 01:06:24.560805 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 20 01:06:24.560815 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 20 01:06:24.560828 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 20 01:06:24.560839 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 20 01:06:24.560848 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 20 01:06:24.560858 kernel: No NUMA configuration found
Jan 20 01:06:24.560867 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 20 01:06:24.560880 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 20 01:06:24.560892 kernel: Zone ranges:
Jan 20 01:06:24.560904 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 01:06:24.560916 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 20 01:06:24.560928 kernel: Normal empty
Jan 20 01:06:24.560938 kernel: Device empty
Jan 20 01:06:24.560947 kernel: Movable zone start for each node
Jan 20 01:06:24.560956 kernel: Early memory node ranges
Jan 20 01:06:24.560965 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 01:06:24.560979 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 20 01:06:24.560991 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 20 01:06:24.561003 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 01:06:24.561014 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 01:06:24.561102 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 20 01:06:24.561115 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 01:06:24.561127 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 01:06:24.561136 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 01:06:24.561145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 01:06:24.561160 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 01:06:24.561169 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 01:06:24.561178 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 01:06:24.561190 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 01:06:24.561202 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 01:06:24.561213 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 01:06:24.561224 kernel: TSC deadline timer available
Jan 20 01:06:24.561233 kernel: CPU topo: Max. logical packages: 1
Jan 20 01:06:24.561243 kernel: CPU topo: Max. logical dies: 1
Jan 20 01:06:24.561256 kernel: CPU topo: Max. dies per package: 1
Jan 20 01:06:24.561265 kernel: CPU topo: Max. threads per core: 1
Jan 20 01:06:24.561275 kernel: CPU topo: Num. cores per package: 4
Jan 20 01:06:24.561287 kernel: CPU topo: Num. threads per package: 4
Jan 20 01:06:24.561472 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 01:06:24.561486 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 01:06:24.561499 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 01:06:24.561509 kernel: kvm-guest: setup PV sched yield
Jan 20 01:06:24.561518 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 20 01:06:24.561532 kernel: Booting paravirtualized kernel on KVM
Jan 20 01:06:24.561542 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 01:06:24.561657 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 01:06:24.561671 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 01:06:24.561684 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 01:06:24.561696 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 01:06:24.561708 kernel: kvm-guest: PV spinlocks enabled
Jan 20 01:06:24.571034 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 01:06:24.571052 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 01:06:24.571161 kernel: random: crng init done
Jan 20 01:06:24.571171 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 01:06:24.571181 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 01:06:24.571190 kernel: Fallback order for Node 0: 0
Jan 20 01:06:24.571200 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 20 01:06:24.571209 kernel: Policy zone: DMA32
Jan 20 01:06:24.571222 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 01:06:24.571233 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 01:06:24.571245 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 20 01:06:24.571260 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 01:06:24.571270 kernel: Dynamic Preempt: voluntary
Jan 20 01:06:24.571279 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 01:06:24.571290 kernel: rcu: RCU event tracing is enabled.
Jan 20 01:06:24.571484 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 01:06:24.571500 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 01:06:24.574890 kernel: Rude variant of Tasks RCU enabled.
Jan 20 01:06:24.574908 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 01:06:24.574989 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 01:06:24.575007 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 01:06:24.575019 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 01:06:24.575032 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 01:06:24.575044 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 01:06:24.575057 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 01:06:24.575071 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 01:06:24.575103 kernel: Console: colour VGA+ 80x25
Jan 20 01:06:24.575116 kernel: printk: legacy console [ttyS0] enabled
Jan 20 01:06:24.575128 kernel: ACPI: Core revision 20240827
Jan 20 01:06:24.575139 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 01:06:24.575150 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 01:06:24.575168 kernel: x2apic enabled
Jan 20 01:06:24.575181 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 01:06:24.575195 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 01:06:24.575209 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 01:06:24.575222 kernel: kvm-guest: setup PV IPIs
Jan 20 01:06:24.575240 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 01:06:24.575254 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 01:06:24.575268 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 01:06:24.575281 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 01:06:24.575293 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 01:06:24.575490 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 01:06:24.575504 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 01:06:24.575517 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 01:06:24.575531 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 01:06:24.575666 kernel: Speculative Store Bypass: Vulnerable
Jan 20 01:06:24.575680 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 01:06:24.575695 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 01:06:24.575710 kernel: active return thunk: srso_alias_return_thunk
Jan 20 01:06:24.575723 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 01:06:24.575735 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 01:06:24.575746 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 01:06:24.575757 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 01:06:24.575772 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 01:06:24.575783 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 01:06:24.575793 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 01:06:24.575804 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 01:06:24.575815 kernel: Freeing SMP alternatives memory: 32K
Jan 20 01:06:24.575826 kernel: pid_max: default: 32768 minimum: 301
Jan 20 01:06:24.575836 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 01:06:24.575847 kernel: landlock: Up and running.
Jan 20 01:06:24.575857 kernel: SELinux: Initializing.
Jan 20 01:06:24.575871 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:06:24.575883 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:06:24.575974 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 01:06:24.575987 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 01:06:24.575997 kernel: signal: max sigframe size: 1776
Jan 20 01:06:24.576008 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 01:06:24.576019 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 01:06:24.576030 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 01:06:24.576040 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 01:06:24.576055 kernel: smp: Bringing up secondary CPUs ...
Jan 20 01:06:24.576065 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 01:06:24.576076 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 01:06:24.576086 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 01:06:24.576097 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 01:06:24.576108 kernel: Memory: 2420716K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46204K init, 2556K bss, 145096K reserved, 0K cma-reserved)
Jan 20 01:06:24.576119 kernel: devtmpfs: initialized
Jan 20 01:06:24.576130 kernel: x86/mm: Memory block size: 128MB
Jan 20 01:06:24.576141 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 01:06:24.576157 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 01:06:24.576172 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 01:06:24.576184 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 01:06:24.576197 kernel: audit: initializing netlink subsys (disabled)
Jan 20 01:06:24.576208 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 01:06:24.576219 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 01:06:24.576230 kernel: audit: type=2000 audit(1768871137.795:1): state=initialized audit_enabled=0 res=1
Jan 20 01:06:24.576240 kernel: cpuidle: using governor menu
Jan 20 01:06:24.576251 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 01:06:24.576265 kernel: dca service started, version 1.12.1
Jan 20 01:06:24.576276 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 20 01:06:24.576287 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 01:06:24.576441 kernel: PCI: Using configuration type 1 for base access
Jan 20 01:06:24.576454 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 01:06:24.576464 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 01:06:24.576475 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 01:06:24.576486 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 01:06:24.576496 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 01:06:24.576511 kernel: ACPI: Added _OSI(Module Device)
Jan 20 01:06:24.576522 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 01:06:24.576532 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 01:06:24.576543 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 01:06:24.588062 kernel: ACPI: Interpreter enabled
Jan 20 01:06:24.588077 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 01:06:24.588090 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 01:06:24.588105 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 01:06:24.588116 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 01:06:24.588135 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 01:06:24.588145 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 01:06:24.616974 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 01:06:24.617236 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 01:06:24.624003 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 01:06:24.624031 kernel: PCI host bridge to bus 0000:00
Jan 20 01:06:24.624998 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 01:06:24.625225 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 01:06:24.636501 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 01:06:24.643064 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 20 01:06:24.653159 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 01:06:24.653645 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 20 01:06:24.656696 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 01:06:24.666061 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 01:06:24.678025 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 01:06:24.678267 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 20 01:06:24.678744 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 20 01:06:24.678949 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 20 01:06:24.679148 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 01:06:24.688844 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 01:06:24.690684 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 20 01:06:24.693783 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 20 01:06:24.697870 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 20 01:06:24.703182 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 01:06:24.703655 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 20 01:06:24.703865 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 20 01:06:24.704156 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 20 01:06:24.711016 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 01:06:24.715279 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 20 01:06:24.720974 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 20 01:06:24.721179 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 20 01:06:24.723503 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 20 01:06:24.729064 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 01:06:24.734166 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 01:06:24.740192 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 18554 usecs
Jan 20 01:06:24.776512 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 01:06:24.776896 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 20 01:06:24.777125 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 20 01:06:24.795239 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 01:06:24.800107 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 20 01:06:24.800159 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 01:06:24.800171 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 01:06:24.800182 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 01:06:24.800193 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 01:06:24.800205 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 01:06:24.800216 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 01:06:24.800226 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 01:06:24.800237 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 01:06:24.800248 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 01:06:24.800262 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 01:06:24.800274 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 01:06:24.800285 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 01:06:24.800483 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 01:06:24.800501 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 01:06:24.800511 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 01:06:24.800521 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 01:06:24.800531 kernel: iommu: Default domain type: Translated
Jan 20 01:06:24.800541 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 01:06:24.800681 kernel: PCI: Using ACPI for IRQ routing
Jan 20 01:06:24.800693 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 01:06:24.800704 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 01:06:24.800713 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 20 01:06:24.800940 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 01:06:24.801152 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 01:06:24.801964 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 01:06:24.802279 kernel: vgaarb: loaded
Jan 20 01:06:24.802292 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 01:06:24.802733 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 01:06:24.802747 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 01:06:24.802757 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 01:06:24.802767 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 01:06:24.802777 kernel: pnp: PnP ACPI init
Jan 20 01:06:24.803891 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 01:06:24.803913 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 01:06:24.803927 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 01:06:24.803947 kernel: NET: Registered PF_INET protocol family
Jan 20 01:06:24.803960 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 01:06:24.803974 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 01:06:24.803985 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 01:06:24.803996 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 01:06:24.804007 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 01:06:24.804017 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 01:06:24.804028 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:06:24.804038 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:06:24.804053 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 01:06:24.804064 kernel: NET: Registered PF_XDP protocol family
Jan 20 01:06:24.804254 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 01:06:24.804733 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 01:06:24.805197 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 01:06:24.805919 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 20 01:06:24.806125 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 01:06:24.806506 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 20 01:06:24.806535 kernel: PCI: CLS 0 bytes, default 64
Jan 20 01:06:24.806661 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 01:06:24.806679 kernel: Initialise system trusted keyrings
Jan 20 01:06:24.806691 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 01:06:24.806701 kernel: Key type asymmetric registered
Jan 20 01:06:24.806712 kernel: Asymmetric key parser 'x509' registered
Jan 20 01:06:24.806722 kernel: hrtimer: interrupt took 8962950 ns
Jan 20 01:06:24.806945 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 01:06:24.806968 kernel: io scheduler mq-deadline registered
Jan 20 01:06:24.806986 kernel: io scheduler kyber registered
Jan 20 01:06:24.806997 kernel: io scheduler bfq registered
Jan 20 01:06:24.807007 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 01:06:24.807018 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 01:06:24.807029 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 01:06:24.807039 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 01:06:24.807050 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 01:06:24.807060 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 01:06:24.807071 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 01:06:24.807086 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 01:06:24.807097 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 01:06:24.809671 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 01:06:24.809696 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 01:06:24.810021 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 01:06:24.810221 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T01:06:16 UTC (1768871176)
Jan 20 01:06:24.810889 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 20 01:06:24.810908 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 01:06:24.810926 kernel: NET: Registered PF_INET6 protocol family
Jan 20 01:06:24.810936 kernel: Segment Routing with IPv6
Jan 20 01:06:24.810946 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 01:06:24.810956 kernel: NET: Registered PF_PACKET protocol family
Jan 20 01:06:24.811224 kernel: Key type dns_resolver registered
Jan 20 01:06:24.811242 kernel: IPI shorthand broadcast: enabled
Jan 20 01:06:24.811253 kernel: sched_clock: Marking stable (27764080646, 5465128762)->(40316619571, -7087410163)
Jan 20 01:06:24.811263 kernel: registered taskstats version 1
Jan 20 01:06:24.811272 kernel: Loading compiled-in X.509 certificates
Jan 20 01:06:24.811290 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 5eaf2083485884e476a8ac33c4b07b82eff139e9'
Jan 20 01:06:24.811486 kernel: Demotion targets for Node 0: null
Jan 20 01:06:24.811497 kernel: Key type .fscrypt registered
Jan 20 01:06:24.811507 kernel: Key type fscrypt-provisioning registered
Jan 20 01:06:24.811517 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 01:06:24.811529 kernel: ima: Allocated hash algorithm: sha1
Jan 20 01:06:24.811541 kernel: ima: No architecture policies found
Jan 20 01:06:24.811651 kernel: clk: Disabling unused clocks
Jan 20 01:06:24.811662 kernel: Warning: unable to open an initial console.
Jan 20 01:06:24.811679 kernel: Freeing unused kernel image (initmem) memory: 46204K
Jan 20 01:06:24.811688 kernel: Write protecting the kernel read-only data: 40960k
Jan 20 01:06:24.811698 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 20 01:06:24.811711 kernel: Run /init as init process
Jan 20 01:06:24.811723 kernel: with arguments:
Jan 20 01:06:24.811733 kernel: /init
Jan 20 01:06:24.811742 kernel: with environment:
Jan 20 01:06:24.811752 kernel: HOME=/
Jan 20 01:06:24.811761 kernel: TERM=linux
Jan 20 01:06:24.811780 systemd[1]: Successfully made /usr/ read-only.
Jan 20 01:06:24.811795 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 01:06:24.811806 systemd[1]: Detected virtualization kvm.
Jan 20 01:06:24.811816 systemd[1]: Detected architecture x86-64.
Jan 20 01:06:24.811829 systemd[1]: Running in initrd.
Jan 20 01:06:24.811842 systemd[1]: No hostname configured, using default hostname.
Jan 20 01:06:24.811853 systemd[1]: Hostname set to .
Jan 20 01:06:24.811868 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 01:06:24.811896 systemd[1]: Queued start job for default target initrd.target.
Jan 20 01:06:24.811914 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 01:06:24.811925 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 01:06:24.811937 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 01:06:24.811948 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 01:06:24.811965 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 01:06:24.811979 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 01:06:24.811991 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 01:06:24.812002 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 01:06:24.812013 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 01:06:24.812027 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 01:06:24.812040 systemd[1]: Reached target paths.target - Path Units.
Jan 20 01:06:24.812055 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 01:06:24.812069 systemd[1]: Reached target swap.target - Swaps.
Jan 20 01:06:24.812081 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 01:06:24.812095 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 01:06:24.812106 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:06:24.812117 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 01:06:24.812128 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 01:06:24.812139 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:06:24.812159 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:06:24.812172 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:06:24.812186 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:06:24.812197 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 01:06:24.812207 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:06:24.812218 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 01:06:24.812229 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 01:06:24.812244 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 01:06:24.812256 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:06:24.812271 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:06:24.812282 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:06:24.812293 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 01:06:24.812691 systemd-journald[203]: Collecting audit messages is disabled. Jan 20 01:06:24.812929 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:06:24.812950 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 20 01:06:24.812961 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 01:06:24.812973 systemd-journald[203]: Journal started Jan 20 01:06:24.812997 systemd-journald[203]: Runtime Journal (/run/log/journal/9a1529e8dc884c0c866c1d1c116f33b4) is 6M, max 48.3M, 42.2M free. Jan 20 01:06:24.828882 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:06:24.826817 systemd-modules-load[204]: Inserted module 'overlay' Jan 20 01:06:24.893925 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:06:24.980941 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 01:06:28.021784 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 01:06:28.021829 kernel: Bridge firewalling registered Jan 20 01:06:25.622089 systemd-modules-load[204]: Inserted module 'br_netfilter' Jan 20 01:06:28.003465 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:06:28.195847 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:06:28.240195 systemd-tmpfiles[217]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 01:06:28.254290 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 01:06:28.523120 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:06:28.606971 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:06:28.731952 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 01:06:28.937864 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 20 01:06:29.027159 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:06:29.232016 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:06:29.364536 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:06:29.500557 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 01:06:30.022683 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea Jan 20 01:06:30.168220 systemd-resolved[241]: Positive Trust Anchors: Jan 20 01:06:30.170142 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 01:06:30.170181 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 01:06:30.204829 systemd-resolved[241]: Defaulting to hostname 'linux'. Jan 20 01:06:30.228790 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 01:06:30.397725 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 20 01:06:32.299113 kernel: SCSI subsystem initialized Jan 20 01:06:32.385450 kernel: Loading iSCSI transport class v2.0-870. Jan 20 01:06:32.584921 kernel: iscsi: registered transport (tcp) Jan 20 01:06:32.966176 kernel: iscsi: registered transport (qla4xxx) Jan 20 01:06:32.969232 kernel: QLogic iSCSI HBA Driver Jan 20 01:06:33.818089 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 01:06:34.178984 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:06:34.248585 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 01:06:35.234007 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 01:06:35.299104 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 01:06:36.040854 kernel: raid6: avx2x4 gen() 11931 MB/s Jan 20 01:06:36.065848 kernel: raid6: avx2x2 gen() 9010 MB/s Jan 20 01:06:36.099641 kernel: raid6: avx2x1 gen() 8453 MB/s Jan 20 01:06:36.099827 kernel: raid6: using algorithm avx2x4 gen() 11931 MB/s Jan 20 01:06:36.146903 kernel: raid6: .... xor() 1197 MB/s, rmw enabled Jan 20 01:06:36.146990 kernel: raid6: using avx2x2 recovery algorithm Jan 20 01:06:36.319820 kernel: xor: automatically using best checksumming function avx Jan 20 01:06:38.898979 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 01:06:38.986943 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:06:39.053226 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:06:39.327140 systemd-udevd[452]: Using default interface naming scheme 'v255'. Jan 20 01:06:39.487641 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:06:39.566005 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 20 01:06:40.043951 dracut-pre-trigger[453]: rd.md=0: removing MD RAID activation Jan 20 01:06:40.569228 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:06:40.668687 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:06:42.485645 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:06:42.855943 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 01:06:43.878926 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 01:06:43.923961 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:06:43.929007 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:06:44.127041 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:06:44.141041 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:06:44.373535 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 20 01:06:45.171469 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 01:06:45.334869 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 20 01:06:45.405625 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 01:06:45.405848 kernel: GPT:9289727 != 19775487 Jan 20 01:06:45.405881 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 01:06:45.405901 kernel: GPT:9289727 != 19775487 Jan 20 01:06:45.405917 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 01:06:45.405933 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 01:06:46.948487 kernel: libata version 3.00 loaded. 
Jan 20 01:06:47.254663 kernel: AES CTR mode by8 optimization enabled Jan 20 01:06:47.331613 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 01:06:47.332190 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 01:06:47.507111 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 20 01:06:47.507626 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 20 01:06:47.508631 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 01:06:47.695536 kernel: scsi host0: ahci Jan 20 01:06:47.698928 kernel: scsi host1: ahci Jan 20 01:06:47.719199 kernel: scsi host2: ahci Jan 20 01:06:47.769140 kernel: scsi host3: ahci Jan 20 01:06:47.774606 kernel: scsi host4: ahci Jan 20 01:06:47.797769 kernel: scsi host5: ahci Jan 20 01:06:47.799541 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Jan 20 01:06:47.799566 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Jan 20 01:06:47.799581 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Jan 20 01:06:47.799596 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Jan 20 01:06:47.799621 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Jan 20 01:06:47.799640 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Jan 20 01:06:47.810746 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 20 01:06:47.886959 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jan 20 01:06:49.660100 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 01:06:49.660170 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 01:06:49.660186 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 01:06:49.660477 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 01:06:49.660499 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 01:06:49.660516 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 01:06:49.660532 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 01:06:49.660639 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 01:06:49.660657 kernel: ata3.00: applying bridge limits Jan 20 01:06:49.660672 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 01:06:49.660687 kernel: ata3.00: configured for UDMA/100 Jan 20 01:06:49.660703 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 01:06:49.684254 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 01:06:49.684735 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 01:06:49.829775 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 01:06:50.032456 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 01:06:50.056641 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 20 01:06:50.181709 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 01:06:50.279082 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:06:50.383742 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 01:06:50.515991 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 01:06:50.881542 disk-uuid[616]: Primary Header is updated. 
Jan 20 01:06:50.881542 disk-uuid[616]: Secondary Entries is updated. Jan 20 01:06:50.881542 disk-uuid[616]: Secondary Header is updated. Jan 20 01:06:51.033776 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 01:06:52.335525 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 01:06:52.385570 disk-uuid[618]: The operation has completed successfully. Jan 20 01:06:52.822576 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 01:06:52.827051 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 01:06:52.915775 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 01:06:53.303023 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:06:53.372274 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:06:53.408214 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:06:53.519832 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 01:06:53.565785 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 01:06:53.877759 sh[638]: Success Jan 20 01:06:53.955627 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:06:54.246053 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 01:06:54.264468 kernel: device-mapper: uevent: version 1.0.3 Jan 20 01:06:54.264558 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 01:06:54.656629 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 20 01:06:55.205612 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 01:06:55.310601 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 01:06:55.396832 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 20 01:06:55.894631 kernel: BTRFS: device fsid 1cad4abe-82cb-4052-9906-9dfb1f3e3340 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (658) Jan 20 01:06:55.976559 kernel: BTRFS info (device dm-0): first mount of filesystem 1cad4abe-82cb-4052-9906-9dfb1f3e3340 Jan 20 01:06:55.976646 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:06:56.262766 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 01:06:56.262859 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 01:06:56.306680 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 01:06:56.375804 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 01:06:56.500769 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 01:06:56.516292 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 01:06:56.721608 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 01:06:57.198554 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (689) Jan 20 01:06:57.323147 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 01:06:57.323231 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:06:57.524180 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:06:57.524277 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:06:57.639172 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 01:06:57.705022 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 01:06:57.773777 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 20 01:07:01.885559 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:07:01.997531 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:07:03.508818 ignition[753]: Ignition 2.22.0 Jan 20 01:07:03.508839 ignition[753]: Stage: fetch-offline Jan 20 01:07:03.677685 ignition[753]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:07:03.677815 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:07:03.716528 systemd-networkd[832]: lo: Link UP Jan 20 01:07:03.727556 ignition[753]: parsed url from cmdline: "" Jan 20 01:07:03.716537 systemd-networkd[832]: lo: Gained carrier Jan 20 01:07:03.727566 ignition[753]: no config URL provided Jan 20 01:07:03.755679 systemd-networkd[832]: Enumeration completed Jan 20 01:07:03.727578 ignition[753]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:07:03.760154 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:07:03.727598 ignition[753]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:07:03.785865 systemd-networkd[832]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:07:03.727637 ignition[753]: op(1): [started] loading QEMU firmware config module Jan 20 01:07:03.785873 systemd-networkd[832]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:07:03.727646 ignition[753]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 01:07:03.832617 systemd-networkd[832]: eth0: Link UP Jan 20 01:07:04.327106 ignition[753]: op(1): [finished] loading QEMU firmware config module Jan 20 01:07:03.859734 systemd-networkd[832]: eth0: Gained carrier Jan 20 01:07:03.859760 systemd-networkd[832]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 20 01:07:03.879641 systemd[1]: Reached target network.target - Network. Jan 20 01:07:04.367665 systemd-networkd[832]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 01:07:04.723101 systemd-resolved[241]: Detected conflict on linux IN A 10.0.0.15 Jan 20 01:07:04.723121 systemd-resolved[241]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. Jan 20 01:07:05.350502 systemd-networkd[832]: eth0: Gained IPv6LL Jan 20 01:07:11.129793 ignition[753]: parsing config with SHA512: 74c1c2568c471668ee47d831bd99b3117aa0671067217c18e3cd7970c8e093f22cc153e8d3442fa37e6bff359887aa03b3e946ad891fa7b84f57b301ec2fe0bc Jan 20 01:07:11.284942 unknown[753]: fetched base config from "system" Jan 20 01:07:11.284962 unknown[753]: fetched user config from "qemu" Jan 20 01:07:11.297041 ignition[753]: fetch-offline: fetch-offline passed Jan 20 01:07:11.374011 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:07:11.297539 ignition[753]: Ignition finished successfully Jan 20 01:07:11.508761 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 01:07:11.530697 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 01:07:18.984032 ignition[840]: Ignition 2.22.0 Jan 20 01:07:18.992062 ignition[840]: Stage: kargs Jan 20 01:07:19.076046 ignition[840]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:07:19.076504 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:07:19.920042 ignition[840]: kargs: kargs passed Jan 20 01:07:19.993897 ignition[840]: Ignition finished successfully Jan 20 01:07:20.253036 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 01:07:20.400645 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 20 01:07:22.901769 ignition[848]: Ignition 2.22.0 Jan 20 01:07:22.903961 ignition[848]: Stage: disks Jan 20 01:07:22.907178 ignition[848]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:07:22.908145 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:07:22.923153 ignition[848]: disks: disks passed Jan 20 01:07:22.923578 ignition[848]: Ignition finished successfully Jan 20 01:07:23.104002 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 01:07:23.168971 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 01:07:23.199652 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 01:07:23.253769 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:07:23.296516 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:07:23.329539 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:07:23.382778 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 01:07:23.951843 systemd-fsck[858]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 20 01:07:24.012899 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 01:07:24.100930 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 01:07:28.959107 kernel: EXT4-fs (vda9): mounted filesystem d87587c2-84ee-4a64-a55e-c6773c94f548 r/w with ordered data mode. Quota mode: none. Jan 20 01:07:29.021025 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 01:07:29.063104 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 01:07:29.223044 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:07:29.350904 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 01:07:29.399848 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jan 20 01:07:29.399955 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 01:07:29.400008 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:07:29.646182 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 01:07:29.672898 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 01:07:30.097786 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (868) Jan 20 01:07:30.165771 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 01:07:30.206690 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:07:30.390135 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:07:30.390831 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:07:30.434189 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 01:07:31.077044 initrd-setup-root[893]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 01:07:31.257911 initrd-setup-root[900]: cut: /sysroot/etc/group: No such file or directory Jan 20 01:07:31.352678 initrd-setup-root[907]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 01:07:31.559081 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 01:07:37.428999 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 01:07:37.487924 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 01:07:37.718232 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 01:07:38.035270 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 01:07:38.207059 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 01:07:38.911025 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 20 01:07:40.189989 ignition[983]: INFO : Ignition 2.22.0 Jan 20 01:07:40.381284 ignition[983]: INFO : Stage: mount Jan 20 01:07:40.434022 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:07:40.434022 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:07:40.537264 ignition[983]: INFO : mount: mount passed Jan 20 01:07:40.537264 ignition[983]: INFO : Ignition finished successfully Jan 20 01:07:40.572252 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 01:07:40.774881 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 01:07:41.087011 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:07:42.238984 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (996) Jan 20 01:07:42.425782 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 01:07:42.430644 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:07:42.628776 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:07:42.629136 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:07:42.666158 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 01:07:43.869086 ignition[1014]: INFO : Ignition 2.22.0 Jan 20 01:07:43.869086 ignition[1014]: INFO : Stage: files Jan 20 01:07:44.035734 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:07:44.035734 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:07:44.035734 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping Jan 20 01:07:44.035734 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 01:07:44.035734 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 01:07:44.562824 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 01:07:44.562824 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 01:07:44.562824 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 01:07:44.562824 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 01:07:44.562824 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 20 01:07:44.091864 unknown[1014]: wrote ssh authorized keys file for user: core Jan 20 01:07:46.328797 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 01:07:49.857211 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 01:07:50.008800 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 20 01:07:50.008800 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 20 01:07:50.768173 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 20 01:07:52.777684 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1164883805 wd_nsec: 1164883175 Jan 20 01:08:01.672577 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 20 01:08:01.672577 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 20 01:08:01.911825 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 01:08:01.911825 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:08:02.122275 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:08:02.122275 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:08:02.122275 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:08:02.122275 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:08:02.122275 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:08:02.122275 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:08:02.122275 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
file "/sysroot/etc/flatcar/update.conf" Jan 20 01:08:02.122275 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 01:08:02.122275 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 01:08:02.122275 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 01:08:02.122275 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 20 01:08:03.907949 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 20 01:08:42.647557 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 01:08:42.823589 ignition[1014]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 20 01:08:42.823589 ignition[1014]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:08:43.083252 ignition[1014]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:08:43.083252 ignition[1014]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 20 01:08:43.083252 ignition[1014]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 20 01:08:43.083252 ignition[1014]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" 
Jan 20 01:08:43.083252 ignition[1014]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 01:08:43.083252 ignition[1014]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 20 01:08:43.083252 ignition[1014]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 20 01:08:44.870114 ignition[1014]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 01:08:45.089842 ignition[1014]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 01:08:45.089842 ignition[1014]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 20 01:08:45.089842 ignition[1014]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 20 01:08:45.089842 ignition[1014]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 20 01:08:45.879935 ignition[1014]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 01:08:45.879935 ignition[1014]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 01:08:45.879935 ignition[1014]: INFO : files: files passed
Jan 20 01:08:45.879935 ignition[1014]: INFO : Ignition finished successfully
Jan 20 01:08:45.707769 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 01:08:46.104595 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 01:08:46.463926 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 01:08:46.877769 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 01:08:47.071574 initrd-setup-root-after-ignition[1043]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 20 01:08:46.877952 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 01:08:47.280595 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 01:08:47.280595 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 01:08:47.442872 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 01:08:47.553845 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 01:08:47.663567 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 01:08:47.873793 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 01:08:48.748290 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 01:08:48.748971 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 01:08:48.816255 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 20 01:08:48.855839 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 20 01:08:48.885708 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 20 01:08:48.923574 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 20 01:08:49.353493 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 01:08:49.421666 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 20 01:08:49.734856 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 20 01:08:49.859548 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 01:08:49.912859 systemd[1]: Stopped target timers.target - Timer Units.
Jan 20 01:08:50.040981 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 01:08:50.041868 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 01:08:50.238239 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 20 01:08:50.286762 systemd[1]: Stopped target basic.target - Basic System.
Jan 20 01:08:50.329203 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 20 01:08:50.374644 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 01:08:50.461795 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 01:08:50.609823 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 01:08:50.697184 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 20 01:08:50.738610 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 01:08:50.927250 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 01:08:50.997273 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 01:08:51.280763 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 01:08:51.316175 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 01:08:51.316715 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 01:08:51.789913 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 01:08:51.879196 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 01:08:52.096670 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 01:08:52.230859 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 01:08:52.466776 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 01:08:52.470703 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 01:08:52.666908 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 20 01:08:52.701188 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 01:08:52.808901 systemd[1]: Stopped target paths.target - Path Units.
Jan 20 01:08:52.895706 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 01:08:52.919802 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 01:08:53.037715 systemd[1]: Stopped target slices.target - Slice Units.
Jan 20 01:08:53.122253 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 20 01:08:53.265015 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 20 01:08:53.271213 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 01:08:53.290599 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 20 01:08:53.290876 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 01:08:53.291619 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 01:08:53.291927 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 01:08:53.292626 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 01:08:53.292786 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 01:08:53.314057 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 01:08:53.573878 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 01:08:53.580766 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 01:08:53.890746 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 01:08:54.095960 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 01:08:54.099974 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 01:08:54.102577 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 01:08:54.102848 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 01:08:55.064739 ignition[1070]: INFO : Ignition 2.22.0
Jan 20 01:08:55.064739 ignition[1070]: INFO : Stage: umount
Jan 20 01:08:55.064739 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 01:08:55.064739 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 01:08:55.064739 ignition[1070]: INFO : umount: umount passed
Jan 20 01:08:55.064739 ignition[1070]: INFO : Ignition finished successfully
Jan 20 01:08:55.111540 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 20 01:08:55.161553 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 20 01:08:55.224957 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 01:08:55.231793 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 20 01:08:55.766250 systemd[1]: Stopped target network.target - Network.
Jan 20 01:08:55.974710 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 20 01:08:56.003849 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 20 01:08:56.183534 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 20 01:08:56.183825 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 20 01:08:56.403730 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 20 01:08:56.403983 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 20 01:08:56.513089 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 20 01:08:56.513680 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 01:08:56.689905 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 20 01:08:56.902279 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 20 01:08:57.183033 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 20 01:08:57.196781 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 20 01:08:57.197097 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 20 01:08:57.560012 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 20 01:08:57.570957 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 20 01:08:57.574827 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 20 01:08:57.820900 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 20 01:08:57.821837 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 20 01:08:57.822024 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 20 01:08:58.372113 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 20 01:08:58.423080 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 20 01:08:58.429780 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 01:08:58.742838 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 20 01:08:58.748853 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 20 01:08:59.091608 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 20 01:08:59.184880 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 20 01:08:59.185011 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 01:08:59.203689 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 01:08:59.203809 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 01:08:59.456708 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 01:08:59.457034 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 20 01:08:59.462876 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 01:08:59.462975 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 01:08:59.622682 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 01:08:59.835566 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 01:08:59.835711 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 20 01:08:59.850572 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 01:08:59.850964 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 01:09:00.258936 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 01:09:00.259077 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 20 01:09:00.518903 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 01:09:00.519099 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 01:09:00.573032 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 01:09:00.577951 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 01:09:00.755648 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 01:09:00.755917 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 20 01:09:00.900931 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 01:09:00.901075 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 01:09:01.073933 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 20 01:09:01.305139 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 20 01:09:01.305771 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 01:09:01.396718 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 01:09:01.396820 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 01:09:01.513790 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 01:09:01.517797 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:09:01.679918 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 20 01:09:01.680019 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 20 01:09:01.680087 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 20 01:09:01.681675 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 20 01:09:01.684139 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 20 01:09:01.926085 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 01:09:01.926725 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 20 01:09:02.082772 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 20 01:09:02.386830 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 20 01:09:02.627940 systemd[1]: Switching root.
Jan 20 01:09:02.825062 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Jan 20 01:09:02.825542 systemd-journald[203]: Journal stopped
Jan 20 01:09:30.190261 kernel: SELinux: policy capability network_peer_controls=1
Jan 20 01:09:30.191023 kernel: SELinux: policy capability open_perms=1
Jan 20 01:09:30.191048 kernel: SELinux: policy capability extended_socket_class=1
Jan 20 01:09:30.191076 kernel: SELinux: policy capability always_check_network=0
Jan 20 01:09:30.191102 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 20 01:09:30.191118 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 20 01:09:30.191134 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 20 01:09:30.191150 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 20 01:09:30.196090 kernel: SELinux: policy capability userspace_initial_context=0
Jan 20 01:09:30.196116 kernel: audit: type=1403 audit(1768871344.773:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 01:09:30.196146 systemd[1]: Successfully loaded SELinux policy in 960.384ms.
Jan 20 01:09:30.196182 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 61.590ms.
Jan 20 01:09:30.196210 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 01:09:30.196228 systemd[1]: Detected virtualization kvm.
Jan 20 01:09:30.196244 systemd[1]: Detected architecture x86-64.
Jan 20 01:09:30.196261 systemd[1]: Detected first boot.
Jan 20 01:09:30.196289 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 01:09:30.196687 zram_generator::config[1115]: No configuration found.
Jan 20 01:09:30.196709 kernel: Guest personality initialized and is inactive
Jan 20 01:09:30.196725 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 20 01:09:30.196750 kernel: Initialized host personality
Jan 20 01:09:30.196775 kernel: NET: Registered PF_VSOCK protocol family
Jan 20 01:09:30.196791 systemd[1]: Populated /etc with preset unit settings.
Jan 20 01:09:30.196810 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 20 01:09:30.196828 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 20 01:09:30.196844 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 20 01:09:30.196860 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 20 01:09:30.196876 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 20 01:09:30.196893 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 20 01:09:30.197183 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 20 01:09:30.197202 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 20 01:09:30.197219 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 20 01:09:30.197236 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 20 01:09:30.197253 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 20 01:09:30.197277 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 20 01:09:30.197293 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 01:09:30.197702 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 01:09:30.197721 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 20 01:09:30.197743 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 20 01:09:30.197759 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 20 01:09:30.197777 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 01:09:30.197793 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 20 01:09:30.197810 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 01:09:30.197829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 01:09:30.197846 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 20 01:09:30.197866 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 20 01:09:30.197882 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 20 01:09:30.197900 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 20 01:09:30.197919 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 01:09:30.197938 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 01:09:30.198135 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 01:09:30.198155 systemd[1]: Reached target swap.target - Swaps.
Jan 20 01:09:30.198175 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 20 01:09:30.198191 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 20 01:09:30.198208 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 20 01:09:30.198231 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 01:09:30.198251 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 01:09:30.198271 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 01:09:30.198290 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 20 01:09:30.240108 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 20 01:09:30.266070 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 20 01:09:30.266114 systemd[1]: Mounting media.mount - External Media Directory...
Jan 20 01:09:30.266133 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 01:09:30.266173 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 20 01:09:30.266191 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 20 01:09:30.266208 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 20 01:09:30.266226 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 01:09:30.266243 systemd[1]: Reached target machines.target - Containers.
Jan 20 01:09:30.266258 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 20 01:09:30.266274 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 01:09:30.266291 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 01:09:30.295908 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 20 01:09:30.296067 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 01:09:30.296096 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 01:09:30.322670 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 01:09:30.322872 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 20 01:09:30.322894 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 01:09:30.322912 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 20 01:09:30.322933 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 20 01:09:30.322950 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 20 01:09:30.322997 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 20 01:09:30.323019 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 20 01:09:30.323037 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 01:09:30.323054 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 01:09:30.323070 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 01:09:30.323089 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 01:09:30.323107 kernel: fuse: init (API version 7.41)
Jan 20 01:09:30.323126 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 20 01:09:30.323143 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 20 01:09:30.323166 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 01:09:30.323186 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 20 01:09:30.323206 systemd[1]: Stopped verity-setup.service.
Jan 20 01:09:30.323223 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 01:09:30.337075 systemd-journald[1202]: Collecting audit messages is disabled.
Jan 20 01:09:30.337159 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 20 01:09:30.337188 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 20 01:09:30.337207 systemd-journald[1202]: Journal started
Jan 20 01:09:30.337244 systemd-journald[1202]: Runtime Journal (/run/log/journal/9a1529e8dc884c0c866c1d1c116f33b4) is 6M, max 48.3M, 42.2M free.
Jan 20 01:09:16.489242 systemd[1]: Queued start job for default target multi-user.target.
Jan 20 01:09:16.623871 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 20 01:09:16.631045 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 20 01:09:16.639053 systemd[1]: systemd-journald.service: Consumed 5.532s CPU time.
Jan 20 01:09:30.536912 kernel: ACPI: bus type drm_connector registered
Jan 20 01:09:30.801229 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 01:09:30.878005 systemd[1]: Mounted media.mount - External Media Directory.
Jan 20 01:09:30.984680 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 20 01:09:31.070009 kernel: loop: module loaded
Jan 20 01:09:31.097873 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 20 01:09:31.187903 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 20 01:09:31.372375 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 20 01:09:31.540652 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 01:09:31.705633 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 01:09:31.706140 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 20 01:09:31.781119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 01:09:31.781869 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 01:09:31.840261 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 01:09:31.841726 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 01:09:31.932158 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 01:09:31.936209 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 01:09:32.019099 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 01:09:32.025115 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 20 01:09:32.135137 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 01:09:32.135962 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 01:09:32.216124 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 01:09:32.315093 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 01:09:32.416998 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 20 01:09:32.658143 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 20 01:09:32.790003 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 01:09:33.600878 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 01:09:33.767051 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 20 01:09:33.921771 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 20 01:09:33.998164 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 20 01:09:33.998242 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 01:09:34.080820 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 20 01:09:34.190992 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 01:09:34.319190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 01:09:34.377663 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 20 01:09:34.517089 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 20 01:09:34.612069 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 01:09:34.683234 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 20 01:09:34.818983 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 01:09:34.833151 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 01:09:35.989854 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 20 01:09:36.274810 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 20 01:09:36.286742 systemd-journald[1202]: Time spent on flushing to /var/log/journal/9a1529e8dc884c0c866c1d1c116f33b4 is 656.164ms for 984 entries.
Jan 20 01:09:36.286742 systemd-journald[1202]: System Journal (/var/log/journal/9a1529e8dc884c0c866c1d1c116f33b4) is 8M, max 195.6M, 187.6M free.
Jan 20 01:09:37.582790 systemd-journald[1202]: Received client request to flush runtime journal.
Jan 20 01:09:36.394245 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 20 01:09:36.512228 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 20 01:09:37.763878 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 20 01:09:38.016187 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 20 01:09:38.482264 kernel: loop0: detected capacity change from 0 to 219144
Jan 20 01:09:38.485956 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 20 01:09:38.807281 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 20 01:09:39.308992 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 01:09:40.121059 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 20 01:09:40.315180 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 20 01:09:40.377958 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 01:09:40.776147 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 20 01:09:40.872947 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 20 01:09:40.920738 kernel: loop1: detected capacity change from 0 to 110984
Jan 20 01:09:41.873993 kernel: loop2: detected capacity change from 0 to 128560
Jan 20 01:09:43.059819 kernel: loop3: detected capacity change from 0 to 219144
Jan 20 01:09:44.540767 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Jan 20 01:09:44.540800 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Jan 20 01:09:45.671978 kernel: loop4: detected capacity change from 0 to 110984
Jan 20 01:09:45.957062 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 01:09:47.188111 kernel: loop5: detected capacity change from 0 to 128560
Jan 20 01:09:48.293579 (sd-merge)[1259]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 20 01:09:48.322932 (sd-merge)[1259]: Merged extensions into '/usr'.
Jan 20 01:09:49.022840 systemd[1]: Reload requested from client PID 1237 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 20 01:09:49.023195 systemd[1]: Reloading... Jan 20 01:09:52.126140 zram_generator::config[1292]: No configuration found. Jan 20 01:09:59.333574 systemd[1]: Reloading finished in 10217 ms. Jan 20 01:09:59.734699 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 01:09:59.826232 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 01:10:00.533005 systemd[1]: Starting ensure-sysext.service... Jan 20 01:10:00.619236 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:10:00.773899 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:10:01.303170 systemd[1]: Reload requested from client PID 1325 ('systemctl') (unit ensure-sysext.service)... Jan 20 01:10:01.303517 systemd[1]: Reloading... Jan 20 01:10:01.357114 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 01:10:01.357169 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 01:10:01.364619 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 01:10:01.368588 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 01:10:01.404626 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 01:10:01.412269 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Jan 20 01:10:01.412668 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Jan 20 01:10:01.467580 ldconfig[1232]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 01:10:01.532706 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 20 01:10:01.533183 systemd-tmpfiles[1326]: Skipping /boot Jan 20 01:10:01.901586 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Jan 20 01:10:03.066707 zram_generator::config[1354]: No configuration found. Jan 20 01:10:03.113997 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:10:03.292719 systemd-tmpfiles[1326]: Skipping /boot Jan 20 01:10:11.839682 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 01:10:11.900116 systemd[1]: Reloading finished in 10589 ms. Jan 20 01:10:11.939962 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:10:11.983924 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 01:10:12.011031 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 20 01:10:12.109766 kernel: ACPI: button: Power Button [PWRF] Jan 20 01:10:12.112563 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:10:12.389153 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 01:10:12.767745 systemd[1]: Finished ensure-sysext.service. Jan 20 01:10:13.621431 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 01:10:13.708617 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:10:13.717928 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 01:10:13.795393 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 01:10:13.824515 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 01:10:14.825148 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 01:10:14.959859 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 20 01:10:14.992653 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:10:15.080218 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:10:15.117562 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:10:15.491738 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:10:15.492456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:10:15.516790 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 01:10:15.591437 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 01:10:15.703423 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 01:10:16.039567 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:10:16.491790 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:10:16.893558 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 01:10:16.935045 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 01:10:16.980526 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:10:16.986259 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:10:16.986971 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:10:17.128676 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 20 01:10:17.234618 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:10:17.235160 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:10:17.354709 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 01:10:17.414847 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:10:17.415511 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:10:17.484463 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 01:10:17.723867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:10:17.868492 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:10:17.885847 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:10:18.222538 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:10:18.272154 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 01:10:18.701711 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 01:10:18.705190 augenrules[1488]: No rules Jan 20 01:10:18.768980 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 01:10:18.769680 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 01:10:18.984858 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 01:10:19.134106 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 20 01:10:19.169595 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 01:10:19.488580 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 01:10:20.162897 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 01:10:23.406150 systemd-networkd[1462]: lo: Link UP Jan 20 01:10:23.406226 systemd-networkd[1462]: lo: Gained carrier Jan 20 01:10:23.423524 systemd-networkd[1462]: Enumeration completed Jan 20 01:10:23.426097 systemd-networkd[1462]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:10:23.426514 systemd-networkd[1462]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:10:23.440458 systemd-networkd[1462]: eth0: Link UP Jan 20 01:10:23.444292 systemd-networkd[1462]: eth0: Gained carrier Jan 20 01:10:23.460813 systemd-networkd[1462]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:10:23.600730 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 01:10:23.635906 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:10:23.675413 systemd-networkd[1462]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 01:10:23.680636 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. Jan 20 01:10:23.685241 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:10:23.692927 systemd-timesyncd[1469]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 01:10:23.693108 systemd-timesyncd[1469]: Initial clock synchronization to Tue 2026-01-20 01:10:23.992103 UTC. 
Jan 20 01:10:23.731870 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 01:10:23.788695 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 01:10:23.880500 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 01:10:23.883748 systemd-resolved[1464]: Positive Trust Anchors: Jan 20 01:10:23.883765 systemd-resolved[1464]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 01:10:23.883809 systemd-resolved[1464]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 01:10:24.018508 systemd-resolved[1464]: Defaulting to hostname 'linux'. Jan 20 01:10:24.335855 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 01:10:24.413687 systemd[1]: Reached target network.target - Network. Jan 20 01:10:24.518923 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:10:24.522572 systemd-networkd[1462]: eth0: Gained IPv6LL Jan 20 01:10:24.613714 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:10:24.739470 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 01:10:24.888998 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Jan 20 01:10:24.997682 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 01:10:25.036912 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 01:10:25.138884 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 01:10:25.183002 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 01:10:25.244176 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 01:10:25.244289 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:10:25.281259 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:10:25.332104 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 01:10:25.383001 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 01:10:25.442243 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 01:10:25.467558 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 01:10:25.514871 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 01:10:25.590720 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 01:10:25.636105 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 01:10:25.722567 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 01:10:25.761030 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 01:10:25.879851 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 01:10:26.030519 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 01:10:26.071723 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 20 01:10:26.124924 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:10:26.153430 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 01:10:26.153583 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 01:10:26.168289 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 01:10:26.305040 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 01:10:26.360445 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 01:10:26.415920 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 01:10:26.503545 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 01:10:26.552515 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 01:10:26.582668 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 01:10:26.727680 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 01:10:26.806941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:10:26.815280 jq[1523]: false Jan 20 01:10:26.875642 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 01:10:26.933193 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 01:10:27.014677 extend-filesystems[1524]: Found /dev/vda6 Jan 20 01:10:27.182840 extend-filesystems[1524]: Found /dev/vda9 Jan 20 01:10:27.182840 extend-filesystems[1524]: Checking size of /dev/vda9 Jan 20 01:10:27.071790 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 01:10:27.091453 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 20 01:10:27.197746 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 01:10:27.340127 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing passwd entry cache Jan 20 01:10:27.334907 oslogin_cache_refresh[1525]: Refreshing passwd entry cache Jan 20 01:10:27.373234 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 01:10:27.401061 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 01:10:27.406871 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 01:10:27.415264 extend-filesystems[1524]: Resized partition /dev/vda9 Jan 20 01:10:27.479625 extend-filesystems[1557]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 01:10:27.528215 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 01:10:27.424135 oslogin_cache_refresh[1525]: Failure getting users, quitting Jan 20 01:10:27.417768 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 01:10:27.528959 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting users, quitting Jan 20 01:10:27.528959 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 01:10:27.528959 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing group entry cache Jan 20 01:10:27.528959 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting groups, quitting Jan 20 01:10:27.528959 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 01:10:27.425131 oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 20 01:10:27.425211 oslogin_cache_refresh[1525]: Refreshing group entry cache Jan 20 01:10:27.446049 oslogin_cache_refresh[1525]: Failure getting groups, quitting Jan 20 01:10:27.446072 oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 01:10:27.744285 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 01:10:27.882125 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 01:10:27.937798 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 01:10:27.950713 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 01:10:27.957510 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 01:10:27.957967 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 01:10:28.017888 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 01:10:28.018637 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 01:10:28.046848 jq[1558]: true Jan 20 01:10:28.050607 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 01:10:28.143858 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 01:10:28.294078 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 01:10:28.295119 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 01:10:28.460233 update_engine[1556]: I20260120 01:10:28.428212 1556 main.cc:92] Flatcar Update Engine starting Jan 20 01:10:28.469717 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 01:10:28.469717 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 01:10:28.469717 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jan 20 01:10:28.543086 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 01:10:28.656927 extend-filesystems[1524]: Resized filesystem in /dev/vda9 Jan 20 01:10:28.599593 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 01:10:28.725292 jq[1564]: true Jan 20 01:10:29.036914 (ntainerd)[1566]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 01:10:29.307662 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 01:10:29.308181 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 01:10:29.452586 dbus-daemon[1521]: [system] SELinux support is enabled Jan 20 01:10:29.458124 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 01:10:29.635500 sshd_keygen[1555]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 01:10:29.838035 tar[1563]: linux-amd64/LICENSE Jan 20 01:10:29.838035 tar[1563]: linux-amd64/helm Jan 20 01:10:29.726108 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 01:10:29.838995 update_engine[1556]: I20260120 01:10:29.834669 1556 update_check_scheduler.cc:74] Next update check in 7m57s Jan 20 01:10:29.726689 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 01:10:29.729544 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 01:10:29.776080 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 01:10:29.776280 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 20 01:10:29.965138 systemd[1]: Started update-engine.service - Update Engine. Jan 20 01:10:29.971815 systemd-logind[1552]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 01:10:29.971854 systemd-logind[1552]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 01:10:29.975054 systemd-logind[1552]: New seat seat0. Jan 20 01:10:29.994945 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 01:10:30.159471 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 01:10:30.551682 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 01:10:30.657669 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 01:10:30.875544 bash[1609]: Updated "/home/core/.ssh/authorized_keys" Jan 20 01:10:30.777554 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 01:10:30.800060 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 01:10:30.808742 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 01:10:30.841675 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 01:10:30.871788 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 01:10:31.894548 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 01:10:31.951029 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 01:10:31.975775 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 01:10:31.987779 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 01:10:32.021696 locksmithd[1601]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 01:10:34.752762 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 20 01:10:34.793476 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:33702.service - OpenSSH per-connection server daemon (10.0.0.1:33702). Jan 20 01:10:39.028199 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 33702 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:10:39.032080 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:10:39.308743 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 01:10:39.361159 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 01:10:40.196139 systemd-logind[1552]: New session 1 of user core. Jan 20 01:10:40.530170 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 01:10:40.576855 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 01:10:41.326679 (systemd)[1641]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 01:10:41.491023 kernel: kvm_amd: TSC scaling supported Jan 20 01:10:41.492693 kernel: kvm_amd: Nested Virtualization enabled Jan 20 01:10:41.560439 kernel: kvm_amd: Nested Paging enabled Jan 20 01:10:41.561764 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 01:10:41.563278 kernel: kvm_amd: PMU virtualization is disabled Jan 20 01:10:41.625532 systemd-logind[1552]: New session c1 of user core. 
Jan 20 01:10:41.990059 containerd[1566]: time="2026-01-20T01:10:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 01:10:42.036460 containerd[1566]: time="2026-01-20T01:10:42.035175706Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 20 01:10:43.999989 containerd[1566]: time="2026-01-20T01:10:43.999641348Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="441.938µs" Jan 20 01:10:44.013757 containerd[1566]: time="2026-01-20T01:10:44.013574536Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 01:10:44.013920 containerd[1566]: time="2026-01-20T01:10:44.013890882Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 01:10:44.031605 containerd[1566]: time="2026-01-20T01:10:44.031546831Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 01:10:44.031876 containerd[1566]: time="2026-01-20T01:10:44.031852710Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 01:10:44.038457 containerd[1566]: time="2026-01-20T01:10:44.038415511Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 01:10:44.038821 containerd[1566]: time="2026-01-20T01:10:44.038787163Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 01:10:44.038940 containerd[1566]: time="2026-01-20T01:10:44.038916016Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 
01:10:44.039804 containerd[1566]: time="2026-01-20T01:10:44.039770725Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 01:10:44.039903 containerd[1566]: time="2026-01-20T01:10:44.039883395Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 01:10:44.039982 containerd[1566]: time="2026-01-20T01:10:44.039960730Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 01:10:44.040074 containerd[1566]: time="2026-01-20T01:10:44.040050719Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 01:10:44.048788 containerd[1566]: time="2026-01-20T01:10:44.048743528Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 01:10:44.049570 containerd[1566]: time="2026-01-20T01:10:44.049539104Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 01:10:44.049730 containerd[1566]: time="2026-01-20T01:10:44.049704197Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 01:10:44.049816 containerd[1566]: time="2026-01-20T01:10:44.049796648Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 01:10:44.114089 containerd[1566]: time="2026-01-20T01:10:44.104697706Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 01:10:44.318722 
containerd[1566]: time="2026-01-20T01:10:44.308939842Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 20 01:10:44.339271 containerd[1566]: time="2026-01-20T01:10:44.327764305Z" level=info msg="metadata content store policy set" policy=shared
Jan 20 01:10:44.944744 containerd[1566]: time="2026-01-20T01:10:44.887112717Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 20 01:10:44.944744 containerd[1566]: time="2026-01-20T01:10:44.895220435Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 20 01:10:44.944744 containerd[1566]: time="2026-01-20T01:10:44.895614165Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 20 01:10:44.944744 containerd[1566]: time="2026-01-20T01:10:44.895648407Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 20 01:10:44.944744 containerd[1566]: time="2026-01-20T01:10:44.895672916Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 20 01:10:44.944744 containerd[1566]: time="2026-01-20T01:10:44.895691488Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 20 01:10:44.944744 containerd[1566]: time="2026-01-20T01:10:44.895714862Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 20 01:10:44.944744 containerd[1566]: time="2026-01-20T01:10:44.895735475Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 20 01:10:44.944744 containerd[1566]: time="2026-01-20T01:10:44.895753926Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 20 01:10:44.944744 containerd[1566]: time="2026-01-20T01:10:44.895770771Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 20 01:10:44.944744 containerd[1566]: time="2026-01-20T01:10:44.895785958Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 20 01:10:44.944744 containerd[1566]: time="2026-01-20T01:10:44.898996991Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 20 01:10:44.944744 containerd[1566]: time="2026-01-20T01:10:44.899703110Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 20 01:10:44.944744 containerd[1566]: time="2026-01-20T01:10:44.899740285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 20 01:10:44.949590 containerd[1566]: time="2026-01-20T01:10:44.899767656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 20 01:10:44.949590 containerd[1566]: time="2026-01-20T01:10:44.899852834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 20 01:10:44.949590 containerd[1566]: time="2026-01-20T01:10:44.899873538Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 20 01:10:44.949590 containerd[1566]: time="2026-01-20T01:10:44.899890613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 20 01:10:44.949590 containerd[1566]: time="2026-01-20T01:10:44.899911154Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 20 01:10:44.949590 containerd[1566]: time="2026-01-20T01:10:44.899925247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 20 01:10:44.949590 containerd[1566]: time="2026-01-20T01:10:44.899940636Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 20 01:10:44.949590 containerd[1566]: time="2026-01-20T01:10:44.899958334Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 20 01:10:44.949590 containerd[1566]: time="2026-01-20T01:10:44.899973291Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 20 01:10:44.949590 containerd[1566]: time="2026-01-20T01:10:44.904811988Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 20 01:10:44.949590 containerd[1566]: time="2026-01-20T01:10:44.904925995Z" level=info msg="Start snapshots syncer"
Jan 20 01:10:44.949590 containerd[1566]: time="2026-01-20T01:10:44.908746686Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 20 01:10:44.950114 containerd[1566]: time="2026-01-20T01:10:44.909844888Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 20 01:10:44.950114 containerd[1566]: time="2026-01-20T01:10:44.909922342Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 20 01:10:44.963727 containerd[1566]: time="2026-01-20T01:10:44.910291886Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 20 01:10:44.963727 containerd[1566]: time="2026-01-20T01:10:44.922944213Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 20 01:10:44.963727 containerd[1566]: time="2026-01-20T01:10:44.923011834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 20 01:10:44.963727 containerd[1566]: time="2026-01-20T01:10:44.923034474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 20 01:10:44.963727 containerd[1566]: time="2026-01-20T01:10:44.923053910Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 20 01:10:44.963727 containerd[1566]: time="2026-01-20T01:10:44.923073286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 20 01:10:44.963727 containerd[1566]: time="2026-01-20T01:10:44.923087722Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 20 01:10:44.963727 containerd[1566]: time="2026-01-20T01:10:44.923109790Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 20 01:10:44.963727 containerd[1566]: time="2026-01-20T01:10:44.923295946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 20 01:10:44.963727 containerd[1566]: time="2026-01-20T01:10:44.930804079Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 20 01:10:44.963727 containerd[1566]: time="2026-01-20T01:10:44.930832636Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 20 01:10:44.963727 containerd[1566]: time="2026-01-20T01:10:44.933659029Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 20 01:10:44.963727 containerd[1566]: time="2026-01-20T01:10:44.933776170Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 20 01:10:44.963727 containerd[1566]: time="2026-01-20T01:10:44.933796239Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 20 01:10:44.964144 containerd[1566]: time="2026-01-20T01:10:44.933888048Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 20 01:10:44.964144 containerd[1566]: time="2026-01-20T01:10:44.933904018Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 20 01:10:44.964144 containerd[1566]: time="2026-01-20T01:10:44.933919196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 20 01:10:44.964144 containerd[1566]: time="2026-01-20T01:10:44.934023127Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 20 01:10:44.964144 containerd[1566]: time="2026-01-20T01:10:44.934135186Z" level=info msg="runtime interface created"
Jan 20 01:10:44.964144 containerd[1566]: time="2026-01-20T01:10:44.934150835Z" level=info msg="created NRI interface"
Jan 20 01:10:44.964144 containerd[1566]: time="2026-01-20T01:10:44.934168995Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 20 01:10:44.964144 containerd[1566]: time="2026-01-20T01:10:44.934282821Z" level=info msg="Connect containerd service"
Jan 20 01:10:44.964144 containerd[1566]: time="2026-01-20T01:10:44.939689333Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 20 01:10:44.964144 containerd[1566]: time="2026-01-20T01:10:44.955835614Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 20 01:10:48.467725 systemd[1641]: Queued start job for default target default.target.
Jan 20 01:10:48.553143 systemd[1641]: Created slice app.slice - User Application Slice.
Jan 20 01:10:48.553273 systemd[1641]: Reached target paths.target - Paths.
Jan 20 01:10:48.557463 systemd[1641]: Reached target timers.target - Timers.
Jan 20 01:10:48.595846 systemd[1641]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 20 01:10:49.019975 tar[1563]: linux-amd64/README.md
Jan 20 01:10:49.096002 systemd[1641]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 20 01:10:49.096197 systemd[1641]: Reached target sockets.target - Sockets.
Jan 20 01:10:49.096492 systemd[1641]: Reached target basic.target - Basic System.
Jan 20 01:10:49.096559 systemd[1641]: Reached target default.target - Main User Target.
Jan 20 01:10:49.096612 systemd[1641]: Startup finished in 6.782s.
Jan 20 01:10:49.096846 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 20 01:10:49.209087 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 20 01:10:49.279047 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 20 01:10:50.414938 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:42006.service - OpenSSH per-connection server daemon (10.0.0.1:42006).
Jan 20 01:10:51.656621 containerd[1566]: time="2026-01-20T01:10:51.454191744Z" level=info msg="Start subscribing containerd event"
Jan 20 01:10:51.665712 containerd[1566]: time="2026-01-20T01:10:51.665294632Z" level=info msg="Start recovering state"
Jan 20 01:10:51.667554 containerd[1566]: time="2026-01-20T01:10:51.643579880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 20 01:10:51.676613 containerd[1566]: time="2026-01-20T01:10:51.676568286Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 20 01:10:51.695190 containerd[1566]: time="2026-01-20T01:10:51.691894817Z" level=info msg="Start event monitor"
Jan 20 01:10:51.708940 containerd[1566]: time="2026-01-20T01:10:51.703738754Z" level=info msg="Start cni network conf syncer for default"
Jan 20 01:10:51.709160 containerd[1566]: time="2026-01-20T01:10:51.709124850Z" level=info msg="Start streaming server"
Jan 20 01:10:51.709551 containerd[1566]: time="2026-01-20T01:10:51.709516137Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 20 01:10:51.710264 containerd[1566]: time="2026-01-20T01:10:51.710235950Z" level=info msg="runtime interface starting up..."
Jan 20 01:10:51.710694 containerd[1566]: time="2026-01-20T01:10:51.710670021Z" level=info msg="starting plugins..."
Jan 20 01:10:51.710891 containerd[1566]: time="2026-01-20T01:10:51.710861051Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 20 01:10:51.716611 systemd[1]: Started containerd.service - containerd container runtime.
Jan 20 01:10:51.726840 containerd[1566]: time="2026-01-20T01:10:51.720687436Z" level=info msg="containerd successfully booted in 9.740090s"
Jan 20 01:10:52.708950 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 42006 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:10:52.730112 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:10:52.938002 systemd-logind[1552]: New session 2 of user core.
Jan 20 01:10:53.148860 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 20 01:10:53.552197 sshd[1674]: Connection closed by 10.0.0.1 port 42006
Jan 20 01:10:53.565859 sshd-session[1670]: pam_unix(sshd:session): session closed for user core
Jan 20 01:10:53.688605 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:42006.service: Deactivated successfully.
Jan 20 01:10:53.713554 systemd[1]: session-2.scope: Deactivated successfully.
Jan 20 01:10:53.762236 systemd-logind[1552]: Session 2 logged out. Waiting for processes to exit.
Jan 20 01:10:53.800640 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:42032.service - OpenSSH per-connection server daemon (10.0.0.1:42032).
Jan 20 01:10:53.814907 systemd-logind[1552]: Removed session 2.
Jan 20 01:10:55.105667 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 42032 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:10:55.254618 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:10:55.274138 systemd-logind[1552]: New session 3 of user core.
Jan 20 01:10:55.717850 kernel: EDAC MC: Ver: 3.0.0
Jan 20 01:10:55.770728 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 20 01:10:56.362210 sshd[1684]: Connection closed by 10.0.0.1 port 42032
Jan 20 01:10:56.381904 sshd-session[1680]: pam_unix(sshd:session): session closed for user core
Jan 20 01:10:56.407948 systemd-logind[1552]: Session 3 logged out. Waiting for processes to exit.
Jan 20 01:10:56.412601 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:42032.service: Deactivated successfully.
Jan 20 01:10:56.532866 systemd[1]: session-3.scope: Deactivated successfully.
Jan 20 01:10:56.576725 systemd-logind[1552]: Removed session 3.
Jan 20 01:11:01.507768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:11:01.519189 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 20 01:11:01.587217 systemd[1]: Startup finished in 29.259s (kernel) + 2min 45.602s (initrd) + 1min 57.776s (userspace) = 5min 12.637s.
Jan 20 01:11:01.709092 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 01:11:07.173889 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:59016.service - OpenSSH per-connection server daemon (10.0.0.1:59016).
Jan 20 01:11:08.583796 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 59016 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:11:08.606205 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:11:08.844196 systemd-logind[1552]: New session 4 of user core.
Jan 20 01:11:08.880741 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 20 01:11:09.689766 sshd[1704]: Connection closed by 10.0.0.1 port 59016
Jan 20 01:11:09.697649 sshd-session[1701]: pam_unix(sshd:session): session closed for user core
Jan 20 01:11:09.792285 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:59016.service: Deactivated successfully.
Jan 20 01:11:09.824126 systemd[1]: session-4.scope: Deactivated successfully.
Jan 20 01:11:09.853494 systemd-logind[1552]: Session 4 logged out. Waiting for processes to exit.
Jan 20 01:11:09.957816 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:59034.service - OpenSSH per-connection server daemon (10.0.0.1:59034).
Jan 20 01:11:09.974506 systemd-logind[1552]: Removed session 4.
Jan 20 01:11:11.570451 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 59034 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:11:11.714163 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:11:11.772563 systemd-logind[1552]: New session 5 of user core.
Jan 20 01:11:11.793635 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 20 01:11:12.087973 sshd[1714]: Connection closed by 10.0.0.1 port 59034
Jan 20 01:11:12.097214 sshd-session[1711]: pam_unix(sshd:session): session closed for user core
Jan 20 01:11:12.306197 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:59034.service: Deactivated successfully.
Jan 20 01:11:12.356033 systemd[1]: session-5.scope: Deactivated successfully.
Jan 20 01:11:12.371162 systemd-logind[1552]: Session 5 logged out. Waiting for processes to exit.
Jan 20 01:11:12.395713 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:59050.service - OpenSSH per-connection server daemon (10.0.0.1:59050).
Jan 20 01:11:12.421070 systemd-logind[1552]: Removed session 5.
Jan 20 01:11:13.215836 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 59050 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:11:13.220070 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:11:13.333008 systemd-logind[1552]: New session 6 of user core.
Jan 20 01:11:13.365774 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 20 01:11:14.147512 sshd[1724]: Connection closed by 10.0.0.1 port 59050
Jan 20 01:11:14.157877 sshd-session[1720]: pam_unix(sshd:session): session closed for user core
Jan 20 01:11:14.210770 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:59050.service: Deactivated successfully.
Jan 20 01:11:14.469078 systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 01:11:14.485022 systemd-logind[1552]: Session 6 logged out. Waiting for processes to exit.
Jan 20 01:11:14.503857 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:59088.service - OpenSSH per-connection server daemon (10.0.0.1:59088).
Jan 20 01:11:14.514189 systemd-logind[1552]: Removed session 6.
Jan 20 01:11:14.578708 kubelet[1694]: E0120 01:11:14.578178 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 01:11:14.608197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 01:11:14.613072 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 01:11:14.616861 systemd[1]: kubelet.service: Consumed 9.478s CPU time, 259.3M memory peak.
Jan 20 01:11:14.961158 update_engine[1556]: I20260120 01:11:14.957656 1556 update_attempter.cc:509] Updating boot flags...
Jan 20 01:11:15.478930 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 59088 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:11:15.494502 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:11:15.645865 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 20 01:11:15.658536 systemd-logind[1552]: New session 7 of user core.
Jan 20 01:11:16.388021 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 20 01:11:16.388955 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 01:11:17.460179 sudo[1749]: pam_unix(sudo:session): session closed for user root
Jan 20 01:11:17.530534 sshd[1747]: Connection closed by 10.0.0.1 port 59088
Jan 20 01:11:17.616463 sshd-session[1730]: pam_unix(sshd:session): session closed for user core
Jan 20 01:11:17.836868 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:46212.service - OpenSSH per-connection server daemon (10.0.0.1:46212).
Jan 20 01:11:17.884068 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:59088.service: Deactivated successfully.
Jan 20 01:11:17.901813 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 01:11:18.315641 systemd-logind[1552]: Session 7 logged out. Waiting for processes to exit.
Jan 20 01:11:19.048753 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 46212 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:11:19.057231 systemd-logind[1552]: Removed session 7.
Jan 20 01:11:19.066819 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:11:19.657242 systemd-logind[1552]: New session 8 of user core.
Jan 20 01:11:19.687007 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 20 01:11:20.091967 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 20 01:11:20.109870 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 01:11:20.714976 sudo[1764]: pam_unix(sudo:session): session closed for user root
Jan 20 01:11:20.900491 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 20 01:11:20.901194 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 01:11:21.190948 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 01:11:22.543808 augenrules[1786]: No rules
Jan 20 01:11:22.605869 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 01:11:22.618265 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 01:11:22.799884 sudo[1763]: pam_unix(sudo:session): session closed for user root
Jan 20 01:11:22.827505 sshd[1762]: Connection closed by 10.0.0.1 port 46212
Jan 20 01:11:22.837772 sshd-session[1754]: pam_unix(sshd:session): session closed for user core
Jan 20 01:11:23.000663 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:46212.service: Deactivated successfully.
Jan 20 01:11:23.231448 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 01:11:23.268474 systemd-logind[1552]: Session 8 logged out. Waiting for processes to exit.
Jan 20 01:11:23.305941 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:46242.service - OpenSSH per-connection server daemon (10.0.0.1:46242).
Jan 20 01:11:23.326600 systemd-logind[1552]: Removed session 8.
Jan 20 01:11:24.126864 sshd[1795]: Accepted publickey for core from 10.0.0.1 port 46242 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:11:24.156628 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:11:24.252273 systemd-logind[1552]: New session 9 of user core.
Jan 20 01:11:24.325047 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 20 01:11:24.618951 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 20 01:11:24.661991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:11:24.682524 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 20 01:11:24.683112 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 01:11:34.733647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:11:34.780198 (kubelet)[1825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 01:11:40.405211 kubelet[1825]: E0120 01:11:40.399692 1825 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 01:11:40.433555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 01:11:40.433825 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 01:11:40.442918 systemd[1]: kubelet.service: Consumed 3.876s CPU time, 115.7M memory peak.
Jan 20 01:11:43.732209 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 20 01:11:43.812695 (dockerd)[1836]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 20 01:11:50.633640 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 20 01:11:50.732546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:11:58.030518 dockerd[1836]: time="2026-01-20T01:11:58.024606372Z" level=info msg="Starting up"
Jan 20 01:11:58.058259 dockerd[1836]: time="2026-01-20T01:11:58.057500963Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 20 01:11:59.542495 dockerd[1836]: time="2026-01-20T01:11:59.541122169Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 20 01:12:02.860736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:12:03.049174 systemd[1]: var-lib-docker-metacopy\x2dcheck1570131334-merged.mount: Deactivated successfully.
Jan 20 01:12:03.065947 (kubelet)[1868]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 01:12:04.053788 dockerd[1836]: time="2026-01-20T01:12:04.023176411Z" level=info msg="Loading containers: start."
Jan 20 01:12:04.389803 kernel: Initializing XFRM netlink socket
Jan 20 01:12:06.565163 kubelet[1868]: E0120 01:12:06.545107 1868 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 01:12:06.595608 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 01:12:06.596079 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 01:12:06.611096 systemd[1]: kubelet.service: Consumed 3.273s CPU time, 114.6M memory peak.
Jan 20 01:12:12.100620 systemd-networkd[1462]: docker0: Link UP
Jan 20 01:12:12.154956 dockerd[1836]: time="2026-01-20T01:12:12.151218124Z" level=info msg="Loading containers: done."
Jan 20 01:12:12.533063 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1008600136-merged.mount: Deactivated successfully.
Jan 20 01:12:12.592449 dockerd[1836]: time="2026-01-20T01:12:12.592003088Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 20 01:12:12.593219 dockerd[1836]: time="2026-01-20T01:12:12.593175510Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 20 01:12:12.593831 dockerd[1836]: time="2026-01-20T01:12:12.593798487Z" level=info msg="Initializing buildkit"
Jan 20 01:12:13.055738 dockerd[1836]: time="2026-01-20T01:12:13.046384115Z" level=info msg="Completed buildkit initialization"
Jan 20 01:12:13.202419 dockerd[1836]: time="2026-01-20T01:12:13.200936096Z" level=info msg="Daemon has completed initialization"
Jan 20 01:12:13.202419 dockerd[1836]: time="2026-01-20T01:12:13.202587686Z" level=info msg="API listen on /run/docker.sock"
Jan 20 01:12:13.219117 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 20 01:12:16.779260 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 20 01:12:17.620636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:12:24.023858 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:12:24.082198 (kubelet)[2074]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 01:12:26.699030 kubelet[2074]: E0120 01:12:26.698658 2074 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 01:12:26.726895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 01:12:26.738748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 01:12:28.222963 systemd[1]: kubelet.service: Consumed 2.000s CPU time, 110.4M memory peak.
Jan 20 01:12:28.315811 containerd[1566]: time="2026-01-20T01:12:28.315555880Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Jan 20 01:12:37.077227 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 20 01:12:37.122162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:12:39.366179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110730726.mount: Deactivated successfully.
Jan 20 01:12:46.501003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:12:46.627486 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 01:12:51.569607 kubelet[2107]: E0120 01:12:51.536085 2107 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 01:12:51.578669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 01:12:51.578930 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 01:12:51.582643 systemd[1]: kubelet.service: Consumed 2.180s CPU time, 110.4M memory peak.
Jan 20 01:13:01.625477 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 20 01:13:01.828177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:13:11.176582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:13:11.339807 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 01:13:17.473134 kubelet[2169]: E0120 01:13:17.472737 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 01:13:17.522999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 01:13:17.525047 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 01:13:17.535553 systemd[1]: kubelet.service: Consumed 5.186s CPU time, 111M memory peak.
Jan 20 01:13:27.713168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 20 01:13:27.782287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:13:39.295587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:13:39.687004 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 01:13:41.654165 containerd[1566]: time="2026-01-20T01:13:41.631151167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:13:41.724983 containerd[1566]: time="2026-01-20T01:13:41.724900550Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073"
Jan 20 01:13:41.816631 containerd[1566]: time="2026-01-20T01:13:41.794548331Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:13:41.873687 containerd[1566]: time="2026-01-20T01:13:41.873625050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:13:41.880057 containerd[1566]: time="2026-01-20T01:13:41.880007221Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1m13.563057359s"
Jan 20 01:13:41.880874 containerd[1566]: time="2026-01-20T01:13:41.880838519Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\""
Jan 20 01:13:42.067881 containerd[1566]: time="2026-01-20T01:13:42.065911242Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Jan 20 01:13:42.866855 kubelet[2186]: E0120 01:13:42.866574 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 01:13:42.961860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 01:13:42.962625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 01:13:43.134840 systemd[1]: kubelet.service: Consumed 3.022s CPU time, 111.4M memory peak.
Jan 20 01:13:53.154953 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 20 01:13:53.217899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:13:59.917724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:14:00.022907 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 01:14:04.201580 kubelet[2209]: E0120 01:14:04.182009 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 01:14:04.290176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 01:14:04.290879 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 01:14:04.303172 systemd[1]: kubelet.service: Consumed 2.856s CPU time, 110M memory peak.
Jan 20 01:14:14.709038 containerd[1566]: time="2026-01-20T01:14:14.501199818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:14:14.799788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 20 01:14:14.845223 containerd[1566]: time="2026-01-20T01:14:14.839668658Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440"
Jan 20 01:14:15.101933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:14:15.428961 containerd[1566]: time="2026-01-20T01:14:15.408855384Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:14:15.507900 containerd[1566]: time="2026-01-20T01:14:15.496173086Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 33.4266805s"
Jan 20 01:14:15.507900 containerd[1566]: time="2026-01-20T01:14:15.496481499Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\""
Jan 20 01:14:15.507900 containerd[1566]: time="2026-01-20T01:14:15.503479161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:14:15.532961 containerd[1566]: time="2026-01-20T01:14:15.532493705Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Jan 20 01:14:23.684752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:14:23.854439 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:14:31.010981 kubelet[2229]: E0120 01:14:31.008756 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:14:31.040127 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:14:31.063758 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:14:31.081943 systemd[1]: kubelet.service: Consumed 4.403s CPU time, 112.3M memory peak. Jan 20 01:14:41.201762 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 20 01:14:41.311213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 01:14:43.088147 containerd[1566]: time="2026-01-20T01:14:43.080618159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:14:43.182892 containerd[1566]: time="2026-01-20T01:14:43.182256229Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 20 01:14:43.210488 containerd[1566]: time="2026-01-20T01:14:43.209790419Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:14:43.239830 containerd[1566]: time="2026-01-20T01:14:43.239766059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:14:43.264031 containerd[1566]: time="2026-01-20T01:14:43.263573922Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 27.731023489s" Jan 20 01:14:43.264031 containerd[1566]: time="2026-01-20T01:14:43.263729547Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 20 01:14:43.397158 containerd[1566]: time="2026-01-20T01:14:43.394592170Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 20 01:14:52.316880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:14:52.521495 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:14:59.744823 kubelet[2249]: E0120 01:14:59.710165 2249 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:14:59.780457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:14:59.781667 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:14:59.784617 systemd[1]: kubelet.service: Consumed 4.214s CPU time, 116.3M memory peak. Jan 20 01:15:10.049160 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 20 01:15:10.108208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:15:13.588581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2442312954.mount: Deactivated successfully. Jan 20 01:15:15.554276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:15:15.657045 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:15:18.338719 kubelet[2274]: E0120 01:15:18.326102 2274 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:15:18.358728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:15:18.359239 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:15:18.370224 systemd[1]: kubelet.service: Consumed 2.302s CPU time, 110.3M memory peak. Jan 20 01:15:28.423997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 20 01:15:28.479998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 01:15:30.381931 containerd[1566]: time="2026-01-20T01:15:30.363806653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:15:30.431649 containerd[1566]: time="2026-01-20T01:15:30.420151953Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 20 01:15:30.450760 containerd[1566]: time="2026-01-20T01:15:30.448886011Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:15:30.489829 containerd[1566]: time="2026-01-20T01:15:30.476820464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:15:30.507750 containerd[1566]: time="2026-01-20T01:15:30.506180089Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 47.109676527s" Jan 20 01:15:30.510978 containerd[1566]: time="2026-01-20T01:15:30.508614904Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 20 01:15:30.527698 containerd[1566]: time="2026-01-20T01:15:30.524111779Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 20 01:15:36.758233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:15:36.874492 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:15:37.493179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439037201.mount: Deactivated successfully. Jan 20 01:15:39.341006 kubelet[2292]: E0120 01:15:39.339122 2292 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:15:39.453106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:15:39.453896 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:15:39.493513 systemd[1]: kubelet.service: Consumed 2.166s CPU time, 112.6M memory peak. Jan 20 01:15:49.682641 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 20 01:15:49.831653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:16:01.122174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:16:01.797887 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:16:04.086034 kubelet[2364]: E0120 01:16:04.084237 2364 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:16:04.114113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:16:04.116081 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:16:04.125675 systemd[1]: kubelet.service: Consumed 2.237s CPU time, 110.4M memory peak. Jan 20 01:16:14.706134 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 20 01:16:14.721967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 01:16:16.339398 containerd[1566]: time="2026-01-20T01:16:16.338778477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:16:16.372683 containerd[1566]: time="2026-01-20T01:16:16.372616950Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 20 01:16:16.380093 containerd[1566]: time="2026-01-20T01:16:16.379692004Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:16:16.436606 containerd[1566]: time="2026-01-20T01:16:16.436490657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:16:16.490592 containerd[1566]: time="2026-01-20T01:16:16.477731237Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 45.952631796s" Jan 20 01:16:16.502908 containerd[1566]: time="2026-01-20T01:16:16.492870469Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 20 01:16:16.534135 containerd[1566]: time="2026-01-20T01:16:16.528858858Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 20 01:16:19.796744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:16:19.980753 (kubelet)[2381]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:16:20.232637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3739313774.mount: Deactivated successfully. Jan 20 01:16:20.426972 containerd[1566]: time="2026-01-20T01:16:20.425190295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:16:20.434357 containerd[1566]: time="2026-01-20T01:16:20.433993195Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 20 01:16:20.437682 containerd[1566]: time="2026-01-20T01:16:20.437630591Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:16:20.459647 containerd[1566]: time="2026-01-20T01:16:20.459545860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:16:20.468225 containerd[1566]: time="2026-01-20T01:16:20.462544996Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 3.93362406s" Jan 20 01:16:20.468225 containerd[1566]: time="2026-01-20T01:16:20.462641419Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Jan 20 01:16:20.480232 containerd[1566]: time="2026-01-20T01:16:20.479134227Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 20 01:16:21.738553 kubelet[2381]: E0120 01:16:21.737395 2381 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:16:21.809501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:16:21.809783 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:16:21.818814 systemd[1]: kubelet.service: Consumed 1.729s CPU time, 110.8M memory peak. Jan 20 01:16:23.567031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount176788624.mount: Deactivated successfully. Jan 20 01:16:34.615538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 20 01:16:34.705968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:16:48.993728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:16:49.282922 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:16:53.050681 kubelet[2426]: E0120 01:16:53.047108 2426 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:16:53.113535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:16:53.113813 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:16:53.120065 systemd[1]: kubelet.service: Consumed 4.197s CPU time, 111.9M memory peak.
Jan 20 01:17:03.121252 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. Jan 20 01:17:03.157836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:17:11.590790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:17:11.630844 (kubelet)[2472]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:17:12.986769 kubelet[2472]: E0120 01:17:12.982932 2472 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:17:13.497182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:17:13.499695 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:17:13.519755 systemd[1]: kubelet.service: Consumed 2.281s CPU time, 110.3M memory peak. Jan 20 01:17:23.818131 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. Jan 20 01:17:23.877975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:17:28.700836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:17:28.804675 (kubelet)[2488]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:17:30.501736 kubelet[2488]: E0120 01:17:30.496801 2488 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:17:30.545817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:17:30.546168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:17:30.564561 systemd[1]: kubelet.service: Consumed 1.565s CPU time, 110.5M memory peak. Jan 20 01:17:40.743264 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17. Jan 20 01:17:40.806117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:17:47.013026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:17:47.116665 (kubelet)[2509]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:17:49.220428 kubelet[2509]: E0120 01:17:49.219160 2509 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:17:49.237914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:17:49.242074 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:17:49.278694 systemd[1]: kubelet.service: Consumed 1.785s CPU time, 110.4M memory peak. 
Jan 20 01:17:49.878799 containerd[1566]: time="2026-01-20T01:17:49.874093096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:17:49.904679 containerd[1566]: time="2026-01-20T01:17:49.904125586Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 20 01:17:49.934834 containerd[1566]: time="2026-01-20T01:17:49.910511374Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:17:49.934834 containerd[1566]: time="2026-01-20T01:17:49.933226952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:17:49.949948 containerd[1566]: time="2026-01-20T01:17:49.948909095Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 1m29.469709763s" Jan 20 01:17:49.949948 containerd[1566]: time="2026-01-20T01:17:49.949054671Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 20 01:17:59.521470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. Jan 20 01:17:59.568948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:18:04.047168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:18:04.193010 (kubelet)[2548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:18:06.303827 kubelet[2548]: E0120 01:18:06.302485 2548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:18:06.339607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:18:06.341766 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:18:06.346060 systemd[1]: kubelet.service: Consumed 1.955s CPU time, 110.6M memory peak. Jan 20 01:18:16.392086 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19. Jan 20 01:18:16.430786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:18:19.269765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:18:19.336869 (kubelet)[2565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:18:20.807288 kubelet[2565]: E0120 01:18:20.805819 2565 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:18:20.833062 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:18:20.833935 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:18:20.858584 systemd[1]: kubelet.service: Consumed 1.067s CPU time, 110.7M memory peak. 
Jan 20 01:18:25.596256 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:18:25.597238 systemd[1]: kubelet.service: Consumed 1.067s CPU time, 110.7M memory peak. Jan 20 01:18:25.688813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:18:26.366281 systemd[1]: Reload requested from client PID 2581 ('systemctl') (unit session-9.scope)... Jan 20 01:18:26.367931 systemd[1]: Reloading... Jan 20 01:18:27.086162 update_engine[1556]: I20260120 01:18:27.085654 1556 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 20 01:18:27.086162 update_engine[1556]: I20260120 01:18:27.086007 1556 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 20 01:18:27.094632 update_engine[1556]: I20260120 01:18:27.093165 1556 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 20 01:18:27.097644 update_engine[1556]: I20260120 01:18:27.095096 1556 omaha_request_params.cc:62] Current group set to stable Jan 20 01:18:27.101217 update_engine[1556]: I20260120 01:18:27.101173 1556 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 20 01:18:27.101510 update_engine[1556]: I20260120 01:18:27.101483 1556 update_attempter.cc:643] Scheduling an action processor start. 
Jan 20 01:18:27.101777 update_engine[1556]: I20260120 01:18:27.101675 1556 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 01:18:27.104743 update_engine[1556]: I20260120 01:18:27.104712 1556 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 20 01:18:27.109579 update_engine[1556]: I20260120 01:18:27.109540 1556 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 01:18:27.109681 update_engine[1556]: I20260120 01:18:27.109658 1556 omaha_request_action.cc:272] Request: Jan 20 01:18:27.109681 update_engine[1556]: Jan 20 01:18:27.109681 update_engine[1556]: Jan 20 01:18:27.109681 update_engine[1556]: Jan 20 01:18:27.109681 update_engine[1556]: Jan 20 01:18:27.109681 update_engine[1556]: Jan 20 01:18:27.109681 update_engine[1556]: Jan 20 01:18:27.109681 update_engine[1556]: Jan 20 01:18:27.109681 update_engine[1556]: Jan 20 01:18:27.110159 update_engine[1556]: I20260120 01:18:27.110126 1556 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:18:27.137902 update_engine[1556]: I20260120 01:18:27.137844 1556 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:18:27.151844 update_engine[1556]: I20260120 01:18:27.151603 1556 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 01:18:27.172772 update_engine[1556]: E20260120 01:18:27.172719 1556 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:18:27.173101 update_engine[1556]: I20260120 01:18:27.173069 1556 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 20 01:18:27.198879 locksmithd[1601]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 20 01:18:27.418580 zram_generator::config[2624]: No configuration found. Jan 20 01:18:30.295047 systemd[1]: Reloading finished in 3916 ms. 
Jan 20 01:18:30.792054 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 01:18:30.795704 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 01:18:30.820764 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:18:30.835026 systemd[1]: kubelet.service: Consumed 427ms CPU time, 98.3M memory peak. Jan 20 01:18:30.891676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:18:33.220130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:18:33.302798 (kubelet)[2673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:18:35.265631 kubelet[2673]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:18:35.265631 kubelet[2673]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:18:35.265631 kubelet[2673]: I0120 01:18:35.262597 2673 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:18:36.981525 update_engine[1556]: I20260120 01:18:36.976135 1556 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:18:36.981525 update_engine[1556]: I20260120 01:18:36.981132 1556 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:18:37.094083 update_engine[1556]: I20260120 01:18:37.090984 1556 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 01:18:37.106146 update_engine[1556]: E20260120 01:18:37.104538 1556 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:18:37.106146 update_engine[1556]: I20260120 01:18:37.105978 1556 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 20 01:18:41.177770 kubelet[2673]: I0120 01:18:41.167911 2673 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 20 01:18:41.177770 kubelet[2673]: I0120 01:18:41.168032 2673 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:18:41.201808 kubelet[2673]: I0120 01:18:41.180749 2673 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 20 01:18:41.201808 kubelet[2673]: I0120 01:18:41.180789 2673 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 01:18:41.201808 kubelet[2673]: I0120 01:18:41.199870 2673 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 01:18:41.522046 kubelet[2673]: E0120 01:18:41.503954 2673 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:18:41.582252 kubelet[2673]: I0120 01:18:41.576241 2673 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:18:41.839509 kubelet[2673]: I0120 01:18:41.839194 2673 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 20 01:18:42.095277 kubelet[2673]: I0120 01:18:42.086860 2673 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 20 01:18:42.122098 kubelet[2673]: I0120 01:18:42.118184 2673 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:18:42.145723 kubelet[2673]: I0120 01:18:42.118864 2673 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:18:42.154619 kubelet[2673]: I0120 01:18:42.147237 2673 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 01:18:42.154619 kubelet[2673]: I0120 01:18:42.147632 2673 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 01:18:42.154619 kubelet[2673]: I0120 01:18:42.148177 2673 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 20 01:18:42.233894 kubelet[2673]: I0120 01:18:42.230861 2673 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:18:42.240864 kubelet[2673]: I0120 01:18:42.237666 2673 kubelet.go:475] "Attempting to sync node with API server" Jan 20 01:18:42.240864 kubelet[2673]: I0120 01:18:42.237870 2673 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:18:42.247771 kubelet[2673]: I0120 01:18:42.246224 2673 kubelet.go:387] "Adding apiserver pod source" Jan 20 01:18:42.247771 kubelet[2673]: I0120 01:18:42.246593 2673 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:18:42.282006 kubelet[2673]: E0120 01:18:42.278806 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:18:42.282006 kubelet[2673]: E0120 01:18:42.281679 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:18:42.340616 kubelet[2673]: I0120 01:18:42.335891 2673 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 01:18:42.483930 kubelet[2673]: I0120 01:18:42.469291 2673 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 01:18:42.495009 kubelet[2673]: I0120 01:18:42.491657 2673 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 01:18:42.503245 kubelet[2673]: W0120 01:18:42.503099 2673 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 01:18:42.709502 kubelet[2673]: I0120 01:18:42.696286 2673 server.go:1262] "Started kubelet" Jan 20 01:18:42.769606 kubelet[2673]: I0120 01:18:42.750499 2673 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:18:42.770076 kubelet[2673]: I0120 01:18:42.770021 2673 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 20 01:18:42.838004 kubelet[2673]: I0120 01:18:42.837952 2673 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:18:42.868833 kubelet[2673]: I0120 01:18:42.868753 2673 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:18:42.892625 kubelet[2673]: I0120 01:18:42.871174 2673 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:18:42.892625 kubelet[2673]: I0120 01:18:42.880097 2673 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 01:18:42.975573 kubelet[2673]: I0120 01:18:42.969267 2673 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:18:43.091694 kubelet[2673]: E0120 01:18:43.059505 2673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4ba44d75fd6e default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,LastTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:18:43.167600 kubelet[2673]: I0120 01:18:43.096696 2673 server.go:310] "Adding debug handlers to kubelet server" Jan 20 01:18:43.167600 kubelet[2673]: I0120 01:18:43.165006 2673 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 01:18:43.167600 kubelet[2673]: I0120 01:18:43.166061 2673 reconciler.go:29] "Reconciler: start to sync state" Jan 20 01:18:43.219903 kubelet[2673]: E0120 01:18:43.216833 2673 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:43.219903 kubelet[2673]: E0120 01:18:43.219186 2673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" Jan 20 01:18:43.297778 kubelet[2673]: E0120 01:18:43.297716 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:18:43.342824 kubelet[2673]: E0120 01:18:43.323192 2673 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 
01:18:43.372692 kubelet[2673]: I0120 01:18:43.369227 2673 factory.go:223] Registration of the systemd container factory successfully Jan 20 01:18:43.372692 kubelet[2673]: I0120 01:18:43.369783 2673 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:18:43.381037 kubelet[2673]: E0120 01:18:43.381007 2673 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:18:43.478039 kubelet[2673]: E0120 01:18:43.477249 2673 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:43.506125 kubelet[2673]: E0120 01:18:43.505705 2673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms" Jan 20 01:18:43.569872 kubelet[2673]: E0120 01:18:43.569828 2673 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:18:43.591642 kubelet[2673]: E0120 01:18:43.591601 2673 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:43.595944 kubelet[2673]: I0120 01:18:43.595747 2673 factory.go:223] Registration of the containerd container factory successfully Jan 20 01:18:43.678597 kubelet[2673]: E0120 01:18:43.677215 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:18:43.694527 kubelet[2673]: E0120 01:18:43.693805 2673 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:43.780721 kubelet[2673]: E0120 01:18:43.775100 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:18:43.800721 kubelet[2673]: E0120 01:18:43.800113 2673 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:43.892564 kubelet[2673]: I0120 01:18:43.890037 2673 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:18:43.892564 kubelet[2673]: I0120 01:18:43.890135 2673 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:18:43.892564 kubelet[2673]: I0120 01:18:43.890165 2673 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:18:43.906540 kubelet[2673]: E0120 01:18:43.905558 2673 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:43.906540 kubelet[2673]: E0120 01:18:43.906201 2673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" Jan 20 01:18:43.940813 kubelet[2673]: I0120 01:18:43.940770 2673 policy_none.go:49] "None policy: Start" Jan 20 01:18:43.944984 
kubelet[2673]: I0120 01:18:43.944954 2673 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 01:18:43.945124 kubelet[2673]: I0120 01:18:43.945104 2673 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 01:18:43.987693 kubelet[2673]: I0120 01:18:43.987647 2673 policy_none.go:47] "Start" Jan 20 01:18:44.008098 kubelet[2673]: E0120 01:18:44.006148 2673 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:44.095799 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 01:18:44.179802 kubelet[2673]: E0120 01:18:44.178206 2673 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:44.276150 kubelet[2673]: I0120 01:18:44.274683 2673 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 20 01:18:44.278900 kubelet[2673]: E0120 01:18:44.278858 2673 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:44.323915 kubelet[2673]: I0120 01:18:44.315034 2673 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 20 01:18:44.323915 kubelet[2673]: I0120 01:18:44.315677 2673 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 01:18:44.390591 kubelet[2673]: I0120 01:18:44.387660 2673 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 01:18:44.390591 kubelet[2673]: E0120 01:18:44.390175 2673 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:18:44.468526 kubelet[2673]: E0120 01:18:44.458706 2673 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:44.525057 kubelet[2673]: E0120 01:18:44.515474 2673 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:18:44.525057 kubelet[2673]: E0120 01:18:44.524927 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:18:44.569536 kubelet[2673]: E0120 01:18:44.566500 2673 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:44.593602 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 01:18:44.632079 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 20 01:18:44.745164 kubelet[2673]: E0120 01:18:44.740500 2673 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:44.745164 kubelet[2673]: E0120 01:18:44.740144 2673 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:18:44.780771 kubelet[2673]: E0120 01:18:44.754105 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:18:44.804011 kubelet[2673]: E0120 01:18:44.802525 2673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s" Jan 20 01:18:44.842089 kubelet[2673]: E0120 01:18:44.841630 2673 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:44.847637 kubelet[2673]: E0120 01:18:44.842958 2673 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 01:18:44.847637 kubelet[2673]: I0120 01:18:44.843896 2673 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:18:44.847637 kubelet[2673]: I0120 01:18:44.843914 2673 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:18:44.922124 kubelet[2673]: E0120 01:18:44.904789 2673 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:18:44.922124 kubelet[2673]: E0120 01:18:44.905007 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:18:44.922124 kubelet[2673]: I0120 01:18:44.916038 2673 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:18:45.013293 kubelet[2673]: I0120 01:18:45.012661 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:18:45.013957 kubelet[2673]: E0120 01:18:45.013924 2673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 20 01:18:45.427788 kubelet[2673]: I0120 01:18:45.421038 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37bc278c1c70218bb9ba1b32f2e9b66e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"37bc278c1c70218bb9ba1b32f2e9b66e\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:18:45.462878 kubelet[2673]: I0120 01:18:45.422096 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37bc278c1c70218bb9ba1b32f2e9b66e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"37bc278c1c70218bb9ba1b32f2e9b66e\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:18:45.462878 kubelet[2673]: I0120 01:18:45.431655 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37bc278c1c70218bb9ba1b32f2e9b66e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"37bc278c1c70218bb9ba1b32f2e9b66e\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:18:45.462878 kubelet[2673]: E0120 01:18:45.449049 2673 
reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:18:45.473795 kubelet[2673]: E0120 01:18:45.473647 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:18:45.529774 kubelet[2673]: I0120 01:18:45.529733 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:18:45.539089 kubelet[2673]: I0120 01:18:45.539048 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:18:45.550537 kubelet[2673]: I0120 01:18:45.548115 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:18:45.550537 kubelet[2673]: I0120 01:18:45.549867 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:18:45.550537 kubelet[2673]: I0120 01:18:45.549899 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:18:45.561821 kubelet[2673]: E0120 01:18:45.548023 2673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 20 01:18:45.562014 kubelet[2673]: I0120 01:18:45.561987 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:18:45.718035 systemd[1]: Created slice kubepods-burstable-pod37bc278c1c70218bb9ba1b32f2e9b66e.slice - libcontainer container kubepods-burstable-pod37bc278c1c70218bb9ba1b32f2e9b66e.slice. 
Jan 20 01:18:45.769464 kubelet[2673]: I0120 01:18:45.768619 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 20 01:18:45.906817 kubelet[2673]: E0120 01:18:45.900078 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:45.988986 kubelet[2673]: E0120 01:18:45.978219 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:46.044484 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 20 01:18:46.047940 containerd[1566]: time="2026-01-20T01:18:46.047836764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:37bc278c1c70218bb9ba1b32f2e9b66e,Namespace:kube-system,Attempt:0,}" Jan 20 01:18:46.083086 kubelet[2673]: I0120 01:18:46.082902 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:18:46.099862 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
Jan 20 01:18:46.102680 kubelet[2673]: E0120 01:18:46.101080 2673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 20 01:18:46.167488 kubelet[2673]: E0120 01:18:46.159055 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:46.207662 kubelet[2673]: E0120 01:18:46.205682 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:46.215968 kubelet[2673]: E0120 01:18:46.214081 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:46.233950 containerd[1566]: time="2026-01-20T01:18:46.226734466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 20 01:18:46.245955 kubelet[2673]: E0120 01:18:46.243841 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:46.252217 containerd[1566]: time="2026-01-20T01:18:46.252071523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 20 01:18:46.443863 kubelet[2673]: E0120 01:18:46.432881 2673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="3.2s" Jan 20 01:18:46.604672 kubelet[2673]: 
E0120 01:18:46.591699 2673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4ba44d75fd6e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,LastTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:18:46.962806 update_engine[1556]: I20260120 01:18:46.961452 1556 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:18:46.962806 update_engine[1556]: I20260120 01:18:46.961699 1556 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:18:46.962806 update_engine[1556]: I20260120 01:18:46.962738 1556 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 01:18:46.996895 update_engine[1556]: E20260120 01:18:46.989720 1556 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:18:46.996895 update_engine[1556]: I20260120 01:18:46.993814 1556 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 20 01:18:47.005800 kubelet[2673]: E0120 01:18:47.005744 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:18:47.017815 kubelet[2673]: I0120 01:18:47.017779 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:18:47.043908 kubelet[2673]: E0120 01:18:47.041751 2673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 20 01:18:47.657601 kubelet[2673]: E0120 01:18:47.652857 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:18:47.678625 kubelet[2673]: E0120 01:18:47.677718 2673 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:18:48.410569 kubelet[2673]: E0120 01:18:48.402872 2673 reflector.go:205] "Failed to watch" 
err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:18:48.847562 kubelet[2673]: I0120 01:18:48.840650 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:18:48.901089 kubelet[2673]: E0120 01:18:48.884671 2673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 20 01:18:49.296125 kubelet[2673]: E0120 01:18:49.296037 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:18:49.655774 kubelet[2673]: E0120 01:18:49.643264 2673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="6.4s" Jan 20 01:18:50.388099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080774222.mount: Deactivated successfully. 
Jan 20 01:18:50.534936 containerd[1566]: time="2026-01-20T01:18:50.524733925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:18:50.566031 containerd[1566]: time="2026-01-20T01:18:50.565661606Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 20 01:18:50.610539 containerd[1566]: time="2026-01-20T01:18:50.606516858Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:18:50.678967 containerd[1566]: time="2026-01-20T01:18:50.647048755Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:18:50.782201 containerd[1566]: time="2026-01-20T01:18:50.781641412Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:18:50.808459 containerd[1566]: time="2026-01-20T01:18:50.787607625Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 01:18:50.820660 containerd[1566]: time="2026-01-20T01:18:50.820596577Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 01:18:50.843431 containerd[1566]: time="2026-01-20T01:18:50.842777173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 
01:18:50.871810 containerd[1566]: time="2026-01-20T01:18:50.871744609Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 4.748903553s" Jan 20 01:18:50.926208 containerd[1566]: time="2026-01-20T01:18:50.922552917Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 4.664569068s" Jan 20 01:18:51.005145 containerd[1566]: time="2026-01-20T01:18:51.003291637Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 4.710382391s" Jan 20 01:18:51.429736 containerd[1566]: time="2026-01-20T01:18:51.428089913Z" level=info msg="connecting to shim 3e99c7aa4aa9d0e067a82a258af6133e67ddeff32530aae29e81c04c9bbdda20" address="unix:///run/containerd/s/b300e0b873644f498b396527866ecbf526e8b806b595a74b8bd537fbc88b091f" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:18:52.020605 kubelet[2673]: E0120 01:18:52.020035 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:18:52.038893 kubelet[2673]: E0120 
01:18:52.036509 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:18:52.248051 containerd[1566]: time="2026-01-20T01:18:52.241716693Z" level=info msg="connecting to shim 77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd" address="unix:///run/containerd/s/67df6f9f643dff0e5a2de5d8ebba56686cc2aa08237c90040202dd99a7cd6a97" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:18:52.348464 kubelet[2673]: I0120 01:18:52.329946 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:18:52.348464 kubelet[2673]: E0120 01:18:52.331163 2673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 20 01:18:52.433489 containerd[1566]: time="2026-01-20T01:18:52.433157382Z" level=info msg="connecting to shim 69e23e24d15b608b00592e42dbacea249a423118b3a43cfff8cc36e09104aa2b" address="unix:///run/containerd/s/d266525a834128aa77a5c9a9d930cda44b77905abdc82b84d4d46838995006fe" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:18:53.526601 kubelet[2673]: E0120 01:18:53.520561 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:18:54.893911 systemd[1]: Started cri-containerd-77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd.scope - libcontainer container 77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd. 
Jan 20 01:18:54.932495 kubelet[2673]: E0120 01:18:54.930237 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:18:55.281289 systemd[1]: Started cri-containerd-3e99c7aa4aa9d0e067a82a258af6133e67ddeff32530aae29e81c04c9bbdda20.scope - libcontainer container 3e99c7aa4aa9d0e067a82a258af6133e67ddeff32530aae29e81c04c9bbdda20. Jan 20 01:18:55.444228 systemd[1]: Started cri-containerd-69e23e24d15b608b00592e42dbacea249a423118b3a43cfff8cc36e09104aa2b.scope - libcontainer container 69e23e24d15b608b00592e42dbacea249a423118b3a43cfff8cc36e09104aa2b. Jan 20 01:18:56.176153 kubelet[2673]: E0120 01:18:56.169925 2673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="7s" Jan 20 01:18:56.264478 kubelet[2673]: E0120 01:18:56.260993 2673 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:18:56.653005 kubelet[2673]: E0120 01:18:56.640290 2673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4ba44d75fd6e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 
01:18:42.69606027 +0000 UTC m=+9.035014206,LastTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:18:56.977175 update_engine[1556]: I20260120 01:18:56.972242 1556 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:18:56.977175 update_engine[1556]: I20260120 01:18:56.975930 1556 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:18:57.001954 update_engine[1556]: I20260120 01:18:56.979053 1556 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 01:18:57.017887 update_engine[1556]: E20260120 01:18:57.015619 1556 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:18:57.017887 update_engine[1556]: I20260120 01:18:57.015825 1556 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 01:18:57.017887 update_engine[1556]: I20260120 01:18:57.015846 1556 omaha_request_action.cc:617] Omaha request response: Jan 20 01:18:57.017887 update_engine[1556]: E20260120 01:18:57.016022 1556 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 20 01:18:57.028792 update_engine[1556]: I20260120 01:18:57.019104 1556 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 20 01:18:57.028792 update_engine[1556]: I20260120 01:18:57.019125 1556 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 01:18:57.028792 update_engine[1556]: I20260120 01:18:57.019136 1556 update_attempter.cc:306] Processing Done. Jan 20 01:18:57.028792 update_engine[1556]: E20260120 01:18:57.020795 1556 update_attempter.cc:619] Update failed. 
Jan 20 01:18:57.028792 update_engine[1556]: I20260120 01:18:57.020913 1556 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 20 01:18:57.028792 update_engine[1556]: I20260120 01:18:57.020933 1556 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 20 01:18:57.028792 update_engine[1556]: I20260120 01:18:57.020944 1556 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 20 01:18:57.028792 update_engine[1556]: I20260120 01:18:57.021118 1556 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 01:18:57.028792 update_engine[1556]: I20260120 01:18:57.021253 1556 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 01:18:57.028792 update_engine[1556]: I20260120 01:18:57.021268 1556 omaha_request_action.cc:272] Request: Jan 20 01:18:57.028792 update_engine[1556]: Jan 20 01:18:57.028792 update_engine[1556]: Jan 20 01:18:57.028792 update_engine[1556]: Jan 20 01:18:57.028792 update_engine[1556]: Jan 20 01:18:57.028792 update_engine[1556]: Jan 20 01:18:57.028792 update_engine[1556]: Jan 20 01:18:57.028792 update_engine[1556]: I20260120 01:18:57.021279 1556 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:18:57.039866 update_engine[1556]: I20260120 01:18:57.033135 1556 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:18:57.056747 update_engine[1556]: I20260120 01:18:57.054131 1556 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 01:18:57.100178 update_engine[1556]: E20260120 01:18:57.093873 1556 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:18:57.100178 update_engine[1556]: I20260120 01:18:57.094020 1556 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 01:18:57.100178 update_engine[1556]: I20260120 01:18:57.094043 1556 omaha_request_action.cc:617] Omaha request response: Jan 20 01:18:57.100178 update_engine[1556]: I20260120 01:18:57.094056 1556 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 01:18:57.100178 update_engine[1556]: I20260120 01:18:57.094067 1556 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 01:18:57.100178 update_engine[1556]: I20260120 01:18:57.094076 1556 update_attempter.cc:306] Processing Done. Jan 20 01:18:57.100178 update_engine[1556]: I20260120 01:18:57.094090 1556 update_attempter.cc:310] Error event sent. 
Jan 20 01:18:57.110900 locksmithd[1601]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 20 01:18:57.115897 update_engine[1556]: I20260120 01:18:57.094210 1556 update_check_scheduler.cc:74] Next update check in 41m41s Jan 20 01:18:57.148145 locksmithd[1601]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 20 01:18:57.620097 containerd[1566]: time="2026-01-20T01:18:57.566183724Z" level=error msg="get state for 77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd" error="context deadline exceeded" Jan 20 01:18:57.660894 containerd[1566]: time="2026-01-20T01:18:57.645814654Z" level=warning msg="unknown status" status=0 Jan 20 01:18:58.598757 containerd[1566]: time="2026-01-20T01:18:58.597974077Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:18:59.140679 kubelet[2673]: I0120 01:18:59.136958 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:18:59.172276 kubelet[2673]: E0120 01:18:59.172196 2673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 20 01:18:59.650968 containerd[1566]: time="2026-01-20T01:18:59.645291353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:37bc278c1c70218bb9ba1b32f2e9b66e,Namespace:kube-system,Attempt:0,} returns sandbox id \"69e23e24d15b608b00592e42dbacea249a423118b3a43cfff8cc36e09104aa2b\"" Jan 20 01:18:59.673723 kubelet[2673]: E0120 01:18:59.665126 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:59.714122 containerd[1566]: time="2026-01-20T01:18:59.705075819Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e99c7aa4aa9d0e067a82a258af6133e67ddeff32530aae29e81c04c9bbdda20\"" Jan 20 01:18:59.718887 kubelet[2673]: E0120 01:18:59.718854 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:59.795135 containerd[1566]: time="2026-01-20T01:18:59.795078030Z" level=info msg="CreateContainer within sandbox \"69e23e24d15b608b00592e42dbacea249a423118b3a43cfff8cc36e09104aa2b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 01:18:59.846675 containerd[1566]: time="2026-01-20T01:18:59.845937025Z" level=info msg="CreateContainer within sandbox \"3e99c7aa4aa9d0e067a82a258af6133e67ddeff32530aae29e81c04c9bbdda20\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 01:19:00.606270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2837981118.mount: Deactivated successfully. 
Jan 20 01:19:00.720987 containerd[1566]: time="2026-01-20T01:19:00.720185056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd\"" Jan 20 01:19:00.728844 containerd[1566]: time="2026-01-20T01:19:00.725178533Z" level=info msg="Container 8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:19:00.744622 kubelet[2673]: E0120 01:19:00.737101 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:19:00.764198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3542811816.mount: Deactivated successfully. Jan 20 01:19:00.792716 containerd[1566]: time="2026-01-20T01:19:00.788774000Z" level=info msg="Container 8c0175d775470c23061d210dda5c9ee8998e079e12b5bd122d68907471a03bf3: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:19:00.950078 containerd[1566]: time="2026-01-20T01:19:00.946087361Z" level=info msg="CreateContainer within sandbox \"77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 01:19:01.292706 containerd[1566]: time="2026-01-20T01:19:01.287970129Z" level=info msg="CreateContainer within sandbox \"3e99c7aa4aa9d0e067a82a258af6133e67ddeff32530aae29e81c04c9bbdda20\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073\"" Jan 20 01:19:01.292706 containerd[1566]: time="2026-01-20T01:19:01.289681556Z" level=info msg="StartContainer for \"8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073\"" Jan 20 01:19:01.312204 containerd[1566]: time="2026-01-20T01:19:01.304188208Z" 
level=info msg="CreateContainer within sandbox \"69e23e24d15b608b00592e42dbacea249a423118b3a43cfff8cc36e09104aa2b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8c0175d775470c23061d210dda5c9ee8998e079e12b5bd122d68907471a03bf3\"" Jan 20 01:19:01.312204 containerd[1566]: time="2026-01-20T01:19:01.309865107Z" level=info msg="connecting to shim 8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073" address="unix:///run/containerd/s/b300e0b873644f498b396527866ecbf526e8b806b595a74b8bd537fbc88b091f" protocol=ttrpc version=3 Jan 20 01:19:01.336215 containerd[1566]: time="2026-01-20T01:19:01.334173389Z" level=info msg="StartContainer for \"8c0175d775470c23061d210dda5c9ee8998e079e12b5bd122d68907471a03bf3\"" Jan 20 01:19:01.343588 containerd[1566]: time="2026-01-20T01:19:01.341120647Z" level=info msg="connecting to shim 8c0175d775470c23061d210dda5c9ee8998e079e12b5bd122d68907471a03bf3" address="unix:///run/containerd/s/d266525a834128aa77a5c9a9d930cda44b77905abdc82b84d4d46838995006fe" protocol=ttrpc version=3 Jan 20 01:19:01.357093 kubelet[2673]: E0120 01:19:01.349099 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:19:01.361581 containerd[1566]: time="2026-01-20T01:19:01.359814935Z" level=info msg="Container 2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:19:01.455735 containerd[1566]: time="2026-01-20T01:19:01.454681560Z" level=info msg="CreateContainer within sandbox \"77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80\"" Jan 20 01:19:01.462902 containerd[1566]: time="2026-01-20T01:19:01.457862388Z" level=info msg="StartContainer for \"2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80\"" Jan 20 01:19:01.500900 containerd[1566]: time="2026-01-20T01:19:01.497674977Z" level=info msg="connecting to shim 2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80" address="unix:///run/containerd/s/67df6f9f643dff0e5a2de5d8ebba56686cc2aa08237c90040202dd99a7cd6a97" protocol=ttrpc version=3 Jan 20 01:19:01.640055 systemd[1]: Started cri-containerd-8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073.scope - libcontainer container 8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073. Jan 20 01:19:02.102038 kubelet[2673]: E0120 01:19:02.049949 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:19:02.102038 kubelet[2673]: E0120 01:19:02.087185 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:19:02.123781 kubelet[2673]: E0120 01:19:02.107199 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:19:02.282878 systemd[1]: 
Started cri-containerd-8c0175d775470c23061d210dda5c9ee8998e079e12b5bd122d68907471a03bf3.scope - libcontainer container 8c0175d775470c23061d210dda5c9ee8998e079e12b5bd122d68907471a03bf3. Jan 20 01:19:03.306997 kubelet[2673]: E0120 01:19:03.306834 2673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="7s" Jan 20 01:19:03.339995 systemd[1]: Started cri-containerd-2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80.scope - libcontainer container 2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80. Jan 20 01:19:04.305932 containerd[1566]: time="2026-01-20T01:19:04.303954141Z" level=error msg="get state for 8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073" error="context deadline exceeded" Jan 20 01:19:04.305932 containerd[1566]: time="2026-01-20T01:19:04.304795254Z" level=warning msg="unknown status" status=0 Jan 20 01:19:04.939006 kubelet[2673]: E0120 01:19:04.938633 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:19:05.231189 containerd[1566]: time="2026-01-20T01:19:05.195791394Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:19:06.698639 kubelet[2673]: E0120 01:19:06.697716 2673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4ba44d75fd6e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,LastTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:19:06.856617 kubelet[2673]: I0120 01:19:06.844821 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:19:06.898594 kubelet[2673]: E0120 01:19:06.898531 2673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 20 01:19:06.944804 containerd[1566]: time="2026-01-20T01:19:06.944751980Z" level=info msg="StartContainer for \"8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073\" returns successfully" Jan 20 01:19:07.006276 containerd[1566]: time="2026-01-20T01:19:07.000556348Z" level=info msg="StartContainer for \"2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80\" returns successfully" Jan 20 01:19:07.239810 containerd[1566]: time="2026-01-20T01:19:07.239736210Z" level=info msg="StartContainer for \"8c0175d775470c23061d210dda5c9ee8998e079e12b5bd122d68907471a03bf3\" returns successfully" Jan 20 01:19:08.219694 kubelet[2673]: E0120 01:19:08.210809 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:19:08.219694 kubelet[2673]: E0120 01:19:08.215094 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:19:09.364145 kubelet[2673]: E0120 01:19:09.344616 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" 
not found" node="localhost" Jan 20 01:19:09.364145 kubelet[2673]: E0120 01:19:09.363807 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:19:09.472600 kubelet[2673]: E0120 01:19:09.376754 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:19:09.472600 kubelet[2673]: E0120 01:19:09.377685 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:19:09.536290 kubelet[2673]: E0120 01:19:09.511007 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:19:09.564184 kubelet[2673]: E0120 01:19:09.555838 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:19:10.365235 kubelet[2673]: E0120 01:19:10.344648 2673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="7s" Jan 20 01:19:10.447287 kubelet[2673]: E0120 01:19:10.447184 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:19:10.456746 kubelet[2673]: E0120 01:19:10.455248 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:19:10.484622 kubelet[2673]: E0120 01:19:10.480504 
2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:19:10.490119 kubelet[2673]: E0120 01:19:10.480795 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:19:11.571177 kubelet[2673]: E0120 01:19:11.564958 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:19:11.571177 kubelet[2673]: E0120 01:19:11.565545 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:19:11.637242 kubelet[2673]: E0120 01:19:11.630618 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:19:11.731214 kubelet[2673]: E0120 01:19:11.731166 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:19:13.988824 kubelet[2673]: I0120 01:19:13.985074 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:19:14.965997 kubelet[2673]: E0120 01:19:14.965124 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:19:16.793015 kubelet[2673]: E0120 01:19:16.792864 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:19:16.832833 kubelet[2673]: E0120 01:19:16.829502 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:19:18.819258 kubelet[2673]: E0120 01:19:18.819018 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:19:18.840160 kubelet[2673]: E0120 01:19:18.835821 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:19:18.981880 kubelet[2673]: E0120 01:19:18.981559 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:19:19.001020 kubelet[2673]: E0120 01:19:19.000978 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:19:22.386934 kubelet[2673]: E0120 01:19:22.379262 2673 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:19:22.386934 kubelet[2673]: E0120 01:19:22.383595 2673 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:19:27.814569 kubelet[2673]: E0120 01:19:27.814244 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:19:28.473030 kubelet[2673]: E0120 01:19:28.472732 2673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Jan 20 01:19:29.707884 kubelet[2673]: E0120 01:19:29.694224 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:19:31.017864 kubelet[2673]: E0120 01:19:28.590026 2673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188c4ba44d75fd6e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,LastTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:19:35.977267 kubelet[2673]: E0120 01:19:35.976919 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:19:36.027008 kubelet[2673]: E0120 01:19:36.000935 2673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": net/http: TLS 
handshake timeout" node="localhost" Jan 20 01:19:36.137094 kubelet[2673]: E0120 01:19:36.123671 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:19:38.544817 kubelet[2673]: E0120 01:19:38.537624 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:19:41.109224 kubelet[2673]: E0120 01:19:41.108610 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:19:43.613240 kubelet[2673]: I0120 01:19:43.592924 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:19:46.001714 kubelet[2673]: E0120 01:19:46.001659 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:19:46.010663 kubelet[2673]: E0120 01:19:46.008088 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:19:46.197008 kubelet[2673]: E0120 01:19:46.195124 2673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Jan 20 01:19:48.816814 kubelet[2673]: E0120 01:19:48.811136 2673 
eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:19:52.517917 kubelet[2673]: E0120 01:19:52.504976 2673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188c4ba44d75fd6e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,LastTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:19:53.779092 kubelet[2673]: E0120 01:19:53.750638 2673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:19:58.824053 kubelet[2673]: E0120 01:19:58.818102 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:20:04.354966 kubelet[2673]: E0120 01:20:04.330850 2673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Jan 20 01:20:04.490233 kubelet[2673]: I0120 01:20:04.490072 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:20:04.525667 kubelet[2673]: E0120 01:20:04.518160 2673 certificate_manager.go:596] "Failed while requesting a signed certificate from the 
control plane" err="cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:20:08.825031 kubelet[2673]: E0120 01:20:08.824891 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:20:14.344930 kubelet[2673]: E0120 01:20:14.338139 2673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188c4ba44d75fd6e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,LastTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:20:14.803789 kubelet[2673]: E0120 01:20:14.802756 2673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:20:18.816494 kubelet[2673]: E0120 01:20:18.811955 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:20:18.830874 kubelet[2673]: E0120 01:20:18.820428 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 
01:20:18.830874 kubelet[2673]: E0120 01:20:18.828626 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:20:20.788888 kubelet[2673]: E0120 01:20:20.785118 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:20:21.009042 kubelet[2673]: E0120 01:20:21.002670 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:20:23.739794 kubelet[2673]: E0120 01:20:23.709029 2673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Jan 20 01:20:24.815528 kubelet[2673]: I0120 01:20:24.803615 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:20:24.864230 kubelet[2673]: E0120 01:20:24.864094 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:20:25.644041 kubelet[2673]: E0120 01:20:25.573779 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:20:30.059031 kubelet[2673]: E0120 01:20:30.053082 2673 eviction_manager.go:292] "Eviction manager: 
failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:20:33.959648 kubelet[2673]: E0120 01:20:33.933791 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:20:34.599772 kubelet[2673]: E0120 01:20:34.590570 2673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188c4ba44d75fd6e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,LastTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:20:34.829653 kubelet[2673]: E0120 01:20:34.822987 2673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:20:39.925055 kubelet[2673]: E0120 01:20:39.917648 2673 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:20:41.102531 kubelet[2673]: E0120 
01:20:41.083791 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:20:41.168742 kubelet[2673]: E0120 01:20:41.141642 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:20:41.168742 kubelet[2673]: E0120 01:20:41.164026 2673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Jan 20 01:20:42.146715 kubelet[2673]: I0120 01:20:42.146025 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:20:51.618679 kubelet[2673]: E0120 01:20:51.611293 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:20:52.400750 kubelet[2673]: E0120 01:20:52.392486 2673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:20:55.418659 kubelet[2673]: E0120 01:20:55.414870 2673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188c4ba44d75fd6e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 
01:18:42.69606027 +0000 UTC m=+9.035014206,LastTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:20:58.449817 kubelet[2673]: E0120 01:20:58.439017 2673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Jan 20 01:21:02.587638 kubelet[2673]: E0120 01:21:02.563969 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:21:02.915676 kubelet[2673]: I0120 01:21:02.911889 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:21:08.919905 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Jan 20 01:21:09.196715 kubelet[2673]: E0120 01:21:09.192197 2673 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:21:10.334072 systemd-tmpfiles[2983]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 01:21:10.334211 systemd-tmpfiles[2983]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 01:21:10.374745 systemd-tmpfiles[2983]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 20 01:21:10.378454 systemd-tmpfiles[2983]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 01:21:10.380038 systemd-tmpfiles[2983]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 01:21:10.388217 systemd-tmpfiles[2983]: ACLs are not supported, ignoring. Jan 20 01:21:10.388806 systemd-tmpfiles[2983]: ACLs are not supported, ignoring. Jan 20 01:21:10.603816 systemd-tmpfiles[2983]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:21:10.637929 systemd-tmpfiles[2983]: Skipping /boot Jan 20 01:21:10.788944 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Jan 20 01:21:10.789989 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Jan 20 01:21:11.732258 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Jan 20 01:21:12.493042 kubelet[2673]: E0120 01:21:12.492699 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:21:12.528005 kubelet[2673]: E0120 01:21:12.522680 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:21:12.602066 kubelet[2673]: E0120 01:21:12.601987 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:21:13.913978 kubelet[2673]: E0120 01:21:13.913576 2673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:21:14.197514 kubelet[2673]: E0120 01:21:14.085122 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:21:19.804987 kubelet[2673]: E0120 01:21:19.802929 2673 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 01:21:20.702508 kubelet[2673]: E0120 01:21:20.692721 2673 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4ba44d75fd6e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,LastTimestamp:2026-01-20 01:18:42.69606027 +0000 UTC m=+9.035014206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:21:21.442477 kubelet[2673]: I0120 01:21:21.440622 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:21:21.487142 kubelet[2673]: E0120 01:21:21.485173 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:21:21.487142 kubelet[2673]: E0120 01:21:21.486063 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:21:22.137651 kubelet[2673]: E0120 01:21:22.123161 2673 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{localhost.188c4ba476490bcc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:43.380980684 +0000 UTC m=+9.719934620,LastTimestamp:2026-01-20 01:18:43.380980684 +0000 UTC m=+9.719934620,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:21:22.771544 kubelet[2673]: E0120 01:21:22.747041 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:21:23.382746 kubelet[2673]: I0120 01:21:23.382691 2673 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 01:21:23.383102 kubelet[2673]: E0120 01:21:23.383078 2673 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 01:21:23.448866 kubelet[2673]: E0120 01:21:23.357110 2673 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4ba4938b4d89 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:43.871862153 +0000 UTC m=+10.210816068,LastTimestamp:2026-01-20 01:18:43.871862153 +0000 UTC m=+10.210816068,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:21:28.163678 kubelet[2673]: E0120 01:21:28.149949 2673 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:21:28.322778 kubelet[2673]: E0120 01:21:28.311149 2673 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Jan 20 01:21:31.620504 kubelet[2673]: E0120 01:21:31.614791 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:21:31.629763 kubelet[2673]: E0120 01:21:31.625913 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:21:32.331771 kubelet[2673]: E0120 01:21:32.278813 2673 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:21:32.785133 kubelet[2673]: E0120 01:21:32.785079 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:21:37.445092 kubelet[2673]: E0120 01:21:37.441886 2673 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:21:38.220005 kubelet[2673]: E0120 01:21:38.219859 2673 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 01:21:42.484891 kubelet[2673]: E0120 01:21:42.483982 2673 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" Jan 20 01:21:42.801811 kubelet[2673]: E0120 01:21:42.791827 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:21:47.503229 kubelet[2673]: E0120 01:21:47.503151 2673 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:21:49.178567 kubelet[2673]: E0120 01:21:49.174182 2673 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 01:21:52.542677 kubelet[2673]: E0120 01:21:52.542227 2673 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:21:52.810554 kubelet[2673]: E0120 01:21:52.809699 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:21:57.515834 systemd[1]: cri-containerd-8c0175d775470c23061d210dda5c9ee8998e079e12b5bd122d68907471a03bf3.scope: Deactivated successfully. Jan 20 01:21:57.532811 systemd[1]: cri-containerd-8c0175d775470c23061d210dda5c9ee8998e079e12b5bd122d68907471a03bf3.scope: Consumed 45.115s CPU time, 202.8M memory peak. 
Jan 20 01:21:57.541661 kubelet[2673]: E0120 01:21:57.519225 2673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/kube-system/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-localhost.188c4bd1a9cb0014 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:37bc278c1c70218bb9ba1b32f2e9b66e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://10.0.0.15:6443/livez\": read tcp 10.0.0.15:38434->10.0.0.15:6443: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:21:57.5186637 +0000 UTC m=+203.857617626,LastTimestamp:2026-01-20 01:21:57.5186637 +0000 UTC m=+203.857617626,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:21:57.564201 kubelet[2673]: E0120 01:21:57.548975 2673 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:21:57.596785 containerd[1566]: time="2026-01-20T01:21:57.588543521Z" level=info msg="received container exit event container_id:\"8c0175d775470c23061d210dda5c9ee8998e079e12b5bd122d68907471a03bf3\" id:\"8c0175d775470c23061d210dda5c9ee8998e079e12b5bd122d68907471a03bf3\" pid:2903 exit_status:255 exited_at:{seconds:1768872117 nanos:578681099}" Jan 20 01:21:58.106104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c0175d775470c23061d210dda5c9ee8998e079e12b5bd122d68907471a03bf3-rootfs.mount: Deactivated successfully. 
Jan 20 01:21:58.117950 kubelet[2673]: E0120 01:21:58.113288 2673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/kube-system/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-localhost.188c4bd1a9cb0014 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:37bc278c1c70218bb9ba1b32f2e9b66e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://10.0.0.15:6443/livez\": read tcp 10.0.0.15:38434->10.0.0.15:6443: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:21:57.5186637 +0000 UTC m=+203.857617626,LastTimestamp:2026-01-20 01:21:57.5186637 +0000 UTC m=+203.857617626,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:21:58.430277 kubelet[2673]: E0120 01:21:58.430233 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:21:58.430907 kubelet[2673]: I0120 01:21:58.430884 2673 scope.go:117] "RemoveContainer" containerID="8c0175d775470c23061d210dda5c9ee8998e079e12b5bd122d68907471a03bf3" Jan 20 01:21:58.431079 kubelet[2673]: E0120 01:21:58.431060 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:21:58.451294 systemd[1]: cri-containerd-2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80.scope: Deactivated successfully. 
Jan 20 01:21:58.453189 systemd[1]: cri-containerd-2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80.scope: Consumed 8.373s CPU time, 22.5M memory peak. Jan 20 01:21:58.531698 containerd[1566]: time="2026-01-20T01:21:58.498026199Z" level=info msg="CreateContainer within sandbox \"69e23e24d15b608b00592e42dbacea249a423118b3a43cfff8cc36e09104aa2b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}" Jan 20 01:21:58.544119 containerd[1566]: time="2026-01-20T01:21:58.543862713Z" level=info msg="received container exit event container_id:\"2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80\" id:\"2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80\" pid:2912 exit_status:1 exited_at:{seconds:1768872118 nanos:496956535}" Jan 20 01:21:58.729248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2323593203.mount: Deactivated successfully. Jan 20 01:21:58.816175 containerd[1566]: time="2026-01-20T01:21:58.816109794Z" level=info msg="Container fbb65d8cec4da6d6200e238fc8cccb5cebeb9e88bd83f772a2d979f0cacd8e42: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:21:58.966141 containerd[1566]: time="2026-01-20T01:21:58.962943174Z" level=info msg="CreateContainer within sandbox \"69e23e24d15b608b00592e42dbacea249a423118b3a43cfff8cc36e09104aa2b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"fbb65d8cec4da6d6200e238fc8cccb5cebeb9e88bd83f772a2d979f0cacd8e42\"" Jan 20 01:21:58.966141 containerd[1566]: time="2026-01-20T01:21:58.963981350Z" level=info msg="StartContainer for \"fbb65d8cec4da6d6200e238fc8cccb5cebeb9e88bd83f772a2d979f0cacd8e42\"" Jan 20 01:21:58.975119 containerd[1566]: time="2026-01-20T01:21:58.975075080Z" level=info msg="connecting to shim fbb65d8cec4da6d6200e238fc8cccb5cebeb9e88bd83f772a2d979f0cacd8e42" address="unix:///run/containerd/s/d266525a834128aa77a5c9a9d930cda44b77905abdc82b84d4d46838995006fe" protocol=ttrpc version=3 Jan 20 01:21:59.404647 systemd[1]: Started 
cri-containerd-fbb65d8cec4da6d6200e238fc8cccb5cebeb9e88bd83f772a2d979f0cacd8e42.scope - libcontainer container fbb65d8cec4da6d6200e238fc8cccb5cebeb9e88bd83f772a2d979f0cacd8e42. Jan 20 01:21:59.714957 kubelet[2673]: E0120 01:21:59.685991 2673 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 01:21:59.774267 kubelet[2673]: E0120 01:21:59.724871 2673 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.15:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" Jan 20 01:21:59.774267 kubelet[2673]: E0120 01:21:59.725196 2673 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.15:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" Jan 20 01:21:59.735225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80-rootfs.mount: Deactivated successfully. 
Jan 20 01:21:59.793742 kubelet[2673]: E0120 01:21:59.791260 2673 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.15:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" Jan 20 01:21:59.797033 kubelet[2673]: E0120 01:21:59.796997 2673 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.15:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" Jan 20 01:21:59.816059 kubelet[2673]: E0120 01:21:59.797112 2673 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count" Jan 20 01:22:00.601798 kubelet[2673]: E0120 01:22:00.599264 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:22:00.601798 kubelet[2673]: I0120 01:22:00.599828 2673 scope.go:117] "RemoveContainer" containerID="2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80" Jan 20 01:22:00.601798 kubelet[2673]: E0120 01:22:00.599936 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:22:00.648042 containerd[1566]: time="2026-01-20T01:22:00.630810588Z" level=info msg="CreateContainer within sandbox \"77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 20 01:22:00.977737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1269380902.mount: Deactivated successfully. Jan 20 01:22:01.026661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount705324394.mount: Deactivated successfully. 
Jan 20 01:22:01.050282 containerd[1566]: time="2026-01-20T01:22:01.049824954Z" level=info msg="Container 171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:22:01.203614 containerd[1566]: time="2026-01-20T01:22:01.203140452Z" level=info msg="CreateContainer within sandbox \"77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4\"" Jan 20 01:22:01.215729 containerd[1566]: time="2026-01-20T01:22:01.211701269Z" level=info msg="StartContainer for \"fbb65d8cec4da6d6200e238fc8cccb5cebeb9e88bd83f772a2d979f0cacd8e42\" returns successfully" Jan 20 01:22:01.245230 containerd[1566]: time="2026-01-20T01:22:01.244113217Z" level=info msg="StartContainer for \"171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4\"" Jan 20 01:22:01.273203 containerd[1566]: time="2026-01-20T01:22:01.266933887Z" level=info msg="connecting to shim 171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4" address="unix:///run/containerd/s/67df6f9f643dff0e5a2de5d8ebba56686cc2aa08237c90040202dd99a7cd6a97" protocol=ttrpc version=3 Jan 20 01:22:01.809869 systemd[1]: Started cri-containerd-171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4.scope - libcontainer container 171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4. 
Jan 20 01:22:01.988033 kubelet[2673]: E0120 01:22:01.986928 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:22:01.988033 kubelet[2673]: E0120 01:22:01.987206 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:22:02.589267 kubelet[2673]: E0120 01:22:02.589094 2673 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:22:02.616850 containerd[1566]: time="2026-01-20T01:22:02.614030336Z" level=info msg="StartContainer for \"171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4\" returns successfully" Jan 20 01:22:02.818802 kubelet[2673]: E0120 01:22:02.816947 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:22:03.902948 kubelet[2673]: E0120 01:22:03.902252 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:22:03.934165 kubelet[2673]: E0120 01:22:03.906098 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:22:03.934165 kubelet[2673]: E0120 01:22:03.933101 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:22:04.027023 kubelet[2673]: E0120 01:22:04.019230 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 20 01:22:04.739901 kubelet[2673]: E0120 01:22:04.736823 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:22:04.739901 kubelet[2673]: E0120 01:22:04.737122 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:22:09.297456 kubelet[2673]: E0120 01:22:09.297150 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:22:09.297456 kubelet[2673]: E0120 01:22:09.305039 2673 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:22:09.312504 kubelet[2673]: E0120 01:22:09.307724 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:22:09.312504 kubelet[2673]: E0120 01:22:09.308140 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:22:09.313224 kubelet[2673]: E0120 01:22:09.312918 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:22:09.855235 kubelet[2673]: E0120 01:22:09.854232 2673 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 01:22:12.821707 kubelet[2673]: E0120 01:22:12.820495 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node 
\"localhost\" not found" Jan 20 01:22:14.087048 kubelet[2673]: E0120 01:22:14.077404 2673 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 01:22:14.321966 kubelet[2673]: E0120 01:22:14.313477 2673 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:22:17.130554 kubelet[2673]: E0120 01:22:17.130449 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:22:17.130554 kubelet[2673]: E0120 01:22:17.140387 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:22:19.375471 kubelet[2673]: E0120 01:22:19.374457 2673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/kube-system/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.188c4bd1a9cb0014 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:37bc278c1c70218bb9ba1b32f2e9b66e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://10.0.0.15:6443/livez\": read tcp 10.0.0.15:38434->10.0.0.15:6443: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:21:57.5186637 +0000 UTC m=+203.857617626,LastTimestamp:2026-01-20 01:21:57.5186637 +0000 UTC m=+203.857617626,Count:1,Type:Warning,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:22:19.439960 kubelet[2673]: E0120 01:22:19.439885 2673 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:22:19.865053 kubelet[2673]: E0120 01:22:19.863254 2673 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.15:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Jan 20 01:22:19.877632 kubelet[2673]: E0120 01:22:19.876538 2673 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:22:22.830668 kubelet[2673]: E0120 01:22:22.823156 2673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:22:24.540560 kubelet[2673]: E0120 01:22:24.540075 2673 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:22:27.336057 kubelet[2673]: E0120 01:22:27.335282 2673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:22:27.336057 kubelet[2673]: E0120 01:22:27.335876 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:22:27.818549 kubelet[2673]: E0120 01:22:27.817824 2673 kubelet.go:3215] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:22:27.818549 kubelet[2673]: E0120 01:22:27.818063 2673 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:22:28.111190 systemd[1]: Reload requested from client PID 3085 ('systemctl') (unit session-9.scope)... Jan 20 01:22:28.111208 systemd[1]: Reloading... Jan 20 01:22:29.391704 zram_generator::config[3134]: No configuration found. Jan 20 01:22:30.046078 kubelet[2673]: E0120 01:22:30.043624 2673 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:22:30.483210 kubelet[2673]: E0120 01:22:30.477206 2673 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 01:22:32.233703 systemd[1]: Reloading finished in 4121 ms. Jan 20 01:22:32.597125 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:22:32.691522 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 01:22:32.700144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:22:32.700232 systemd[1]: kubelet.service: Consumed 29.518s CPU time, 134.1M memory peak. Jan 20 01:22:32.728873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:22:37.924643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:22:38.148247 (kubelet)[3172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:22:39.844222 kubelet[3172]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jan 20 01:22:39.844222 kubelet[3172]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:22:39.868102 kubelet[3172]: I0120 01:22:39.867544 3172 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:22:40.105047 kubelet[3172]: I0120 01:22:40.102636 3172 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 20 01:22:40.105047 kubelet[3172]: I0120 01:22:40.102749 3172 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:22:40.105047 kubelet[3172]: I0120 01:22:40.103029 3172 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 20 01:22:40.105047 kubelet[3172]: I0120 01:22:40.103046 3172 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 01:22:40.115465 kubelet[3172]: I0120 01:22:40.115009 3172 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 01:22:40.148259 kubelet[3172]: I0120 01:22:40.144120 3172 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 01:22:40.199443 kubelet[3172]: I0120 01:22:40.192651 3172 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:22:40.285584 kubelet[3172]: I0120 01:22:40.278892 3172 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 01:22:40.445808 kubelet[3172]: I0120 01:22:40.437111 3172 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 20 01:22:40.445808 kubelet[3172]: I0120 01:22:40.438677 3172 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:22:40.479916 kubelet[3172]: I0120 01:22:40.465595 3172 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:22:40.479916 kubelet[3172]: I0120 01:22:40.477195 3172 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:22:40.479916 
kubelet[3172]: I0120 01:22:40.477216 3172 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 01:22:40.479916 kubelet[3172]: I0120 01:22:40.477689 3172 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 20 01:22:40.522186 kubelet[3172]: I0120 01:22:40.488901 3172 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:22:40.522186 kubelet[3172]: I0120 01:22:40.500642 3172 kubelet.go:475] "Attempting to sync node with API server" Jan 20 01:22:40.522186 kubelet[3172]: I0120 01:22:40.500672 3172 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:22:40.522186 kubelet[3172]: I0120 01:22:40.500708 3172 kubelet.go:387] "Adding apiserver pod source" Jan 20 01:22:40.522186 kubelet[3172]: I0120 01:22:40.500863 3172 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:22:40.530577 kubelet[3172]: I0120 01:22:40.530534 3172 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 01:22:40.542559 kubelet[3172]: I0120 01:22:40.532611 3172 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 01:22:40.542559 kubelet[3172]: I0120 01:22:40.532650 3172 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 01:22:40.538684 sudo[3189]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 20 01:22:40.539681 sudo[3189]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 20 01:22:40.912658 kubelet[3172]: I0120 01:22:40.910439 3172 server.go:1262] "Started kubelet" Jan 20 01:22:40.959650 kubelet[3172]: I0120 01:22:40.946115 3172 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 
01:22:40.959650 kubelet[3172]: I0120 01:22:40.950142 3172 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:22:40.959650 kubelet[3172]: I0120 01:22:40.950234 3172 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 20 01:22:40.959650 kubelet[3172]: I0120 01:22:40.950839 3172 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:22:41.017780 kubelet[3172]: I0120 01:22:40.986291 3172 server.go:310] "Adding debug handlers to kubelet server" Jan 20 01:22:41.017780 kubelet[3172]: I0120 01:22:41.004843 3172 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:22:41.017780 kubelet[3172]: I0120 01:22:41.006223 3172 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:22:41.138294 kubelet[3172]: I0120 01:22:41.085899 3172 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 01:22:41.138294 kubelet[3172]: I0120 01:22:41.099566 3172 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 01:22:41.138294 kubelet[3172]: I0120 01:22:41.114146 3172 reconciler.go:29] "Reconciler: start to sync state" Jan 20 01:22:41.270629 kubelet[3172]: I0120 01:22:41.270583 3172 factory.go:223] Registration of the systemd container factory successfully Jan 20 01:22:41.280501 kubelet[3172]: I0120 01:22:41.280289 3172 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:22:41.581715 kubelet[3172]: I0120 01:22:41.569921 3172 apiserver.go:52] "Watching apiserver" Jan 20 01:22:41.593518 kubelet[3172]: W0120 01:22:41.583037 3172 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: 
"/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "error reading server preface: read unix @->/run/containerd/containerd.sock: use of closed network connection" Jan 20 01:22:41.931511 kubelet[3172]: I0120 01:22:41.930786 3172 factory.go:223] Registration of the containerd container factory successfully Jan 20 01:22:42.066205 kubelet[3172]: E0120 01:22:42.058915 3172 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:22:42.525449 kubelet[3172]: I0120 01:22:42.525231 3172 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 20 01:22:42.559519 kubelet[3172]: I0120 01:22:42.558572 3172 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 20 01:22:42.559519 kubelet[3172]: I0120 01:22:42.558627 3172 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 01:22:42.559519 kubelet[3172]: I0120 01:22:42.558672 3172 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 01:22:42.559519 kubelet[3172]: E0120 01:22:42.558763 3172 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:22:42.664607 kubelet[3172]: E0120 01:22:42.663787 3172 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:22:42.901733 kubelet[3172]: E0120 01:22:42.898519 3172 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:22:43.433456 kubelet[3172]: E0120 01:22:43.400959 3172 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:22:44.338838 kubelet[3172]: E0120 01:22:44.242690 3172 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:22:45.945277 kubelet[3172]: E0120 01:22:45.939672 3172 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:22:45.995111 containerd[1566]: time="2026-01-20T01:22:45.990803843Z" level=error msg="get state for fbb65d8cec4da6d6200e238fc8cccb5cebeb9e88bd83f772a2d979f0cacd8e42" error="context deadline exceeded" Jan 20 01:22:46.020823 containerd[1566]: time="2026-01-20T01:22:46.002990049Z" level=warning msg="unknown status" status=0 Jan 20 01:22:46.047212 containerd[1566]: time="2026-01-20T01:22:46.032700292Z" level=error msg="ttrpc: received message on inactive stream" stream=19 Jan 20 01:22:46.833942 kubelet[3172]: I0120 01:22:46.833760 3172 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 
01:22:46.833942 kubelet[3172]: I0120 01:22:46.843530 3172 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:22:46.833942 kubelet[3172]: I0120 01:22:46.843656 3172 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:22:46.868006 kubelet[3172]: I0120 01:22:46.861593 3172 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 01:22:46.868006 kubelet[3172]: I0120 01:22:46.861617 3172 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 01:22:46.868006 kubelet[3172]: I0120 01:22:46.861732 3172 policy_none.go:49] "None policy: Start" Jan 20 01:22:46.868006 kubelet[3172]: I0120 01:22:46.861750 3172 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 01:22:46.868006 kubelet[3172]: I0120 01:22:46.861769 3172 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 01:22:46.868006 kubelet[3172]: I0120 01:22:46.861926 3172 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 20 01:22:46.868006 kubelet[3172]: I0120 01:22:46.861939 3172 policy_none.go:47] "Start" Jan 20 01:22:46.966001 kubelet[3172]: E0120 01:22:46.952816 3172 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 01:22:46.981905 kubelet[3172]: I0120 01:22:46.972577 3172 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:22:46.981905 kubelet[3172]: I0120 01:22:46.972602 3172 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:22:47.020909 kubelet[3172]: I0120 01:22:47.010877 3172 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:22:47.316948 kubelet[3172]: E0120 01:22:47.312703 3172 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:22:52.095453 kubelet[3172]: I0120 01:22:52.089818 3172 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 01:22:52.116028 kubelet[3172]: I0120 01:22:52.104038 3172 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 01:22:52.136638 kubelet[3172]: I0120 01:22:52.118067 3172 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 01:22:52.814901 kubelet[3172]: I0120 01:22:52.235068 3172 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 01:22:52.814901 kubelet[3172]: I0120 01:22:52.298964 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37bc278c1c70218bb9ba1b32f2e9b66e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"37bc278c1c70218bb9ba1b32f2e9b66e\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:22:52.814901 kubelet[3172]: I0120 01:22:52.299027 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37bc278c1c70218bb9ba1b32f2e9b66e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"37bc278c1c70218bb9ba1b32f2e9b66e\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:22:52.814901 kubelet[3172]: I0120 01:22:52.299062 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:22:52.814901 kubelet[3172]: I0120 01:22:52.299201 3172 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 20 01:22:52.814901 kubelet[3172]: I0120 01:22:52.299240 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37bc278c1c70218bb9ba1b32f2e9b66e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"37bc278c1c70218bb9ba1b32f2e9b66e\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:22:55.918895 kubelet[3172]: I0120 01:22:52.299269 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:22:55.918895 kubelet[3172]: I0120 01:22:52.299291 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:22:55.918895 kubelet[3172]: I0120 01:22:52.299501 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:22:55.918895 kubelet[3172]: I0120 01:22:52.299615 3172 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:22:57.492811 sudo[3189]: pam_unix(sudo:session): session closed for user root Jan 20 01:22:58.590449 kubelet[3172]: E0120 01:22:58.589658 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:22:58.594769 kubelet[3172]: E0120 01:22:58.594734 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:22:58.611611 kubelet[3172]: E0120 01:22:58.611004 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:22:58.613526 kubelet[3172]: I0120 01:22:58.613491 3172 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:22:59.198643 kubelet[3172]: I0120 01:22:59.193096 3172 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 01:22:59.198643 kubelet[3172]: I0120 01:22:59.193540 3172 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 01:22:59.198643 kubelet[3172]: E0120 01:22:59.193941 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.16s" Jan 20 01:23:00.745876 kubelet[3172]: E0120 01:23:00.745503 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 20 01:23:00.790493 kubelet[3172]: E0120 01:23:00.779228 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:23:00.813610 kubelet[3172]: E0120 01:23:00.813433 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:23:01.798152 kubelet[3172]: E0120 01:23:01.794976 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:23:01.817150 kubelet[3172]: E0120 01:23:01.799153 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:23:03.371953 kubelet[3172]: I0120 01:23:03.294926 3172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.294907177 podStartE2EDuration="7.294907177s" podCreationTimestamp="2026-01-20 01:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:23:01.867540379 +0000 UTC m=+23.565743960" watchObservedRunningTime="2026-01-20 01:23:03.294907177 +0000 UTC m=+24.993110758" Jan 20 01:23:03.426107 kubelet[3172]: I0120 01:23:03.425557 3172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=8.425535688 podStartE2EDuration="8.425535688s" podCreationTimestamp="2026-01-20 01:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:23:03.311123702 +0000 UTC 
m=+25.009327283" watchObservedRunningTime="2026-01-20 01:23:03.425535688 +0000 UTC m=+25.123739269" Jan 20 01:23:03.431660 kubelet[3172]: I0120 01:23:03.430920 3172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.430900116 podStartE2EDuration="7.430900116s" podCreationTimestamp="2026-01-20 01:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:23:03.426858495 +0000 UTC m=+25.125062066" watchObservedRunningTime="2026-01-20 01:23:03.430900116 +0000 UTC m=+25.129103708" Jan 20 01:23:08.907240 kubelet[3172]: E0120 01:23:08.907137 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:23:08.913690 kubelet[3172]: E0120 01:23:08.911653 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:23:08.977604 kubelet[3172]: E0120 01:23:08.961208 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:23:11.616892 kubelet[3172]: E0120 01:23:11.616705 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.043s" Jan 20 01:23:13.832902 kubelet[3172]: E0120 01:23:13.831680 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:23:18.182838 kubelet[3172]: I0120 01:23:18.180922 3172 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 01:23:18.194240 kubelet[3172]: 
I0120 01:23:18.186021 3172 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 01:23:18.194458 containerd[1566]: time="2026-01-20T01:23:18.185774752Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 01:23:25.734472 kubelet[3172]: I0120 01:23:25.716066 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de7db6e2-fcea-476f-b1ad-00e102084688-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-x9qtk\" (UID: \"de7db6e2-fcea-476f-b1ad-00e102084688\") " pod="kube-system/cilium-operator-6f9c7c5859-x9qtk" Jan 20 01:23:25.734472 kubelet[3172]: I0120 01:23:25.716145 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp2wq\" (UniqueName: \"kubernetes.io/projected/de7db6e2-fcea-476f-b1ad-00e102084688-kube-api-access-zp2wq\") pod \"cilium-operator-6f9c7c5859-x9qtk\" (UID: \"de7db6e2-fcea-476f-b1ad-00e102084688\") " pod="kube-system/cilium-operator-6f9c7c5859-x9qtk" Jan 20 01:23:25.959245 kubelet[3172]: I0120 01:23:25.945288 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-cilium-run\") pod \"cilium-bchfb\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") " pod="kube-system/cilium-bchfb" Jan 20 01:23:25.959245 kubelet[3172]: I0120 01:23:25.945625 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-bpf-maps\") pod \"cilium-bchfb\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") " pod="kube-system/cilium-bchfb" Jan 20 01:23:25.959245 kubelet[3172]: I0120 01:23:25.945651 3172 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-etc-cni-netd\") pod \"cilium-bchfb\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") " pod="kube-system/cilium-bchfb" Jan 20 01:23:25.959245 kubelet[3172]: I0120 01:23:25.946910 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-xtables-lock\") pod \"cilium-bchfb\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") " pod="kube-system/cilium-bchfb" Jan 20 01:23:25.959245 kubelet[3172]: I0120 01:23:25.946945 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a58b624c-b508-4b98-9513-c7a4eae38f71-hubble-tls\") pod \"cilium-bchfb\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") " pod="kube-system/cilium-bchfb" Jan 20 01:23:25.959245 kubelet[3172]: I0120 01:23:25.946971 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-cni-path\") pod \"cilium-bchfb\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") " pod="kube-system/cilium-bchfb" Jan 20 01:23:25.959982 kubelet[3172]: I0120 01:23:25.946995 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a58b624c-b508-4b98-9513-c7a4eae38f71-clustermesh-secrets\") pod \"cilium-bchfb\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") " pod="kube-system/cilium-bchfb" Jan 20 01:23:25.959982 kubelet[3172]: I0120 01:23:25.947016 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/a58b624c-b508-4b98-9513-c7a4eae38f71-cilium-config-path\") pod \"cilium-bchfb\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") " pod="kube-system/cilium-bchfb" Jan 20 01:23:25.959982 kubelet[3172]: I0120 01:23:25.947039 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-cilium-cgroup\") pod \"cilium-bchfb\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") " pod="kube-system/cilium-bchfb" Jan 20 01:23:25.959982 kubelet[3172]: I0120 01:23:25.947058 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-lib-modules\") pod \"cilium-bchfb\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") " pod="kube-system/cilium-bchfb" Jan 20 01:23:25.959982 kubelet[3172]: I0120 01:23:25.947082 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-host-proc-sys-net\") pod \"cilium-bchfb\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") " pod="kube-system/cilium-bchfb" Jan 20 01:23:25.959982 kubelet[3172]: I0120 01:23:25.947102 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-hostproc\") pod \"cilium-bchfb\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") " pod="kube-system/cilium-bchfb" Jan 20 01:23:25.960487 kubelet[3172]: I0120 01:23:25.947128 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-host-proc-sys-kernel\") pod \"cilium-bchfb\" (UID: 
\"a58b624c-b508-4b98-9513-c7a4eae38f71\") " pod="kube-system/cilium-bchfb" Jan 20 01:23:25.960487 kubelet[3172]: I0120 01:23:25.947147 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gkgn\" (UniqueName: \"kubernetes.io/projected/a58b624c-b508-4b98-9513-c7a4eae38f71-kube-api-access-9gkgn\") pod \"cilium-bchfb\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") " pod="kube-system/cilium-bchfb" Jan 20 01:23:26.065072 kubelet[3172]: I0120 01:23:26.061624 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7b00b379-48af-4a9b-b0dc-95f8f3d0bbc7-kube-proxy\") pod \"kube-proxy-scdsh\" (UID: \"7b00b379-48af-4a9b-b0dc-95f8f3d0bbc7\") " pod="kube-system/kube-proxy-scdsh" Jan 20 01:23:26.065598 kubelet[3172]: I0120 01:23:26.065563 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b00b379-48af-4a9b-b0dc-95f8f3d0bbc7-xtables-lock\") pod \"kube-proxy-scdsh\" (UID: \"7b00b379-48af-4a9b-b0dc-95f8f3d0bbc7\") " pod="kube-system/kube-proxy-scdsh" Jan 20 01:23:26.083561 kubelet[3172]: I0120 01:23:26.083516 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b00b379-48af-4a9b-b0dc-95f8f3d0bbc7-lib-modules\") pod \"kube-proxy-scdsh\" (UID: \"7b00b379-48af-4a9b-b0dc-95f8f3d0bbc7\") " pod="kube-system/kube-proxy-scdsh" Jan 20 01:23:26.087039 kubelet[3172]: I0120 01:23:26.086925 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r78jw\" (UniqueName: \"kubernetes.io/projected/7b00b379-48af-4a9b-b0dc-95f8f3d0bbc7-kube-api-access-r78jw\") pod \"kube-proxy-scdsh\" (UID: \"7b00b379-48af-4a9b-b0dc-95f8f3d0bbc7\") " pod="kube-system/kube-proxy-scdsh" Jan 20 
01:23:26.588922 systemd[1]: Created slice kubepods-besteffort-podde7db6e2_fcea_476f_b1ad_00e102084688.slice - libcontainer container kubepods-besteffort-podde7db6e2_fcea_476f_b1ad_00e102084688.slice. Jan 20 01:23:27.318582 systemd[1]: Created slice kubepods-burstable-poda58b624c_b508_4b98_9513_c7a4eae38f71.slice - libcontainer container kubepods-burstable-poda58b624c_b508_4b98_9513_c7a4eae38f71.slice. Jan 20 01:23:27.824919 systemd[1]: Created slice kubepods-besteffort-pod7b00b379_48af_4a9b_b0dc_95f8f3d0bbc7.slice - libcontainer container kubepods-besteffort-pod7b00b379_48af_4a9b_b0dc_95f8f3d0bbc7.slice. Jan 20 01:23:27.997241 kubelet[3172]: E0120 01:23:27.987653 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:23:28.419898 containerd[1566]: time="2026-01-20T01:23:28.419142526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bchfb,Uid:a58b624c-b508-4b98-9513-c7a4eae38f71,Namespace:kube-system,Attempt:0,}" Jan 20 01:23:28.485596 containerd[1566]: time="2026-01-20T01:23:28.465033218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-x9qtk,Uid:de7db6e2-fcea-476f-b1ad-00e102084688,Namespace:kube-system,Attempt:0,}" Jan 20 01:23:28.485715 kubelet[3172]: E0120 01:23:28.446670 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:23:29.080977 kubelet[3172]: E0120 01:23:29.076063 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:23:29.853587 containerd[1566]: time="2026-01-20T01:23:29.752274457Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-scdsh,Uid:7b00b379-48af-4a9b-b0dc-95f8f3d0bbc7,Namespace:kube-system,Attempt:0,}" Jan 20 01:23:29.935557 sudo[1799]: pam_unix(sudo:session): session closed for user root Jan 20 01:23:29.986198 kubelet[3172]: E0120 01:23:29.973090 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.091s" Jan 20 01:23:30.006285 sshd[1798]: Connection closed by 10.0.0.1 port 46242 Jan 20 01:23:30.123588 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Jan 20 01:23:30.353990 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:46242.service: Deactivated successfully. Jan 20 01:23:30.367011 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 01:23:30.384631 systemd[1]: session-9.scope: Consumed 33.423s CPU time, 268M memory peak. Jan 20 01:23:30.583262 systemd-logind[1552]: Session 9 logged out. Waiting for processes to exit. Jan 20 01:23:30.940882 systemd-logind[1552]: Removed session 9. 
Jan 20 01:23:31.725591 containerd[1566]: time="2026-01-20T01:23:31.724230404Z" level=info msg="connecting to shim a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c" address="unix:///run/containerd/s/a2e0cae1905ed5d0927600f03d22acdaeeffefa7c78fd47c144e009d52a1c3e4" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:23:31.743595 containerd[1566]: time="2026-01-20T01:23:31.743294331Z" level=info msg="connecting to shim 4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71" address="unix:///run/containerd/s/d4c1beb26014d71473d13b2eca1784bf90c682924ebf938d17f383007c19ecd5" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:23:31.792005 containerd[1566]: time="2026-01-20T01:23:31.789927060Z" level=info msg="connecting to shim 610c5aca265ad768e93be6cdf1d2b0c5d11108b68795baeb39b61f2bacb8b4c5" address="unix:///run/containerd/s/46d8fb8b5bc23f07007323ed94fbcbf8837f14226e2ff708dd2acf01f6f63a31" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:23:35.238526 systemd[1]: Started cri-containerd-610c5aca265ad768e93be6cdf1d2b0c5d11108b68795baeb39b61f2bacb8b4c5.scope - libcontainer container 610c5aca265ad768e93be6cdf1d2b0c5d11108b68795baeb39b61f2bacb8b4c5. Jan 20 01:23:35.550835 systemd[1]: Started cri-containerd-4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71.scope - libcontainer container 4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71. Jan 20 01:23:35.621020 systemd[1]: Started cri-containerd-a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c.scope - libcontainer container a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c. 
Jan 20 01:23:38.468110 containerd[1566]: time="2026-01-20T01:23:38.463746222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bchfb,Uid:a58b624c-b508-4b98-9513-c7a4eae38f71,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\"" Jan 20 01:23:38.546893 containerd[1566]: time="2026-01-20T01:23:38.538809093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-scdsh,Uid:7b00b379-48af-4a9b-b0dc-95f8f3d0bbc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"610c5aca265ad768e93be6cdf1d2b0c5d11108b68795baeb39b61f2bacb8b4c5\"" Jan 20 01:23:38.559757 kubelet[3172]: E0120 01:23:38.556551 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:23:38.577730 kubelet[3172]: E0120 01:23:38.575896 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:23:38.592810 containerd[1566]: time="2026-01-20T01:23:38.592755210Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 20 01:23:38.666505 containerd[1566]: time="2026-01-20T01:23:38.662853050Z" level=info msg="CreateContainer within sandbox \"610c5aca265ad768e93be6cdf1d2b0c5d11108b68795baeb39b61f2bacb8b4c5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 01:23:39.806200 containerd[1566]: time="2026-01-20T01:23:39.805118260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-x9qtk,Uid:de7db6e2-fcea-476f-b1ad-00e102084688,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\"" Jan 20 01:23:40.027532 kubelet[3172]: E0120 01:23:39.992713 3172 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:23:40.393785 containerd[1566]: time="2026-01-20T01:23:40.389207045Z" level=info msg="Container d34cc15823b7ff0197ed8832efd322c577bc81ba2a56a9746c007c6dbe8872d6: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:23:40.676603 containerd[1566]: time="2026-01-20T01:23:40.673209403Z" level=info msg="CreateContainer within sandbox \"610c5aca265ad768e93be6cdf1d2b0c5d11108b68795baeb39b61f2bacb8b4c5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d34cc15823b7ff0197ed8832efd322c577bc81ba2a56a9746c007c6dbe8872d6\"" Jan 20 01:23:40.695593 containerd[1566]: time="2026-01-20T01:23:40.690719015Z" level=info msg="StartContainer for \"d34cc15823b7ff0197ed8832efd322c577bc81ba2a56a9746c007c6dbe8872d6\"" Jan 20 01:23:40.697254 containerd[1566]: time="2026-01-20T01:23:40.696270379Z" level=info msg="connecting to shim d34cc15823b7ff0197ed8832efd322c577bc81ba2a56a9746c007c6dbe8872d6" address="unix:///run/containerd/s/46d8fb8b5bc23f07007323ed94fbcbf8837f14226e2ff708dd2acf01f6f63a31" protocol=ttrpc version=3 Jan 20 01:23:41.579555 systemd[1]: Started cri-containerd-d34cc15823b7ff0197ed8832efd322c577bc81ba2a56a9746c007c6dbe8872d6.scope - libcontainer container d34cc15823b7ff0197ed8832efd322c577bc81ba2a56a9746c007c6dbe8872d6. 
Jan 20 01:23:43.679988 containerd[1566]: time="2026-01-20T01:23:43.679190543Z" level=error msg="get state for d34cc15823b7ff0197ed8832efd322c577bc81ba2a56a9746c007c6dbe8872d6" error="context deadline exceeded" Jan 20 01:23:43.679988 containerd[1566]: time="2026-01-20T01:23:43.681748641Z" level=warning msg="unknown status" status=0 Jan 20 01:23:46.089510 containerd[1566]: time="2026-01-20T01:23:46.075567334Z" level=error msg="get state for d34cc15823b7ff0197ed8832efd322c577bc81ba2a56a9746c007c6dbe8872d6" error="context deadline exceeded" Jan 20 01:23:46.089510 containerd[1566]: time="2026-01-20T01:23:46.075711786Z" level=warning msg="unknown status" status=0 Jan 20 01:23:47.663505 containerd[1566]: time="2026-01-20T01:23:47.660851318Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:23:47.690004 containerd[1566]: time="2026-01-20T01:23:47.664891175Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Jan 20 01:23:48.148007 kubelet[3172]: E0120 01:23:48.098024 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.534s" Jan 20 01:23:50.280817 kubelet[3172]: E0120 01:23:50.280766 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.658s" Jan 20 01:23:55.376183 containerd[1566]: time="2026-01-20T01:23:55.345851842Z" level=info msg="StartContainer for \"d34cc15823b7ff0197ed8832efd322c577bc81ba2a56a9746c007c6dbe8872d6\" returns successfully" Jan 20 01:23:56.056491 kubelet[3172]: E0120 01:23:56.050979 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:23:56.866779 kubelet[3172]: I0120 01:23:56.853992 3172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-scdsh" podStartSLOduration=36.853852674 
podStartE2EDuration="36.853852674s" podCreationTimestamp="2026-01-20 01:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:23:56.623072023 +0000 UTC m=+78.321275634" watchObservedRunningTime="2026-01-20 01:23:56.853852674 +0000 UTC m=+78.552056255" Jan 20 01:23:57.517134 kubelet[3172]: E0120 01:23:57.516987 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:23:59.795873 containerd[1566]: time="2026-01-20T01:23:59.774998634Z" level=warning msg="container event discarded" container=69e23e24d15b608b00592e42dbacea249a423118b3a43cfff8cc36e09104aa2b type=CONTAINER_CREATED_EVENT Jan 20 01:23:59.795873 containerd[1566]: time="2026-01-20T01:23:59.791859415Z" level=warning msg="container event discarded" container=69e23e24d15b608b00592e42dbacea249a423118b3a43cfff8cc36e09104aa2b type=CONTAINER_STARTED_EVENT Jan 20 01:23:59.889769 containerd[1566]: time="2026-01-20T01:23:59.889498744Z" level=warning msg="container event discarded" container=3e99c7aa4aa9d0e067a82a258af6133e67ddeff32530aae29e81c04c9bbdda20 type=CONTAINER_CREATED_EVENT Jan 20 01:23:59.889973 containerd[1566]: time="2026-01-20T01:23:59.889924833Z" level=warning msg="container event discarded" container=3e99c7aa4aa9d0e067a82a258af6133e67ddeff32530aae29e81c04c9bbdda20 type=CONTAINER_STARTED_EVENT Jan 20 01:24:00.724930 containerd[1566]: time="2026-01-20T01:24:00.723660528Z" level=warning msg="container event discarded" container=77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd type=CONTAINER_CREATED_EVENT Jan 20 01:24:00.724930 containerd[1566]: time="2026-01-20T01:24:00.723750907Z" level=warning msg="container event discarded" container=77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd type=CONTAINER_STARTED_EVENT Jan 20 01:24:01.252619 
containerd[1566]: time="2026-01-20T01:24:01.252531259Z" level=warning msg="container event discarded" container=8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073 type=CONTAINER_CREATED_EVENT Jan 20 01:24:01.288091 containerd[1566]: time="2026-01-20T01:24:01.287901732Z" level=warning msg="container event discarded" container=8c0175d775470c23061d210dda5c9ee8998e079e12b5bd122d68907471a03bf3 type=CONTAINER_CREATED_EVENT Jan 20 01:24:01.439273 containerd[1566]: time="2026-01-20T01:24:01.439206931Z" level=warning msg="container event discarded" container=2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80 type=CONTAINER_CREATED_EVENT Jan 20 01:24:04.515731 kubelet[3172]: E0120 01:24:04.456111 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.049s" Jan 20 01:24:06.897913 containerd[1566]: time="2026-01-20T01:24:06.874741662Z" level=warning msg="container event discarded" container=2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80 type=CONTAINER_STARTED_EVENT Jan 20 01:24:06.897913 containerd[1566]: time="2026-01-20T01:24:06.874939233Z" level=warning msg="container event discarded" container=8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073 type=CONTAINER_STARTED_EVENT Jan 20 01:24:06.972236 containerd[1566]: time="2026-01-20T01:24:06.971640059Z" level=warning msg="container event discarded" container=8c0175d775470c23061d210dda5c9ee8998e079e12b5bd122d68907471a03bf3 type=CONTAINER_STARTED_EVENT Jan 20 01:24:10.196754 kubelet[3172]: E0120 01:24:10.148024 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.203s" Jan 20 01:24:21.689085 kubelet[3172]: E0120 01:24:21.672994 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:24:24.580570 kubelet[3172]: 
E0120 01:24:24.580519 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:24:27.585952 kubelet[3172]: E0120 01:24:27.585720 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:24:41.119794 kubelet[3172]: E0120 01:24:41.119728 3172 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Jan 20 01:24:45.519724 kubelet[3172]: E0120 01:24:45.519009 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:24:50.569769 kubelet[3172]: E0120 01:24:50.561087 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:24:55.591971 kubelet[3172]: E0120 01:24:55.589215 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:25:00.612947 kubelet[3172]: E0120 01:25:00.606878 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:25:05.642525 kubelet[3172]: E0120 01:25:05.636584 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:25:10.662708 kubelet[3172]: E0120 01:25:10.662642 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Jan 20 01:25:15.733210 kubelet[3172]: E0120 01:25:15.706127 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:25:20.829530 kubelet[3172]: E0120 01:25:20.828624 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:25:25.578283 kubelet[3172]: E0120 01:25:25.561916 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:25:25.862936 kubelet[3172]: E0120 01:25:25.857065 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:25:26.702036 kubelet[3172]: E0120 01:25:26.656720 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:25:30.869981 kubelet[3172]: E0120 01:25:30.869926 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:25:35.939957 kubelet[3172]: E0120 01:25:35.939861 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:25:40.564037 kubelet[3172]: E0120 01:25:40.563988 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 20 01:25:40.966162 kubelet[3172]: E0120 01:25:40.958421 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:25:44.387763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2286333640.mount: Deactivated successfully. Jan 20 01:25:46.005747 kubelet[3172]: E0120 01:25:45.980557 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:25:51.073170 kubelet[3172]: E0120 01:25:51.072961 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:25:51.578249 kubelet[3172]: E0120 01:25:51.578200 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:25:56.401956 kubelet[3172]: E0120 01:25:56.387594 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:26:01.804612 kubelet[3172]: E0120 01:26:01.783883 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:26:06.816547 kubelet[3172]: E0120 01:26:06.813743 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:26:11.860982 kubelet[3172]: E0120 01:26:11.850084 3172 kubelet.go:3011] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:26:16.862425 kubelet[3172]: E0120 01:26:16.862192 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:26:21.869839 kubelet[3172]: E0120 01:26:21.869685 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:26:26.959839 kubelet[3172]: E0120 01:26:26.947256 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:26:31.964772 kubelet[3172]: E0120 01:26:31.963456 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:26:36.979588 kubelet[3172]: E0120 01:26:36.979172 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:26:38.674441 kubelet[3172]: E0120 01:26:38.672877 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:26:39.570008 kubelet[3172]: E0120 01:26:39.568733 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:26:42.220626 kubelet[3172]: E0120 01:26:42.214231 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:26:44.012441 containerd[1566]: time="2026-01-20T01:26:44.009465036Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:26:44.025688 containerd[1566]: time="2026-01-20T01:26:44.024234802Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 20 01:26:44.038061 containerd[1566]: time="2026-01-20T01:26:44.036544940Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:26:44.083606 containerd[1566]: time="2026-01-20T01:26:44.053754992Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 3m5.456851624s" Jan 20 01:26:44.083606 containerd[1566]: time="2026-01-20T01:26:44.053821258Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 20 01:26:44.201444 containerd[1566]: time="2026-01-20T01:26:44.198483996Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 20 01:26:44.374500 containerd[1566]: time="2026-01-20T01:26:44.353652253Z" level=info msg="CreateContainer within 
sandbox \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 20 01:26:44.764671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212843629.mount: Deactivated successfully. Jan 20 01:26:44.898024 containerd[1566]: time="2026-01-20T01:26:44.895036686Z" level=info msg="Container ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:26:44.939644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4085431370.mount: Deactivated successfully. Jan 20 01:26:45.749730 containerd[1566]: time="2026-01-20T01:26:45.747062513Z" level=info msg="CreateContainer within sandbox \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517\"" Jan 20 01:26:45.777171 containerd[1566]: time="2026-01-20T01:26:45.776746959Z" level=info msg="StartContainer for \"ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517\"" Jan 20 01:26:45.804181 containerd[1566]: time="2026-01-20T01:26:45.802895800Z" level=info msg="connecting to shim ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517" address="unix:///run/containerd/s/d4c1beb26014d71473d13b2eca1784bf90c682924ebf938d17f383007c19ecd5" protocol=ttrpc version=3 Jan 20 01:26:46.063883 systemd[1]: Started cri-containerd-ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517.scope - libcontainer container ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517. 
Jan 20 01:26:46.689248 containerd[1566]: time="2026-01-20T01:26:46.689152092Z" level=info msg="StartContainer for \"ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517\" returns successfully" Jan 20 01:26:46.818645 systemd[1]: cri-containerd-ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517.scope: Deactivated successfully. Jan 20 01:26:46.909457 containerd[1566]: time="2026-01-20T01:26:46.905903077Z" level=info msg="received container exit event container_id:\"ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517\" id:\"ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517\" pid:3632 exited_at:{seconds:1768872406 nanos:878514294}" Jan 20 01:26:47.271415 kubelet[3172]: E0120 01:26:47.270532 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:26:47.516745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517-rootfs.mount: Deactivated successfully. Jan 20 01:26:48.397193 kubelet[3172]: E0120 01:26:48.396501 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:26:49.391681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2655779029.mount: Deactivated successfully. 
Jan 20 01:26:49.426819 kubelet[3172]: E0120 01:26:49.415858 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:26:49.621378 containerd[1566]: time="2026-01-20T01:26:49.617735432Z" level=info msg="CreateContainer within sandbox \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 20 01:26:50.410512 containerd[1566]: time="2026-01-20T01:26:50.389907153Z" level=info msg="Container a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:26:50.611277 containerd[1566]: time="2026-01-20T01:26:50.608671993Z" level=info msg="CreateContainer within sandbox \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046\""
Jan 20 01:26:50.633196 containerd[1566]: time="2026-01-20T01:26:50.626106018Z" level=info msg="StartContainer for \"a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046\""
Jan 20 01:26:50.638793 containerd[1566]: time="2026-01-20T01:26:50.638737854Z" level=info msg="connecting to shim a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046" address="unix:///run/containerd/s/d4c1beb26014d71473d13b2eca1784bf90c682924ebf938d17f383007c19ecd5" protocol=ttrpc version=3
Jan 20 01:26:51.432776 systemd[1]: Started cri-containerd-a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046.scope - libcontainer container a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046.
Jan 20 01:26:52.263453 containerd[1566]: time="2026-01-20T01:26:52.262608806Z" level=info msg="StartContainer for \"a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046\" returns successfully"
Jan 20 01:26:52.347601 kubelet[3172]: E0120 01:26:52.344973 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:26:52.571758 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 01:26:52.572809 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 01:26:52.589755 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 20 01:26:52.601671 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 01:26:52.612697 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 01:26:52.677857 systemd[1]: cri-containerd-a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046.scope: Deactivated successfully.
Jan 20 01:26:52.686915 containerd[1566]: time="2026-01-20T01:26:52.683603883Z" level=info msg="received container exit event container_id:\"a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046\" id:\"a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046\" pid:3687 exited_at:{seconds:1768872412 nanos:676287205}"
Jan 20 01:26:52.759428 kubelet[3172]: E0120 01:26:52.752270 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:26:53.005808 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 01:26:53.366793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046-rootfs.mount: Deactivated successfully.
Jan 20 01:26:53.942034 kubelet[3172]: E0120 01:26:53.938422 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:26:54.195670 containerd[1566]: time="2026-01-20T01:26:54.193897498Z" level=info msg="CreateContainer within sandbox \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 20 01:26:54.521538 containerd[1566]: time="2026-01-20T01:26:54.520784610Z" level=info msg="Container 6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:26:54.684811 containerd[1566]: time="2026-01-20T01:26:54.682714024Z" level=info msg="CreateContainer within sandbox \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b\""
Jan 20 01:26:54.706428 containerd[1566]: time="2026-01-20T01:26:54.703236201Z" level=info msg="StartContainer for \"6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b\""
Jan 20 01:26:54.718481 containerd[1566]: time="2026-01-20T01:26:54.716837278Z" level=info msg="connecting to shim 6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b" address="unix:///run/containerd/s/d4c1beb26014d71473d13b2eca1784bf90c682924ebf938d17f383007c19ecd5" protocol=ttrpc version=3
Jan 20 01:26:54.913009 systemd[1]: Started cri-containerd-6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b.scope - libcontainer container 6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b.
Jan 20 01:26:55.402681 containerd[1566]: time="2026-01-20T01:26:55.402528831Z" level=info msg="StartContainer for \"6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b\" returns successfully"
Jan 20 01:26:55.408633 systemd[1]: cri-containerd-6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b.scope: Deactivated successfully.
Jan 20 01:26:55.436434 containerd[1566]: time="2026-01-20T01:26:55.433557410Z" level=info msg="received container exit event container_id:\"6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b\" id:\"6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b\" pid:3742 exited_at:{seconds:1768872415 nanos:430216063}"
Jan 20 01:26:56.336516 kubelet[3172]: E0120 01:26:56.336019 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:26:56.488142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b-rootfs.mount: Deactivated successfully.
Jan 20 01:26:57.410938 kubelet[3172]: E0120 01:26:57.410837 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:26:57.516084 kubelet[3172]: E0120 01:26:57.515078 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:26:57.640052 containerd[1566]: time="2026-01-20T01:26:57.639993767Z" level=info msg="CreateContainer within sandbox \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 20 01:26:58.142376 containerd[1566]: time="2026-01-20T01:26:58.140839421Z" level=info msg="Container 4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:26:58.291931 containerd[1566]: time="2026-01-20T01:26:58.286501543Z" level=warning msg="container event discarded" container=8c0175d775470c23061d210dda5c9ee8998e079e12b5bd122d68907471a03bf3 type=CONTAINER_STOPPED_EVENT
Jan 20 01:26:58.297716 containerd[1566]: time="2026-01-20T01:26:58.287055738Z" level=info msg="CreateContainer within sandbox \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86\""
Jan 20 01:26:58.320645 containerd[1566]: time="2026-01-20T01:26:58.307675459Z" level=info msg="StartContainer for \"4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86\""
Jan 20 01:26:58.379245 containerd[1566]: time="2026-01-20T01:26:58.375638498Z" level=info msg="connecting to shim 4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86" address="unix:///run/containerd/s/d4c1beb26014d71473d13b2eca1784bf90c682924ebf938d17f383007c19ecd5" protocol=ttrpc version=3
Jan 20 01:26:58.875688 systemd[1]: Started cri-containerd-4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86.scope - libcontainer container 4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86.
Jan 20 01:26:58.984769 containerd[1566]: time="2026-01-20T01:26:58.982292155Z" level=warning msg="container event discarded" container=fbb65d8cec4da6d6200e238fc8cccb5cebeb9e88bd83f772a2d979f0cacd8e42 type=CONTAINER_CREATED_EVENT
Jan 20 01:27:00.001762 containerd[1566]: time="2026-01-20T01:27:00.001572994Z" level=warning msg="container event discarded" container=2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80 type=CONTAINER_STOPPED_EVENT
Jan 20 01:27:01.191738 containerd[1566]: time="2026-01-20T01:27:01.182613522Z" level=warning msg="container event discarded" container=171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4 type=CONTAINER_CREATED_EVENT
Jan 20 01:27:01.211968 systemd[1]: cri-containerd-4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86.scope: Deactivated successfully.
Jan 20 01:27:01.228879 containerd[1566]: time="2026-01-20T01:27:01.218794861Z" level=warning msg="container event discarded" container=fbb65d8cec4da6d6200e238fc8cccb5cebeb9e88bd83f772a2d979f0cacd8e42 type=CONTAINER_STARTED_EVENT
Jan 20 01:27:01.335269 containerd[1566]: time="2026-01-20T01:27:01.328460144Z" level=info msg="received container exit event container_id:\"4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86\" id:\"4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86\" pid:3783 exited_at:{seconds:1768872421 nanos:220265479}"
Jan 20 01:27:01.415624 containerd[1566]: time="2026-01-20T01:27:01.413589848Z" level=info msg="StartContainer for \"4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86\" returns successfully"
Jan 20 01:27:01.636150 kubelet[3172]: E0120 01:27:01.609683 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:27:02.047608 kubelet[3172]: E0120 01:27:02.038501 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:27:02.530777 kubelet[3172]: E0120 01:27:02.514616 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:27:02.620248 containerd[1566]: time="2026-01-20T01:27:02.619516667Z" level=warning msg="container event discarded" container=171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4 type=CONTAINER_STARTED_EVENT
Jan 20 01:27:03.136942 kubelet[3172]: E0120 01:27:03.136893 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:27:03.360794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86-rootfs.mount: Deactivated successfully.
Jan 20 01:27:04.657721 kubelet[3172]: E0120 01:27:04.657678 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:27:05.171114 containerd[1566]: time="2026-01-20T01:27:05.171044838Z" level=info msg="CreateContainer within sandbox \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 20 01:27:05.525539 containerd[1566]: time="2026-01-20T01:27:05.519022300Z" level=info msg="Container 0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:27:05.582089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2092042607.mount: Deactivated successfully.
Jan 20 01:27:05.823899 containerd[1566]: time="2026-01-20T01:27:05.813820468Z" level=info msg="CreateContainer within sandbox \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2\""
Jan 20 01:27:05.895456 containerd[1566]: time="2026-01-20T01:27:05.894095054Z" level=info msg="StartContainer for \"0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2\""
Jan 20 01:27:05.921793 containerd[1566]: time="2026-01-20T01:27:05.921660504Z" level=info msg="connecting to shim 0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2" address="unix:///run/containerd/s/d4c1beb26014d71473d13b2eca1784bf90c682924ebf938d17f383007c19ecd5" protocol=ttrpc version=3
Jan 20 01:27:06.529023 systemd[1]: Started cri-containerd-0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2.scope - libcontainer container 0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2.
Jan 20 01:27:07.635042 containerd[1566]: time="2026-01-20T01:27:07.631103399Z" level=info msg="StartContainer for \"0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2\" returns successfully"
Jan 20 01:27:07.647234 kubelet[3172]: E0120 01:27:07.645566 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:27:14.112891 kubelet[3172]: E0120 01:27:14.109184 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:27:14.852238 kubelet[3172]: I0120 01:27:14.850903 3172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bchfb" podStartSLOduration=49.247474964 podStartE2EDuration="3m54.850876778s" podCreationTimestamp="2026-01-20 01:23:20 +0000 UTC" firstStartedPulling="2026-01-20 01:23:38.585813594 +0000 UTC m=+60.284017164" lastFinishedPulling="2026-01-20 01:26:44.189215397 +0000 UTC m=+245.887418978" observedRunningTime="2026-01-20 01:27:14.833064007 +0000 UTC m=+276.531267588" watchObservedRunningTime="2026-01-20 01:27:14.850876778 +0000 UTC m=+276.549080359"
Jan 20 01:27:16.221851 kubelet[3172]: E0120 01:27:16.217222 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:27:17.722201 kubelet[3172]: E0120 01:27:17.713812 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.115s"
Jan 20 01:27:17.782588 kubelet[3172]: E0120 01:27:17.726835 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:27:19.850559 containerd[1566]: time="2026-01-20T01:27:19.847648226Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:27:20.241781 containerd[1566]: time="2026-01-20T01:27:20.042469067Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 20 01:27:20.406621 containerd[1566]: time="2026-01-20T01:27:20.406502257Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:27:20.493549 containerd[1566]: time="2026-01-20T01:27:20.449078304Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 36.250473621s"
Jan 20 01:27:20.493549 containerd[1566]: time="2026-01-20T01:27:20.449140457Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 20 01:27:20.910905 containerd[1566]: time="2026-01-20T01:27:20.909649366Z" level=info msg="CreateContainer within sandbox \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 20 01:27:21.319166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177300363.mount: Deactivated successfully.
Jan 20 01:27:21.351074 containerd[1566]: time="2026-01-20T01:27:21.335964825Z" level=info msg="Container 53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:27:21.638901 containerd[1566]: time="2026-01-20T01:27:21.638696567Z" level=info msg="CreateContainer within sandbox \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb\""
Jan 20 01:27:21.706486 containerd[1566]: time="2026-01-20T01:27:21.689651708Z" level=info msg="StartContainer for \"53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb\""
Jan 20 01:27:21.831480 containerd[1566]: time="2026-01-20T01:27:21.822949414Z" level=info msg="connecting to shim 53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb" address="unix:///run/containerd/s/a2e0cae1905ed5d0927600f03d22acdaeeffefa7c78fd47c144e009d52a1c3e4" protocol=ttrpc version=3
Jan 20 01:27:23.472891 systemd[1]: Started cri-containerd-53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb.scope - libcontainer container 53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb.
Jan 20 01:27:23.914565 systemd[1]: Created slice kubepods-burstable-pod83193cf1_85b9_4f97_a265_e5c5cec01484.slice - libcontainer container kubepods-burstable-pod83193cf1_85b9_4f97_a265_e5c5cec01484.slice.
Jan 20 01:27:24.025786 kubelet[3172]: I0120 01:27:24.022986 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83193cf1-85b9-4f97-a265-e5c5cec01484-config-volume\") pod \"coredns-66bc5c9577-6grwm\" (UID: \"83193cf1-85b9-4f97-a265-e5c5cec01484\") " pod="kube-system/coredns-66bc5c9577-6grwm"
Jan 20 01:27:24.079868 kubelet[3172]: I0120 01:27:24.023057 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c595e2f-9f64-4525-a3e7-2f0c2e518b06-config-volume\") pod \"coredns-66bc5c9577-vmpnm\" (UID: \"6c595e2f-9f64-4525-a3e7-2f0c2e518b06\") " pod="kube-system/coredns-66bc5c9577-vmpnm"
Jan 20 01:27:24.079868 kubelet[3172]: I0120 01:27:24.079449 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zsx6\" (UniqueName: \"kubernetes.io/projected/83193cf1-85b9-4f97-a265-e5c5cec01484-kube-api-access-4zsx6\") pod \"coredns-66bc5c9577-6grwm\" (UID: \"83193cf1-85b9-4f97-a265-e5c5cec01484\") " pod="kube-system/coredns-66bc5c9577-6grwm"
Jan 20 01:27:24.079868 kubelet[3172]: I0120 01:27:24.079493 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdz8x\" (UniqueName: \"kubernetes.io/projected/6c595e2f-9f64-4525-a3e7-2f0c2e518b06-kube-api-access-gdz8x\") pod \"coredns-66bc5c9577-vmpnm\" (UID: \"6c595e2f-9f64-4525-a3e7-2f0c2e518b06\") " pod="kube-system/coredns-66bc5c9577-vmpnm"
Jan 20 01:27:24.609914 systemd[1]: Created slice kubepods-burstable-pod6c595e2f_9f64_4525_a3e7_2f0c2e518b06.slice - libcontainer container kubepods-burstable-pod6c595e2f_9f64_4525_a3e7_2f0c2e518b06.slice.
Jan 20 01:27:25.578639 kubelet[3172]: E0120 01:27:25.578583 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:27:25.618832 containerd[1566]: time="2026-01-20T01:27:25.601575563Z" level=error msg="get state for 53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb" error="context deadline exceeded"
Jan 20 01:27:25.618832 containerd[1566]: time="2026-01-20T01:27:25.601794623Z" level=warning msg="unknown status" status=0
Jan 20 01:27:25.627837 containerd[1566]: time="2026-01-20T01:27:25.627783150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vmpnm,Uid:6c595e2f-9f64-4525-a3e7-2f0c2e518b06,Namespace:kube-system,Attempt:0,}"
Jan 20 01:27:26.009069 kubelet[3172]: E0120 01:27:26.007943 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:27:26.031629 containerd[1566]: time="2026-01-20T01:27:26.025669274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6grwm,Uid:83193cf1-85b9-4f97-a265-e5c5cec01484,Namespace:kube-system,Attempt:0,}"
Jan 20 01:27:26.487763 containerd[1566]: time="2026-01-20T01:27:26.487682663Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Jan 20 01:27:26.948540 containerd[1566]: time="2026-01-20T01:27:26.940793729Z" level=info msg="StartContainer for \"53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb\" returns successfully"
Jan 20 01:27:27.074788 kubelet[3172]: E0120 01:27:27.074293 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:27:27.884123 kubelet[3172]: E0120 01:27:27.884084 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:27:28.122272 kubelet[3172]: E0120 01:27:28.121101 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:27:44.708909 kubelet[3172]: E0120 01:27:44.696658 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:27:55.527640 kubelet[3172]: E0120 01:27:55.519591 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.942s"
Jan 20 01:27:56.612148 systemd-networkd[1462]: cilium_host: Link UP
Jan 20 01:27:56.613255 systemd-networkd[1462]: cilium_net: Link UP
Jan 20 01:27:56.613870 systemd-networkd[1462]: cilium_net: Gained carrier
Jan 20 01:27:56.684779 systemd-networkd[1462]: cilium_host: Gained carrier
Jan 20 01:27:57.323595 systemd-networkd[1462]: cilium_net: Gained IPv6LL
Jan 20 01:27:57.436140 systemd-networkd[1462]: cilium_host: Gained IPv6LL
Jan 20 01:28:03.043095 systemd-networkd[1462]: cilium_vxlan: Link UP
Jan 20 01:28:03.043114 systemd-networkd[1462]: cilium_vxlan: Gained carrier
Jan 20 01:28:03.586027 kubelet[3172]: E0120 01:28:03.584286 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:28:04.928145 systemd-networkd[1462]: cilium_vxlan: Gained IPv6LL
Jan 20 01:28:09.120989 kernel: NET: Registered PF_ALG protocol family
Jan 20 01:28:15.584714 kubelet[3172]: E0120 01:28:15.563209 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:28:21.053754 systemd-networkd[1462]: lxc_health: Link UP
Jan 20 01:28:21.080454 systemd-networkd[1462]: lxc_health: Gained carrier
Jan 20 01:28:21.949781 kubelet[3172]: E0120 01:28:21.949562 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:28:22.194985 kubelet[3172]: I0120 01:28:22.194911 3172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-x9qtk" podStartSLOduration=79.551269982 podStartE2EDuration="5m0.194891763s" podCreationTimestamp="2026-01-20 01:23:22 +0000 UTC" firstStartedPulling="2026-01-20 01:23:40.093497273 +0000 UTC m=+61.791700834" lastFinishedPulling="2026-01-20 01:27:20.737119044 +0000 UTC m=+282.435322615" observedRunningTime="2026-01-20 01:27:27.189689414 +0000 UTC m=+288.887892995" watchObservedRunningTime="2026-01-20 01:28:22.194891763 +0000 UTC m=+343.893095333"
Jan 20 01:28:22.728969 systemd-networkd[1462]: lxc_health: Gained IPv6LL
Jan 20 01:28:23.018946 containerd[1566]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Jan 20 01:28:23.093556 containerd[1566]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Jan 20 01:28:23.153938 kubelet[3172]: E0120 01:28:23.150256 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:28:23.211803 systemd[1]: run-netns-cni\x2d3839b1e9\x2da00c\x2d3c43\x2d9591\x2dd6ff0f143d2c.mount: Deactivated successfully.
Jan 20 01:28:23.212089 systemd[1]: run-netns-cni\x2defc6d993\x2d6642\x2d1cbd\x2d6a62\x2dd723b0f047b2.mount: Deactivated successfully.
Jan 20 01:28:23.514894 containerd[1566]: time="2026-01-20T01:28:23.360924659Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6grwm,Uid:83193cf1-85b9-4f97-a265-e5c5cec01484,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0231165cb07f6251b0a39cbcc9de4e92dc4e6a8b531f3f06542f8d8fae391748\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?"
Jan 20 01:28:23.521528 containerd[1566]: time="2026-01-20T01:28:23.363749084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vmpnm,Uid:6c595e2f-9f64-4525-a3e7-2f0c2e518b06,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"59ad06ade94651dd4f59bed3effa380302390d15cc91cb20785365ed2842f0d9\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?"
Jan 20 01:28:23.625677 kubelet[3172]: E0120 01:28:23.625608 3172 log.go:32] "RunPodSandbox from runtime service failed" err=<
Jan 20 01:28:23.625677 kubelet[3172]: rpc error: code = Unknown desc = failed to setup network for sandbox "59ad06ade94651dd4f59bed3effa380302390d15cc91cb20785365ed2842f0d9": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Jan 20 01:28:23.625677 kubelet[3172]: Is the agent running?
Jan 20 01:28:23.625677 kubelet[3172]: >
Jan 20 01:28:23.639441 kubelet[3172]: E0120 01:28:23.626027 3172 log.go:32] "RunPodSandbox from runtime service failed" err=<
Jan 20 01:28:23.639441 kubelet[3172]: rpc error: code = Unknown desc = failed to setup network for sandbox "0231165cb07f6251b0a39cbcc9de4e92dc4e6a8b531f3f06542f8d8fae391748": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Jan 20 01:28:23.639441 kubelet[3172]: Is the agent running?
Jan 20 01:28:23.639441 kubelet[3172]: >
Jan 20 01:28:23.639441 kubelet[3172]: E0120 01:28:23.637854 3172 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err=<
Jan 20 01:28:23.639441 kubelet[3172]: rpc error: code = Unknown desc = failed to setup network for sandbox "59ad06ade94651dd4f59bed3effa380302390d15cc91cb20785365ed2842f0d9": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Jan 20 01:28:23.639441 kubelet[3172]: Is the agent running?
Jan 20 01:28:23.639441 kubelet[3172]: > pod="kube-system/coredns-66bc5c9577-vmpnm"
Jan 20 01:28:23.639441 kubelet[3172]: E0120 01:28:23.637894 3172 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err=<
Jan 20 01:28:23.639441 kubelet[3172]: rpc error: code = Unknown desc = failed to setup network for sandbox "59ad06ade94651dd4f59bed3effa380302390d15cc91cb20785365ed2842f0d9": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Jan 20 01:28:23.639441 kubelet[3172]: Is the agent running?
Jan 20 01:28:23.639441 kubelet[3172]: > pod="kube-system/coredns-66bc5c9577-vmpnm"
Jan 20 01:28:23.639928 kubelet[3172]: E0120 01:28:23.637977 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-vmpnm_kube-system(6c595e2f-9f64-4525-a3e7-2f0c2e518b06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-vmpnm_kube-system(6c595e2f-9f64-4525-a3e7-2f0c2e518b06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59ad06ade94651dd4f59bed3effa380302390d15cc91cb20785365ed2842f0d9\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="kube-system/coredns-66bc5c9577-vmpnm" podUID="6c595e2f-9f64-4525-a3e7-2f0c2e518b06"
Jan 20 01:28:23.657825 kubelet[3172]: E0120 01:28:23.657755 3172 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err=<
Jan 20 01:28:23.657825 kubelet[3172]: rpc error: code = Unknown desc = failed to setup network for sandbox "0231165cb07f6251b0a39cbcc9de4e92dc4e6a8b531f3f06542f8d8fae391748": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Jan 20 01:28:23.657825 kubelet[3172]: Is the agent running?
Jan 20 01:28:23.657825 kubelet[3172]: > pod="kube-system/coredns-66bc5c9577-6grwm" Jan 20 01:28:23.658913 kubelet[3172]: E0120 01:28:23.658888 3172 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err=< Jan 20 01:28:23.658913 kubelet[3172]: rpc error: code = Unknown desc = failed to setup network for sandbox "0231165cb07f6251b0a39cbcc9de4e92dc4e6a8b531f3f06542f8d8fae391748": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Jan 20 01:28:23.658913 kubelet[3172]: Is the agent running? Jan 20 01:28:23.658913 kubelet[3172]: > pod="kube-system/coredns-66bc5c9577-6grwm" Jan 20 01:28:23.670575 kubelet[3172]: E0120 01:28:23.670505 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-6grwm_kube-system(83193cf1-85b9-4f97-a265-e5c5cec01484)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-6grwm_kube-system(83193cf1-85b9-4f97-a265-e5c5cec01484)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0231165cb07f6251b0a39cbcc9de4e92dc4e6a8b531f3f06542f8d8fae391748\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484" Jan 20 01:28:30.569187 kubelet[3172]: E0120 01:28:30.566876 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 20 01:28:36.678605 kubelet[3172]: E0120 01:28:36.677690 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:28:36.693249 containerd[1566]: time="2026-01-20T01:28:36.687141037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6grwm,Uid:83193cf1-85b9-4f97-a265-e5c5cec01484,Namespace:kube-system,Attempt:0,}" Jan 20 01:28:37.709079 kubelet[3172]: E0120 01:28:37.686214 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:28:37.732819 containerd[1566]: time="2026-01-20T01:28:37.694229783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vmpnm,Uid:6c595e2f-9f64-4525-a3e7-2f0c2e518b06,Namespace:kube-system,Attempt:0,}" Jan 20 01:28:37.919625 systemd-networkd[1462]: lxcf9b6bfaaf52d: Link UP Jan 20 01:28:37.977422 kernel: eth0: renamed from tmp4ecc7 Jan 20 01:28:38.289191 systemd-networkd[1462]: lxcf9b6bfaaf52d: Gained carrier Jan 20 01:28:38.528594 containerd[1566]: time="2026-01-20T01:28:38.518428831Z" level=warning msg="container event discarded" container=4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71 type=CONTAINER_CREATED_EVENT Jan 20 01:28:38.528594 containerd[1566]: time="2026-01-20T01:28:38.518505141Z" level=warning msg="container event discarded" container=4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71 type=CONTAINER_STARTED_EVENT Jan 20 01:28:38.587508 containerd[1566]: time="2026-01-20T01:28:38.549888264Z" level=warning msg="container event discarded" container=610c5aca265ad768e93be6cdf1d2b0c5d11108b68795baeb39b61f2bacb8b4c5 type=CONTAINER_CREATED_EVENT Jan 20 01:28:38.587508 containerd[1566]: time="2026-01-20T01:28:38.565675546Z" level=warning msg="container event discarded" 
container=610c5aca265ad768e93be6cdf1d2b0c5d11108b68795baeb39b61f2bacb8b4c5 type=CONTAINER_STARTED_EVENT Jan 20 01:28:39.325206 systemd-networkd[1462]: lxc28ff3d1ff6dc: Link UP Jan 20 01:28:39.402060 kernel: eth0: renamed from tmp409a9 Jan 20 01:28:39.440203 systemd-networkd[1462]: lxc28ff3d1ff6dc: Gained carrier Jan 20 01:28:39.813492 containerd[1566]: time="2026-01-20T01:28:39.813272600Z" level=warning msg="container event discarded" container=a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c type=CONTAINER_CREATED_EVENT Jan 20 01:28:39.818159 containerd[1566]: time="2026-01-20T01:28:39.818106843Z" level=warning msg="container event discarded" container=a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c type=CONTAINER_STARTED_EVENT Jan 20 01:28:40.080279 systemd-networkd[1462]: lxcf9b6bfaaf52d: Gained IPv6LL Jan 20 01:28:40.644940 containerd[1566]: time="2026-01-20T01:28:40.641157651Z" level=warning msg="container event discarded" container=d34cc15823b7ff0197ed8832efd322c577bc81ba2a56a9746c007c6dbe8872d6 type=CONTAINER_CREATED_EVENT Jan 20 01:28:41.492984 systemd-networkd[1462]: lxc28ff3d1ff6dc: Gained IPv6LL Jan 20 01:28:47.393606 kubelet[3172]: E0120 01:28:47.393552 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:28:55.144768 containerd[1566]: time="2026-01-20T01:28:55.140279889Z" level=warning msg="container event discarded" container=d34cc15823b7ff0197ed8832efd322c577bc81ba2a56a9746c007c6dbe8872d6 type=CONTAINER_STARTED_EVENT Jan 20 01:29:04.579576 kubelet[3172]: E0120 01:29:04.563466 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:29:23.708161 kubelet[3172]: E0120 01:29:23.708034 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:29:28.111465 systemd-networkd[1462]: lxc_health: Link DOWN Jan 20 01:29:28.113292 systemd-networkd[1462]: lxc_health: Lost carrier Jan 20 01:29:29.227229 systemd-networkd[1462]: lxc_health: Link UP Jan 20 01:29:29.702876 systemd-networkd[1462]: lxc_health: Gained carrier Jan 20 01:29:30.322851 containerd[1566]: time="2026-01-20T01:29:30.322778935Z" level=info msg="connecting to shim 409a9a04eb823664839eeacfe7e9c7cadbc7f8adda97fb5095e29128a170986d" address="unix:///run/containerd/s/7f407c7e560c955f21408f5e3c27d34d2dce8a43e84bcc0b2af959aa68c90abe" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:29:30.406868 containerd[1566]: time="2026-01-20T01:29:30.406794083Z" level=info msg="connecting to shim 4ecc7c5f51a3606059fb0f608b07ec4d11118aedfce68de1fc9c729365e59e85" address="unix:///run/containerd/s/91755c126f2f0177ee47ec38ea2080042b9750d3661209d025633703965c7ab8" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:29:31.033963 systemd-networkd[1462]: lxc_health: Gained IPv6LL Jan 20 01:29:31.930216 systemd[1]: Started cri-containerd-409a9a04eb823664839eeacfe7e9c7cadbc7f8adda97fb5095e29128a170986d.scope - libcontainer container 409a9a04eb823664839eeacfe7e9c7cadbc7f8adda97fb5095e29128a170986d. Jan 20 01:29:33.309974 systemd[1]: Started cri-containerd-4ecc7c5f51a3606059fb0f608b07ec4d11118aedfce68de1fc9c729365e59e85.scope - libcontainer container 4ecc7c5f51a3606059fb0f608b07ec4d11118aedfce68de1fc9c729365e59e85. 
Jan 20 01:29:34.192954 containerd[1566]: time="2026-01-20T01:29:34.102083710Z" level=error msg="get state for 409a9a04eb823664839eeacfe7e9c7cadbc7f8adda97fb5095e29128a170986d" error="context deadline exceeded" Jan 20 01:29:34.192954 containerd[1566]: time="2026-01-20T01:29:34.102680329Z" level=warning msg="unknown status" status=0 Jan 20 01:29:34.545105 systemd-resolved[1464]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:29:35.067872 systemd-resolved[1464]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:29:35.349224 containerd[1566]: time="2026-01-20T01:29:35.325852398Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:29:35.876899 containerd[1566]: time="2026-01-20T01:29:35.875971711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vmpnm,Uid:6c595e2f-9f64-4525-a3e7-2f0c2e518b06,Namespace:kube-system,Attempt:0,} returns sandbox id \"409a9a04eb823664839eeacfe7e9c7cadbc7f8adda97fb5095e29128a170986d\"" Jan 20 01:29:35.897052 kubelet[3172]: E0120 01:29:35.895732 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:29:35.969159 containerd[1566]: time="2026-01-20T01:29:35.962272825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6grwm,Uid:83193cf1-85b9-4f97-a265-e5c5cec01484,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ecc7c5f51a3606059fb0f608b07ec4d11118aedfce68de1fc9c729365e59e85\"" Jan 20 01:29:35.970020 kubelet[3172]: E0120 01:29:35.965258 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:29:36.007741 containerd[1566]: time="2026-01-20T01:29:36.007173472Z" level=info 
msg="CreateContainer within sandbox \"409a9a04eb823664839eeacfe7e9c7cadbc7f8adda97fb5095e29128a170986d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:29:36.047769 containerd[1566]: time="2026-01-20T01:29:36.043243739Z" level=info msg="CreateContainer within sandbox \"4ecc7c5f51a3606059fb0f608b07ec4d11118aedfce68de1fc9c729365e59e85\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:29:36.153075 systemd-networkd[1462]: lxc_health: Link DOWN Jan 20 01:29:36.153094 systemd-networkd[1462]: lxc_health: Lost carrier Jan 20 01:29:36.362124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3253598261.mount: Deactivated successfully. Jan 20 01:29:36.513289 containerd[1566]: time="2026-01-20T01:29:36.464708388Z" level=info msg="Container eb6c9889153318d4cda753e2e51857a746ccc962e701a25a3e572c43f86e2df0: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:29:36.512801 systemd-networkd[1462]: lxc_health: Link UP Jan 20 01:29:36.542291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1015554790.mount: Deactivated successfully. 
Jan 20 01:29:36.629781 kubelet[3172]: E0120 01:29:36.613122 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:29:36.664889 systemd-networkd[1462]: lxc_health: Gained carrier Jan 20 01:29:36.732919 containerd[1566]: time="2026-01-20T01:29:36.637914953Z" level=info msg="Container 76e5b52d7f5227198a24bcb4990d6777b349fb4e266af52e8f287a90518805b8: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:29:37.096859 containerd[1566]: time="2026-01-20T01:29:37.095243654Z" level=info msg="CreateContainer within sandbox \"4ecc7c5f51a3606059fb0f608b07ec4d11118aedfce68de1fc9c729365e59e85\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb6c9889153318d4cda753e2e51857a746ccc962e701a25a3e572c43f86e2df0\"" Jan 20 01:29:37.216074 containerd[1566]: time="2026-01-20T01:29:37.213138435Z" level=info msg="StartContainer for \"eb6c9889153318d4cda753e2e51857a746ccc962e701a25a3e572c43f86e2df0\"" Jan 20 01:29:37.219922 containerd[1566]: time="2026-01-20T01:29:37.219878882Z" level=info msg="connecting to shim eb6c9889153318d4cda753e2e51857a746ccc962e701a25a3e572c43f86e2df0" address="unix:///run/containerd/s/91755c126f2f0177ee47ec38ea2080042b9750d3661209d025633703965c7ab8" protocol=ttrpc version=3 Jan 20 01:29:37.770695 containerd[1566]: time="2026-01-20T01:29:37.765108812Z" level=info msg="CreateContainer within sandbox \"409a9a04eb823664839eeacfe7e9c7cadbc7f8adda97fb5095e29128a170986d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"76e5b52d7f5227198a24bcb4990d6777b349fb4e266af52e8f287a90518805b8\"" Jan 20 01:29:37.770695 containerd[1566]: time="2026-01-20T01:29:37.768111774Z" level=info msg="StartContainer for \"76e5b52d7f5227198a24bcb4990d6777b349fb4e266af52e8f287a90518805b8\"" Jan 20 01:29:37.869180 containerd[1566]: time="2026-01-20T01:29:37.869117667Z" level=info msg="connecting to shim 
76e5b52d7f5227198a24bcb4990d6777b349fb4e266af52e8f287a90518805b8" address="unix:///run/containerd/s/7f407c7e560c955f21408f5e3c27d34d2dce8a43e84bcc0b2af959aa68c90abe" protocol=ttrpc version=3 Jan 20 01:29:37.992667 systemd-networkd[1462]: lxc_health: Gained IPv6LL Jan 20 01:29:38.320212 systemd[1]: Started cri-containerd-eb6c9889153318d4cda753e2e51857a746ccc962e701a25a3e572c43f86e2df0.scope - libcontainer container eb6c9889153318d4cda753e2e51857a746ccc962e701a25a3e572c43f86e2df0. Jan 20 01:29:38.420731 systemd[1]: Started cri-containerd-76e5b52d7f5227198a24bcb4990d6777b349fb4e266af52e8f287a90518805b8.scope - libcontainer container 76e5b52d7f5227198a24bcb4990d6777b349fb4e266af52e8f287a90518805b8. Jan 20 01:29:39.905828 containerd[1566]: time="2026-01-20T01:29:39.900843151Z" level=info msg="StartContainer for \"eb6c9889153318d4cda753e2e51857a746ccc962e701a25a3e572c43f86e2df0\" returns successfully" Jan 20 01:29:40.086949 containerd[1566]: time="2026-01-20T01:29:40.080101224Z" level=info msg="StartContainer for \"76e5b52d7f5227198a24bcb4990d6777b349fb4e266af52e8f287a90518805b8\" returns successfully" Jan 20 01:29:40.226228 kubelet[3172]: E0120 01:29:40.213244 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:29:40.310881 kubelet[3172]: E0120 01:29:40.301257 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:29:40.873858 kubelet[3172]: I0120 01:29:40.872927 3172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6grwm" podStartSLOduration=377.872905134 podStartE2EDuration="6m17.872905134s" podCreationTimestamp="2026-01-20 01:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-20 01:29:40.546167896 +0000 UTC m=+422.244371467" watchObservedRunningTime="2026-01-20 01:29:40.872905134 +0000 UTC m=+422.571108725" Jan 20 01:29:40.885589 kubelet[3172]: I0120 01:29:40.884139 3172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vmpnm" podStartSLOduration=378.884111511 podStartE2EDuration="6m18.884111511s" podCreationTimestamp="2026-01-20 01:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:29:40.860985413 +0000 UTC m=+422.559188984" watchObservedRunningTime="2026-01-20 01:29:40.884111511 +0000 UTC m=+422.582315172" Jan 20 01:29:41.396682 kubelet[3172]: E0120 01:29:41.395005 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:29:41.396682 kubelet[3172]: E0120 01:29:41.396179 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:29:42.631271 kubelet[3172]: E0120 01:29:42.628234 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:29:42.779013 systemd-networkd[1462]: lxc_health: Link DOWN Jan 20 01:29:42.779029 systemd-networkd[1462]: lxc_health: Lost carrier Jan 20 01:29:43.511187 systemd-networkd[1462]: lxc_health: Link UP Jan 20 01:29:43.579820 systemd-networkd[1462]: lxc_health: Gained carrier Jan 20 01:29:44.573585 kubelet[3172]: E0120 01:29:44.572862 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:29:45.211999 
systemd-networkd[1462]: lxc_health: Gained IPv6LL Jan 20 01:29:48.458751 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:39902.service - OpenSSH per-connection server daemon (10.0.0.1:39902). Jan 20 01:29:51.439634 kubelet[3172]: E0120 01:29:51.437486 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:29:51.605049 kubelet[3172]: E0120 01:29:51.595089 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:29:51.932947 sshd[4723]: Accepted publickey for core from 10.0.0.1 port 39902 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:29:52.395091 sshd-session[4723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:29:52.551064 systemd-logind[1552]: New session 10 of user core. Jan 20 01:29:52.603751 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 20 01:29:53.598026 systemd-networkd[1462]: lxc_health: Link DOWN Jan 20 01:29:53.598044 systemd-networkd[1462]: lxc_health: Lost carrier Jan 20 01:29:53.918216 kubelet[3172]: E0120 01:29:53.794994 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.227s" Jan 20 01:29:53.918216 kubelet[3172]: E0120 01:29:53.816473 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:29:54.264654 systemd-networkd[1462]: lxc_health: Link UP Jan 20 01:29:54.274006 systemd-networkd[1462]: lxc_health: Gained carrier Jan 20 01:29:55.918784 systemd-networkd[1462]: lxc_health: Gained IPv6LL Jan 20 01:29:59.310192 sshd[4728]: Connection closed by 10.0.0.1 port 39902 Jan 20 01:29:59.308783 sshd-session[4723]: pam_unix(sshd:session): session closed for user core Jan 20 01:29:59.411460 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:39902.service: Deactivated successfully. Jan 20 01:29:59.435553 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 01:29:59.455682 systemd-logind[1552]: Session 10 logged out. Waiting for processes to exit. Jan 20 01:29:59.486812 systemd-logind[1552]: Removed session 10. Jan 20 01:30:04.420549 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:45570.service - OpenSSH per-connection server daemon (10.0.0.1:45570). Jan 20 01:30:05.280681 sshd[4777]: Accepted publickey for core from 10.0.0.1 port 45570 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:30:05.319180 sshd-session[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:30:05.409657 systemd-logind[1552]: New session 11 of user core. Jan 20 01:30:05.481840 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 20 01:30:06.997562 sshd[4780]: Connection closed by 10.0.0.1 port 45570 Jan 20 01:30:07.002733 sshd-session[4777]: pam_unix(sshd:session): session closed for user core Jan 20 01:30:07.076187 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:45570.service: Deactivated successfully. Jan 20 01:30:07.116805 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 01:30:07.135661 systemd-logind[1552]: Session 11 logged out. Waiting for processes to exit. Jan 20 01:30:07.155737 systemd-logind[1552]: Removed session 11. Jan 20 01:30:11.574570 kubelet[3172]: E0120 01:30:11.574233 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:30:12.214894 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:42476.service - OpenSSH per-connection server daemon (10.0.0.1:42476). Jan 20 01:30:12.780625 sshd[4794]: Accepted publickey for core from 10.0.0.1 port 42476 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:30:12.788679 sshd-session[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:30:13.443506 systemd-logind[1552]: New session 12 of user core. Jan 20 01:30:13.477095 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 01:30:15.896731 sshd[4797]: Connection closed by 10.0.0.1 port 42476 Jan 20 01:30:15.897128 sshd-session[4794]: pam_unix(sshd:session): session closed for user core Jan 20 01:30:16.013829 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:42476.service: Deactivated successfully. Jan 20 01:30:16.043288 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 01:30:16.074039 systemd-logind[1552]: Session 12 logged out. Waiting for processes to exit. Jan 20 01:30:16.101849 systemd-logind[1552]: Removed session 12. Jan 20 01:30:20.977994 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:51336.service - OpenSSH per-connection server daemon (10.0.0.1:51336). 
Jan 20 01:30:21.461005 sshd[4813]: Accepted publickey for core from 10.0.0.1 port 51336 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:30:21.628284 sshd-session[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:30:21.991850 systemd-logind[1552]: New session 13 of user core. Jan 20 01:30:22.032934 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 01:30:23.344963 sshd[4816]: Connection closed by 10.0.0.1 port 51336 Jan 20 01:30:23.350955 sshd-session[4813]: pam_unix(sshd:session): session closed for user core Jan 20 01:30:23.408970 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:51336.service: Deactivated successfully. Jan 20 01:30:23.433806 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 01:30:23.472022 systemd-logind[1552]: Session 13 logged out. Waiting for processes to exit. Jan 20 01:30:23.492276 systemd-logind[1552]: Removed session 13. Jan 20 01:30:25.582203 kubelet[3172]: E0120 01:30:25.573168 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:30:28.432885 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:51608.service - OpenSSH per-connection server daemon (10.0.0.1:51608). Jan 20 01:30:29.428869 sshd[4833]: Accepted publickey for core from 10.0.0.1 port 51608 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:30:29.434285 sshd-session[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:30:29.560150 systemd-logind[1552]: New session 14 of user core. Jan 20 01:30:29.597951 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 20 01:30:30.909967 sshd[4836]: Connection closed by 10.0.0.1 port 51608 Jan 20 01:30:30.903953 sshd-session[4833]: pam_unix(sshd:session): session closed for user core Jan 20 01:30:30.929682 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:51608.service: Deactivated successfully. Jan 20 01:30:30.947998 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 01:30:30.980416 systemd-logind[1552]: Session 14 logged out. Waiting for processes to exit. Jan 20 01:30:30.995887 systemd-logind[1552]: Removed session 14. Jan 20 01:30:36.146995 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:36788.service - OpenSSH per-connection server daemon (10.0.0.1:36788). Jan 20 01:30:36.745088 sshd[4852]: Accepted publickey for core from 10.0.0.1 port 36788 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:30:36.748974 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:30:36.842873 systemd-logind[1552]: New session 15 of user core. Jan 20 01:30:36.918619 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 01:30:41.202439 sshd[4855]: Connection closed by 10.0.0.1 port 36788 Jan 20 01:30:41.195274 sshd-session[4852]: pam_unix(sshd:session): session closed for user core Jan 20 01:30:41.276878 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:36788.service: Deactivated successfully. Jan 20 01:30:41.314221 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 01:30:41.348930 systemd-logind[1552]: Session 15 logged out. Waiting for processes to exit. Jan 20 01:30:41.402590 systemd-logind[1552]: Removed session 15. 
Jan 20 01:30:43.569240 kubelet[3172]: E0120 01:30:43.562982 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:30:43.569240 kubelet[3172]: E0120 01:30:43.564254 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:30:46.321806 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:35230.service - OpenSSH per-connection server daemon (10.0.0.1:35230). Jan 20 01:30:46.965869 sshd[4873]: Accepted publickey for core from 10.0.0.1 port 35230 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:30:46.970216 sshd-session[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:30:47.074259 systemd-logind[1552]: New session 16 of user core. Jan 20 01:30:47.121839 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 01:30:49.850035 sshd[4876]: Connection closed by 10.0.0.1 port 35230 Jan 20 01:30:49.866186 sshd-session[4873]: pam_unix(sshd:session): session closed for user core Jan 20 01:30:49.979920 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:35230.service: Deactivated successfully. Jan 20 01:30:50.010895 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 01:30:50.042749 systemd-logind[1552]: Session 16 logged out. Waiting for processes to exit. Jan 20 01:30:50.072066 systemd-logind[1552]: Removed session 16. Jan 20 01:30:55.078134 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:57110.service - OpenSSH per-connection server daemon (10.0.0.1:57110). 
Jan 20 01:30:56.136273 sshd[4890]: Accepted publickey for core from 10.0.0.1 port 57110 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:30:56.163160 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:30:56.229104 systemd-logind[1552]: New session 17 of user core. Jan 20 01:30:56.263703 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 01:30:57.179065 sshd[4894]: Connection closed by 10.0.0.1 port 57110 Jan 20 01:30:57.179472 sshd-session[4890]: pam_unix(sshd:session): session closed for user core Jan 20 01:30:57.210150 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:57110.service: Deactivated successfully. Jan 20 01:30:57.261917 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 01:30:57.292838 systemd-logind[1552]: Session 17 logged out. Waiting for processes to exit. Jan 20 01:30:57.321800 systemd-logind[1552]: Removed session 17. Jan 20 01:31:00.630901 kubelet[3172]: E0120 01:31:00.624633 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:31:02.896467 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:57114.service - OpenSSH per-connection server daemon (10.0.0.1:57114). Jan 20 01:31:04.312162 sshd[4909]: Accepted publickey for core from 10.0.0.1 port 57114 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:31:04.469183 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:31:04.618115 systemd-logind[1552]: New session 18 of user core. Jan 20 01:31:04.703974 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 20 01:31:06.779888 sshd[4914]: Connection closed by 10.0.0.1 port 57114 Jan 20 01:31:06.788589 sshd-session[4909]: pam_unix(sshd:session): session closed for user core Jan 20 01:31:06.932691 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:57114.service: Deactivated successfully. Jan 20 01:31:07.001871 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 01:31:07.106581 systemd-logind[1552]: Session 18 logged out. Waiting for processes to exit. Jan 20 01:31:07.140079 systemd-logind[1552]: Removed session 18. Jan 20 01:31:09.623597 kubelet[3172]: E0120 01:31:09.622796 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:31:11.928795 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:46586.service - OpenSSH per-connection server daemon (10.0.0.1:46586). Jan 20 01:31:12.439270 sshd[4928]: Accepted publickey for core from 10.0.0.1 port 46586 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:31:12.455902 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:31:12.641744 systemd-logind[1552]: New session 19 of user core. Jan 20 01:31:12.735688 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 01:31:13.568554 kubelet[3172]: E0120 01:31:13.562673 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:31:14.267464 sshd[4931]: Connection closed by 10.0.0.1 port 46586 Jan 20 01:31:14.275899 sshd-session[4928]: pam_unix(sshd:session): session closed for user core Jan 20 01:31:14.315274 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:46586.service: Deactivated successfully. Jan 20 01:31:14.356655 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 01:31:14.372285 systemd-logind[1552]: Session 19 logged out. 
Waiting for processes to exit. Jan 20 01:31:14.415876 systemd-logind[1552]: Removed session 19. Jan 20 01:31:14.584888 kubelet[3172]: E0120 01:31:14.583009 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:31:19.412857 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:48376.service - OpenSSH per-connection server daemon (10.0.0.1:48376). Jan 20 01:31:20.110703 sshd[4947]: Accepted publickey for core from 10.0.0.1 port 48376 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:31:20.115740 sshd-session[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:31:20.226647 systemd-logind[1552]: New session 20 of user core. Jan 20 01:31:20.279737 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 01:31:23.891538 sshd[4950]: Connection closed by 10.0.0.1 port 48376 Jan 20 01:31:23.941821 sshd-session[4947]: pam_unix(sshd:session): session closed for user core Jan 20 01:31:24.059258 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:48376.service: Deactivated successfully. Jan 20 01:31:24.096043 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 01:31:24.189882 systemd-logind[1552]: Session 20 logged out. Waiting for processes to exit. Jan 20 01:31:24.220772 systemd-logind[1552]: Removed session 20. Jan 20 01:31:28.570738 kubelet[3172]: E0120 01:31:28.567898 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:31:29.090952 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:33768.service - OpenSSH per-connection server daemon (10.0.0.1:33768). 
Jan 20 01:31:29.722399 sshd[4966]: Accepted publickey for core from 10.0.0.1 port 33768 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:31:29.744574 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:31:29.816588 systemd-logind[1552]: New session 21 of user core.
Jan 20 01:31:29.895813 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 20 01:31:31.517617 sshd[4969]: Connection closed by 10.0.0.1 port 33768
Jan 20 01:31:31.545952 sshd-session[4966]: pam_unix(sshd:session): session closed for user core
Jan 20 01:31:31.582060 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:33768.service: Deactivated successfully.
Jan 20 01:31:31.585729 systemd[1]: session-21.scope: Deactivated successfully.
Jan 20 01:31:31.648648 systemd-logind[1552]: Session 21 logged out. Waiting for processes to exit.
Jan 20 01:31:31.683754 systemd-logind[1552]: Removed session 21.
Jan 20 01:31:36.803945 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:57082.service - OpenSSH per-connection server daemon (10.0.0.1:57082).
Jan 20 01:31:39.611917 sshd[4983]: Accepted publickey for core from 10.0.0.1 port 57082 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:31:39.911900 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:31:40.693184 kubelet[3172]: E0120 01:31:40.683248 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.979s"
Jan 20 01:31:48.062935 kernel: sched: DL replenish lagged too much
Jan 20 01:31:48.513490 containerd[1566]: time="2026-01-20T01:31:48.388864032Z" level=warning msg="container event discarded" container=ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517 type=CONTAINER_CREATED_EVENT
Jan 20 01:31:48.827660 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 20 01:31:48.950892 systemd-logind[1552]: New session 22 of user core.
Jan 20 01:31:49.678916 systemd[1]: cri-containerd-171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4.scope: Deactivated successfully.
Jan 20 01:31:49.679728 systemd[1]: cri-containerd-171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4.scope: Consumed 32.266s CPU time, 53.2M memory peak, 348K read from disk.
Jan 20 01:31:49.853848 containerd[1566]: time="2026-01-20T01:31:49.845732871Z" level=info msg="received container exit event container_id:\"171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4\" id:\"171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4\" pid:3057 exit_status:1 exited_at:{seconds:1768872709 nanos:786590715}"
Jan 20 01:31:50.279573 kubelet[3172]: E0120 01:31:50.279497 3172 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 20 01:31:50.422877 containerd[1566]: time="2026-01-20T01:31:50.422686128Z" level=warning msg="container event discarded" container=ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517 type=CONTAINER_STARTED_EVENT
Jan 20 01:31:50.444705 containerd[1566]: time="2026-01-20T01:31:50.444643646Z" level=warning msg="container event discarded" container=ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517 type=CONTAINER_STOPPED_EVENT
Jan 20 01:31:50.583667 containerd[1566]: time="2026-01-20T01:31:50.583476414Z" level=warning msg="container event discarded" container=a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046 type=CONTAINER_CREATED_EVENT
Jan 20 01:31:52.037547 kubelet[3172]: E0120 01:31:52.019253 3172 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Jan 20 01:31:52.279620 containerd[1566]: time="2026-01-20T01:31:52.273256896Z" level=warning msg="container event discarded" container=a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046 type=CONTAINER_STARTED_EVENT
Jan 20 01:31:53.301263 systemd[1]: cri-containerd-53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb.scope: Deactivated successfully.
Jan 20 01:31:53.386874 systemd[1]: cri-containerd-53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb.scope: Consumed 4.988s CPU time, 28.1M memory peak, 4K written to disk.
Jan 20 01:31:53.629165 containerd[1566]: time="2026-01-20T01:31:53.573589179Z" level=warning msg="container event discarded" container=a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046 type=CONTAINER_STOPPED_EVENT
Jan 20 01:31:54.376681 systemd[1]: cri-containerd-8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073.scope: Deactivated successfully.
Jan 20 01:31:54.477820 systemd[1]: cri-containerd-8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073.scope: Consumed 37.800s CPU time, 26.5M memory peak, 260K read from disk.
Jan 20 01:31:55.122251 containerd[1566]: time="2026-01-20T01:31:55.109201663Z" level=warning msg="container event discarded" container=6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b type=CONTAINER_CREATED_EVENT
Jan 20 01:31:55.203773 kubelet[3172]: E0120 01:31:55.203712 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.512s"
Jan 20 01:31:55.316564 containerd[1566]: time="2026-01-20T01:31:55.316274667Z" level=info msg="received container exit event container_id:\"53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb\" id:\"53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb\" pid:3924 exit_status:1 exited_at:{seconds:1768872715 nanos:223559100}"
Jan 20 01:31:55.454686 kubelet[3172]: E0120 01:31:55.390537 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:31:55.790842 containerd[1566]: time="2026-01-20T01:31:55.632464870Z" level=warning msg="container event discarded" container=6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b type=CONTAINER_STARTED_EVENT
Jan 20 01:31:56.144183 kubelet[3172]: E0120 01:31:56.098778 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:31:56.144183 kubelet[3172]: E0120 01:31:56.132933 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:31:56.144598 containerd[1566]: time="2026-01-20T01:31:56.099550307Z" level=info msg="received container exit event container_id:\"8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073\" id:\"8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073\" pid:2893 exit_status:1 exited_at:{seconds:1768872715 nanos:871913027}"
Jan 20 01:31:56.208676 kubelet[3172]: E0120 01:31:56.207811 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:31:56.807739 sshd[4990]: Connection closed by 10.0.0.1 port 57082
Jan 20 01:31:56.779935 sshd-session[4983]: pam_unix(sshd:session): session closed for user core
Jan 20 01:31:56.875528 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:57082.service: Deactivated successfully.
Jan 20 01:31:56.919897 systemd[1]: session-22.scope: Deactivated successfully.
Jan 20 01:31:57.115290 containerd[1566]: time="2026-01-20T01:31:57.037615018Z" level=warning msg="container event discarded" container=6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b type=CONTAINER_STOPPED_EVENT
Jan 20 01:31:57.053779 systemd-logind[1552]: Session 22 logged out. Waiting for processes to exit.
Jan 20 01:31:57.099823 systemd-logind[1552]: Removed session 22.
Jan 20 01:31:57.741228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4-rootfs.mount: Deactivated successfully.
Jan 20 01:31:57.838837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073-rootfs.mount: Deactivated successfully.
Jan 20 01:31:58.014251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb-rootfs.mount: Deactivated successfully.
Jan 20 01:31:58.261839 containerd[1566]: time="2026-01-20T01:31:58.259782754Z" level=warning msg="container event discarded" container=4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86 type=CONTAINER_CREATED_EVENT
Jan 20 01:31:58.353876 kubelet[3172]: I0120 01:31:58.334232 3172 scope.go:117] "RemoveContainer" containerID="53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb"
Jan 20 01:31:58.353876 kubelet[3172]: E0120 01:31:58.334583 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:31:58.438486 containerd[1566]: time="2026-01-20T01:31:58.437762548Z" level=info msg="CreateContainer within sandbox \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}"
Jan 20 01:31:58.498475 kubelet[3172]: I0120 01:31:58.467771 3172 scope.go:117] "RemoveContainer" containerID="2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80"
Jan 20 01:31:58.711600 kubelet[3172]: I0120 01:31:58.704141 3172 scope.go:117] "RemoveContainer" containerID="171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4"
Jan 20 01:31:58.847527 kubelet[3172]: E0120 01:31:58.835534 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:31:58.869837 containerd[1566]: time="2026-01-20T01:31:58.869787723Z" level=info msg="RemoveContainer for \"2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80\""
Jan 20 01:31:59.045250 containerd[1566]: time="2026-01-20T01:31:59.044892435Z" level=info msg="CreateContainer within sandbox \"77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}"
Jan 20 01:31:59.087509 kubelet[3172]: I0120 01:31:59.084219 3172 scope.go:117] "RemoveContainer" containerID="8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073"
Jan 20 01:31:59.087509 kubelet[3172]: E0120 01:31:59.084634 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:31:59.213084 containerd[1566]: time="2026-01-20T01:31:59.207847052Z" level=info msg="CreateContainer within sandbox \"3e99c7aa4aa9d0e067a82a258af6133e67ddeff32530aae29e81c04c9bbdda20\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 20 01:31:59.280649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1565880221.mount: Deactivated successfully.
Jan 20 01:31:59.400584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1670631391.mount: Deactivated successfully.
Jan 20 01:31:59.426784 containerd[1566]: time="2026-01-20T01:31:59.423925739Z" level=info msg="Container 9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:31:59.616595 containerd[1566]: time="2026-01-20T01:31:59.613622620Z" level=info msg="Container dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:31:59.675936 containerd[1566]: time="2026-01-20T01:31:59.675869515Z" level=info msg="RemoveContainer for \"2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80\" returns successfully"
Jan 20 01:31:59.741911 containerd[1566]: time="2026-01-20T01:31:59.741848848Z" level=info msg="CreateContainer within sandbox \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a\""
Jan 20 01:31:59.768656 containerd[1566]: time="2026-01-20T01:31:59.765727176Z" level=info msg="StartContainer for \"9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a\""
Jan 20 01:31:59.835794 containerd[1566]: time="2026-01-20T01:31:59.815917840Z" level=info msg="CreateContainer within sandbox \"77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b\""
Jan 20 01:31:59.880510 containerd[1566]: time="2026-01-20T01:31:59.874118653Z" level=info msg="connecting to shim 9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a" address="unix:///run/containerd/s/a2e0cae1905ed5d0927600f03d22acdaeeffefa7c78fd47c144e009d52a1c3e4" protocol=ttrpc version=3
Jan 20 01:31:59.883257 containerd[1566]: time="2026-01-20T01:31:59.876151326Z" level=info msg="Container 852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:31:59.894555 containerd[1566]: time="2026-01-20T01:31:59.894272696Z" level=info msg="StartContainer for \"dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b\""
Jan 20 01:31:59.901841 containerd[1566]: time="2026-01-20T01:31:59.901796694Z" level=info msg="connecting to shim dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b" address="unix:///run/containerd/s/67df6f9f643dff0e5a2de5d8ebba56686cc2aa08237c90040202dd99a7cd6a97" protocol=ttrpc version=3
Jan 20 01:32:00.278759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount564876252.mount: Deactivated successfully.
Jan 20 01:32:00.615540 containerd[1566]: time="2026-01-20T01:32:00.606092808Z" level=info msg="CreateContainer within sandbox \"3e99c7aa4aa9d0e067a82a258af6133e67ddeff32530aae29e81c04c9bbdda20\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03\""
Jan 20 01:32:01.143144 containerd[1566]: time="2026-01-20T01:32:01.122910461Z" level=info msg="StartContainer for \"852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03\""
Jan 20 01:32:01.290094 containerd[1566]: time="2026-01-20T01:32:01.290039181Z" level=info msg="connecting to shim 852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03" address="unix:///run/containerd/s/b300e0b873644f498b396527866ecbf526e8b806b595a74b8bd537fbc88b091f" protocol=ttrpc version=3
Jan 20 01:32:01.340744 systemd[1]: Started cri-containerd-9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a.scope - libcontainer container 9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a.
Jan 20 01:32:01.367672 systemd[1]: Started cri-containerd-dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b.scope - libcontainer container dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b.
Jan 20 01:32:01.462898 containerd[1566]: time="2026-01-20T01:32:01.456815706Z" level=warning msg="container event discarded" container=4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86 type=CONTAINER_STARTED_EVENT
Jan 20 01:32:01.904807 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:57290.service - OpenSSH per-connection server daemon (10.0.0.1:57290).
Jan 20 01:32:02.271276 systemd[1]: Started cri-containerd-852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03.scope - libcontainer container 852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03.
Jan 20 01:32:03.011602 sshd[5092]: Accepted publickey for core from 10.0.0.1 port 57290 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:32:03.023895 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:32:03.095895 systemd-logind[1552]: New session 23 of user core.
Jan 20 01:32:03.131928 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 20 01:32:03.400279 containerd[1566]: time="2026-01-20T01:32:03.399687149Z" level=info msg="StartContainer for \"9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a\" returns successfully"
Jan 20 01:32:03.602116 kubelet[3172]: E0120 01:32:03.588899 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:03.896840 containerd[1566]: time="2026-01-20T01:32:03.890533202Z" level=warning msg="container event discarded" container=4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86 type=CONTAINER_STOPPED_EVENT
Jan 20 01:32:04.095158 containerd[1566]: time="2026-01-20T01:32:04.094203762Z" level=info msg="StartContainer for \"dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b\" returns successfully"
Jan 20 01:32:04.698793 sshd[5108]: Connection closed by 10.0.0.1 port 57290
Jan 20 01:32:04.777635 sshd-session[5092]: pam_unix(sshd:session): session closed for user core
Jan 20 01:32:04.935270 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:57290.service: Deactivated successfully.
Jan 20 01:32:04.997467 containerd[1566]: time="2026-01-20T01:32:04.995668426Z" level=info msg="StartContainer for \"852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03\" returns successfully"
Jan 20 01:32:05.013655 systemd[1]: session-23.scope: Deactivated successfully.
Jan 20 01:32:05.109735 systemd-logind[1552]: Session 23 logged out. Waiting for processes to exit.
Jan 20 01:32:05.166120 systemd-logind[1552]: Removed session 23.
Jan 20 01:32:05.476073 kubelet[3172]: E0120 01:32:05.475887 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:05.816798 containerd[1566]: time="2026-01-20T01:32:05.816524153Z" level=warning msg="container event discarded" container=0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2 type=CONTAINER_CREATED_EVENT
Jan 20 01:32:06.169818 kubelet[3172]: E0120 01:32:06.169773 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:07.019814 kubelet[3172]: E0120 01:32:07.019773 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:07.598502 containerd[1566]: time="2026-01-20T01:32:07.594258744Z" level=warning msg="container event discarded" container=0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2 type=CONTAINER_STARTED_EVENT
Jan 20 01:32:08.874490 kubelet[3172]: E0120 01:32:08.871174 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:08.888853 kubelet[3172]: E0120 01:32:08.879533 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:09.799563 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:39180.service - OpenSSH per-connection server daemon (10.0.0.1:39180).
Jan 20 01:32:10.218156 sshd[5158]: Accepted publickey for core from 10.0.0.1 port 39180 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:32:10.226134 sshd-session[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:32:10.296470 systemd-logind[1552]: New session 24 of user core.
Jan 20 01:32:10.324191 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 20 01:32:11.361155 sshd[5162]: Connection closed by 10.0.0.1 port 39180
Jan 20 01:32:11.365615 sshd-session[5158]: pam_unix(sshd:session): session closed for user core
Jan 20 01:32:11.393552 systemd-logind[1552]: Session 24 logged out. Waiting for processes to exit.
Jan 20 01:32:11.403128 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:39180.service: Deactivated successfully.
Jan 20 01:32:11.441677 systemd[1]: session-24.scope: Deactivated successfully.
Jan 20 01:32:11.505573 systemd-logind[1552]: Removed session 24.
Jan 20 01:32:13.595722 kubelet[3172]: E0120 01:32:13.588211 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:16.440805 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:51580.service - OpenSSH per-connection server daemon (10.0.0.1:51580).
Jan 20 01:32:16.622438 kubelet[3172]: E0120 01:32:16.620807 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:17.081198 sshd[5182]: Accepted publickey for core from 10.0.0.1 port 51580 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:32:17.102650 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:32:17.174840 systemd-logind[1552]: New session 25 of user core.
Jan 20 01:32:17.194549 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 20 01:32:17.580288 kubelet[3172]: E0120 01:32:17.578698 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:18.750287 kubelet[3172]: E0120 01:32:18.750240 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:18.763518 kubelet[3172]: E0120 01:32:18.758192 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:21.805033 containerd[1566]: time="2026-01-20T01:32:21.797570821Z" level=warning msg="container event discarded" container=53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb type=CONTAINER_CREATED_EVENT
Jan 20 01:32:24.079072 kubelet[3172]: E0120 01:32:24.073646 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:24.286142 sshd[5185]: Connection closed by 10.0.0.1 port 51580
Jan 20 01:32:24.323816 sshd-session[5182]: pam_unix(sshd:session): session closed for user core
Jan 20 01:32:24.495188 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:51580.service: Deactivated successfully.
Jan 20 01:32:24.520844 systemd[1]: session-25.scope: Deactivated successfully.
Jan 20 01:32:24.552738 systemd-logind[1552]: Session 25 logged out. Waiting for processes to exit.
Jan 20 01:32:24.606261 systemd-logind[1552]: Removed session 25.
Jan 20 01:32:26.927654 containerd[1566]: time="2026-01-20T01:32:26.925450966Z" level=warning msg="container event discarded" container=53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb type=CONTAINER_STARTED_EVENT
Jan 20 01:32:29.384729 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:47520.service - OpenSSH per-connection server daemon (10.0.0.1:47520).
Jan 20 01:32:29.980073 sshd[5201]: Accepted publickey for core from 10.0.0.1 port 47520 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:32:29.994857 sshd-session[5201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:32:30.143485 systemd-logind[1552]: New session 26 of user core.
Jan 20 01:32:30.236664 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 20 01:32:31.235754 sshd[5204]: Connection closed by 10.0.0.1 port 47520
Jan 20 01:32:31.238777 sshd-session[5201]: pam_unix(sshd:session): session closed for user core
Jan 20 01:32:31.288060 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:47520.service: Deactivated successfully.
Jan 20 01:32:31.310118 systemd[1]: session-26.scope: Deactivated successfully.
Jan 20 01:32:31.331517 systemd-logind[1552]: Session 26 logged out. Waiting for processes to exit.
Jan 20 01:32:31.345919 systemd-logind[1552]: Removed session 26.
Jan 20 01:32:36.323127 systemd[1]: Started sshd@26-10.0.0.15:22-10.0.0.1:37408.service - OpenSSH per-connection server daemon (10.0.0.1:37408).
Jan 20 01:32:36.953211 sshd[5220]: Accepted publickey for core from 10.0.0.1 port 37408 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:32:36.987193 sshd-session[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:32:37.097082 systemd-logind[1552]: New session 27 of user core.
Jan 20 01:32:37.107767 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 20 01:32:39.020064 sshd[5223]: Connection closed by 10.0.0.1 port 37408
Jan 20 01:32:39.034677 sshd-session[5220]: pam_unix(sshd:session): session closed for user core
Jan 20 01:32:39.070680 systemd[1]: sshd@26-10.0.0.15:22-10.0.0.1:37408.service: Deactivated successfully.
Jan 20 01:32:39.089745 systemd[1]: session-27.scope: Deactivated successfully.
Jan 20 01:32:39.102593 systemd-logind[1552]: Session 27 logged out. Waiting for processes to exit.
Jan 20 01:32:39.117471 systemd-logind[1552]: Removed session 27.
Jan 20 01:32:44.101799 systemd[1]: Started sshd@27-10.0.0.15:22-10.0.0.1:37424.service - OpenSSH per-connection server daemon (10.0.0.1:37424).
Jan 20 01:32:44.603494 sshd[5241]: Accepted publickey for core from 10.0.0.1 port 37424 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:32:44.625725 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:32:44.717602 systemd-logind[1552]: New session 28 of user core.
Jan 20 01:32:44.779162 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 20 01:32:46.033418 sshd[5244]: Connection closed by 10.0.0.1 port 37424
Jan 20 01:32:46.036073 sshd-session[5241]: pam_unix(sshd:session): session closed for user core
Jan 20 01:32:46.127474 systemd-logind[1552]: Session 28 logged out. Waiting for processes to exit.
Jan 20 01:32:46.137647 systemd[1]: sshd@27-10.0.0.15:22-10.0.0.1:37424.service: Deactivated successfully.
Jan 20 01:32:46.171630 systemd[1]: session-28.scope: Deactivated successfully.
Jan 20 01:32:46.245292 systemd-logind[1552]: Removed session 28.
Jan 20 01:32:51.158822 systemd[1]: Started sshd@28-10.0.0.15:22-10.0.0.1:47436.service - OpenSSH per-connection server daemon (10.0.0.1:47436).
Jan 20 01:32:52.147967 sshd[5258]: Accepted publickey for core from 10.0.0.1 port 47436 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:32:52.170525 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:32:52.270287 systemd-logind[1552]: New session 29 of user core.
Jan 20 01:32:52.309628 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 20 01:32:53.945411 sshd[5261]: Connection closed by 10.0.0.1 port 47436
Jan 20 01:32:53.962660 sshd-session[5258]: pam_unix(sshd:session): session closed for user core
Jan 20 01:32:54.011572 systemd[1]: sshd@28-10.0.0.15:22-10.0.0.1:47436.service: Deactivated successfully.
Jan 20 01:32:54.024086 systemd[1]: session-29.scope: Deactivated successfully.
Jan 20 01:32:54.068119 systemd-logind[1552]: Session 29 logged out. Waiting for processes to exit.
Jan 20 01:32:54.093026 systemd-logind[1552]: Removed session 29.
Jan 20 01:32:59.052650 systemd[1]: Started sshd@29-10.0.0.15:22-10.0.0.1:35562.service - OpenSSH per-connection server daemon (10.0.0.1:35562).
Jan 20 01:32:59.833638 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 35562 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:32:59.840112 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:32:59.898169 systemd-logind[1552]: New session 30 of user core.
Jan 20 01:32:59.947766 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 20 01:33:01.147174 sshd[5279]: Connection closed by 10.0.0.1 port 35562
Jan 20 01:33:01.144600 sshd-session[5276]: pam_unix(sshd:session): session closed for user core
Jan 20 01:33:01.229099 systemd[1]: sshd@29-10.0.0.15:22-10.0.0.1:35562.service: Deactivated successfully.
Jan 20 01:33:01.287513 systemd[1]: session-30.scope: Deactivated successfully.
Jan 20 01:33:01.310261 systemd-logind[1552]: Session 30 logged out. Waiting for processes to exit.
Jan 20 01:33:01.345151 systemd-logind[1552]: Removed session 30.
Jan 20 01:33:06.150758 kubelet[3172]: E0120 01:33:06.139512 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:33:06.298718 systemd[1]: Started sshd@30-10.0.0.15:22-10.0.0.1:55284.service - OpenSSH per-connection server daemon (10.0.0.1:55284).
Jan 20 01:33:07.241814 sshd[5293]: Accepted publickey for core from 10.0.0.1 port 55284 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:33:07.287010 sshd-session[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:33:07.394484 systemd-logind[1552]: New session 31 of user core.
Jan 20 01:33:07.830193 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 20 01:33:09.500195 sshd[5296]: Connection closed by 10.0.0.1 port 55284
Jan 20 01:33:09.503550 sshd-session[5293]: pam_unix(sshd:session): session closed for user core
Jan 20 01:33:09.534210 systemd[1]: sshd@30-10.0.0.15:22-10.0.0.1:55284.service: Deactivated successfully.
Jan 20 01:33:09.581570 systemd[1]: session-31.scope: Deactivated successfully.
Jan 20 01:33:09.626670 systemd-logind[1552]: Session 31 logged out. Waiting for processes to exit.
Jan 20 01:33:09.629801 systemd-logind[1552]: Removed session 31.
Jan 20 01:33:14.638028 systemd[1]: Started sshd@31-10.0.0.15:22-10.0.0.1:45178.service - OpenSSH per-connection server daemon (10.0.0.1:45178).
Jan 20 01:33:15.463174 sshd[5312]: Accepted publickey for core from 10.0.0.1 port 45178 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:33:15.473696 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:33:15.585512 systemd-logind[1552]: New session 32 of user core.
Jan 20 01:33:15.650515 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 20 01:33:17.089402 sshd[5315]: Connection closed by 10.0.0.1 port 45178
Jan 20 01:33:17.096951 sshd-session[5312]: pam_unix(sshd:session): session closed for user core
Jan 20 01:33:17.138754 systemd[1]: sshd@31-10.0.0.15:22-10.0.0.1:45178.service: Deactivated successfully.
Jan 20 01:33:17.143524 systemd[1]: session-32.scope: Deactivated successfully.
Jan 20 01:33:17.145679 systemd-logind[1552]: Session 32 logged out. Waiting for processes to exit.
Jan 20 01:33:17.295008 systemd-logind[1552]: Removed session 32.
Jan 20 01:33:19.603495 kubelet[3172]: E0120 01:33:19.596854 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:33:22.213631 systemd[1]: Started sshd@32-10.0.0.15:22-10.0.0.1:45184.service - OpenSSH per-connection server daemon (10.0.0.1:45184).
Jan 20 01:33:22.960509 sshd[5330]: Accepted publickey for core from 10.0.0.1 port 45184 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:33:22.977818 sshd-session[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:33:23.297502 systemd-logind[1552]: New session 33 of user core.
Jan 20 01:33:23.377689 systemd[1]: Started session-33.scope - Session 33 of User core.
Jan 20 01:33:25.260545 sshd[5333]: Connection closed by 10.0.0.1 port 45184
Jan 20 01:33:25.262620 sshd-session[5330]: pam_unix(sshd:session): session closed for user core
Jan 20 01:33:25.374064 systemd[1]: sshd@32-10.0.0.15:22-10.0.0.1:45184.service: Deactivated successfully.
Jan 20 01:33:25.463868 systemd[1]: session-33.scope: Deactivated successfully.
Jan 20 01:33:25.485516 systemd-logind[1552]: Session 33 logged out. Waiting for processes to exit.
Jan 20 01:33:25.503513 systemd-logind[1552]: Removed session 33.
Jan 20 01:33:26.786227 kubelet[3172]: E0120 01:33:26.784570 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:33:26.827715 kubelet[3172]: E0120 01:33:26.827651 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:33:30.428470 systemd[1]: Started sshd@33-10.0.0.15:22-10.0.0.1:58608.service - OpenSSH per-connection server daemon (10.0.0.1:58608).
Jan 20 01:33:31.564739 sshd[5347]: Accepted publickey for core from 10.0.0.1 port 58608 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:33:31.551264 sshd-session[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:33:31.648853 systemd-logind[1552]: New session 34 of user core.
Jan 20 01:33:31.682440 systemd[1]: Started session-34.scope - Session 34 of User core.
Jan 20 01:33:34.113593 sshd[5351]: Connection closed by 10.0.0.1 port 58608
Jan 20 01:33:34.123738 sshd-session[5347]: pam_unix(sshd:session): session closed for user core
Jan 20 01:33:34.374553 systemd[1]: sshd@33-10.0.0.15:22-10.0.0.1:58608.service: Deactivated successfully.
Jan 20 01:33:34.399474 systemd[1]: session-34.scope: Deactivated successfully.
Jan 20 01:33:34.412526 systemd-logind[1552]: Session 34 logged out. Waiting for processes to exit.
Jan 20 01:33:34.577282 systemd-logind[1552]: Removed session 34.
Jan 20 01:33:34.594417 kubelet[3172]: E0120 01:33:34.589709 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:33:36.606527 kubelet[3172]: E0120 01:33:36.605455 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:33:38.568586 kubelet[3172]: E0120 01:33:38.568533 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:33:39.365677 systemd[1]: Started sshd@34-10.0.0.15:22-10.0.0.1:44022.service - OpenSSH per-connection server daemon (10.0.0.1:44022).
Jan 20 01:33:39.565272 kubelet[3172]: E0120 01:33:39.565230 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:33:40.511229 sshd[5366]: Accepted publickey for core from 10.0.0.1 port 44022 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:33:40.535417 sshd-session[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:33:40.630548 systemd-logind[1552]: New session 35 of user core.
Jan 20 01:33:40.703030 systemd[1]: Started session-35.scope - Session 35 of User core.
Jan 20 01:33:42.229583 sshd[5369]: Connection closed by 10.0.0.1 port 44022
Jan 20 01:33:42.246803 sshd-session[5366]: pam_unix(sshd:session): session closed for user core
Jan 20 01:33:42.302745 systemd[1]: sshd@34-10.0.0.15:22-10.0.0.1:44022.service: Deactivated successfully.
Jan 20 01:33:42.334854 systemd[1]: session-35.scope: Deactivated successfully.
Jan 20 01:33:42.364583 systemd-logind[1552]: Session 35 logged out. Waiting for processes to exit.
Jan 20 01:33:42.420508 systemd-logind[1552]: Removed session 35.
Jan 20 01:33:47.383656 systemd[1]: Started sshd@35-10.0.0.15:22-10.0.0.1:54342.service - OpenSSH per-connection server daemon (10.0.0.1:54342).
Jan 20 01:33:48.223194 sshd[5387]: Accepted publickey for core from 10.0.0.1 port 54342 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:33:48.265842 sshd-session[5387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:33:48.406116 systemd-logind[1552]: New session 36 of user core.
Jan 20 01:33:48.466452 systemd[1]: Started session-36.scope - Session 36 of User core.
Jan 20 01:33:49.602206 sshd[5390]: Connection closed by 10.0.0.1 port 54342
Jan 20 01:33:49.600751 sshd-session[5387]: pam_unix(sshd:session): session closed for user core
Jan 20 01:33:49.661756 systemd[1]: sshd@35-10.0.0.15:22-10.0.0.1:54342.service: Deactivated successfully.
Jan 20 01:33:49.693798 systemd[1]: session-36.scope: Deactivated successfully.
Jan 20 01:33:49.720148 systemd-logind[1552]: Session 36 logged out. Waiting for processes to exit.
Jan 20 01:33:49.766529 systemd-logind[1552]: Removed session 36.
Jan 20 01:33:54.871155 systemd[1]: Started sshd@36-10.0.0.15:22-10.0.0.1:49462.service - OpenSSH per-connection server daemon (10.0.0.1:49462).
Jan 20 01:33:55.876059 sshd[5405]: Accepted publickey for core from 10.0.0.1 port 49462 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:33:55.888879 sshd-session[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:33:55.954184 systemd-logind[1552]: New session 37 of user core.
Jan 20 01:33:56.002140 systemd[1]: Started session-37.scope - Session 37 of User core.
Jan 20 01:33:57.610612 sshd[5408]: Connection closed by 10.0.0.1 port 49462
Jan 20 01:33:57.607792 sshd-session[5405]: pam_unix(sshd:session): session closed for user core
Jan 20 01:33:57.672663 systemd[1]: sshd@36-10.0.0.15:22-10.0.0.1:49462.service: Deactivated successfully.
Jan 20 01:33:57.782596 systemd[1]: session-37.scope: Deactivated successfully.
Jan 20 01:33:57.858739 systemd-logind[1552]: Session 37 logged out. Waiting for processes to exit.
Jan 20 01:33:57.929820 systemd-logind[1552]: Removed session 37.
Jan 20 01:34:02.794815 systemd[1]: Started sshd@37-10.0.0.15:22-10.0.0.1:49474.service - OpenSSH per-connection server daemon (10.0.0.1:49474).
Jan 20 01:34:04.010955 sshd[5423]: Accepted publickey for core from 10.0.0.1 port 49474 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:34:04.018672 sshd-session[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:34:04.130259 systemd-logind[1552]: New session 38 of user core.
Jan 20 01:34:04.204416 systemd[1]: Started session-38.scope - Session 38 of User core.
Jan 20 01:34:06.234480 sshd[5426]: Connection closed by 10.0.0.1 port 49474
Jan 20 01:34:06.241041 sshd-session[5423]: pam_unix(sshd:session): session closed for user core
Jan 20 01:34:06.321763 systemd[1]: sshd@37-10.0.0.15:22-10.0.0.1:49474.service: Deactivated successfully.
Jan 20 01:34:06.339594 systemd[1]: session-38.scope: Deactivated successfully.
Jan 20 01:34:06.409259 systemd-logind[1552]: Session 38 logged out. Waiting for processes to exit.
Jan 20 01:34:06.422209 systemd-logind[1552]: Removed session 38.
Jan 20 01:34:11.434778 systemd[1]: Started sshd@38-10.0.0.15:22-10.0.0.1:38354.service - OpenSSH per-connection server daemon (10.0.0.1:38354).
Jan 20 01:34:11.590403 kubelet[3172]: E0120 01:34:11.583729 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:34:12.061558 sshd[5440]: Accepted publickey for core from 10.0.0.1 port 38354 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:34:12.073078 sshd-session[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:34:12.108519 systemd-logind[1552]: New session 39 of user core.
Jan 20 01:34:12.140646 systemd[1]: Started session-39.scope - Session 39 of User core.
Jan 20 01:34:13.493772 sshd[5443]: Connection closed by 10.0.0.1 port 38354
Jan 20 01:34:13.490635 sshd-session[5440]: pam_unix(sshd:session): session closed for user core
Jan 20 01:34:13.594690 systemd[1]: sshd@38-10.0.0.15:22-10.0.0.1:38354.service: Deactivated successfully.
Jan 20 01:34:13.638790 systemd[1]: session-39.scope: Deactivated successfully.
Jan 20 01:34:13.694487 systemd-logind[1552]: Session 39 logged out. Waiting for processes to exit.
Jan 20 01:34:13.724833 systemd-logind[1552]: Removed session 39.
Jan 20 01:34:18.747179 systemd[1]: Started sshd@39-10.0.0.15:22-10.0.0.1:51948.service - OpenSSH per-connection server daemon (10.0.0.1:51948).
Jan 20 01:34:19.308427 sshd[5460]: Accepted publickey for core from 10.0.0.1 port 51948 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:34:19.324894 sshd-session[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:34:19.367667 systemd-logind[1552]: New session 40 of user core.
Jan 20 01:34:19.404936 systemd[1]: Started session-40.scope - Session 40 of User core.
Jan 20 01:34:20.738242 sshd[5463]: Connection closed by 10.0.0.1 port 51948
Jan 20 01:34:20.735022 sshd-session[5460]: pam_unix(sshd:session): session closed for user core
Jan 20 01:34:20.838728 systemd[1]: sshd@39-10.0.0.15:22-10.0.0.1:51948.service: Deactivated successfully.
Jan 20 01:34:20.848150 systemd-logind[1552]: Session 40 logged out. Waiting for processes to exit.
Jan 20 01:34:20.917864 systemd[1]: session-40.scope: Deactivated successfully.
Jan 20 01:34:21.023127 systemd-logind[1552]: Removed session 40.
Jan 20 01:34:25.814917 systemd[1]: Started sshd@40-10.0.0.15:22-10.0.0.1:53114.service - OpenSSH per-connection server daemon (10.0.0.1:53114).
Jan 20 01:34:26.262917 sshd[5477]: Accepted publickey for core from 10.0.0.1 port 53114 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:34:26.278541 sshd-session[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:34:26.342960 systemd-logind[1552]: New session 41 of user core.
Jan 20 01:34:26.388015 systemd[1]: Started session-41.scope - Session 41 of User core.
Jan 20 01:34:28.010829 sshd[5480]: Connection closed by 10.0.0.1 port 53114
Jan 20 01:34:28.012494 sshd-session[5477]: pam_unix(sshd:session): session closed for user core
Jan 20 01:34:28.103735 systemd[1]: sshd@40-10.0.0.15:22-10.0.0.1:53114.service: Deactivated successfully.
Jan 20 01:34:28.125796 systemd[1]: session-41.scope: Deactivated successfully.
Jan 20 01:34:28.171897 systemd-logind[1552]: Session 41 logged out. Waiting for processes to exit.
Jan 20 01:34:28.182844 systemd-logind[1552]: Removed session 41.
Jan 20 01:34:31.565428 kubelet[3172]: E0120 01:34:31.561047 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:34:33.235876 systemd[1]: Started sshd@41-10.0.0.15:22-10.0.0.1:53124.service - OpenSSH per-connection server daemon (10.0.0.1:53124).
Jan 20 01:34:34.120629 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 53124 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:34:34.173685 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:34:34.239851 systemd-logind[1552]: New session 42 of user core.
Jan 20 01:34:34.271292 systemd[1]: Started session-42.scope - Session 42 of User core.
Jan 20 01:34:35.890451 containerd[1566]: time="2026-01-20T01:34:35.888623055Z" level=warning msg="container event discarded" container=409a9a04eb823664839eeacfe7e9c7cadbc7f8adda97fb5095e29128a170986d type=CONTAINER_CREATED_EVENT
Jan 20 01:34:35.890451 containerd[1566]: time="2026-01-20T01:34:35.888780028Z" level=warning msg="container event discarded" container=409a9a04eb823664839eeacfe7e9c7cadbc7f8adda97fb5095e29128a170986d type=CONTAINER_STARTED_EVENT
Jan 20 01:34:35.977446 containerd[1566]: time="2026-01-20T01:34:35.974063912Z" level=warning msg="container event discarded" container=4ecc7c5f51a3606059fb0f608b07ec4d11118aedfce68de1fc9c729365e59e85 type=CONTAINER_CREATED_EVENT
Jan 20 01:34:35.977446 containerd[1566]: time="2026-01-20T01:34:35.974243145Z" level=warning msg="container event discarded" container=4ecc7c5f51a3606059fb0f608b07ec4d11118aedfce68de1fc9c729365e59e85 type=CONTAINER_STARTED_EVENT
Jan 20 01:34:36.027511 sshd[5498]: Connection closed by 10.0.0.1 port 53124
Jan 20 01:34:36.034003 sshd-session[5494]: pam_unix(sshd:session): session closed for user core
Jan 20 01:34:36.138454 systemd-logind[1552]: Session 42 logged out. Waiting for processes to exit.
Jan 20 01:34:36.147072 systemd[1]: sshd@41-10.0.0.15:22-10.0.0.1:53124.service: Deactivated successfully.
Jan 20 01:34:36.190808 systemd[1]: session-42.scope: Deactivated successfully.
Jan 20 01:34:36.210535 systemd-logind[1552]: Removed session 42.
Jan 20 01:34:36.828971 containerd[1566]: time="2026-01-20T01:34:36.828281803Z" level=warning msg="container event discarded" container=eb6c9889153318d4cda753e2e51857a746ccc962e701a25a3e572c43f86e2df0 type=CONTAINER_CREATED_EVENT
Jan 20 01:34:37.692630 containerd[1566]: time="2026-01-20T01:34:37.691217237Z" level=warning msg="container event discarded" container=76e5b52d7f5227198a24bcb4990d6777b349fb4e266af52e8f287a90518805b8 type=CONTAINER_CREATED_EVENT
Jan 20 01:34:38.614387 kubelet[3172]: E0120 01:34:38.611903 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:34:41.804495 kubelet[3172]: E0120 01:34:41.788110 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:34:41.889896 systemd[1]: Started sshd@42-10.0.0.15:22-10.0.0.1:36846.service - OpenSSH per-connection server daemon (10.0.0.1:36846).
Jan 20 01:34:41.925028 containerd[1566]: time="2026-01-20T01:34:41.924054386Z" level=warning msg="container event discarded" container=eb6c9889153318d4cda753e2e51857a746ccc962e701a25a3e572c43f86e2df0 type=CONTAINER_STARTED_EVENT
Jan 20 01:34:42.400455 sshd[5514]: Accepted publickey for core from 10.0.0.1 port 36846 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:34:42.428854 sshd-session[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:34:42.487938 systemd-logind[1552]: New session 43 of user core.
Jan 20 01:34:42.500829 systemd[1]: Started session-43.scope - Session 43 of User core.
Jan 20 01:34:44.001845 containerd[1566]: time="2026-01-20T01:34:43.992835464Z" level=warning msg="container event discarded" container=76e5b52d7f5227198a24bcb4990d6777b349fb4e266af52e8f287a90518805b8 type=CONTAINER_STARTED_EVENT
Jan 20 01:34:44.135614 kubelet[3172]: E0120 01:34:44.126094 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:34:45.587375 kubelet[3172]: E0120 01:34:45.581940 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:34:45.792898 sshd[5517]: Connection closed by 10.0.0.1 port 36846
Jan 20 01:34:45.796802 sshd-session[5514]: pam_unix(sshd:session): session closed for user core
Jan 20 01:34:45.875292 systemd-logind[1552]: Session 43 logged out. Waiting for processes to exit.
Jan 20 01:34:45.884702 systemd[1]: sshd@42-10.0.0.15:22-10.0.0.1:36846.service: Deactivated successfully.
Jan 20 01:34:45.903787 systemd[1]: session-43.scope: Deactivated successfully.
Jan 20 01:34:45.921665 systemd-logind[1552]: Removed session 43.
Jan 20 01:34:50.585414 kubelet[3172]: E0120 01:34:50.585100 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:34:50.941159 systemd[1]: Started sshd@43-10.0.0.15:22-10.0.0.1:43284.service - OpenSSH per-connection server daemon (10.0.0.1:43284).
Jan 20 01:34:53.120488 sshd[5536]: Accepted publickey for core from 10.0.0.1 port 43284 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:34:53.133482 sshd-session[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:34:53.247636 systemd-logind[1552]: New session 44 of user core.
Jan 20 01:34:53.298767 systemd[1]: Started session-44.scope - Session 44 of User core.
Jan 20 01:34:54.600461 sshd[5539]: Connection closed by 10.0.0.1 port 43284
Jan 20 01:34:54.606758 sshd-session[5536]: pam_unix(sshd:session): session closed for user core
Jan 20 01:34:54.693678 systemd[1]: sshd@43-10.0.0.15:22-10.0.0.1:43284.service: Deactivated successfully.
Jan 20 01:34:54.734826 systemd[1]: session-44.scope: Deactivated successfully.
Jan 20 01:34:54.749509 systemd-logind[1552]: Session 44 logged out. Waiting for processes to exit.
Jan 20 01:34:54.806636 systemd-logind[1552]: Removed session 44.
Jan 20 01:34:59.835021 systemd[1]: Started sshd@44-10.0.0.15:22-10.0.0.1:60728.service - OpenSSH per-connection server daemon (10.0.0.1:60728).
Jan 20 01:35:00.327910 sshd[5553]: Accepted publickey for core from 10.0.0.1 port 60728 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:35:00.329955 sshd-session[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:35:00.396472 systemd-logind[1552]: New session 45 of user core.
Jan 20 01:35:00.410821 systemd[1]: Started session-45.scope - Session 45 of User core.
Jan 20 01:35:01.514645 sshd[5556]: Connection closed by 10.0.0.1 port 60728
Jan 20 01:35:01.519013 sshd-session[5553]: pam_unix(sshd:session): session closed for user core
Jan 20 01:35:01.691594 systemd[1]: Started sshd@45-10.0.0.15:22-10.0.0.1:60744.service - OpenSSH per-connection server daemon (10.0.0.1:60744).
Jan 20 01:35:01.806760 systemd[1]: sshd@44-10.0.0.15:22-10.0.0.1:60728.service: Deactivated successfully.
Jan 20 01:35:01.847740 systemd[1]: session-45.scope: Deactivated successfully.
Jan 20 01:35:01.940055 systemd-logind[1552]: Session 45 logged out. Waiting for processes to exit.
Jan 20 01:35:02.023818 systemd-logind[1552]: Removed session 45.
Jan 20 01:35:02.336117 sshd[5567]: Accepted publickey for core from 10.0.0.1 port 60744 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:35:02.376092 sshd-session[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:35:02.455170 systemd-logind[1552]: New session 46 of user core.
Jan 20 01:35:02.513770 systemd[1]: Started session-46.scope - Session 46 of User core.
Jan 20 01:35:02.808669 kubelet[3172]: E0120 01:35:02.792539 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:35:04.544249 sshd[5573]: Connection closed by 10.0.0.1 port 60744
Jan 20 01:35:04.564708 sshd-session[5567]: pam_unix(sshd:session): session closed for user core
Jan 20 01:35:04.684235 systemd[1]: sshd@45-10.0.0.15:22-10.0.0.1:60744.service: Deactivated successfully.
Jan 20 01:35:04.711621 systemd[1]: session-46.scope: Deactivated successfully.
Jan 20 01:35:04.734753 systemd-logind[1552]: Session 46 logged out. Waiting for processes to exit.
Jan 20 01:35:04.789070 systemd[1]: Started sshd@46-10.0.0.15:22-10.0.0.1:51060.service - OpenSSH per-connection server daemon (10.0.0.1:51060).
Jan 20 01:35:04.843797 systemd-logind[1552]: Removed session 46.
Jan 20 01:35:05.821220 sshd[5587]: Accepted publickey for core from 10.0.0.1 port 51060 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:35:05.831624 sshd-session[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:35:05.896009 systemd-logind[1552]: New session 47 of user core.
Jan 20 01:35:05.914269 systemd[1]: Started session-47.scope - Session 47 of User core.
Jan 20 01:35:08.393548 sshd[5590]: Connection closed by 10.0.0.1 port 51060
Jan 20 01:35:08.391020 sshd-session[5587]: pam_unix(sshd:session): session closed for user core
Jan 20 01:35:08.483140 systemd[1]: sshd@46-10.0.0.15:22-10.0.0.1:51060.service: Deactivated successfully.
Jan 20 01:35:08.516253 systemd[1]: session-47.scope: Deactivated successfully.
Jan 20 01:35:08.578732 systemd-logind[1552]: Session 47 logged out. Waiting for processes to exit.
Jan 20 01:35:08.665850 systemd-logind[1552]: Removed session 47.
Jan 20 01:35:13.522674 systemd[1]: Started sshd@47-10.0.0.15:22-10.0.0.1:51076.service - OpenSSH per-connection server daemon (10.0.0.1:51076).
Jan 20 01:35:14.091561 sshd[5603]: Accepted publickey for core from 10.0.0.1 port 51076 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:35:14.089799 sshd-session[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:35:14.198719 systemd-logind[1552]: New session 48 of user core.
Jan 20 01:35:14.259996 systemd[1]: Started session-48.scope - Session 48 of User core.
Jan 20 01:35:15.293820 sshd[5608]: Connection closed by 10.0.0.1 port 51076
Jan 20 01:35:15.296637 sshd-session[5603]: pam_unix(sshd:session): session closed for user core
Jan 20 01:35:15.330250 systemd[1]: sshd@47-10.0.0.15:22-10.0.0.1:51076.service: Deactivated successfully.
Jan 20 01:35:15.360233 systemd[1]: session-48.scope: Deactivated successfully.
Jan 20 01:35:15.389055 systemd-logind[1552]: Session 48 logged out. Waiting for processes to exit.
Jan 20 01:35:15.409891 systemd-logind[1552]: Removed session 48.
Jan 20 01:35:20.421274 systemd[1]: Started sshd@48-10.0.0.15:22-10.0.0.1:53698.service - OpenSSH per-connection server daemon (10.0.0.1:53698).
Jan 20 01:35:21.176025 sshd[5622]: Accepted publickey for core from 10.0.0.1 port 53698 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:35:21.177218 sshd-session[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:35:21.226253 systemd-logind[1552]: New session 49 of user core.
Jan 20 01:35:21.300254 systemd[1]: Started session-49.scope - Session 49 of User core.
Jan 20 01:35:22.798063 sshd[5625]: Connection closed by 10.0.0.1 port 53698
Jan 20 01:35:22.838578 sshd-session[5622]: pam_unix(sshd:session): session closed for user core
Jan 20 01:35:22.908031 systemd[1]: sshd@48-10.0.0.15:22-10.0.0.1:53698.service: Deactivated successfully.
Jan 20 01:35:22.958925 systemd[1]: session-49.scope: Deactivated successfully.
Jan 20 01:35:22.983758 systemd-logind[1552]: Session 49 logged out. Waiting for processes to exit.
Jan 20 01:35:23.044886 systemd-logind[1552]: Removed session 49.
Jan 20 01:35:27.905183 systemd[1]: Started sshd@49-10.0.0.15:22-10.0.0.1:39688.service - OpenSSH per-connection server daemon (10.0.0.1:39688).
Jan 20 01:35:29.862043 sshd[5639]: Accepted publickey for core from 10.0.0.1 port 39688 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:35:29.879289 sshd-session[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:35:30.315044 systemd-logind[1552]: New session 50 of user core.
Jan 20 01:35:30.392938 systemd[1]: Started session-50.scope - Session 50 of User core.
Jan 20 01:35:31.585690 kubelet[3172]: E0120 01:35:31.584978 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.004s"
Jan 20 01:35:31.764290 kubelet[3172]: E0120 01:35:31.737714 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:35:46.894998 systemd[1]: cri-containerd-dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b.scope: Deactivated successfully.
Jan 20 01:35:46.908205 systemd[1]: cri-containerd-dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b.scope: Consumed 12.438s CPU time, 55.3M memory peak, 4.7M read from disk.
Jan 20 01:35:47.615533 containerd[1566]: time="2026-01-20T01:35:47.615266612Z" level=info msg="received container exit event container_id:\"dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b\" id:\"dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b\" pid:5074 exit_status:1 exited_at:{seconds:1768872947 nanos:505553799}"
Jan 20 01:35:48.232482 sshd[5642]: Connection closed by 10.0.0.1 port 39688
Jan 20 01:35:48.316772 kubelet[3172]: E0120 01:35:48.243799 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.218s"
Jan 20 01:35:48.282695 sshd-session[5639]: pam_unix(sshd:session): session closed for user core
Jan 20 01:35:48.489796 systemd[1]: sshd@49-10.0.0.15:22-10.0.0.1:39688.service: Deactivated successfully.
Jan 20 01:35:48.494078 systemd[1]: session-50.scope: Deactivated successfully.
Jan 20 01:35:48.495539 systemd-logind[1552]: Session 50 logged out. Waiting for processes to exit.
Jan 20 01:35:48.496230 systemd[1]: session-50.scope: Consumed 1.138s CPU time, 15.8M memory peak.
Jan 20 01:35:48.606891 kubelet[3172]: E0120 01:35:48.605237 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:35:48.606891 kubelet[3172]: E0120 01:35:48.606504 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:35:48.624522 systemd-logind[1552]: Removed session 50.
Jan 20 01:35:48.658007 kubelet[3172]: E0120 01:35:48.640247 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:35:48.862791 systemd[1]: cri-containerd-852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03.scope: Deactivated successfully.
Jan 20 01:35:48.863512 systemd[1]: cri-containerd-852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03.scope: Consumed 8.448s CPU time, 22.9M memory peak, 684K read from disk.
Jan 20 01:35:48.884484 containerd[1566]: time="2026-01-20T01:35:48.883164111Z" level=info msg="received container exit event container_id:\"852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03\" id:\"852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03\" pid:5096 exit_status:1 exited_at:{seconds:1768872948 nanos:876977025}"
Jan 20 01:35:49.468050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b-rootfs.mount: Deactivated successfully.
Jan 20 01:35:49.621292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03-rootfs.mount: Deactivated successfully.
Jan 20 01:35:50.076513 kubelet[3172]: I0120 01:35:50.046904 3172 scope.go:117] "RemoveContainer" containerID="171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4"
Jan 20 01:35:50.076513 kubelet[3172]: I0120 01:35:50.062462 3172 scope.go:117] "RemoveContainer" containerID="dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b"
Jan 20 01:35:50.076513 kubelet[3172]: E0120 01:35:50.062757 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:35:50.076513 kubelet[3172]: E0120 01:35:50.062986 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(5bbfee13ce9e07281eca876a0b8067f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="5bbfee13ce9e07281eca876a0b8067f2"
Jan 20 01:35:50.152026 containerd[1566]: time="2026-01-20T01:35:50.151963921Z" level=info msg="RemoveContainer for \"171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4\""
Jan 20 01:35:50.280234 kubelet[3172]: I0120 01:35:50.260548 3172 scope.go:117] "RemoveContainer" containerID="852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03"
Jan 20 01:35:50.280234 kubelet[3172]: E0120 01:35:50.271200 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:35:50.280234 kubelet[3172]: E0120 01:35:50.272554 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(07ca0cbf79ad6ba9473d8e9f7715e571)\"" pod="kube-system/kube-scheduler-localhost" podUID="07ca0cbf79ad6ba9473d8e9f7715e571"
Jan 20 01:35:50.438718 containerd[1566]: time="2026-01-20T01:35:50.438542432Z" level=info msg="RemoveContainer for \"171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4\" returns successfully"
Jan 20 01:35:50.446219 kubelet[3172]: I0120 01:35:50.440006 3172 scope.go:117] "RemoveContainer" containerID="8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073"
Jan 20 01:35:50.540746 containerd[1566]: time="2026-01-20T01:35:50.540573126Z" level=info msg="RemoveContainer for \"8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073\""
Jan 20 01:35:50.840095 containerd[1566]: time="2026-01-20T01:35:50.827942142Z" level=info msg="RemoveContainer for \"8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073\" returns successfully"
Jan 20 01:35:51.386488 kubelet[3172]: I0120 01:35:51.386105 3172 scope.go:117] "RemoveContainer" containerID="852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03"
Jan 20 01:35:51.386488 kubelet[3172]: E0120 01:35:51.386207 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:35:51.418734 kubelet[3172]: E0120 01:35:51.387230 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(07ca0cbf79ad6ba9473d8e9f7715e571)\"" pod="kube-system/kube-scheduler-localhost" podUID="07ca0cbf79ad6ba9473d8e9f7715e571"
Jan 20 01:35:53.319292 systemd[1]: Started sshd@50-10.0.0.15:22-10.0.0.1:55554.service - OpenSSH per-connection server daemon (10.0.0.1:55554).
Jan 20 01:35:53.841533 sshd[5687]: Accepted publickey for core from 10.0.0.1 port 55554 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:35:53.872002 sshd-session[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:35:53.939491 systemd-logind[1552]: New session 51 of user core.
Jan 20 01:35:53.986006 systemd[1]: Started session-51.scope - Session 51 of User core.
Jan 20 01:35:55.622812 sshd[5690]: Connection closed by 10.0.0.1 port 55554
Jan 20 01:35:55.618945 sshd-session[5687]: pam_unix(sshd:session): session closed for user core
Jan 20 01:35:55.701032 systemd[1]: sshd@50-10.0.0.15:22-10.0.0.1:55554.service: Deactivated successfully.
Jan 20 01:35:55.723004 systemd[1]: session-51.scope: Deactivated successfully.
Jan 20 01:35:55.743248 systemd-logind[1552]: Session 51 logged out. Waiting for processes to exit.
Jan 20 01:35:55.790158 systemd-logind[1552]: Removed session 51.
Jan 20 01:35:58.642629 kubelet[3172]: I0120 01:35:58.638032 3172 scope.go:117] "RemoveContainer" containerID="852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03"
Jan 20 01:35:58.642629 kubelet[3172]: E0120 01:35:58.640593 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:35:58.642629 kubelet[3172]: E0120 01:35:58.640825 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(07ca0cbf79ad6ba9473d8e9f7715e571)\"" pod="kube-system/kube-scheduler-localhost" podUID="07ca0cbf79ad6ba9473d8e9f7715e571"
Jan 20 01:35:58.645813 kubelet[3172]: I0120 01:35:58.644040 3172 scope.go:117] "RemoveContainer" containerID="dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b"
Jan 20 01:35:58.645813 kubelet[3172]: E0120 01:35:58.644119 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:35:58.792932 containerd[1566]: time="2026-01-20T01:35:58.783828108Z" level=info msg="CreateContainer within sandbox \"77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}"
Jan 20 01:35:58.972829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2077449917.mount: Deactivated successfully.
Jan 20 01:35:59.086532 containerd[1566]: time="2026-01-20T01:35:59.085973260Z" level=info msg="Container fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:35:59.424951 containerd[1566]: time="2026-01-20T01:35:59.423200444Z" level=info msg="CreateContainer within sandbox \"77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64\""
Jan 20 01:35:59.523069 containerd[1566]: time="2026-01-20T01:35:59.481610003Z" level=info msg="StartContainer for \"fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64\""
Jan 20 01:35:59.618792 containerd[1566]: time="2026-01-20T01:35:59.618616735Z" level=info msg="connecting to shim fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64" address="unix:///run/containerd/s/67df6f9f643dff0e5a2de5d8ebba56686cc2aa08237c90040202dd99a7cd6a97" protocol=ttrpc version=3
Jan 20 01:36:00.581635 systemd[1]: Started cri-containerd-fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64.scope - libcontainer container fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64.
Jan 20 01:36:01.030595 systemd[1]: Started sshd@51-10.0.0.15:22-10.0.0.1:42992.service - OpenSSH per-connection server daemon (10.0.0.1:42992).
Jan 20 01:36:01.579474 kubelet[3172]: E0120 01:36:01.578034 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:36:02.303254 sshd[5724]: Accepted publickey for core from 10.0.0.1 port 42992 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:36:02.370646 sshd-session[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:36:02.803254 systemd-logind[1552]: New session 52 of user core.
Jan 20 01:36:02.939660 systemd[1]: Started session-52.scope - Session 52 of User core.
Jan 20 01:36:03.184875 containerd[1566]: time="2026-01-20T01:36:03.175219361Z" level=info msg="StartContainer for \"fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64\" returns successfully"
Jan 20 01:36:04.727059 kubelet[3172]: E0120 01:36:04.725170 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:36:05.469265 sshd[5737]: Connection closed by 10.0.0.1 port 42992
Jan 20 01:36:05.478189 sshd-session[5724]: pam_unix(sshd:session): session closed for user core
Jan 20 01:36:05.579819 systemd[1]: sshd@51-10.0.0.15:22-10.0.0.1:42992.service: Deactivated successfully.
Jan 20 01:36:05.634468 systemd[1]: session-52.scope: Deactivated successfully.
Jan 20 01:36:05.673250 systemd-logind[1552]: Session 52 logged out. Waiting for processes to exit.
Jan 20 01:36:05.777182 systemd-logind[1552]: Removed session 52.
Jan 20 01:36:05.954595 kubelet[3172]: E0120 01:36:05.954158 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:36:06.597981 kubelet[3172]: E0120 01:36:06.597910 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:36:08.611225 kubelet[3172]: E0120 01:36:08.600525 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:36:10.719055 systemd[1]: Started sshd@52-10.0.0.15:22-10.0.0.1:43438.service - OpenSSH per-connection server daemon (10.0.0.1:43438).
Jan 20 01:36:11.534449 sshd[5754]: Accepted publickey for core from 10.0.0.1 port 43438 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:36:11.550218 sshd-session[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:36:11.715515 systemd-logind[1552]: New session 53 of user core.
Jan 20 01:36:11.750092 systemd[1]: Started session-53.scope - Session 53 of User core.
Jan 20 01:36:12.597502 kubelet[3172]: I0120 01:36:12.579590 3172 scope.go:117] "RemoveContainer" containerID="852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03"
Jan 20 01:36:12.597502 kubelet[3172]: E0120 01:36:12.579860 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:36:12.982999 containerd[1566]: time="2026-01-20T01:36:12.982937113Z" level=info msg="CreateContainer within sandbox \"3e99c7aa4aa9d0e067a82a258af6133e67ddeff32530aae29e81c04c9bbdda20\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}"
Jan 20 01:36:13.370638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount299652799.mount: Deactivated successfully.
Jan 20 01:36:13.422258 sshd[5757]: Connection closed by 10.0.0.1 port 43438
Jan 20 01:36:13.434504 sshd-session[5754]: pam_unix(sshd:session): session closed for user core
Jan 20 01:36:13.533518 containerd[1566]: time="2026-01-20T01:36:13.530696654Z" level=info msg="Container 575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:36:13.594268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843268255.mount: Deactivated successfully.
Jan 20 01:36:13.613736 systemd[1]: sshd@52-10.0.0.15:22-10.0.0.1:43438.service: Deactivated successfully.
Jan 20 01:36:13.669009 containerd[1566]: time="2026-01-20T01:36:13.652194512Z" level=info msg="CreateContainer within sandbox \"3e99c7aa4aa9d0e067a82a258af6133e67ddeff32530aae29e81c04c9bbdda20\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b\""
Jan 20 01:36:13.663479 systemd[1]: session-53.scope: Deactivated successfully.
Jan 20 01:36:13.676000 containerd[1566]: time="2026-01-20T01:36:13.675941615Z" level=info msg="StartContainer for \"575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b\""
Jan 20 01:36:13.698126 containerd[1566]: time="2026-01-20T01:36:13.694773092Z" level=info msg="connecting to shim 575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b" address="unix:///run/containerd/s/b300e0b873644f498b396527866ecbf526e8b806b595a74b8bd537fbc88b091f" protocol=ttrpc version=3
Jan 20 01:36:13.712501 systemd-logind[1552]: Session 53 logged out. Waiting for processes to exit.
Jan 20 01:36:13.916085 systemd-logind[1552]: Removed session 53.
Jan 20 01:36:14.031662 systemd[1]: Started cri-containerd-575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b.scope - libcontainer container 575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b.
Jan 20 01:36:14.991568 containerd[1566]: time="2026-01-20T01:36:14.988254943Z" level=info msg="StartContainer for \"575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b\" returns successfully"
Jan 20 01:36:15.832884 kubelet[3172]: E0120 01:36:15.828219 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:36:16.895890 kubelet[3172]: E0120 01:36:16.895765 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:36:18.533071 systemd[1]: Started sshd@53-10.0.0.15:22-10.0.0.1:38772.service - OpenSSH per-connection server daemon (10.0.0.1:38772).
Jan 20 01:36:18.655726 kubelet[3172]: E0120 01:36:18.649049 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:36:18.913994 kubelet[3172]: E0120 01:36:18.913944 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:36:19.271685 sshd[5806]: Accepted publickey for core from 10.0.0.1 port 38772 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:36:19.292749 sshd-session[5806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:36:19.393790 systemd-logind[1552]: New session 54 of user core.
Jan 20 01:36:19.445920 systemd[1]: Started session-54.scope - Session 54 of User core.
Jan 20 01:36:21.167647 sshd[5811]: Connection closed by 10.0.0.1 port 38772
Jan 20 01:36:21.177032 sshd-session[5806]: pam_unix(sshd:session): session closed for user core
Jan 20 01:36:21.240022 systemd[1]: sshd@53-10.0.0.15:22-10.0.0.1:38772.service: Deactivated successfully.
Jan 20 01:36:21.265149 systemd[1]: session-54.scope: Deactivated successfully.
Jan 20 01:36:21.288537 systemd-logind[1552]: Session 54 logged out. Waiting for processes to exit.
Jan 20 01:36:21.313216 systemd-logind[1552]: Removed session 54.
Jan 20 01:36:26.301715 systemd[1]: Started sshd@54-10.0.0.15:22-10.0.0.1:57078.service - OpenSSH per-connection server daemon (10.0.0.1:57078).
Jan 20 01:36:26.998082 sshd[5825]: Accepted publickey for core from 10.0.0.1 port 57078 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:36:27.030515 sshd-session[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:36:27.165145 systemd-logind[1552]: New session 55 of user core.
Jan 20 01:36:27.191545 systemd[1]: Started session-55.scope - Session 55 of User core.
Jan 20 01:36:29.210092 sshd[5828]: Connection closed by 10.0.0.1 port 57078
Jan 20 01:36:29.213676 sshd-session[5825]: pam_unix(sshd:session): session closed for user core
Jan 20 01:36:29.271510 systemd[1]: sshd@54-10.0.0.15:22-10.0.0.1:57078.service: Deactivated successfully.
Jan 20 01:36:29.300091 systemd[1]: session-55.scope: Deactivated successfully.
Jan 20 01:36:29.322872 systemd-logind[1552]: Session 55 logged out. Waiting for processes to exit.
Jan 20 01:36:29.486900 systemd-logind[1552]: Removed session 55.
Jan 20 01:36:29.530462 kubelet[3172]: E0120 01:36:29.524662 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:36:29.784655 kubelet[3172]: E0120 01:36:29.783767 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:36:34.540841 systemd[1]: Started sshd@55-10.0.0.15:22-10.0.0.1:57092.service - OpenSSH per-connection server daemon (10.0.0.1:57092).
Jan 20 01:36:36.507547 sshd[5843]: Accepted publickey for core from 10.0.0.1 port 57092 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:36:36.547126 sshd-session[5843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:36:36.720584 systemd-logind[1552]: New session 56 of user core.
Jan 20 01:36:36.765721 systemd[1]: Started session-56.scope - Session 56 of User core.
Jan 20 01:36:37.776738 sshd[5846]: Connection closed by 10.0.0.1 port 57092
Jan 20 01:36:37.773590 sshd-session[5843]: pam_unix(sshd:session): session closed for user core
Jan 20 01:36:37.900979 systemd[1]: sshd@55-10.0.0.15:22-10.0.0.1:57092.service: Deactivated successfully.
Jan 20 01:36:37.933930 systemd[1]: session-56.scope: Deactivated successfully.
Jan 20 01:38:38.000794 systemd-logind[1552]: Session 56 logged out. Waiting for processes to exit.
Jan 20 01:36:38.017504 systemd-logind[1552]: Removed session 56.
Jan 20 01:36:43.201838 systemd[1]: Started sshd@56-10.0.0.15:22-10.0.0.1:45420.service - OpenSSH per-connection server daemon (10.0.0.1:45420).
Jan 20 01:36:44.070147 sshd[5863]: Accepted publickey for core from 10.0.0.1 port 45420 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:36:44.098667 sshd-session[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:36:44.187138 systemd-logind[1552]: New session 57 of user core.
Jan 20 01:36:44.267241 systemd[1]: Started session-57.scope - Session 57 of User core.
Jan 20 01:36:45.596477 sshd[5868]: Connection closed by 10.0.0.1 port 45420
Jan 20 01:36:45.598633 sshd-session[5863]: pam_unix(sshd:session): session closed for user core
Jan 20 01:36:45.667593 systemd[1]: sshd@56-10.0.0.15:22-10.0.0.1:45420.service: Deactivated successfully.
Jan 20 01:36:45.717916 systemd[1]: session-57.scope: Deactivated successfully.
Jan 20 01:36:45.749516 systemd-logind[1552]: Session 57 logged out. Waiting for processes to exit.
Jan 20 01:36:45.799550 systemd-logind[1552]: Removed session 57.
Jan 20 01:36:50.990051 systemd[1]: Started sshd@57-10.0.0.15:22-10.0.0.1:42402.service - OpenSSH per-connection server daemon (10.0.0.1:42402).
Jan 20 01:36:51.856812 sshd[5883]: Accepted publickey for core from 10.0.0.1 port 42402 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:36:51.906684 sshd-session[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:36:52.034936 systemd-logind[1552]: New session 58 of user core.
Jan 20 01:36:52.058670 systemd[1]: Started session-58.scope - Session 58 of User core.
Jan 20 01:36:54.360650 sshd[5886]: Connection closed by 10.0.0.1 port 42402
Jan 20 01:36:54.358730 sshd-session[5883]: pam_unix(sshd:session): session closed for user core
Jan 20 01:36:54.445613 systemd-logind[1552]: Session 58 logged out. Waiting for processes to exit.
Jan 20 01:36:54.476639 systemd[1]: sshd@57-10.0.0.15:22-10.0.0.1:42402.service: Deactivated successfully.
Jan 20 01:36:54.525877 systemd[1]: session-58.scope: Deactivated successfully.
Jan 20 01:36:54.584430 systemd-logind[1552]: Removed session 58.
Jan 20 01:36:57.989093 containerd[1566]: time="2026-01-20T01:36:57.988844765Z" level=warning msg="container event discarded" container=171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4 type=CONTAINER_STOPPED_EVENT
Jan 20 01:36:58.084010 containerd[1566]: time="2026-01-20T01:36:58.076992088Z" level=warning msg="container event discarded" container=8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073 type=CONTAINER_STOPPED_EVENT
Jan 20 01:36:58.187991 containerd[1566]: time="2026-01-20T01:36:58.186132412Z" level=warning msg="container event discarded" container=53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb type=CONTAINER_STOPPED_EVENT
Jan 20 01:36:59.537663 systemd[1]: Started sshd@58-10.0.0.15:22-10.0.0.1:52326.service - OpenSSH per-connection server daemon (10.0.0.1:52326).
Jan 20 01:36:59.703606 containerd[1566]: time="2026-01-20T01:36:59.702897260Z" level=warning msg="container event discarded" container=2ff8496051e49766bc5a965b58b53e73b24db5aab899b27108edef61373aee80 type=CONTAINER_DELETED_EVENT
Jan 20 01:36:59.786686 containerd[1566]: time="2026-01-20T01:36:59.786061599Z" level=warning msg="container event discarded" container=9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a type=CONTAINER_CREATED_EVENT
Jan 20 01:36:59.872621 containerd[1566]: time="2026-01-20T01:36:59.872538479Z" level=warning msg="container event discarded" container=dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b type=CONTAINER_CREATED_EVENT
Jan 20 01:37:00.370794 containerd[1566]: time="2026-01-20T01:37:00.329965333Z" level=warning msg="container event discarded" container=852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03 type=CONTAINER_CREATED_EVENT
Jan 20 01:37:00.619551 kubelet[3172]: E0120 01:37:00.575167 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:37:00.686130 sshd[5900]: Accepted publickey for core from 10.0.0.1 port 52326 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:37:00.693620 sshd-session[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:37:00.761782 systemd-logind[1552]: New session 59 of user core.
Jan 20 01:37:00.816729 systemd[1]: Started session-59.scope - Session 59 of User core.
Jan 20 01:37:02.510162 sshd[5903]: Connection closed by 10.0.0.1 port 52326
Jan 20 01:37:02.509751 sshd-session[5900]: pam_unix(sshd:session): session closed for user core
Jan 20 01:37:02.525833 systemd-logind[1552]: Session 59 logged out. Waiting for processes to exit.
Jan 20 01:37:02.526556 systemd[1]: sshd@58-10.0.0.15:22-10.0.0.1:52326.service: Deactivated successfully.
Jan 20 01:37:02.534796 systemd[1]: session-59.scope: Deactivated successfully.
Jan 20 01:37:02.549848 systemd-logind[1552]: Removed session 59.
Jan 20 01:37:02.567201 kubelet[3172]: E0120 01:37:02.567149 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:37:03.396663 containerd[1566]: time="2026-01-20T01:37:03.391634186Z" level=warning msg="container event discarded" container=9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a type=CONTAINER_STARTED_EVENT
Jan 20 01:37:04.150217 kubelet[3172]: E0120 01:37:04.126092 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:37:04.238142 containerd[1566]: time="2026-01-20T01:37:04.183633114Z" level=warning msg="container event discarded" container=dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b type=CONTAINER_STARTED_EVENT
Jan 20 01:37:04.739524 containerd[1566]: time="2026-01-20T01:37:04.736023708Z" level=warning msg="container event discarded" container=852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03 type=CONTAINER_STARTED_EVENT
Jan 20 01:37:07.692082 systemd[1]: Started sshd@59-10.0.0.15:22-10.0.0.1:48576.service - OpenSSH per-connection server daemon (10.0.0.1:48576).
Jan 20 01:37:08.310970 sshd[5918]: Accepted publickey for core from 10.0.0.1 port 48576 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:37:08.337075 sshd-session[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:37:08.596122 systemd-logind[1552]: New session 60 of user core.
Jan 20 01:37:08.672122 systemd[1]: Started session-60.scope - Session 60 of User core.
Jan 20 01:37:11.863498 sshd[5921]: Connection closed by 10.0.0.1 port 48576
Jan 20 01:37:11.870565 sshd-session[5918]: pam_unix(sshd:session): session closed for user core
Jan 20 01:37:11.927004 systemd[1]: sshd@59-10.0.0.15:22-10.0.0.1:48576.service: Deactivated successfully.
Jan 20 01:37:12.110278 systemd[1]: session-60.scope: Deactivated successfully.
Jan 20 01:37:12.189001 systemd-logind[1552]: Session 60 logged out. Waiting for processes to exit.
Jan 20 01:37:12.229625 systemd-logind[1552]: Removed session 60.
Jan 20 01:37:13.733553 kubelet[3172]: E0120 01:37:13.733111 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:37:17.014051 systemd[1]: Started sshd@60-10.0.0.15:22-10.0.0.1:42108.service - OpenSSH per-connection server daemon (10.0.0.1:42108).
Jan 20 01:37:18.181139 sshd[5937]: Accepted publickey for core from 10.0.0.1 port 42108 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:37:18.187209 sshd-session[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:37:18.418551 systemd-logind[1552]: New session 61 of user core.
Jan 20 01:37:18.473208 systemd[1]: Started session-61.scope - Session 61 of User core.
Jan 20 01:37:19.576027 kubelet[3172]: E0120 01:37:19.575976 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:37:23.341833 kubelet[3172]: E0120 01:37:23.335884 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.582s"
Jan 20 01:37:23.469542 kubelet[3172]: E0120 01:37:23.455881 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:37:24.398756 sshd[5940]: Connection closed by 10.0.0.1 port 42108
Jan 20 01:37:24.415726 sshd-session[5937]: pam_unix(sshd:session): session closed for user core
Jan 20 01:37:24.528116 systemd[1]: sshd@60-10.0.0.15:22-10.0.0.1:42108.service: Deactivated successfully.
Jan 20 01:37:24.620546 systemd[1]: session-61.scope: Deactivated successfully.
Jan 20 01:37:24.640632 systemd-logind[1552]: Session 61 logged out. Waiting for processes to exit.
Jan 20 01:37:24.708877 systemd-logind[1552]: Removed session 61.
Jan 20 01:37:25.634691 kubelet[3172]: E0120 01:37:25.574578 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:37:29.540947 systemd[1]: Started sshd@61-10.0.0.15:22-10.0.0.1:55446.service - OpenSSH per-connection server daemon (10.0.0.1:55446).
Jan 20 01:37:30.705395 sshd[5955]: Accepted publickey for core from 10.0.0.1 port 55446 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:37:30.819994 sshd-session[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:37:31.191969 systemd-logind[1552]: New session 62 of user core.
Jan 20 01:37:31.269042 systemd[1]: Started session-62.scope - Session 62 of User core.
Jan 20 01:37:36.173124 sshd[5958]: Connection closed by 10.0.0.1 port 55446
Jan 20 01:37:36.181964 sshd-session[5955]: pam_unix(sshd:session): session closed for user core
Jan 20 01:37:36.241720 systemd[1]: sshd@61-10.0.0.15:22-10.0.0.1:55446.service: Deactivated successfully.
Jan 20 01:37:36.253847 systemd-logind[1552]: Session 62 logged out. Waiting for processes to exit.
Jan 20 01:37:36.317855 systemd[1]: session-62.scope: Deactivated successfully.
Jan 20 01:37:36.366856 systemd-logind[1552]: Removed session 62.
Jan 20 01:37:40.611239 kubelet[3172]: E0120 01:37:40.597241 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:37:41.537687 systemd[1]: Started sshd@62-10.0.0.15:22-10.0.0.1:37126.service - OpenSSH per-connection server daemon (10.0.0.1:37126).
Jan 20 01:37:43.434741 sshd[5972]: Accepted publickey for core from 10.0.0.1 port 37126 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:37:43.491141 sshd-session[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:37:43.833551 systemd-logind[1552]: New session 63 of user core.
Jan 20 01:37:43.928058 systemd[1]: Started session-63.scope - Session 63 of User core.
Jan 20 01:37:48.448888 sshd[5979]: Connection closed by 10.0.0.1 port 37126
Jan 20 01:37:48.437101 sshd-session[5972]: pam_unix(sshd:session): session closed for user core
Jan 20 01:37:48.714672 systemd[1]: sshd@62-10.0.0.15:22-10.0.0.1:37126.service: Deactivated successfully.
Jan 20 01:37:48.797620 systemd[1]: session-63.scope: Deactivated successfully.
Jan 20 01:37:48.853697 systemd-logind[1552]: Session 63 logged out. Waiting for processes to exit.
Jan 20 01:37:48.921091 systemd-logind[1552]: Removed session 63.
Jan 20 01:37:53.571621 systemd[1]: Started sshd@63-10.0.0.15:22-10.0.0.1:33338.service - OpenSSH per-connection server daemon (10.0.0.1:33338).
Jan 20 01:37:56.009241 sshd[5994]: Accepted publickey for core from 10.0.0.1 port 33338 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:37:56.040168 sshd-session[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:37:56.285122 systemd-logind[1552]: New session 64 of user core.
Jan 20 01:37:56.331995 systemd[1]: Started session-64.scope - Session 64 of User core.
Jan 20 01:37:59.995999 sshd[5997]: Connection closed by 10.0.0.1 port 33338
Jan 20 01:38:00.002229 sshd-session[5994]: pam_unix(sshd:session): session closed for user core
Jan 20 01:38:00.123079 systemd[1]: sshd@63-10.0.0.15:22-10.0.0.1:33338.service: Deactivated successfully.
Jan 20 01:38:00.174100 systemd[1]: session-64.scope: Deactivated successfully.
Jan 20 01:38:00.232766 systemd-logind[1552]: Session 64 logged out. Waiting for processes to exit.
Jan 20 01:38:00.299600 systemd-logind[1552]: Removed session 64.
Jan 20 01:38:05.172936 systemd[1]: Started sshd@64-10.0.0.15:22-10.0.0.1:54742.service - OpenSSH per-connection server daemon (10.0.0.1:54742).
Jan 20 01:38:06.569288 sshd[6011]: Accepted publickey for core from 10.0.0.1 port 54742 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:38:06.622248 sshd-session[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:38:06.933720 systemd-logind[1552]: New session 65 of user core.
Jan 20 01:38:07.011273 systemd[1]: Started session-65.scope - Session 65 of User core.
Jan 20 01:38:07.581249 kubelet[3172]: E0120 01:38:07.569154 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:38:08.632277 kubelet[3172]: E0120 01:38:08.618552 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:38:09.840521 sshd[6014]: Connection closed by 10.0.0.1 port 54742
Jan 20 01:38:09.842496 sshd-session[6011]: pam_unix(sshd:session): session closed for user core
Jan 20 01:38:09.888010 systemd[1]: sshd@64-10.0.0.15:22-10.0.0.1:54742.service: Deactivated successfully.
Jan 20 01:38:09.901734 systemd[1]: session-65.scope: Deactivated successfully.
Jan 20 01:38:09.912078 systemd-logind[1552]: Session 65 logged out. Waiting for processes to exit.
Jan 20 01:38:09.921281 systemd-logind[1552]: Removed session 65.
Jan 20 01:38:16.277812 systemd[1]: Started sshd@65-10.0.0.15:22-10.0.0.1:36492.service - OpenSSH per-connection server daemon (10.0.0.1:36492).
Jan 20 01:38:18.015247 kubelet[3172]: E0120 01:38:18.015191 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:38:18.043279 kubelet[3172]: E0120 01:38:18.022073 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:38:18.529669 sshd[6028]: Accepted publickey for core from 10.0.0.1 port 36492 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:38:18.649614 sshd-session[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:38:18.829851 systemd-logind[1552]: New session 66 of user core.
Jan 20 01:38:18.903933 systemd[1]: Started session-66.scope - Session 66 of User core.
Jan 20 01:38:21.110952 sshd[6033]: Connection closed by 10.0.0.1 port 36492
Jan 20 01:38:21.119995 sshd-session[6028]: pam_unix(sshd:session): session closed for user core
Jan 20 01:38:21.183971 systemd[1]: sshd@65-10.0.0.15:22-10.0.0.1:36492.service: Deactivated successfully.
Jan 20 01:38:21.223832 systemd[1]: session-66.scope: Deactivated successfully.
Jan 20 01:38:21.239889 systemd-logind[1552]: Session 66 logged out. Waiting for processes to exit.
Jan 20 01:38:21.313972 systemd-logind[1552]: Removed session 66.
Jan 20 01:38:26.442647 systemd[1]: Started sshd@66-10.0.0.15:22-10.0.0.1:55178.service - OpenSSH per-connection server daemon (10.0.0.1:55178).
Jan 20 01:38:27.282292 sshd[6048]: Accepted publickey for core from 10.0.0.1 port 55178 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:38:27.304600 sshd-session[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:38:27.456916 systemd-logind[1552]: New session 67 of user core.
Jan 20 01:38:27.486655 systemd[1]: Started session-67.scope - Session 67 of User core.
Jan 20 01:38:28.563178 kubelet[3172]: E0120 01:38:28.563074 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:38:28.580036 kubelet[3172]: E0120 01:38:28.563097 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:38:28.691707 kubelet[3172]: E0120 01:38:28.690629 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:38:29.533855 sshd[6051]: Connection closed by 10.0.0.1 port 55178
Jan 20 01:38:29.529676 sshd-session[6048]: pam_unix(sshd:session): session closed for user core
Jan 20 01:38:29.632981 systemd-logind[1552]: Session 67 logged out. Waiting for processes to exit.
Jan 20 01:38:29.641587 systemd[1]: sshd@66-10.0.0.15:22-10.0.0.1:55178.service: Deactivated successfully.
Jan 20 01:38:29.678640 systemd[1]: session-67.scope: Deactivated successfully.
Jan 20 01:38:29.705095 systemd-logind[1552]: Removed session 67.
Jan 20 01:38:34.618697 systemd[1]: Started sshd@67-10.0.0.15:22-10.0.0.1:40926.service - OpenSSH per-connection server daemon (10.0.0.1:40926).
Jan 20 01:38:35.017174 sshd[6064]: Accepted publickey for core from 10.0.0.1 port 40926 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:38:35.035907 sshd-session[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:38:35.097034 systemd-logind[1552]: New session 68 of user core.
Jan 20 01:38:35.153967 systemd[1]: Started session-68.scope - Session 68 of User core.
Jan 20 01:38:36.320564 sshd[6067]: Connection closed by 10.0.0.1 port 40926
Jan 20 01:38:36.319593 sshd-session[6064]: pam_unix(sshd:session): session closed for user core
Jan 20 01:38:36.342012 systemd[1]: sshd@67-10.0.0.15:22-10.0.0.1:40926.service: Deactivated successfully.
Jan 20 01:38:36.346162 systemd[1]: session-68.scope: Deactivated successfully.
Jan 20 01:38:36.385149 systemd-logind[1552]: Session 68 logged out. Waiting for processes to exit.
Jan 20 01:38:36.405391 systemd-logind[1552]: Removed session 68.
Jan 20 01:38:41.518268 systemd[1]: Started sshd@68-10.0.0.15:22-10.0.0.1:40938.service - OpenSSH per-connection server daemon (10.0.0.1:40938).
Jan 20 01:38:42.626273 sshd[6080]: Accepted publickey for core from 10.0.0.1 port 40938 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:38:42.683792 sshd-session[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:38:42.793700 systemd-logind[1552]: New session 69 of user core.
Jan 20 01:38:42.875886 systemd[1]: Started session-69.scope - Session 69 of User core.
Jan 20 01:38:44.274674 sshd[6084]: Connection closed by 10.0.0.1 port 40938
Jan 20 01:38:44.278068 sshd-session[6080]: pam_unix(sshd:session): session closed for user core
Jan 20 01:38:44.347586 systemd[1]: sshd@68-10.0.0.15:22-10.0.0.1:40938.service: Deactivated successfully.
Jan 20 01:38:44.421759 systemd[1]: session-69.scope: Deactivated successfully.
Jan 20 01:38:44.445157 systemd-logind[1552]: Session 69 logged out. Waiting for processes to exit.
Jan 20 01:38:44.505419 systemd-logind[1552]: Removed session 69.
Jan 20 01:38:49.432696 systemd[1]: Started sshd@69-10.0.0.15:22-10.0.0.1:46340.service - OpenSSH per-connection server daemon (10.0.0.1:46340).
Jan 20 01:38:49.998182 sshd[6100]: Accepted publickey for core from 10.0.0.1 port 46340 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:38:50.057232 sshd-session[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:38:50.269886 systemd-logind[1552]: New session 70 of user core.
Jan 20 01:38:50.331882 systemd[1]: Started session-70.scope - Session 70 of User core.
Jan 20 01:38:52.148072 sshd[6103]: Connection closed by 10.0.0.1 port 46340
Jan 20 01:38:52.192566 sshd-session[6100]: pam_unix(sshd:session): session closed for user core
Jan 20 01:38:52.226942 systemd[1]: sshd@69-10.0.0.15:22-10.0.0.1:46340.service: Deactivated successfully.
Jan 20 01:38:52.302072 systemd[1]: session-70.scope: Deactivated successfully.
Jan 20 01:38:52.382518 systemd-logind[1552]: Session 70 logged out. Waiting for processes to exit.
Jan 20 01:38:52.399450 systemd-logind[1552]: Removed session 70.
Jan 20 01:38:57.314798 systemd[1]: Started sshd@70-10.0.0.15:22-10.0.0.1:51130.service - OpenSSH per-connection server daemon (10.0.0.1:51130).
Jan 20 01:38:58.222260 sshd[6117]: Accepted publickey for core from 10.0.0.1 port 51130 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:38:58.229607 sshd-session[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:38:58.327511 systemd-logind[1552]: New session 71 of user core.
Jan 20 01:38:58.432132 systemd[1]: Started session-71.scope - Session 71 of User core.
Jan 20 01:39:01.266150 sshd[6120]: Connection closed by 10.0.0.1 port 51130
Jan 20 01:39:01.271084 sshd-session[6117]: pam_unix(sshd:session): session closed for user core
Jan 20 01:39:01.491418 systemd[1]: sshd@70-10.0.0.15:22-10.0.0.1:51130.service: Deactivated successfully.
Jan 20 01:39:01.628117 systemd[1]: session-71.scope: Deactivated successfully.
Jan 20 01:39:01.706602 systemd-logind[1552]: Session 71 logged out. Waiting for processes to exit.
Jan 20 01:39:01.738701 systemd-logind[1552]: Removed session 71.
Jan 20 01:39:05.565437 kubelet[3172]: E0120 01:39:05.565226 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:39:06.610744 systemd[1]: Started sshd@71-10.0.0.15:22-10.0.0.1:53456.service - OpenSSH per-connection server daemon (10.0.0.1:53456).
Jan 20 01:39:07.746560 sshd[6135]: Accepted publickey for core from 10.0.0.1 port 53456 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:39:07.764820 sshd-session[6135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:39:07.898573 systemd-logind[1552]: New session 72 of user core.
Jan 20 01:39:07.932499 systemd[1]: Started session-72.scope - Session 72 of User core.
Jan 20 01:39:09.529268 sshd[6138]: Connection closed by 10.0.0.1 port 53456
Jan 20 01:39:09.519257 sshd-session[6135]: pam_unix(sshd:session): session closed for user core
Jan 20 01:39:09.595672 systemd[1]: sshd@71-10.0.0.15:22-10.0.0.1:53456.service: Deactivated successfully.
Jan 20 01:39:09.623053 systemd[1]: session-72.scope: Deactivated successfully.
Jan 20 01:39:09.631191 systemd-logind[1552]: Session 72 logged out. Waiting for processes to exit.
Jan 20 01:39:09.635871 systemd-logind[1552]: Removed session 72.
Jan 20 01:39:14.753753 systemd[1]: Started sshd@72-10.0.0.15:22-10.0.0.1:35064.service - OpenSSH per-connection server daemon (10.0.0.1:35064).
Jan 20 01:39:16.280523 sshd[6155]: Accepted publickey for core from 10.0.0.1 port 35064 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:39:16.301928 sshd-session[6155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:39:16.375265 systemd-logind[1552]: New session 73 of user core.
Jan 20 01:39:16.439766 systemd[1]: Started session-73.scope - Session 73 of User core.
Jan 20 01:39:19.305181 sshd[6158]: Connection closed by 10.0.0.1 port 35064
Jan 20 01:39:19.325638 sshd-session[6155]: pam_unix(sshd:session): session closed for user core
Jan 20 01:39:19.677590 systemd[1]: sshd@72-10.0.0.15:22-10.0.0.1:35064.service: Deactivated successfully.
Jan 20 01:39:19.714236 systemd[1]: session-73.scope: Deactivated successfully.
Jan 20 01:39:19.750860 systemd-logind[1552]: Session 73 logged out. Waiting for processes to exit.
Jan 20 01:39:19.803633 systemd-logind[1552]: Removed session 73.
Jan 20 01:39:20.598886 kubelet[3172]: E0120 01:39:20.577288 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:39:24.435990 systemd[1]: Started sshd@73-10.0.0.15:22-10.0.0.1:35078.service - OpenSSH per-connection server daemon (10.0.0.1:35078).
Jan 20 01:39:25.430633 sshd[6173]: Accepted publickey for core from 10.0.0.1 port 35078 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:39:25.488632 sshd-session[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:39:25.585262 kubelet[3172]: E0120 01:39:25.583523 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:39:25.624664 systemd-logind[1552]: New session 74 of user core.
Jan 20 01:39:25.748743 systemd[1]: Started session-74.scope - Session 74 of User core.
Jan 20 01:39:26.797662 kubelet[3172]: E0120 01:39:26.794082 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:39:27.823566 sshd[6176]: Connection closed by 10.0.0.1 port 35078
Jan 20 01:39:27.834540 sshd-session[6173]: pam_unix(sshd:session): session closed for user core
Jan 20 01:39:27.895165 systemd[1]: sshd@73-10.0.0.15:22-10.0.0.1:35078.service: Deactivated successfully.
Jan 20 01:39:27.940094 systemd[1]: session-74.scope: Deactivated successfully.
Jan 20 01:39:28.019982 systemd-logind[1552]: Session 74 logged out. Waiting for processes to exit.
Jan 20 01:39:28.037492 systemd-logind[1552]: Removed session 74.
Jan 20 01:39:32.970664 systemd[1]: Started sshd@74-10.0.0.15:22-10.0.0.1:49684.service - OpenSSH per-connection server daemon (10.0.0.1:49684).
Jan 20 01:39:34.071877 sshd[6189]: Accepted publickey for core from 10.0.0.1 port 49684 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:39:34.116876 sshd-session[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:39:34.252980 systemd-logind[1552]: New session 75 of user core.
Jan 20 01:39:34.302596 systemd[1]: Started session-75.scope - Session 75 of User core.
Jan 20 01:39:35.591819 kubelet[3172]: E0120 01:39:35.579856 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:39:36.446536 sshd[6192]: Connection closed by 10.0.0.1 port 49684
Jan 20 01:39:36.440110 sshd-session[6189]: pam_unix(sshd:session): session closed for user core
Jan 20 01:39:36.664248 systemd[1]: sshd@74-10.0.0.15:22-10.0.0.1:49684.service: Deactivated successfully.
Jan 20 01:39:36.688564 systemd[1]: session-75.scope: Deactivated successfully.
Jan 20 01:39:36.703490 systemd-logind[1552]: Session 75 logged out. Waiting for processes to exit.
Jan 20 01:39:36.708151 systemd-logind[1552]: Removed session 75.
Jan 20 01:39:41.732273 systemd[1]: Started sshd@75-10.0.0.15:22-10.0.0.1:60352.service - OpenSSH per-connection server daemon (10.0.0.1:60352).
Jan 20 01:39:42.970628 sshd[6206]: Accepted publickey for core from 10.0.0.1 port 60352 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:39:42.982008 sshd-session[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:39:43.110265 systemd-logind[1552]: New session 76 of user core.
Jan 20 01:39:43.177632 systemd[1]: Started session-76.scope - Session 76 of User core.
Jan 20 01:39:44.552966 sshd[6211]: Connection closed by 10.0.0.1 port 60352
Jan 20 01:39:44.547984 sshd-session[6206]: pam_unix(sshd:session): session closed for user core
Jan 20 01:39:44.608973 systemd[1]: sshd@75-10.0.0.15:22-10.0.0.1:60352.service: Deactivated successfully.
Jan 20 01:39:44.639116 systemd[1]: session-76.scope: Deactivated successfully.
Jan 20 01:39:44.782965 systemd-logind[1552]: Session 76 logged out. Waiting for processes to exit.
Jan 20 01:39:44.800836 systemd-logind[1552]: Removed session 76.
Jan 20 01:39:47.637697 kubelet[3172]: E0120 01:39:47.636739 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:39:47.662848 kubelet[3172]: E0120 01:39:47.647782 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:39:49.639773 systemd[1]: Started sshd@76-10.0.0.15:22-10.0.0.1:57056.service - OpenSSH per-connection server daemon (10.0.0.1:57056).
Jan 20 01:39:50.593577 sshd[6226]: Accepted publickey for core from 10.0.0.1 port 57056 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:39:50.671992 sshd-session[6226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:39:50.899820 systemd-logind[1552]: New session 77 of user core.
Jan 20 01:39:51.116556 systemd[1]: Started session-77.scope - Session 77 of User core.
Jan 20 01:39:53.039459 sshd[6231]: Connection closed by 10.0.0.1 port 57056
Jan 20 01:39:53.044790 sshd-session[6226]: pam_unix(sshd:session): session closed for user core
Jan 20 01:39:53.095862 systemd-logind[1552]: Session 77 logged out. Waiting for processes to exit.
Jan 20 01:39:53.106063 systemd[1]: sshd@76-10.0.0.15:22-10.0.0.1:57056.service: Deactivated successfully.
Jan 20 01:39:53.137611 systemd[1]: session-77.scope: Deactivated successfully.
Jan 20 01:39:53.201075 systemd-logind[1552]: Removed session 77.
Jan 20 01:39:54.615450 kubelet[3172]: E0120 01:39:54.614916 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:39:58.969903 systemd[1]: Started sshd@77-10.0.0.15:22-10.0.0.1:50820.service - OpenSSH per-connection server daemon (10.0.0.1:50820).
Jan 20 01:40:00.294470 sshd[6244]: Accepted publickey for core from 10.0.0.1 port 50820 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:40:00.363667 sshd-session[6244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:40:00.496217 systemd-logind[1552]: New session 78 of user core.
Jan 20 01:40:00.564997 systemd[1]: Started session-78.scope - Session 78 of User core.
Jan 20 01:40:03.807007 sshd[6247]: Connection closed by 10.0.0.1 port 50820
Jan 20 01:40:03.804557 sshd-session[6244]: pam_unix(sshd:session): session closed for user core
Jan 20 01:40:03.891751 systemd[1]: sshd@77-10.0.0.15:22-10.0.0.1:50820.service: Deactivated successfully.
Jan 20 01:40:04.009170 systemd[1]: session-78.scope: Deactivated successfully.
Jan 20 01:40:04.038035 systemd-logind[1552]: Session 78 logged out. Waiting for processes to exit.
Jan 20 01:40:04.047658 systemd-logind[1552]: Removed session 78.
Jan 20 01:40:09.017822 systemd[1]: Started sshd@78-10.0.0.15:22-10.0.0.1:51896.service - OpenSSH per-connection server daemon (10.0.0.1:51896).
Jan 20 01:40:09.979882 sshd[6262]: Accepted publickey for core from 10.0.0.1 port 51896 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:40:10.002557 sshd-session[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:40:10.171232 systemd-logind[1552]: New session 79 of user core.
Jan 20 01:40:10.188688 systemd[1]: Started session-79.scope - Session 79 of User core.
Jan 20 01:40:12.047995 sshd[6265]: Connection closed by 10.0.0.1 port 51896
Jan 20 01:40:12.040735 sshd-session[6262]: pam_unix(sshd:session): session closed for user core
Jan 20 01:40:12.125608 systemd-logind[1552]: Session 79 logged out. Waiting for processes to exit.
Jan 20 01:40:12.142767 systemd[1]: sshd@78-10.0.0.15:22-10.0.0.1:51896.service: Deactivated successfully.
Jan 20 01:40:12.188550 systemd[1]: session-79.scope: Deactivated successfully.
Jan 20 01:40:12.284142 systemd-logind[1552]: Removed session 79.
Jan 20 01:40:17.158011 systemd[1]: Started sshd@79-10.0.0.15:22-10.0.0.1:58644.service - OpenSSH per-connection server daemon (10.0.0.1:58644).
Jan 20 01:40:18.304661 sshd[6281]: Accepted publickey for core from 10.0.0.1 port 58644 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:40:18.393043 sshd-session[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:40:18.575437 systemd-logind[1552]: New session 80 of user core.
Jan 20 01:40:18.615815 systemd[1]: Started session-80.scope - Session 80 of User core.
Jan 20 01:40:20.268015 sshd[6284]: Connection closed by 10.0.0.1 port 58644
Jan 20 01:40:20.275138 sshd-session[6281]: pam_unix(sshd:session): session closed for user core
Jan 20 01:40:20.373133 systemd[1]: sshd@79-10.0.0.15:22-10.0.0.1:58644.service: Deactivated successfully.
Jan 20 01:40:20.381965 systemd-logind[1552]: Session 80 logged out. Waiting for processes to exit.
Jan 20 01:40:20.409610 systemd[1]: session-80.scope: Deactivated successfully.
Jan 20 01:40:20.498587 systemd-logind[1552]: Removed session 80.
Jan 20 01:40:25.942181 systemd[1]: Started sshd@80-10.0.0.15:22-10.0.0.1:42558.service - OpenSSH per-connection server daemon (10.0.0.1:42558).
Jan 20 01:40:26.753955 kubelet[3172]: E0120 01:40:26.708200 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:40:27.017927 sshd[6298]: Accepted publickey for core from 10.0.0.1 port 42558 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:40:27.047924 sshd-session[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:40:27.202716 systemd-logind[1552]: New session 81 of user core.
Jan 20 01:40:27.229887 systemd[1]: Started session-81.scope - Session 81 of User core.
Jan 20 01:40:28.334701 sshd[6302]: Connection closed by 10.0.0.1 port 42558
Jan 20 01:40:28.333694 sshd-session[6298]: pam_unix(sshd:session): session closed for user core
Jan 20 01:40:28.402794 systemd[1]: sshd@80-10.0.0.15:22-10.0.0.1:42558.service: Deactivated successfully.
Jan 20 01:40:28.408233 systemd[1]: session-81.scope: Deactivated successfully.
Jan 20 01:40:28.412120 systemd-logind[1552]: Session 81 logged out. Waiting for processes to exit.
Jan 20 01:40:28.416558 systemd-logind[1552]: Removed session 81.
Jan 20 01:40:29.592274 kubelet[3172]: E0120 01:40:29.564162 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:40:33.543223 systemd[1]: Started sshd@81-10.0.0.15:22-10.0.0.1:42572.service - OpenSSH per-connection server daemon (10.0.0.1:42572).
Jan 20 01:40:34.302723 sshd[6316]: Accepted publickey for core from 10.0.0.1 port 42572 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:40:34.330863 sshd-session[6316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:40:34.418143 systemd-logind[1552]: New session 82 of user core.
Jan 20 01:40:34.460022 systemd[1]: Started session-82.scope - Session 82 of User core.
Jan 20 01:40:37.106239 sshd[6321]: Connection closed by 10.0.0.1 port 42572
Jan 20 01:40:37.135921 sshd-session[6316]: pam_unix(sshd:session): session closed for user core
Jan 20 01:40:37.242076 systemd[1]: sshd@81-10.0.0.15:22-10.0.0.1:42572.service: Deactivated successfully.
Jan 20 01:40:37.271717 systemd[1]: session-82.scope: Deactivated successfully.
Jan 20 01:40:37.284798 systemd-logind[1552]: Session 82 logged out. Waiting for processes to exit.
Jan 20 01:40:37.324659 systemd[1]: Started sshd@82-10.0.0.15:22-10.0.0.1:51444.service - OpenSSH per-connection server daemon (10.0.0.1:51444).
Jan 20 01:40:37.327691 systemd-logind[1552]: Removed session 82.
Jan 20 01:40:38.185164 sshd[6335]: Accepted publickey for core from 10.0.0.1 port 51444 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:40:38.224240 sshd-session[6335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:40:38.352027 systemd-logind[1552]: New session 83 of user core.
Jan 20 01:40:38.405590 systemd[1]: Started session-83.scope - Session 83 of User core.
Jan 20 01:40:43.814497 sshd[6338]: Connection closed by 10.0.0.1 port 51444
Jan 20 01:40:43.814104 sshd-session[6335]: pam_unix(sshd:session): session closed for user core
Jan 20 01:40:43.919912 systemd[1]: Started sshd@83-10.0.0.15:22-10.0.0.1:51450.service - OpenSSH per-connection server daemon (10.0.0.1:51450).
Jan 20 01:40:43.997500 systemd[1]: sshd@82-10.0.0.15:22-10.0.0.1:51444.service: Deactivated successfully.
Jan 20 01:40:44.126918 systemd[1]: session-83.scope: Deactivated successfully.
Jan 20 01:40:44.224076 systemd-logind[1552]: Session 83 logged out. Waiting for processes to exit.
Jan 20 01:40:44.321244 systemd-logind[1552]: Removed session 83.
Jan 20 01:40:45.400244 sshd[6351]: Accepted publickey for core from 10.0.0.1 port 51450 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:40:45.442948 sshd-session[6351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:40:45.594636 systemd-logind[1552]: New session 84 of user core.
Jan 20 01:40:45.680154 systemd[1]: Started session-84.scope - Session 84 of User core.
Jan 20 01:40:48.492113 kubelet[3172]: E0120 01:40:48.486179 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.916s"
Jan 20 01:40:48.535006 kubelet[3172]: E0120 01:40:48.512489 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:40:49.183677 kubelet[3172]: E0120 01:40:48.536141 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:40:49.880532 containerd[1566]: time="2026-01-20T01:40:49.879739632Z" level=warning msg="container event discarded" container=852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03 type=CONTAINER_STOPPED_EVENT
Jan 20 01:40:49.943466 containerd[1566]: time="2026-01-20T01:40:49.923853942Z" level=warning msg="container event discarded" container=dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b type=CONTAINER_STOPPED_EVENT
Jan 20 01:40:50.451761 containerd[1566]: time="2026-01-20T01:40:50.451291930Z" level=warning msg="container event discarded" container=171064a745d9bf8f2b104990ea9fbd78594f9c51a8bcde6f918af36920cc0ab4 type=CONTAINER_DELETED_EVENT
Jan 20 01:40:50.930227 containerd[1566]: time="2026-01-20T01:40:50.910050733Z" level=warning msg="container event discarded" container=8d843f3ae9a35fba5704750ef0d247bd18b266c6a601d8fd76aadbfb1cabe073 type=CONTAINER_DELETED_EVENT
Jan 20 01:40:58.596248 kubelet[3172]: E0120 01:40:58.586557 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:40:59.398670 containerd[1566]: time="2026-01-20T01:40:59.397636029Z" level=warning msg="container event discarded" container=fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64 type=CONTAINER_CREATED_EVENT
Jan 20 01:41:00.974907 sshd[6357]: Connection closed by 10.0.0.1 port 51450
Jan 20 01:41:00.976083 sshd-session[6351]: pam_unix(sshd:session): session closed for user core
Jan 20 01:41:01.096199 systemd[1]: sshd@83-10.0.0.15:22-10.0.0.1:51450.service: Deactivated successfully.
Jan 20 01:41:01.195136 systemd[1]: session-84.scope: Deactivated successfully.
Jan 20 01:41:01.202454 systemd[1]: session-84.scope: Consumed 2.340s CPU time, 48.9M memory peak.
Jan 20 01:41:01.316790 systemd-logind[1552]: Session 84 logged out. Waiting for processes to exit.
Jan 20 01:41:01.367958 systemd[1]: Started sshd@84-10.0.0.15:22-10.0.0.1:36250.service - OpenSSH per-connection server daemon (10.0.0.1:36250).
Jan 20 01:41:01.721240 systemd-logind[1552]: Removed session 84.
Jan 20 01:41:02.917154 containerd[1566]: time="2026-01-20T01:41:02.911047987Z" level=warning msg="container event discarded" container=fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64 type=CONTAINER_STARTED_EVENT
Jan 20 01:41:03.436196 sshd[6375]: Accepted publickey for core from 10.0.0.1 port 36250 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:41:03.517784 sshd-session[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:41:03.581821 kubelet[3172]: E0120 01:41:03.573695 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:41:03.651119 systemd-logind[1552]: New session 85 of user core.
Jan 20 01:41:03.773055 systemd[1]: Started session-85.scope - Session 85 of User core.
Jan 20 01:41:09.141244 sshd[6380]: Connection closed by 10.0.0.1 port 36250
Jan 20 01:41:09.151854 sshd-session[6375]: pam_unix(sshd:session): session closed for user core
Jan 20 01:41:09.248262 systemd[1]: sshd@84-10.0.0.15:22-10.0.0.1:36250.service: Deactivated successfully.
Jan 20 01:41:09.264888 systemd[1]: session-85.scope: Deactivated successfully.
Jan 20 01:41:09.265668 systemd[1]: session-85.scope: Consumed 1.054s CPU time, 34.6M memory peak.
Jan 20 01:41:09.285653 systemd-logind[1552]: Session 85 logged out. Waiting for processes to exit.
Jan 20 01:41:09.320800 systemd[1]: Started sshd@85-10.0.0.15:22-10.0.0.1:47492.service - OpenSSH per-connection server daemon (10.0.0.1:47492).
Jan 20 01:41:09.362590 systemd-logind[1552]: Removed session 85.
Jan 20 01:41:09.569446 kubelet[3172]: E0120 01:41:09.568522 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:41:10.540226 sshd[6395]: Accepted publickey for core from 10.0.0.1 port 47492 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:41:10.617826 sshd-session[6395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:41:10.792460 systemd-logind[1552]: New session 86 of user core.
Jan 20 01:41:10.873701 systemd[1]: Started session-86.scope - Session 86 of User core.
Jan 20 01:41:12.680513 sshd[6398]: Connection closed by 10.0.0.1 port 47492
Jan 20 01:41:12.676002 sshd-session[6395]: pam_unix(sshd:session): session closed for user core
Jan 20 01:41:12.771003 systemd[1]: sshd@85-10.0.0.15:22-10.0.0.1:47492.service: Deactivated successfully.
Jan 20 01:41:12.821577 systemd[1]: session-86.scope: Deactivated successfully.
Jan 20 01:41:12.833921 systemd-logind[1552]: Session 86 logged out. Waiting for processes to exit.
Jan 20 01:41:12.839169 systemd-logind[1552]: Removed session 86.
Jan 20 01:41:13.664847 containerd[1566]: time="2026-01-20T01:41:13.660796790Z" level=warning msg="container event discarded" container=575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b type=CONTAINER_CREATED_EVENT
Jan 20 01:41:15.035872 containerd[1566]: time="2026-01-20T01:41:15.035574233Z" level=warning msg="container event discarded" container=575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b type=CONTAINER_STARTED_EVENT
Jan 20 01:41:17.582537 kubelet[3172]: E0120 01:41:17.569787 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:41:17.894245 systemd[1]: Started sshd@86-10.0.0.15:22-10.0.0.1:39362.service - OpenSSH per-connection server daemon (10.0.0.1:39362).
Jan 20 01:41:19.889224 sshd[6414]: Accepted publickey for core from 10.0.0.1 port 39362 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:41:19.899661 sshd-session[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:41:20.033125 systemd-logind[1552]: New session 87 of user core.
Jan 20 01:41:20.241682 systemd[1]: Started session-87.scope - Session 87 of User core.
Jan 20 01:41:22.489652 sshd[6417]: Connection closed by 10.0.0.1 port 39362
Jan 20 01:41:22.500699 sshd-session[6414]: pam_unix(sshd:session): session closed for user core
Jan 20 01:41:22.604961 systemd[1]: sshd@86-10.0.0.15:22-10.0.0.1:39362.service: Deactivated successfully.
Jan 20 01:41:22.680749 systemd[1]: session-87.scope: Deactivated successfully.
Jan 20 01:41:22.741591 systemd-logind[1552]: Session 87 logged out. Waiting for processes to exit.
Jan 20 01:41:22.794733 systemd-logind[1552]: Removed session 87.
Jan 20 01:41:27.554655 systemd[1]: Started sshd@87-10.0.0.15:22-10.0.0.1:37124.service - OpenSSH per-connection server daemon (10.0.0.1:37124).
Jan 20 01:41:28.144830 sshd[6430]: Accepted publickey for core from 10.0.0.1 port 37124 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:41:28.195727 sshd-session[6430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:41:28.309035 systemd-logind[1552]: New session 88 of user core.
Jan 20 01:41:28.351112 systemd[1]: Started session-88.scope - Session 88 of User core.
Jan 20 01:41:30.338743 sshd[6433]: Connection closed by 10.0.0.1 port 37124
Jan 20 01:41:30.363290 sshd-session[6430]: pam_unix(sshd:session): session closed for user core
Jan 20 01:41:30.505009 systemd[1]: sshd@87-10.0.0.15:22-10.0.0.1:37124.service: Deactivated successfully.
Jan 20 01:41:30.592754 systemd[1]: session-88.scope: Deactivated successfully.
Jan 20 01:41:30.605066 systemd-logind[1552]: Session 88 logged out. Waiting for processes to exit.
Jan 20 01:41:30.610266 systemd-logind[1552]: Removed session 88.
Jan 20 01:41:34.639381 kubelet[3172]: E0120 01:41:34.634046 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:41:35.607032 systemd[1]: Started sshd@88-10.0.0.15:22-10.0.0.1:49542.service - OpenSSH per-connection server daemon (10.0.0.1:49542).
Jan 20 01:41:36.292866 sshd[6448]: Accepted publickey for core from 10.0.0.1 port 49542 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:41:36.332545 sshd-session[6448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:41:36.454989 systemd-logind[1552]: New session 89 of user core.
Jan 20 01:41:36.629046 systemd[1]: Started session-89.scope - Session 89 of User core.
Jan 20 01:41:39.509908 sshd[6451]: Connection closed by 10.0.0.1 port 49542
Jan 20 01:41:39.511170 sshd-session[6448]: pam_unix(sshd:session): session closed for user core
Jan 20 01:41:39.638867 systemd-logind[1552]: Session 89 logged out. Waiting for processes to exit.
Jan 20 01:41:39.724231 systemd[1]: sshd@88-10.0.0.15:22-10.0.0.1:49542.service: Deactivated successfully.
Jan 20 01:41:39.849684 systemd[1]: session-89.scope: Deactivated successfully.
Jan 20 01:41:40.000772 systemd-logind[1552]: Removed session 89.
Jan 20 01:41:48.638261 systemd[1]: Started sshd@89-10.0.0.15:22-10.0.0.1:40656.service - OpenSSH per-connection server daemon (10.0.0.1:40656).
Jan 20 01:41:49.770259 kubelet[3172]: E0120 01:41:49.711263 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.845s"
Jan 20 01:41:53.299462 kubelet[3172]: E0120 01:41:53.272098 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.441s"
Jan 20 01:41:53.434605 kubelet[3172]: E0120 01:41:53.434562 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:41:56.598430 sshd[6467]: Accepted publickey for core from 10.0.0.1 port 40656 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:41:56.630253 sshd-session[6467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:41:56.809660 systemd-logind[1552]: New session 90 of user core.
Jan 20 01:41:56.810292 systemd[1]: Started session-90.scope - Session 90 of User core.
Jan 20 01:41:58.625252 kubelet[3172]: E0120 01:41:58.618728 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:42:00.122907 sshd[6472]: Connection closed by 10.0.0.1 port 40656
Jan 20 01:42:00.120780 sshd-session[6467]: pam_unix(sshd:session): session closed for user core
Jan 20 01:42:00.287661 systemd[1]: sshd@89-10.0.0.15:22-10.0.0.1:40656.service: Deactivated successfully.
Jan 20 01:42:00.320194 systemd[1]: session-90.scope: Deactivated successfully.
Jan 20 01:42:00.495582 systemd-logind[1552]: Session 90 logged out. Waiting for processes to exit.
Jan 20 01:42:00.706488 systemd-logind[1552]: Removed session 90.
Jan 20 01:42:05.236276 systemd[1]: Started sshd@90-10.0.0.15:22-10.0.0.1:43888.service - OpenSSH per-connection server daemon (10.0.0.1:43888).
Jan 20 01:42:06.193206 sshd[6486]: Accepted publickey for core from 10.0.0.1 port 43888 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:42:06.214284 sshd-session[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:42:06.366543 systemd-logind[1552]: New session 91 of user core.
Jan 20 01:42:06.425666 systemd[1]: Started session-91.scope - Session 91 of User core.
Jan 20 01:42:06.594847 kubelet[3172]: E0120 01:42:06.594200 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:42:09.654782 sshd[6489]: Connection closed by 10.0.0.1 port 43888
Jan 20 01:42:09.691529 sshd-session[6486]: pam_unix(sshd:session): session closed for user core
Jan 20 01:42:09.817190 systemd[1]: sshd@90-10.0.0.15:22-10.0.0.1:43888.service: Deactivated successfully.
Jan 20 01:42:09.877713 systemd[1]: session-91.scope: Deactivated successfully.
Jan 20 01:42:09.907926 systemd-logind[1552]: Session 91 logged out. Waiting for processes to exit.
Jan 20 01:42:09.956238 systemd-logind[1552]: Removed session 91.
Jan 20 01:42:14.797487 systemd[1]: Started sshd@91-10.0.0.15:22-10.0.0.1:50744.service - OpenSSH per-connection server daemon (10.0.0.1:50744).
Jan 20 01:42:15.710761 sshd[6504]: Accepted publickey for core from 10.0.0.1 port 50744 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:42:15.716014 sshd-session[6504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:42:15.881424 systemd-logind[1552]: New session 92 of user core.
Jan 20 01:42:15.930584 systemd[1]: Started session-92.scope - Session 92 of User core.
Jan 20 01:42:17.891457 sshd[6507]: Connection closed by 10.0.0.1 port 50744
Jan 20 01:42:17.894781 sshd-session[6504]: pam_unix(sshd:session): session closed for user core
Jan 20 01:42:17.944036 systemd[1]: sshd@91-10.0.0.15:22-10.0.0.1:50744.service: Deactivated successfully.
Jan 20 01:42:18.013891 systemd[1]: session-92.scope: Deactivated successfully.
Jan 20 01:42:18.083979 systemd-logind[1552]: Session 92 logged out. Waiting for processes to exit.
Jan 20 01:42:18.118850 systemd-logind[1552]: Removed session 92.
Jan 20 01:42:20.571507 kubelet[3172]: E0120 01:42:20.570829 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:42:23.000424 systemd[1]: Started sshd@92-10.0.0.15:22-10.0.0.1:50752.service - OpenSSH per-connection server daemon (10.0.0.1:50752).
Jan 20 01:42:23.890857 sshd[6520]: Accepted publickey for core from 10.0.0.1 port 50752 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:42:23.890722 sshd-session[6520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:42:24.127661 systemd-logind[1552]: New session 93 of user core.
Jan 20 01:42:24.272919 systemd[1]: Started session-93.scope - Session 93 of User core.
Jan 20 01:42:25.609786 kubelet[3172]: E0120 01:42:25.606037 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:42:26.049439 sshd[6523]: Connection closed by 10.0.0.1 port 50752
Jan 20 01:42:26.055078 sshd-session[6520]: pam_unix(sshd:session): session closed for user core
Jan 20 01:42:26.110514 systemd[1]: sshd@92-10.0.0.15:22-10.0.0.1:50752.service: Deactivated successfully.
Jan 20 01:42:26.121085 systemd[1]: session-93.scope: Deactivated successfully.
Jan 20 01:42:26.131801 systemd-logind[1552]: Session 93 logged out. Waiting for processes to exit.
Jan 20 01:42:26.147986 systemd-logind[1552]: Removed session 93.
Jan 20 01:42:30.603490 kubelet[3172]: E0120 01:42:30.601734 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:42:31.241646 systemd[1]: Started sshd@93-10.0.0.15:22-10.0.0.1:53164.service - OpenSSH per-connection server daemon (10.0.0.1:53164).
Jan 20 01:42:31.691837 sshd[6538]: Accepted publickey for core from 10.0.0.1 port 53164 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:42:31.713060 sshd-session[6538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:42:31.852245 systemd-logind[1552]: New session 94 of user core.
Jan 20 01:42:31.873161 systemd[1]: Started session-94.scope - Session 94 of User core.
Jan 20 01:42:33.891607 sshd[6541]: Connection closed by 10.0.0.1 port 53164
Jan 20 01:42:33.893227 sshd-session[6538]: pam_unix(sshd:session): session closed for user core
Jan 20 01:42:33.989530 systemd[1]: sshd@93-10.0.0.15:22-10.0.0.1:53164.service: Deactivated successfully.
Jan 20 01:42:34.038157 systemd[1]: session-94.scope: Deactivated successfully.
Jan 20 01:42:34.056802 systemd-logind[1552]: Session 94 logged out. Waiting for processes to exit.
Jan 20 01:42:34.062251 systemd-logind[1552]: Removed session 94.
Jan 20 01:42:36.603078 kubelet[3172]: E0120 01:42:36.555094 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.983s"
Jan 20 01:42:46.726070 systemd[1]: Started sshd@94-10.0.0.15:22-10.0.0.1:55112.service - OpenSSH per-connection server daemon (10.0.0.1:55112).
Jan 20 01:42:47.384532 systemd[1]: cri-containerd-fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64.scope: Deactivated successfully.
Jan 20 01:42:47.385193 systemd[1]: cri-containerd-fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64.scope: Consumed 19.961s CPU time, 52.7M memory peak, 1.6M read from disk.
Jan 20 01:42:48.406882 containerd[1566]: time="2026-01-20T01:42:48.392841759Z" level=info msg="received container exit event container_id:\"fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64\" id:\"fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64\" pid:5715 exit_status:1 exited_at:{seconds:1768873368 nanos:163281008}"
Jan 20 01:42:49.613481 kubelet[3172]: E0120 01:42:49.602515 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.963s"
Jan 20 01:42:49.730960 kubelet[3172]: E0120 01:42:49.709152 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:42:49.784918 systemd[1]: cri-containerd-9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a.scope: Deactivated successfully.
Jan 20 01:42:49.839268 sshd-session[6554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:42:49.874844 sshd[6554]: Accepted publickey for core from 10.0.0.1 port 55112 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:42:49.802775 systemd[1]: cri-containerd-9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a.scope: Consumed 7.461s CPU time, 32.4M memory peak, 4K written to disk.
Jan 20 01:42:49.912878 kubelet[3172]: E0120 01:42:49.852989 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:42:49.918570 systemd-logind[1552]: New session 95 of user core.
Jan 20 01:42:49.984700 systemd[1]: Started session-95.scope - Session 95 of User core.
Jan 20 01:42:50.134263 containerd[1566]: time="2026-01-20T01:42:50.122203030Z" level=info msg="received container exit event container_id:\"9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a\" id:\"9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a\" pid:5068 exit_status:1 exited_at:{seconds:1768873370 nanos:178631}"
Jan 20 01:42:50.308998 systemd[1]: cri-containerd-575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b.scope: Deactivated successfully.
Jan 20 01:42:50.314205 systemd[1]: cri-containerd-575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b.scope: Consumed 17.514s CPU time, 25M memory peak, 712K read from disk.
Jan 20 01:42:50.407817 containerd[1566]: time="2026-01-20T01:42:50.392905756Z" level=info msg="received container exit event container_id:\"575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b\" id:\"575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b\" pid:5784 exit_status:1 exited_at:{seconds:1768873370 nanos:372270099}"
Jan 20 01:42:51.447950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64-rootfs.mount: Deactivated successfully.
Jan 20 01:42:52.276061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a-rootfs.mount: Deactivated successfully.
Jan 20 01:42:53.006422 kubelet[3172]: I0120 01:42:52.946605 3172 scope.go:117] "RemoveContainer" containerID="dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b"
Jan 20 01:42:53.006422 kubelet[3172]: I0120 01:42:52.982932 3172 scope.go:117] "RemoveContainer" containerID="fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64"
Jan 20 01:42:53.006422 kubelet[3172]: E0120 01:42:52.983033 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:42:53.006422 kubelet[3172]: E0120 01:42:52.983224 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(5bbfee13ce9e07281eca876a0b8067f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="5bbfee13ce9e07281eca876a0b8067f2"
Jan 20 01:42:53.050202 sshd[6566]: Connection closed by 10.0.0.1 port 55112
Jan 20 01:42:53.044995 sshd-session[6554]: pam_unix(sshd:session): session closed for user core
Jan 20 01:42:53.200293 systemd[1]: sshd@94-10.0.0.15:22-10.0.0.1:55112.service: Deactivated successfully.
Jan 20 01:42:53.249640 systemd[1]: session-95.scope: Deactivated successfully.
Jan 20 01:42:53.282469 containerd[1566]: time="2026-01-20T01:42:53.282156445Z" level=info msg="RemoveContainer for \"dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b\""
Jan 20 01:42:53.322033 systemd-logind[1552]: Session 95 logged out. Waiting for processes to exit.
Jan 20 01:42:53.437807 systemd-logind[1552]: Removed session 95.
Jan 20 01:42:53.682764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b-rootfs.mount: Deactivated successfully.
Jan 20 01:42:53.702161 containerd[1566]: time="2026-01-20T01:42:53.702108577Z" level=info msg="RemoveContainer for \"dac8e004281550f75eeac9a2f462a8031a3e64549c8e1c10de85ce40375db89b\" returns successfully"
Jan 20 01:42:53.912981 kubelet[3172]: I0120 01:42:53.912063 3172 scope.go:117] "RemoveContainer" containerID="852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03"
Jan 20 01:42:53.912981 kubelet[3172]: I0120 01:42:53.912589 3172 scope.go:117] "RemoveContainer" containerID="575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b"
Jan 20 01:42:53.916533 kubelet[3172]: E0120 01:42:53.912677 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:42:53.916533 kubelet[3172]: E0120 01:42:53.915936 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(07ca0cbf79ad6ba9473d8e9f7715e571)\"" pod="kube-system/kube-scheduler-localhost" podUID="07ca0cbf79ad6ba9473d8e9f7715e571"
Jan 20 01:42:53.940588 containerd[1566]: time="2026-01-20T01:42:53.940432785Z" level=info msg="RemoveContainer for \"852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03\""
Jan 20 01:42:53.997431 kubelet[3172]: I0120 01:42:53.997258 3172 scope.go:117] "RemoveContainer" containerID="9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a"
Jan 20 01:42:54.012477 kubelet[3172]: E0120 01:42:54.006670 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:42:54.038426 containerd[1566]: time="2026-01-20T01:42:54.031087735Z" level=info msg="CreateContainer within sandbox \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:2,}"
Jan 20 01:42:54.045058 containerd[1566]: time="2026-01-20T01:42:54.045006702Z" level=info msg="RemoveContainer for \"852c10c0106d4cd238aeddc8b687b1d7d8b2f7c5a522d5dfc2289eb6a7ef7d03\" returns successfully"
Jan 20 01:42:54.057061 kubelet[3172]: I0120 01:42:54.057017 3172 scope.go:117] "RemoveContainer" containerID="53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb"
Jan 20 01:42:54.125877 containerd[1566]: time="2026-01-20T01:42:54.124938544Z" level=info msg="RemoveContainer for \"53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb\""
Jan 20 01:42:54.246465 containerd[1566]: time="2026-01-20T01:42:54.246222778Z" level=info msg="RemoveContainer for \"53d275edf9adfa3d4eb46156455dffa1ba9fa19fb8f6114ec57139155587efeb\" returns successfully"
Jan 20 01:42:54.386043 containerd[1566]: time="2026-01-20T01:42:54.379098502Z" level=info msg="Container f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:42:54.497078 containerd[1566]: time="2026-01-20T01:42:54.495508530Z" level=info msg="CreateContainer within sandbox \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\" for &ContainerMetadata{Name:cilium-operator,Attempt:2,} returns container id \"f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce\""
Jan 20 01:42:54.508271 containerd[1566]: time="2026-01-20T01:42:54.506269319Z" level=info msg="StartContainer for \"f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce\""
Jan 20 01:42:54.547912 containerd[1566]: time="2026-01-20T01:42:54.542860229Z" level=info msg="connecting to shim f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce" address="unix:///run/containerd/s/a2e0cae1905ed5d0927600f03d22acdaeeffefa7c78fd47c144e009d52a1c3e4" protocol=ttrpc version=3
Jan 20 01:42:54.993984 systemd[1]: Started cri-containerd-f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce.scope - libcontainer container f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce.
Jan 20 01:42:55.544920 containerd[1566]: time="2026-01-20T01:42:55.544574451Z" level=info msg="StartContainer for \"f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce\" returns successfully"
Jan 20 01:42:56.410537 kubelet[3172]: E0120 01:42:56.410048 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:42:57.578428 kubelet[3172]: E0120 01:42:57.567014 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:42:58.193545 systemd[1]: Started sshd@95-10.0.0.15:22-10.0.0.1:52584.service - OpenSSH per-connection server daemon (10.0.0.1:52584).
Jan 20 01:42:59.025433 kubelet[3172]: I0120 01:42:59.023608 3172 scope.go:117] "RemoveContainer" containerID="fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64"
Jan 20 01:42:59.025433 kubelet[3172]: E0120 01:42:59.023850 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:42:59.025433 kubelet[3172]: E0120 01:42:59.023994 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(5bbfee13ce9e07281eca876a0b8067f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="5bbfee13ce9e07281eca876a0b8067f2"
Jan 20 01:42:59.025433 kubelet[3172]: I0120 01:42:59.024611 3172 scope.go:117] "RemoveContainer" containerID="575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b"
Jan 20 01:42:59.025433 kubelet[3172]: E0120 01:42:59.024681 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:42:59.025433 kubelet[3172]: E0120 01:42:59.024839 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(07ca0cbf79ad6ba9473d8e9f7715e571)\"" pod="kube-system/kube-scheduler-localhost" podUID="07ca0cbf79ad6ba9473d8e9f7715e571"
Jan 20 01:42:59.437101 sshd[6642]: Accepted publickey for core from 10.0.0.1 port 52584 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:42:59.441842 sshd-session[6642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:42:59.450499 systemd-logind[1552]: New session 96 of user core.
Jan 20 01:42:59.486042 systemd[1]: Started session-96.scope - Session 96 of User core.
Jan 20 01:43:01.546048 sshd[6645]: Connection closed by 10.0.0.1 port 52584
Jan 20 01:43:01.561284 sshd-session[6642]: pam_unix(sshd:session): session closed for user core
Jan 20 01:43:01.622964 systemd[1]: sshd@95-10.0.0.15:22-10.0.0.1:52584.service: Deactivated successfully.
Jan 20 01:43:01.692603 systemd[1]: session-96.scope: Deactivated successfully.
Jan 20 01:43:01.727249 systemd-logind[1552]: Session 96 logged out. Waiting for processes to exit.
Jan 20 01:43:01.840233 systemd-logind[1552]: Removed session 96.
Jan 20 01:43:06.741674 systemd[1]: Started sshd@96-10.0.0.15:22-10.0.0.1:48978.service - OpenSSH per-connection server daemon (10.0.0.1:48978).
Jan 20 01:43:07.142440 sshd[6658]: Accepted publickey for core from 10.0.0.1 port 48978 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:43:07.159249 sshd-session[6658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:43:08.536526 systemd-logind[1552]: New session 97 of user core.
Jan 20 01:43:08.690762 systemd[1]: Started session-97.scope - Session 97 of User core.
Jan 20 01:43:09.587489 kubelet[3172]: I0120 01:43:09.584934 3172 scope.go:117] "RemoveContainer" containerID="fef3001ed8a3c9e3ff48424acdef62f249bc121e1279e6e542d7938d9690ea64"
Jan 20 01:43:09.615424 kubelet[3172]: E0120 01:43:09.601472 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:43:09.655506 containerd[1566]: time="2026-01-20T01:43:09.654562604Z" level=info msg="CreateContainer within sandbox \"77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:4,}"
Jan 20 01:43:10.024231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056764348.mount: Deactivated successfully.
Jan 20 01:43:10.051163 containerd[1566]: time="2026-01-20T01:43:10.047517622Z" level=info msg="Container 822d568f1a056c4f5be0bf08993d308fe75c7194e601769ab5c41085d8de36e0: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:43:10.152177 containerd[1566]: time="2026-01-20T01:43:10.151631175Z" level=info msg="CreateContainer within sandbox \"77a1588463be070b175ec340f59c83300b14fceb96812c5363d33ffeb6072ddd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:4,} returns container id \"822d568f1a056c4f5be0bf08993d308fe75c7194e601769ab5c41085d8de36e0\""
Jan 20 01:43:10.185613 containerd[1566]: time="2026-01-20T01:43:10.157557383Z" level=info msg="StartContainer for \"822d568f1a056c4f5be0bf08993d308fe75c7194e601769ab5c41085d8de36e0\""
Jan 20 01:43:10.236170 containerd[1566]: time="2026-01-20T01:43:10.235621349Z" level=info msg="connecting to shim 822d568f1a056c4f5be0bf08993d308fe75c7194e601769ab5c41085d8de36e0" address="unix:///run/containerd/s/67df6f9f643dff0e5a2de5d8ebba56686cc2aa08237c90040202dd99a7cd6a97" protocol=ttrpc version=3
Jan 20 01:43:10.381689 sshd[6661]: Connection closed by 10.0.0.1 port 48978
Jan 20 01:43:10.384782 sshd-session[6658]: pam_unix(sshd:session): session closed for user core
Jan 20 01:43:10.422922 systemd[1]: sshd@96-10.0.0.15:22-10.0.0.1:48978.service: Deactivated successfully.
Jan 20 01:43:10.511697 systemd[1]: session-97.scope: Deactivated successfully.
Jan 20 01:43:10.602179 systemd-logind[1552]: Session 97 logged out. Waiting for processes to exit.
Jan 20 01:43:10.671560 systemd[1]: Started cri-containerd-822d568f1a056c4f5be0bf08993d308fe75c7194e601769ab5c41085d8de36e0.scope - libcontainer container 822d568f1a056c4f5be0bf08993d308fe75c7194e601769ab5c41085d8de36e0.
Jan 20 01:43:10.682922 systemd-logind[1552]: Removed session 97.
Jan 20 01:43:11.745193 containerd[1566]: time="2026-01-20T01:43:11.742796847Z" level=info msg="StartContainer for \"822d568f1a056c4f5be0bf08993d308fe75c7194e601769ab5c41085d8de36e0\" returns successfully"
Jan 20 01:43:12.874958 kubelet[3172]: E0120 01:43:12.873256 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:43:14.565777 kubelet[3172]: I0120 01:43:14.565653 3172 scope.go:117] "RemoveContainer" containerID="575d7ff32135ee7f7410370d9ae5aef8cb7cf44a249951eb7edc6be2c4d92b7b"
Jan 20 01:43:14.567268 kubelet[3172]: E0120 01:43:14.566604 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:43:14.716950 containerd[1566]: time="2026-01-20T01:43:14.703553463Z" level=info msg="CreateContainer within sandbox \"3e99c7aa4aa9d0e067a82a258af6133e67ddeff32530aae29e81c04c9bbdda20\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:3,}"
Jan 20 01:43:15.187528 containerd[1566]: time="2026-01-20T01:43:15.186841670Z" level=info msg="Container eb95466697671468c867edd2958c8c5c257c2ea907343c82c764de2e18cf24a3: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:43:15.504999 containerd[1566]: time="2026-01-20T01:43:15.499652160Z" level=info msg="CreateContainer within sandbox \"3e99c7aa4aa9d0e067a82a258af6133e67ddeff32530aae29e81c04c9bbdda20\" for &ContainerMetadata{Name:kube-scheduler,Attempt:3,} returns container id \"eb95466697671468c867edd2958c8c5c257c2ea907343c82c764de2e18cf24a3\""
Jan 20 01:43:15.521103 systemd[1]: Started sshd@97-10.0.0.15:22-10.0.0.1:40258.service - OpenSSH per-connection server daemon (10.0.0.1:40258).
Jan 20 01:43:15.593582 containerd[1566]: time="2026-01-20T01:43:15.591243262Z" level=info msg="StartContainer for \"eb95466697671468c867edd2958c8c5c257c2ea907343c82c764de2e18cf24a3\""
Jan 20 01:43:15.669406 containerd[1566]: time="2026-01-20T01:43:15.646275501Z" level=info msg="connecting to shim eb95466697671468c867edd2958c8c5c257c2ea907343c82c764de2e18cf24a3" address="unix:///run/containerd/s/b300e0b873644f498b396527866ecbf526e8b806b595a74b8bd537fbc88b091f" protocol=ttrpc version=3
Jan 20 01:43:16.802040 systemd[1]: Started cri-containerd-eb95466697671468c867edd2958c8c5c257c2ea907343c82c764de2e18cf24a3.scope - libcontainer container eb95466697671468c867edd2958c8c5c257c2ea907343c82c764de2e18cf24a3.
Jan 20 01:43:17.124735 sshd[6708]: Accepted publickey for core from 10.0.0.1 port 40258 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:43:17.123866 sshd-session[6708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:43:17.387284 systemd-logind[1552]: New session 98 of user core.
Jan 20 01:43:17.445026 systemd[1]: Started session-98.scope - Session 98 of User core.
Jan 20 01:43:18.639866 kubelet[3172]: E0120 01:43:18.634911 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:43:18.791681 containerd[1566]: time="2026-01-20T01:43:18.787574348Z" level=error msg="get state for eb95466697671468c867edd2958c8c5c257c2ea907343c82c764de2e18cf24a3" error="context deadline exceeded"
Jan 20 01:43:18.791681 containerd[1566]: time="2026-01-20T01:43:18.787708089Z" level=warning msg="unknown status" status=0
Jan 20 01:43:18.899092 containerd[1566]: time="2026-01-20T01:43:18.893536563Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Jan 20 01:43:19.420532 containerd[1566]: time="2026-01-20T01:43:19.418953327Z" level=info msg="StartContainer for \"eb95466697671468c867edd2958c8c5c257c2ea907343c82c764de2e18cf24a3\" returns successfully"
Jan 20 01:43:19.479098 kubelet[3172]: E0120 01:43:19.475530 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:43:19.963963 sshd[6730]: Connection closed by 10.0.0.1 port 40258
Jan 20 01:43:19.971489 sshd-session[6708]: pam_unix(sshd:session): session closed for user core
Jan 20 01:43:20.031705 systemd[1]: sshd@97-10.0.0.15:22-10.0.0.1:40258.service: Deactivated successfully.
Jan 20 01:43:20.096866 systemd[1]: session-98.scope: Deactivated successfully.
Jan 20 01:43:20.151866 systemd-logind[1552]: Session 98 logged out. Waiting for processes to exit.
Jan 20 01:43:20.176571 systemd-logind[1552]: Removed session 98.
Jan 20 01:43:20.543480 kubelet[3172]: E0120 01:43:20.537925 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:43:21.685513 kubelet[3172]: E0120 01:43:21.633025 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:43:25.191676 systemd[1]: Started sshd@98-10.0.0.15:22-10.0.0.1:36172.service - OpenSSH per-connection server daemon (10.0.0.1:36172).
Jan 20 01:43:25.609886 kubelet[3172]: E0120 01:43:25.605080 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:43:26.551747 sshd[6759]: Accepted publickey for core from 10.0.0.1 port 36172 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:43:26.604076 sshd-session[6759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:43:26.704041 systemd-logind[1552]: New session 99 of user core.
Jan 20 01:43:26.741010 systemd[1]: Started session-99.scope - Session 99 of User core.
Jan 20 01:43:28.716236 sshd[6762]: Connection closed by 10.0.0.1 port 36172
Jan 20 01:43:28.715781 sshd-session[6759]: pam_unix(sshd:session): session closed for user core
Jan 20 01:43:28.735232 kubelet[3172]: E0120 01:43:28.730155 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:43:28.852948 systemd[1]: sshd@98-10.0.0.15:22-10.0.0.1:36172.service: Deactivated successfully.
Jan 20 01:43:28.896920 systemd[1]: session-99.scope: Deactivated successfully.
Jan 20 01:43:29.004972 systemd-logind[1552]: Session 99 logged out. Waiting for processes to exit.
Jan 20 01:43:29.060478 systemd-logind[1552]: Removed session 99.
Jan 20 01:43:32.641620 kubelet[3172]: E0120 01:43:32.640038 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:43:33.880857 systemd[1]: Started sshd@99-10.0.0.15:22-10.0.0.1:36176.service - OpenSSH per-connection server daemon (10.0.0.1:36176).
Jan 20 01:43:34.455752 sshd[6777]: Accepted publickey for core from 10.0.0.1 port 36176 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:43:34.481668 sshd-session[6777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:43:34.625242 systemd-logind[1552]: New session 100 of user core.
Jan 20 01:43:34.704141 systemd[1]: Started session-100.scope - Session 100 of User core.
Jan 20 01:43:36.159760 kubelet[3172]: E0120 01:43:36.158825 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:43:36.484018 sshd[6780]: Connection closed by 10.0.0.1 port 36176
Jan 20 01:43:36.485940 sshd-session[6777]: pam_unix(sshd:session): session closed for user core
Jan 20 01:43:36.684125 systemd[1]: sshd@99-10.0.0.15:22-10.0.0.1:36176.service: Deactivated successfully.
Jan 20 01:43:36.796212 systemd[1]: session-100.scope: Deactivated successfully.
Jan 20 01:43:36.823880 systemd-logind[1552]: Session 100 logged out. Waiting for processes to exit.
Jan 20 01:43:37.029181 systemd-logind[1552]: Removed session 100.
Jan 20 01:43:38.931795 kubelet[3172]: E0120 01:43:38.931131 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:43:39.896278 kubelet[3172]: E0120 01:43:39.891898 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:43:40.800917 kubelet[3172]: E0120 01:43:40.791245 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:43:41.731469 systemd[1]: Started sshd@100-10.0.0.15:22-10.0.0.1:54258.service - OpenSSH per-connection server daemon (10.0.0.1:54258).
Jan 20 01:43:43.295467 sshd[6795]: Accepted publickey for core from 10.0.0.1 port 54258 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:43:43.312097 sshd-session[6795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:43:43.431839 systemd-logind[1552]: New session 101 of user core.
Jan 20 01:43:43.471923 systemd[1]: Started session-101.scope - Session 101 of User core.
Jan 20 01:43:45.542605 sshd[6800]: Connection closed by 10.0.0.1 port 54258
Jan 20 01:43:45.550948 sshd-session[6795]: pam_unix(sshd:session): session closed for user core
Jan 20 01:43:45.621241 systemd[1]: sshd@100-10.0.0.15:22-10.0.0.1:54258.service: Deactivated successfully.
Jan 20 01:43:45.648273 systemd[1]: session-101.scope: Deactivated successfully.
Jan 20 01:43:45.653890 systemd-logind[1552]: Session 101 logged out. Waiting for processes to exit.
Jan 20 01:43:45.723042 systemd-logind[1552]: Removed session 101.
Jan 20 01:43:51.000916 systemd[1]: Started sshd@101-10.0.0.15:22-10.0.0.1:44606.service - OpenSSH per-connection server daemon (10.0.0.1:44606).
Jan 20 01:43:52.346641 sshd[6815]: Accepted publickey for core from 10.0.0.1 port 44606 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:43:52.403156 sshd-session[6815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:43:52.627520 systemd-logind[1552]: New session 102 of user core.
Jan 20 01:43:52.694748 systemd[1]: Started session-102.scope - Session 102 of User core.
Jan 20 01:43:55.636052 sshd[6818]: Connection closed by 10.0.0.1 port 44606
Jan 20 01:43:55.614073 sshd-session[6815]: pam_unix(sshd:session): session closed for user core
Jan 20 01:43:55.761207 systemd[1]: sshd@101-10.0.0.15:22-10.0.0.1:44606.service: Deactivated successfully.
Jan 20 01:43:55.816505 systemd[1]: session-102.scope: Deactivated successfully.
Jan 20 01:43:55.998026 systemd-logind[1552]: Session 102 logged out. Waiting for processes to exit.
Jan 20 01:43:56.244613 systemd-logind[1552]: Removed session 102.
Jan 20 01:43:58.577626 kubelet[3172]: E0120 01:43:58.575198 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:44:01.058230 systemd[1]: Started sshd@102-10.0.0.15:22-10.0.0.1:34324.service - OpenSSH per-connection server daemon (10.0.0.1:34324).
Jan 20 01:44:03.375146 sshd[6832]: Accepted publickey for core from 10.0.0.1 port 34324 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:44:03.397512 sshd-session[6832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:44:03.601737 systemd-logind[1552]: New session 103 of user core.
Jan 20 01:44:03.694702 systemd[1]: Started session-103.scope - Session 103 of User core.
Jan 20 01:44:06.402021 sshd[6835]: Connection closed by 10.0.0.1 port 34324
Jan 20 01:44:06.439822 sshd-session[6832]: pam_unix(sshd:session): session closed for user core
Jan 20 01:44:06.506011 systemd-logind[1552]: Session 103 logged out. Waiting for processes to exit.
Jan 20 01:44:06.506577 systemd[1]: sshd@102-10.0.0.15:22-10.0.0.1:34324.service: Deactivated successfully.
Jan 20 01:44:06.572814 systemd[1]: session-103.scope: Deactivated successfully.
Jan 20 01:44:06.685569 systemd-logind[1552]: Removed session 103.
Jan 20 01:44:09.730171 kubelet[3172]: E0120 01:44:09.653629 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:44:11.501710 systemd[1]: Started sshd@103-10.0.0.15:22-10.0.0.1:49654.service - OpenSSH per-connection server daemon (10.0.0.1:49654).
Jan 20 01:44:12.230566 sshd[6849]: Accepted publickey for core from 10.0.0.1 port 49654 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:44:12.238266 sshd-session[6849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:44:12.389480 systemd-logind[1552]: New session 104 of user core.
Jan 20 01:44:12.504194 systemd[1]: Started session-104.scope - Session 104 of User core.
Jan 20 01:44:13.999091 sshd[6852]: Connection closed by 10.0.0.1 port 49654
Jan 20 01:44:14.012655 sshd-session[6849]: pam_unix(sshd:session): session closed for user core
Jan 20 01:44:14.085289 systemd[1]: sshd@103-10.0.0.15:22-10.0.0.1:49654.service: Deactivated successfully.
Jan 20 01:44:14.122132 systemd[1]: session-104.scope: Deactivated successfully.
Jan 20 01:44:14.154860 systemd-logind[1552]: Session 104 logged out. Waiting for processes to exit.
Jan 20 01:44:14.176724 systemd-logind[1552]: Removed session 104.
Jan 20 01:44:19.105737 systemd[1]: Started sshd@104-10.0.0.15:22-10.0.0.1:60068.service - OpenSSH per-connection server daemon (10.0.0.1:60068).
Jan 20 01:44:19.555455 sshd[6869]: Accepted publickey for core from 10.0.0.1 port 60068 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:44:19.571893 sshd-session[6869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:44:19.915705 systemd[1]: Started session-105.scope - Session 105 of User core.
Jan 20 01:44:19.919869 systemd-logind[1552]: New session 105 of user core.
Jan 20 01:44:21.033661 sshd[6872]: Connection closed by 10.0.0.1 port 60068
Jan 20 01:44:21.047739 sshd-session[6869]: pam_unix(sshd:session): session closed for user core
Jan 20 01:44:21.105989 systemd[1]: sshd@104-10.0.0.15:22-10.0.0.1:60068.service: Deactivated successfully.
Jan 20 01:44:21.143951 systemd[1]: session-105.scope: Deactivated successfully.
Jan 20 01:44:21.146222 systemd-logind[1552]: Session 105 logged out. Waiting for processes to exit.
Jan 20 01:44:21.182098 systemd-logind[1552]: Removed session 105.
Jan 20 01:44:22.597714 kubelet[3172]: E0120 01:44:22.590220 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:44:26.113196 systemd[1]: Started sshd@105-10.0.0.15:22-10.0.0.1:45708.service - OpenSSH per-connection server daemon (10.0.0.1:45708).
Jan 20 01:44:26.917697 sshd[6887]: Accepted publickey for core from 10.0.0.1 port 45708 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:44:26.928910 sshd-session[6887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:44:27.021106 systemd-logind[1552]: New session 106 of user core.
Jan 20 01:44:27.043729 systemd[1]: Started session-106.scope - Session 106 of User core.
Jan 20 01:44:27.894484 sshd[6890]: Connection closed by 10.0.0.1 port 45708
Jan 20 01:44:27.896985 sshd-session[6887]: pam_unix(sshd:session): session closed for user core
Jan 20 01:44:27.925130 systemd[1]: sshd@105-10.0.0.15:22-10.0.0.1:45708.service: Deactivated successfully.
Jan 20 01:44:27.948080 systemd[1]: session-106.scope: Deactivated successfully.
Jan 20 01:44:27.980267 systemd-logind[1552]: Session 106 logged out. Waiting for processes to exit.
Jan 20 01:44:27.996795 systemd-logind[1552]: Removed session 106.
Jan 20 01:44:33.018956 systemd[1]: Started sshd@106-10.0.0.15:22-10.0.0.1:45712.service - OpenSSH per-connection server daemon (10.0.0.1:45712).
Jan 20 01:44:33.357086 sshd[6903]: Accepted publickey for core from 10.0.0.1 port 45712 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:44:33.379812 sshd-session[6903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:44:33.736959 systemd-logind[1552]: New session 107 of user core.
Jan 20 01:44:33.785128 systemd[1]: Started session-107.scope - Session 107 of User core.
Jan 20 01:44:34.934898 sshd[6906]: Connection closed by 10.0.0.1 port 45712
Jan 20 01:44:34.938821 sshd-session[6903]: pam_unix(sshd:session): session closed for user core
Jan 20 01:44:35.029794 systemd[1]: sshd@106-10.0.0.15:22-10.0.0.1:45712.service: Deactivated successfully.
Jan 20 01:44:35.090569 systemd[1]: session-107.scope: Deactivated successfully.
Jan 20 01:44:35.101727 systemd-logind[1552]: Session 107 logged out. Waiting for processes to exit.
Jan 20 01:44:35.154468 systemd-logind[1552]: Removed session 107.
Jan 20 01:44:40.064952 systemd[1]: Started sshd@107-10.0.0.15:22-10.0.0.1:58780.service - OpenSSH per-connection server daemon (10.0.0.1:58780).
Jan 20 01:44:40.526569 sshd[6919]: Accepted publickey for core from 10.0.0.1 port 58780 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:44:40.535455 sshd-session[6919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:44:40.633846 systemd-logind[1552]: New session 108 of user core.
Jan 20 01:44:40.725040 systemd[1]: Started session-108.scope - Session 108 of User core.
Jan 20 01:44:42.170745 sshd[6922]: Connection closed by 10.0.0.1 port 58780
Jan 20 01:44:42.175228 sshd-session[6919]: pam_unix(sshd:session): session closed for user core
Jan 20 01:44:42.286019 systemd[1]: sshd@107-10.0.0.15:22-10.0.0.1:58780.service: Deactivated successfully.
Jan 20 01:44:42.339612 systemd[1]: session-108.scope: Deactivated successfully.
Jan 20 01:44:42.408196 systemd-logind[1552]: Session 108 logged out. Waiting for processes to exit.
Jan 20 01:44:42.490487 systemd-logind[1552]: Removed session 108.
Jan 20 01:44:47.332754 systemd[1]: Started sshd@108-10.0.0.15:22-10.0.0.1:60704.service - OpenSSH per-connection server daemon (10.0.0.1:60704).
Jan 20 01:44:47.573510 kubelet[3172]: E0120 01:44:47.571565 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:44:47.925109 sshd[6939]: Accepted publickey for core from 10.0.0.1 port 60704 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:44:47.930629 sshd-session[6939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:44:48.121107 systemd-logind[1552]: New session 109 of user core.
Jan 20 01:44:48.177738 systemd[1]: Started session-109.scope - Session 109 of User core.
Jan 20 01:44:49.585148 kubelet[3172]: E0120 01:44:49.579844 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:44:49.607794 sshd[6942]: Connection closed by 10.0.0.1 port 60704
Jan 20 01:44:49.613740 sshd-session[6939]: pam_unix(sshd:session): session closed for user core
Jan 20 01:44:49.728419 systemd[1]: sshd@108-10.0.0.15:22-10.0.0.1:60704.service: Deactivated successfully.
Jan 20 01:44:49.816422 systemd[1]: session-109.scope: Deactivated successfully.
Jan 20 01:44:49.857616 systemd-logind[1552]: Session 109 logged out. Waiting for processes to exit.
Jan 20 01:44:49.910486 systemd-logind[1552]: Removed session 109.
Jan 20 01:44:54.696626 kubelet[3172]: E0120 01:44:54.691065 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:44:54.981812 systemd[1]: Started sshd@109-10.0.0.15:22-10.0.0.1:51742.service - OpenSSH per-connection server daemon (10.0.0.1:51742).
Jan 20 01:44:56.042915 sshd[6956]: Accepted publickey for core from 10.0.0.1 port 51742 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:44:56.072737 sshd-session[6956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:44:56.178908 systemd-logind[1552]: New session 110 of user core.
Jan 20 01:44:56.235200 systemd[1]: Started session-110.scope - Session 110 of User core.
Jan 20 01:44:57.837136 sshd[6959]: Connection closed by 10.0.0.1 port 51742
Jan 20 01:44:57.865927 sshd-session[6956]: pam_unix(sshd:session): session closed for user core
Jan 20 01:44:57.916924 systemd[1]: sshd@109-10.0.0.15:22-10.0.0.1:51742.service: Deactivated successfully.
Jan 20 01:44:57.940915 systemd[1]: session-110.scope: Deactivated successfully.
Jan 20 01:44:57.960175 systemd-logind[1552]: Session 110 logged out. Waiting for processes to exit.
Jan 20 01:44:57.996586 systemd[1]: Started sshd@110-10.0.0.15:22-10.0.0.1:51756.service - OpenSSH per-connection server daemon (10.0.0.1:51756).
Jan 20 01:44:58.017542 systemd-logind[1552]: Removed session 110.
Jan 20 01:44:58.355887 sshd[6973]: Accepted publickey for core from 10.0.0.1 port 51756 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:44:58.370710 sshd-session[6973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:44:58.433607 systemd-logind[1552]: New session 111 of user core.
Jan 20 01:44:58.479594 systemd[1]: Started session-111.scope - Session 111 of User core.
Jan 20 01:45:04.579825 kubelet[3172]: E0120 01:45:04.579205 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:45:05.798548 containerd[1566]: time="2026-01-20T01:45:05.794978439Z" level=info msg="StopContainer for \"f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce\" with timeout 30 (s)"
Jan 20 01:45:05.968428 containerd[1566]: time="2026-01-20T01:45:05.955063243Z" level=info msg="StopContainer for \"0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2\" with timeout 2 (s)"
Jan 20 01:45:06.019957 containerd[1566]: time="2026-01-20T01:45:06.012195206Z" level=info msg="Stop container \"0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2\" with signal terminated"
Jan 20 01:45:06.098035 containerd[1566]: time="2026-01-20T01:45:06.075496240Z" level=info msg="Stop container \"f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce\" with signal terminated"
Jan 20 01:45:06.426955 systemd-networkd[1462]: lxc_health: Link DOWN
Jan 20 01:45:06.427035 systemd-networkd[1462]: lxc_health: Lost carrier
Jan 20 01:45:06.612913 kubelet[3172]: E0120 01:45:06.567583 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:45:06.742243 systemd[1]: cri-containerd-f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce.scope: Deactivated successfully.
Jan 20 01:45:06.742927 systemd[1]: cri-containerd-f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce.scope: Consumed 2.280s CPU time, 33.1M memory peak, 4K read from disk, 4K written to disk.
Jan 20 01:45:06.784431 containerd[1566]: time="2026-01-20T01:45:06.778134199Z" level=info msg="received container exit event container_id:\"f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce\" id:\"f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce\" pid:6622 exited_at:{seconds:1768873506 nanos:775792715}"
Jan 20 01:45:07.070713 systemd[1]: cri-containerd-0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2.scope: Deactivated successfully.
Jan 20 01:45:07.071243 systemd[1]: cri-containerd-0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2.scope: Consumed 59.031s CPU time, 131M memory peak, 584K read from disk, 13.3M written to disk.
Jan 20 01:45:07.100494 containerd[1566]: time="2026-01-20T01:45:07.096602424Z" level=info msg="received container exit event container_id:\"0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2\" id:\"0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2\" pid:3822 exited_at:{seconds:1768873507 nanos:89183205}"
Jan 20 01:45:07.213463 sshd[6976]: Connection closed by 10.0.0.1 port 51756
Jan 20 01:45:07.191160 sshd-session[6973]: pam_unix(sshd:session): session closed for user core
Jan 20 01:45:07.282252 kubelet[3172]: E0120 01:45:07.272843 3172 handlers.go:82] "Exec lifecycle hook for Container in Pod failed" err="command '/cni-uninstall.sh' exited with 137: " execCommand=["/cni-uninstall.sh"] containerName="cilium-agent" pod="kube-system/cilium-bchfb" message=<
Jan 20 01:45:07.282252 kubelet[3172]: Removing active Cilium CNI configurations from /host/etc/cni/net.d...
Jan 20 01:45:07.282252 kubelet[3172]: Removing /host/opt/cni/bin/cilium-cni...
Jan 20 01:45:07.282252 kubelet[3172]: >
Jan 20 01:45:07.282252 kubelet[3172]: E0120 01:45:07.272885 3172 kuberuntime_container.go:748] "PreStop hook failed" err="command '/cni-uninstall.sh' exited with 137: " pod="kube-system/cilium-bchfb" podUID="a58b624c-b508-4b98-9513-c7a4eae38f71" containerName="cilium-agent" containerID="containerd://0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2"
Jan 20 01:45:07.299590 containerd[1566]: time="2026-01-20T01:45:07.299508967Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 20 01:45:07.311110 systemd[1]: sshd@110-10.0.0.15:22-10.0.0.1:51756.service: Deactivated successfully.
Jan 20 01:45:07.351277 systemd[1]: session-111.scope: Deactivated successfully.
Jan 20 01:45:07.358256 systemd[1]: session-111.scope: Consumed 1.935s CPU time, 31.2M memory peak.
Jan 20 01:45:07.406442 systemd-logind[1552]: Session 111 logged out. Waiting for processes to exit.
Jan 20 01:45:07.505226 systemd[1]: Started sshd@111-10.0.0.15:22-10.0.0.1:59344.service - OpenSSH per-connection server daemon (10.0.0.1:59344).
Jan 20 01:45:07.547981 systemd-logind[1552]: Removed session 111.
Jan 20 01:45:07.997232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2-rootfs.mount: Deactivated successfully.
Jan 20 01:45:08.226122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce-rootfs.mount: Deactivated successfully.
Jan 20 01:45:08.344990 containerd[1566]: time="2026-01-20T01:45:08.331530946Z" level=info msg="StopContainer for \"0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2\" returns successfully"
Jan 20 01:45:08.699814 containerd[1566]: time="2026-01-20T01:45:08.690590409Z" level=info msg="StopPodSandbox for \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\""
Jan 20 01:45:08.757198 containerd[1566]: time="2026-01-20T01:45:08.751082885Z" level=info msg="StopContainer for \"f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce\" returns successfully"
Jan 20 01:45:08.819552 containerd[1566]: time="2026-01-20T01:45:08.814948247Z" level=info msg="Container to stop \"6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 01:45:08.819552 containerd[1566]: time="2026-01-20T01:45:08.815079579Z" level=info msg="Container to stop \"ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 01:45:08.819552 containerd[1566]: time="2026-01-20T01:45:08.815104725Z" level=info msg="Container to stop \"4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 01:45:08.819552 containerd[1566]: time="2026-01-20T01:45:08.815132255Z" level=info msg="Container to stop \"0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 01:45:08.819552 containerd[1566]: time="2026-01-20T01:45:08.815144508Z" level=info msg="Container to stop \"a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 01:45:08.901451 sshd[7040]: Accepted publickey for core from 10.0.0.1 port 59344 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:45:08.909562 sshd-session[7040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:45:09.025252 containerd[1566]: time="2026-01-20T01:45:09.025087639Z" level=info msg="StopPodSandbox for \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\""
Jan 20 01:45:09.073557 containerd[1566]: time="2026-01-20T01:45:09.070104696Z" level=info msg="Container to stop \"9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 01:45:09.074243 containerd[1566]: time="2026-01-20T01:45:09.073882278Z" level=info msg="Container to stop \"f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 01:45:09.270083 systemd-logind[1552]: New session 112 of user core.
Jan 20 01:45:09.272939 systemd[1]: Started session-112.scope - Session 112 of User core.
Jan 20 01:45:10.285967 kubelet[3172]: E0120 01:45:10.285730 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:45:10.320799 kubelet[3172]: I0120 01:45:10.288947 3172 scope.go:117] "RemoveContainer" containerID="9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a"
Jan 20 01:45:10.290260 systemd[1]: cri-containerd-a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c.scope: Deactivated successfully.
Jan 20 01:45:10.519028 containerd[1566]: time="2026-01-20T01:45:10.509116379Z" level=info msg="RemoveContainer for \"9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a\""
Jan 20 01:45:10.513736 systemd[1]: cri-containerd-4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71.scope: Deactivated successfully.
Jan 20 01:45:10.807253 containerd[1566]: time="2026-01-20T01:45:10.807182134Z" level=info msg="received sandbox exit event container_id:\"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\" id:\"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\" exit_status:137 exited_at:{seconds:1768873510 nanos:731471772}" monitor_name=podsandbox
Jan 20 01:45:10.892651 containerd[1566]: time="2026-01-20T01:45:10.862006913Z" level=info msg="received sandbox exit event container_id:\"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" id:\"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" exit_status:137 exited_at:{seconds:1768873510 nanos:797818639}" monitor_name=podsandbox
Jan 20 01:45:10.895180 containerd[1566]: time="2026-01-20T01:45:10.895070850Z" level=info msg="RemoveContainer for \"9bc7d3a54666ffedab5886c1f049a6079e1f50a6d374a39c61037fdaf4b37f9a\" returns successfully"
Jan 20 01:45:12.566292 kubelet[3172]: E0120 01:45:12.564698 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:45:12.709262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c-rootfs.mount: Deactivated successfully.
Jan 20 01:45:13.178482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71-rootfs.mount: Deactivated successfully.
Jan 20 01:45:13.288452 containerd[1566]: time="2026-01-20T01:45:13.238270756Z" level=info msg="shim disconnected" id=a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c namespace=k8s.io
Jan 20 01:45:13.288452 containerd[1566]: time="2026-01-20T01:45:13.238467249Z" level=warning msg="cleaning up after shim disconnected" id=a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c namespace=k8s.io
Jan 20 01:45:13.424793 containerd[1566]: time="2026-01-20T01:45:13.238481865Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 01:45:13.425442 containerd[1566]: time="2026-01-20T01:45:13.373190416Z" level=info msg="shim disconnected" id=4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71 namespace=k8s.io
Jan 20 01:45:13.431775 containerd[1566]: time="2026-01-20T01:45:13.431720251Z" level=warning msg="cleaning up after shim disconnected" id=4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71 namespace=k8s.io
Jan 20 01:45:13.431985 containerd[1566]: time="2026-01-20T01:45:13.431940397Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 01:45:13.440819 kubelet[3172]: I0120 01:45:13.436100 3172 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T01:45:13Z","lastTransitionTime":"2026-01-20T01:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 20 01:45:14.369009 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c-shm.mount: Deactivated successfully.
Jan 20 01:45:14.401123 containerd[1566]: time="2026-01-20T01:45:14.375774101Z" level=info msg="TearDown network for sandbox \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\" successfully"
Jan 20 01:45:14.401123 containerd[1566]: time="2026-01-20T01:45:14.380593175Z" level=info msg="StopPodSandbox for \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\" returns successfully"
Jan 20 01:45:14.407272 containerd[1566]: time="2026-01-20T01:45:14.404661740Z" level=info msg="TearDown network for sandbox \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" successfully"
Jan 20 01:45:14.407272 containerd[1566]: time="2026-01-20T01:45:14.404709448Z" level=info msg="StopPodSandbox for \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" returns successfully"
Jan 20 01:45:14.414434 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71-shm.mount: Deactivated successfully.
Jan 20 01:45:14.447167 containerd[1566]: time="2026-01-20T01:45:14.447095220Z" level=info msg="received sandbox container exit event sandbox_id:\"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\" exit_status:137 exited_at:{seconds:1768873510 nanos:731471772}" monitor_name=criService
Jan 20 01:45:14.447991 containerd[1566]: time="2026-01-20T01:45:14.447641407Z" level=info msg="received sandbox container exit event sandbox_id:\"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" exit_status:137 exited_at:{seconds:1768873510 nanos:797818639}" monitor_name=criService
Jan 20 01:45:14.725469 kubelet[3172]: I0120 01:45:14.719084 3172 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp2wq\" (UniqueName: \"kubernetes.io/projected/de7db6e2-fcea-476f-b1ad-00e102084688-kube-api-access-zp2wq\") pod \"de7db6e2-fcea-476f-b1ad-00e102084688\" (UID: \"de7db6e2-fcea-476f-b1ad-00e102084688\") "
Jan 20 01:45:14.749622 kubelet[3172]: I0120 01:45:14.719152 3172 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de7db6e2-fcea-476f-b1ad-00e102084688-cilium-config-path\") pod \"de7db6e2-fcea-476f-b1ad-00e102084688\" (UID: \"de7db6e2-fcea-476f-b1ad-00e102084688\") "
Jan 20 01:45:14.822778 kubelet[3172]: I0120 01:45:14.819057 3172 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de7db6e2-fcea-476f-b1ad-00e102084688-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "de7db6e2-fcea-476f-b1ad-00e102084688" (UID: "de7db6e2-fcea-476f-b1ad-00e102084688"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 01:45:14.900239 kubelet[3172]: I0120 01:45:14.898783 3172 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de7db6e2-fcea-476f-b1ad-00e102084688-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 20 01:45:15.023724 kubelet[3172]: I0120 01:45:15.014163 3172 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a58b624c-b508-4b98-9513-c7a4eae38f71" (UID: "a58b624c-b508-4b98-9513-c7a4eae38f71"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 01:45:15.090447 kubelet[3172]: I0120 01:45:15.064163 3172 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-etc-cni-netd\") pod \"a58b624c-b508-4b98-9513-c7a4eae38f71\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") "
Jan 20 01:45:15.090447 kubelet[3172]: I0120 01:45:15.082924 3172 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-xtables-lock\") pod \"a58b624c-b508-4b98-9513-c7a4eae38f71\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") "
Jan 20 01:45:15.090447 kubelet[3172]: I0120 01:45:15.083049 3172 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a58b624c-b508-4b98-9513-c7a4eae38f71-clustermesh-secrets\") pod \"a58b624c-b508-4b98-9513-c7a4eae38f71\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") "
Jan 20 01:45:15.090447 kubelet[3172]: I0120 01:45:15.083080 3172 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-host-proc-sys-net\") pod \"a58b624c-b508-4b98-9513-c7a4eae38f71\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") "
Jan 20 01:45:15.090447 kubelet[3172]: I0120 01:45:15.083116 3172 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gkgn\" (UniqueName: \"kubernetes.io/projected/a58b624c-b508-4b98-9513-c7a4eae38f71-kube-api-access-9gkgn\") pod \"a58b624c-b508-4b98-9513-c7a4eae38f71\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") "
Jan 20 01:45:15.090447 kubelet[3172]: I0120 01:45:15.083139 3172 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a58b624c-b508-4b98-9513-c7a4eae38f71-hubble-tls\") pod \"a58b624c-b508-4b98-9513-c7a4eae38f71\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") "
Jan 20 01:45:15.066425 systemd[1]: var-lib-kubelet-pods-de7db6e2\x2dfcea\x2d476f\x2db1ad\x2d00e102084688-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzp2wq.mount: Deactivated successfully.
Jan 20 01:45:15.099272 kubelet[3172]: I0120 01:45:15.083159 3172 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-hostproc\") pod \"a58b624c-b508-4b98-9513-c7a4eae38f71\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") "
Jan 20 01:45:15.099272 kubelet[3172]: I0120 01:45:15.083178 3172 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-host-proc-sys-kernel\") pod \"a58b624c-b508-4b98-9513-c7a4eae38f71\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") "
Jan 20 01:45:15.099272 kubelet[3172]: I0120 01:45:15.083203 3172 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-cilium-run\") pod \"a58b624c-b508-4b98-9513-c7a4eae38f71\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") "
Jan 20 01:45:15.099272 kubelet[3172]: I0120 01:45:15.083223 3172 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-cni-path\") pod \"a58b624c-b508-4b98-9513-c7a4eae38f71\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") "
Jan 20 01:45:15.099272 kubelet[3172]: I0120 01:45:15.083251 3172 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-cilium-cgroup\") pod \"a58b624c-b508-4b98-9513-c7a4eae38f71\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") "
Jan 20 01:45:15.099272 kubelet[3172]: I0120 01:45:15.083272 3172 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-lib-modules\") pod \"a58b624c-b508-4b98-9513-c7a4eae38f71\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") "
Jan 20 01:45:15.110084 kubelet[3172]: I0120 01:45:15.084259 3172 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-bpf-maps\") pod \"a58b624c-b508-4b98-9513-c7a4eae38f71\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") "
Jan 20 01:45:15.110084 kubelet[3172]: I0120 01:45:15.087264 3172 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-hostproc" (OuterVolumeSpecName: "hostproc") pod "a58b624c-b508-4b98-9513-c7a4eae38f71" (UID: "a58b624c-b508-4b98-9513-c7a4eae38f71"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 01:45:15.110084 kubelet[3172]: I0120 01:45:15.087929 3172 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a58b624c-b508-4b98-9513-c7a4eae38f71" (UID: "a58b624c-b508-4b98-9513-c7a4eae38f71"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 01:45:15.110084 kubelet[3172]: I0120 01:45:15.103219 3172 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-cni-path" (OuterVolumeSpecName: "cni-path") pod "a58b624c-b508-4b98-9513-c7a4eae38f71" (UID: "a58b624c-b508-4b98-9513-c7a4eae38f71"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 01:45:15.110084 kubelet[3172]: I0120 01:45:15.103286 3172 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a58b624c-b508-4b98-9513-c7a4eae38f71" (UID: "a58b624c-b508-4b98-9513-c7a4eae38f71"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 01:45:15.118845 kubelet[3172]: I0120 01:45:15.103460 3172 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a58b624c-b508-4b98-9513-c7a4eae38f71" (UID: "a58b624c-b508-4b98-9513-c7a4eae38f71"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 01:45:15.118845 kubelet[3172]: I0120 01:45:15.103486 3172 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a58b624c-b508-4b98-9513-c7a4eae38f71" (UID: "a58b624c-b508-4b98-9513-c7a4eae38f71"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 01:45:15.118845 kubelet[3172]: I0120 01:45:15.103579 3172 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a58b624c-b508-4b98-9513-c7a4eae38f71" (UID: "a58b624c-b508-4b98-9513-c7a4eae38f71"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 01:45:15.118845 kubelet[3172]: I0120 01:45:15.107760 3172 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a58b624c-b508-4b98-9513-c7a4eae38f71" (UID: "a58b624c-b508-4b98-9513-c7a4eae38f71"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 01:45:15.118845 kubelet[3172]: I0120 01:45:15.107813 3172 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a58b624c-b508-4b98-9513-c7a4eae38f71-cilium-config-path\") pod \"a58b624c-b508-4b98-9513-c7a4eae38f71\" (UID: \"a58b624c-b508-4b98-9513-c7a4eae38f71\") "
Jan 20 01:45:15.119031 kubelet[3172]: I0120 01:45:15.107904 3172 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 20 01:45:15.119031 kubelet[3172]: I0120 01:45:15.107919 3172 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 20 01:45:15.119031 kubelet[3172]: I0120 01:45:15.107932 3172 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 20 01:45:15.119031 kubelet[3172]: I0120 01:45:15.107944 3172 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 20 01:45:15.119031 kubelet[3172]: I0120 01:45:15.107956 3172 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 20 01:45:15.119031 kubelet[3172]: I0120 01:45:15.107967 3172 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 20 01:45:15.119031 kubelet[3172]: I0120 01:45:15.107982 3172 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 20 01:45:15.119031 kubelet[3172]: I0120 01:45:15.107995 3172 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 20 01:45:15.145062 kubelet[3172]: I0120 01:45:15.108006 3172 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 20 01:45:15.162459 kubelet[3172]: I0120 01:45:15.156151 3172 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a58b624c-b508-4b98-9513-c7a4eae38f71" (UID: "a58b624c-b508-4b98-9513-c7a4eae38f71"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 01:45:15.253464 kubelet[3172]: I0120 01:45:15.223128 3172 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a58b624c-b508-4b98-9513-c7a4eae38f71-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 20 01:45:15.269971 kubelet[3172]: I0120 01:45:15.261685 3172 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de7db6e2-fcea-476f-b1ad-00e102084688-kube-api-access-zp2wq" (OuterVolumeSpecName: "kube-api-access-zp2wq") pod "de7db6e2-fcea-476f-b1ad-00e102084688" (UID: "de7db6e2-fcea-476f-b1ad-00e102084688"). InnerVolumeSpecName "kube-api-access-zp2wq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 01:45:15.346664 kubelet[3172]: I0120 01:45:15.345843 3172 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zp2wq\" (UniqueName: \"kubernetes.io/projected/de7db6e2-fcea-476f-b1ad-00e102084688-kube-api-access-zp2wq\") on node \"localhost\" DevicePath \"\""
Jan 20 01:45:15.372894 kubelet[3172]: E0120 01:45:15.372826 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:45:15.373803 kubelet[3172]: I0120 01:45:15.373765 3172 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a58b624c-b508-4b98-9513-c7a4eae38f71-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a58b624c-b508-4b98-9513-c7a4eae38f71" (UID: "a58b624c-b508-4b98-9513-c7a4eae38f71"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 01:45:15.415139 systemd[1]: var-lib-kubelet-pods-a58b624c\x2db508\x2d4b98\x2d9513\x2dc7a4eae38f71-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 20 01:45:15.469794 kubelet[3172]: I0120 01:45:15.468711 3172 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a58b624c-b508-4b98-9513-c7a4eae38f71-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 20 01:45:15.503443 kubelet[3172]: I0120 01:45:15.502665 3172 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a58b624c-b508-4b98-9513-c7a4eae38f71-kube-api-access-9gkgn" (OuterVolumeSpecName: "kube-api-access-9gkgn") pod "a58b624c-b508-4b98-9513-c7a4eae38f71" (UID: "a58b624c-b508-4b98-9513-c7a4eae38f71"). InnerVolumeSpecName "kube-api-access-9gkgn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 01:45:15.503443 kubelet[3172]: I0120 01:45:15.503069 3172 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a58b624c-b508-4b98-9513-c7a4eae38f71-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a58b624c-b508-4b98-9513-c7a4eae38f71" (UID: "a58b624c-b508-4b98-9513-c7a4eae38f71"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 01:45:15.521926 systemd[1]: var-lib-kubelet-pods-a58b624c\x2db508\x2d4b98\x2d9513\x2dc7a4eae38f71-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9gkgn.mount: Deactivated successfully.
Jan 20 01:45:15.522144 systemd[1]: var-lib-kubelet-pods-a58b624c\x2db508\x2d4b98\x2d9513\x2dc7a4eae38f71-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 20 01:45:15.660757 kubelet[3172]: I0120 01:45:15.625673 3172 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a58b624c-b508-4b98-9513-c7a4eae38f71-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a58b624c-b508-4b98-9513-c7a4eae38f71" (UID: "a58b624c-b508-4b98-9513-c7a4eae38f71"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 01:45:15.660757 kubelet[3172]: I0120 01:45:15.625830 3172 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9gkgn\" (UniqueName: \"kubernetes.io/projected/a58b624c-b508-4b98-9513-c7a4eae38f71-kube-api-access-9gkgn\") on node \"localhost\" DevicePath \"\""
Jan 20 01:45:15.660757 kubelet[3172]: I0120 01:45:15.625856 3172 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a58b624c-b508-4b98-9513-c7a4eae38f71-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 20 01:45:15.660757 kubelet[3172]: I0120 01:45:15.625870 3172 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a58b624c-b508-4b98-9513-c7a4eae38f71-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 20 01:45:15.734231 kubelet[3172]: I0120 01:45:15.734191 3172 scope.go:117] "RemoveContainer" containerID="0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2"
Jan 20 01:45:15.884590 containerd[1566]: time="2026-01-20T01:45:15.884456843Z" level=info msg="RemoveContainer for \"0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2\""
Jan 20 01:45:15.929070 systemd[1]: Removed slice kubepods-burstable-poda58b624c_b508_4b98_9513_c7a4eae38f71.slice - libcontainer container kubepods-burstable-poda58b624c_b508_4b98_9513_c7a4eae38f71.slice.
Jan 20 01:45:15.929265 systemd[1]: kubepods-burstable-poda58b624c_b508_4b98_9513_c7a4eae38f71.slice: Consumed 59.681s CPU time, 131.4M memory peak, 600K read from disk, 13.3M written to disk.
Jan 20 01:45:16.049213 systemd[1]: Removed slice kubepods-besteffort-podde7db6e2_fcea_476f_b1ad_00e102084688.slice - libcontainer container kubepods-besteffort-podde7db6e2_fcea_476f_b1ad_00e102084688.slice.
Jan 20 01:45:16.053293 systemd[1]: kubepods-besteffort-podde7db6e2_fcea_476f_b1ad_00e102084688.slice: Consumed 14.963s CPU time, 33.4M memory peak, 4K read from disk, 12K written to disk.
Jan 20 01:45:16.069215 containerd[1566]: time="2026-01-20T01:45:16.069164515Z" level=info msg="RemoveContainer for \"0bac2534b2835e585ec57295c6d305f8069b13f03d89eba1402477bd06ee90d2\" returns successfully"
Jan 20 01:45:16.090554 kubelet[3172]: I0120 01:45:16.088615 3172 scope.go:117] "RemoveContainer" containerID="4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86"
Jan 20 01:45:16.329861 containerd[1566]: time="2026-01-20T01:45:16.320387952Z" level=info msg="RemoveContainer for \"4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86\""
Jan 20 01:45:16.431233 containerd[1566]: time="2026-01-20T01:45:16.431175258Z" level=info msg="RemoveContainer for \"4524af2a98c4e67b7c86444429f0f88df6895f9f1a69bcb7436462df9416ea86\" returns successfully"
Jan 20 01:45:16.443816 kubelet[3172]: I0120 01:45:16.437054 3172 scope.go:117] "RemoveContainer" containerID="6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b"
Jan 20 01:45:16.690739 containerd[1566]: time="2026-01-20T01:45:16.688223264Z" level=info msg="RemoveContainer for \"6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b\""
Jan 20 01:45:16.701823 kubelet[3172]: I0120 01:45:16.692041 3172 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a58b624c-b508-4b98-9513-c7a4eae38f71" path="/var/lib/kubelet/pods/a58b624c-b508-4b98-9513-c7a4eae38f71/volumes"
Jan 20 01:45:17.033734 containerd[1566]: time="2026-01-20T01:45:17.008115238Z" level=info msg="RemoveContainer for \"6f7d9e6eeadb00fb469eb1eae6a7cfe86fd82255cc3f32a7a2f24443a283d72b\" returns successfully"
Jan 20 01:45:17.044227 kubelet[3172]: I0120 01:45:17.039869 3172 scope.go:117] "RemoveContainer" containerID="a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046"
Jan 20 01:45:17.233132 containerd[1566]: time="2026-01-20T01:45:17.219841547Z" level=info msg="RemoveContainer for \"a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046\""
Jan 20 01:45:17.353004 containerd[1566]: time="2026-01-20T01:45:17.338456168Z" level=info msg="RemoveContainer for \"a980fbca93d03492785a934f9372dc63baa711c1e1c608f1e60e472c9324f046\" returns successfully"
Jan 20 01:45:17.400906 kubelet[3172]: I0120 01:45:17.389819 3172 scope.go:117] "RemoveContainer" containerID="ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517"
Jan 20 01:45:17.448819 containerd[1566]: time="2026-01-20T01:45:17.446579169Z" level=info msg="RemoveContainer for \"ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517\""
Jan 20 01:45:17.556058 containerd[1566]: time="2026-01-20T01:45:17.553004232Z" level=info msg="RemoveContainer for \"ed55e18bbbe04947e073a001ed3534ac98baaa65b17860aabd92cc84d11fe517\" returns successfully"
Jan 20 01:45:17.588245 kubelet[3172]: I0120 01:45:17.563812 3172 scope.go:117] "RemoveContainer" containerID="f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce"
Jan 20 01:45:17.605972 containerd[1566]: time="2026-01-20T01:45:17.605843520Z" level=info msg="RemoveContainer for \"f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce\""
Jan 20 01:45:17.815139 containerd[1566]: time="2026-01-20T01:45:17.815084648Z" level=info msg="RemoveContainer for \"f46227b1574a0b43db7a62a9380b52938960d34b8b97cdf086652fd5dc5bd0ce\" returns successfully"
Jan 20 01:45:18.351970 sshd[7051]: Connection closed by 10.0.0.1 port 59344
Jan 20 01:45:18.387565 sshd-session[7040]: pam_unix(sshd:session): session closed for user core
Jan 20 01:45:18.526261 systemd[1]: Started sshd@112-10.0.0.15:22-10.0.0.1:55188.service - OpenSSH per-connection server daemon (10.0.0.1:55188).
Jan 20 01:45:18.558005 systemd[1]: sshd@111-10.0.0.15:22-10.0.0.1:59344.service: Deactivated successfully.
Jan 20 01:45:18.598987 systemd[1]: session-112.scope: Deactivated successfully.
Jan 20 01:45:18.606108 systemd[1]: session-112.scope: Consumed 1.491s CPU time, 25M memory peak.
Jan 20 01:45:18.710901 kubelet[3172]: I0120 01:45:18.709951 3172 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de7db6e2-fcea-476f-b1ad-00e102084688" path="/var/lib/kubelet/pods/de7db6e2-fcea-476f-b1ad-00e102084688/volumes"
Jan 20 01:45:18.754293 systemd-logind[1552]: Session 112 logged out. Waiting for processes to exit.
Jan 20 01:45:18.974899 systemd-logind[1552]: Removed session 112.
Jan 20 01:45:19.843075 kubelet[3172]: I0120 01:45:19.842279 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26-cilium-cgroup\") pod \"cilium-5srrf\" (UID: \"b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26\") " pod="kube-system/cilium-5srrf"
Jan 20 01:45:19.885785 kubelet[3172]: I0120 01:45:19.885726 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26-cni-path\") pod \"cilium-5srrf\" (UID: \"b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26\") " pod="kube-system/cilium-5srrf"
Jan 20 01:45:19.886016 kubelet[3172]: I0120 01:45:19.885999 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26-xtables-lock\") pod \"cilium-5srrf\" (UID: \"b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26\") " pod="kube-system/cilium-5srrf"
Jan 20 01:45:19.886112 kubelet[3172]: I0120 01:45:19.886095 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqj6w\" (UniqueName: \"kubernetes.io/projected/b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26-kube-api-access-nqj6w\") pod \"cilium-5srrf\" (UID: \"b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26\") " pod="kube-system/cilium-5srrf"
Jan 20 01:45:19.886195 kubelet[3172]: I0120 01:45:19.886180 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26-bpf-maps\") pod \"cilium-5srrf\" (UID: \"b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26\") " pod="kube-system/cilium-5srrf"
Jan 20 01:45:19.886613 kubelet[3172]: I0120 01:45:19.886279 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26-cilium-ipsec-secrets\") pod \"cilium-5srrf\" (UID: \"b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26\") " pod="kube-system/cilium-5srrf"
Jan 20 01:45:19.887212 kubelet[3172]: I0120 01:45:19.887187 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26-host-proc-sys-net\") pod \"cilium-5srrf\" (UID: \"b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26\") " pod="kube-system/cilium-5srrf"
Jan 20 01:45:19.888121 kubelet[3172]: I0120 01:45:19.887293 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26-host-proc-sys-kernel\") pod \"cilium-5srrf\" (UID: \"b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26\") " pod="kube-system/cilium-5srrf"
Jan 20 01:45:19.903561 kubelet[3172]: I0120 01:45:19.892957 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26-hubble-tls\") pod \"cilium-5srrf\" (UID: \"b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26\") " pod="kube-system/cilium-5srrf"
Jan 20 01:45:19.903561 kubelet[3172]: I0120 01:45:19.893007 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26-cilium-run\") pod \"cilium-5srrf\" (UID: \"b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26\") " pod="kube-system/cilium-5srrf"
Jan 20 01:45:19.903561 kubelet[3172]: I0120 01:45:19.893043 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26-hostproc\") pod \"cilium-5srrf\" (UID: \"b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26\") " pod="kube-system/cilium-5srrf"
Jan 20 01:45:19.903561 kubelet[3172]: I0120 01:45:19.893062 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26-clustermesh-secrets\") pod \"cilium-5srrf\" (UID: \"b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26\") " pod="kube-system/cilium-5srrf"
Jan 20 01:45:19.903561 kubelet[3172]: I0120 01:45:19.893085 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26-etc-cni-netd\") pod \"cilium-5srrf\" (UID: \"b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26\") " pod="kube-system/cilium-5srrf"
Jan 20 01:45:19.903561 kubelet[3172]: I0120 01:45:19.893116 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26-cilium-config-path\") pod \"cilium-5srrf\" (UID: \"b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26\") " pod="kube-system/cilium-5srrf"
Jan 20 01:45:19.909745 kubelet[3172]: I0120 01:45:19.893137 3172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26-lib-modules\") pod \"cilium-5srrf\" (UID: \"b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26\") " pod="kube-system/cilium-5srrf"
Jan 20 01:45:20.050704 systemd[1]: Created slice kubepods-burstable-podb9a484a4_8bda_48a9_8d5e_7e22d8e8ed26.slice - libcontainer container kubepods-burstable-podb9a484a4_8bda_48a9_8d5e_7e22d8e8ed26.slice.
Jan 20 01:45:20.643639 sshd[7132]: Accepted publickey for core from 10.0.0.1 port 55188 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:45:20.644244 kubelet[3172]: E0120 01:45:20.412124 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:45:20.362222 sshd-session[7132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:45:20.941080 systemd-logind[1552]: New session 113 of user core.
Jan 20 01:45:20.991070 systemd[1]: Started session-113.scope - Session 113 of User core.
Jan 20 01:45:21.745250 kubelet[3172]: E0120 01:45:21.732531 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:45:21.846103 containerd[1566]: time="2026-01-20T01:45:21.741958208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5srrf,Uid:b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26,Namespace:kube-system,Attempt:0,}"
Jan 20 01:45:22.071647 sshd[7143]: Connection closed by 10.0.0.1 port 55188
Jan 20 01:45:22.071071 sshd-session[7132]: pam_unix(sshd:session): session closed for user core
Jan 20 01:45:22.202714 systemd[1]: sshd@112-10.0.0.15:22-10.0.0.1:55188.service: Deactivated successfully.
Jan 20 01:45:22.242963 systemd[1]: session-113.scope: Deactivated successfully.
Jan 20 01:45:22.284925 systemd-logind[1552]: Session 113 logged out. Waiting for processes to exit.
Jan 20 01:45:22.311927 systemd[1]: Started sshd@113-10.0.0.15:22-10.0.0.1:55192.service - OpenSSH per-connection server daemon (10.0.0.1:55192).
Jan 20 01:45:22.345209 systemd-logind[1552]: Removed session 113.
Jan 20 01:45:23.090567 containerd[1566]: time="2026-01-20T01:45:23.085903465Z" level=info msg="connecting to shim 0a935ef85f57e78f2aef5cce09a0bb63845b492fcd61406824f6732b436ea445" address="unix:///run/containerd/s/dfb8d7dd4ee233340179093185a006edf0e9183464c09dfde50f5d83ec8e507c" namespace=k8s.io protocol=ttrpc version=3
Jan 20 01:45:23.543248 sshd[7154]: Accepted publickey for core from 10.0.0.1 port 55192 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:45:23.699206 sshd-session[7154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:45:24.016951 systemd-logind[1552]: New session 114 of user core.
Jan 20 01:45:24.099879 systemd[1]: Started session-114.scope - Session 114 of User core.
Jan 20 01:45:24.826742 systemd[1]: Started cri-containerd-0a935ef85f57e78f2aef5cce09a0bb63845b492fcd61406824f6732b436ea445.scope - libcontainer container 0a935ef85f57e78f2aef5cce09a0bb63845b492fcd61406824f6732b436ea445.
Jan 20 01:45:25.453105 kubelet[3172]: E0120 01:45:25.453053 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:45:27.603738 containerd[1566]: time="2026-01-20T01:45:27.590234511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5srrf,Uid:b9a484a4-8bda-48a9-8d5e-7e22d8e8ed26,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a935ef85f57e78f2aef5cce09a0bb63845b492fcd61406824f6732b436ea445\""
Jan 20 01:45:27.660124 kubelet[3172]: E0120 01:45:27.654006 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:45:28.344947 containerd[1566]: time="2026-01-20T01:45:28.323082013Z" level=info msg="CreateContainer within sandbox \"0a935ef85f57e78f2aef5cce09a0bb63845b492fcd61406824f6732b436ea445\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 20 01:45:28.820551 kubelet[3172]: E0120 01:45:28.819853 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:45:29.245708 containerd[1566]: time="2026-01-20T01:45:29.244154996Z" level=info msg="Container b587f24041c42febc7cd54197cf45af073d695de74c6e01f93ddcb88fa8b07c7: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:45:29.593715 containerd[1566]: time="2026-01-20T01:45:29.571148701Z" level=info msg="CreateContainer within sandbox \"0a935ef85f57e78f2aef5cce09a0bb63845b492fcd61406824f6732b436ea445\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b587f24041c42febc7cd54197cf45af073d695de74c6e01f93ddcb88fa8b07c7\""
Jan 20 01:45:29.618747 containerd[1566]: time="2026-01-20T01:45:29.616773408Z" level=info msg="StartContainer for \"b587f24041c42febc7cd54197cf45af073d695de74c6e01f93ddcb88fa8b07c7\""
Jan 20 01:45:29.632694 containerd[1566]: time="2026-01-20T01:45:29.628150489Z" level=info msg="connecting to shim b587f24041c42febc7cd54197cf45af073d695de74c6e01f93ddcb88fa8b07c7" address="unix:///run/containerd/s/dfb8d7dd4ee233340179093185a006edf0e9183464c09dfde50f5d83ec8e507c" protocol=ttrpc version=3
Jan 20 01:45:30.385188 systemd[1]: Started cri-containerd-b587f24041c42febc7cd54197cf45af073d695de74c6e01f93ddcb88fa8b07c7.scope - libcontainer container b587f24041c42febc7cd54197cf45af073d695de74c6e01f93ddcb88fa8b07c7.
Jan 20 01:45:30.703019 kubelet[3172]: E0120 01:45:30.623487 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:45:30.850544 kubelet[3172]: E0120 01:45:30.817937 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:45:32.584524 containerd[1566]: time="2026-01-20T01:45:32.583973083Z" level=info msg="StartContainer for \"b587f24041c42febc7cd54197cf45af073d695de74c6e01f93ddcb88fa8b07c7\" returns successfully"
Jan 20 01:45:32.680578 kubelet[3172]: E0120 01:45:32.679613 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:45:32.686611 kubelet[3172]: E0120 01:45:32.686105 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:45:32.906762 systemd[1]: cri-containerd-b587f24041c42febc7cd54197cf45af073d695de74c6e01f93ddcb88fa8b07c7.scope: Deactivated successfully.
Jan 20 01:45:33.033729 containerd[1566]: time="2026-01-20T01:45:33.021850498Z" level=info msg="received container exit event container_id:\"b587f24041c42febc7cd54197cf45af073d695de74c6e01f93ddcb88fa8b07c7\" id:\"b587f24041c42febc7cd54197cf45af073d695de74c6e01f93ddcb88fa8b07c7\" pid:7221 exited_at:{seconds:1768873532 nanos:997198082}"
Jan 20 01:45:33.426762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b587f24041c42febc7cd54197cf45af073d695de74c6e01f93ddcb88fa8b07c7-rootfs.mount: Deactivated successfully.
Jan 20 01:45:33.923457 kubelet[3172]: E0120 01:45:33.914137 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:45:34.226155 containerd[1566]: time="2026-01-20T01:45:34.225786481Z" level=info msg="CreateContainer within sandbox \"0a935ef85f57e78f2aef5cce09a0bb63845b492fcd61406824f6732b436ea445\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 20 01:45:35.045496 kubelet[3172]: E0120 01:45:35.024926 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:45:35.085965 containerd[1566]: time="2026-01-20T01:45:35.029869724Z" level=info msg="Container eafd6c9de8f9fefda9d578141b490fae0568a90531ee22766b24ec0c33d980f7: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:45:35.250576 containerd[1566]: time="2026-01-20T01:45:35.243042778Z" level=info msg="CreateContainer within sandbox \"0a935ef85f57e78f2aef5cce09a0bb63845b492fcd61406824f6732b436ea445\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eafd6c9de8f9fefda9d578141b490fae0568a90531ee22766b24ec0c33d980f7\""
Jan 20 01:45:35.250576 containerd[1566]: time="2026-01-20T01:45:35.245807750Z" level=info msg="StartContainer for \"eafd6c9de8f9fefda9d578141b490fae0568a90531ee22766b24ec0c33d980f7\""
Jan 20 01:45:35.328491 containerd[1566]: time="2026-01-20T01:45:35.323121050Z" level=info msg="connecting to shim eafd6c9de8f9fefda9d578141b490fae0568a90531ee22766b24ec0c33d980f7" address="unix:///run/containerd/s/dfb8d7dd4ee233340179093185a006edf0e9183464c09dfde50f5d83ec8e507c" protocol=ttrpc version=3
Jan 20 01:45:35.898901 kubelet[3172]: E0120 01:45:35.889838 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:45:36.285890 systemd[1]: Started cri-containerd-eafd6c9de8f9fefda9d578141b490fae0568a90531ee22766b24ec0c33d980f7.scope - libcontainer container eafd6c9de8f9fefda9d578141b490fae0568a90531ee22766b24ec0c33d980f7.
Jan 20 01:45:36.599744 kubelet[3172]: E0120 01:45:36.569155 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:45:38.516244 containerd[1566]: time="2026-01-20T01:45:38.515624924Z" level=info msg="StartContainer for \"eafd6c9de8f9fefda9d578141b490fae0568a90531ee22766b24ec0c33d980f7\" returns successfully"
Jan 20 01:45:38.600878 kubelet[3172]: E0120 01:45:38.590704 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:45:38.598931 systemd[1]: cri-containerd-eafd6c9de8f9fefda9d578141b490fae0568a90531ee22766b24ec0c33d980f7.scope: Deactivated successfully.
Jan 20 01:45:38.916689 containerd[1566]: time="2026-01-20T01:45:38.914074119Z" level=info msg="received container exit event container_id:\"eafd6c9de8f9fefda9d578141b490fae0568a90531ee22766b24ec0c33d980f7\" id:\"eafd6c9de8f9fefda9d578141b490fae0568a90531ee22766b24ec0c33d980f7\" pid:7267 exited_at:{seconds:1768873538 nanos:909540856}"
Jan 20 01:45:39.468705 kubelet[3172]: E0120 01:45:39.454035 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:45:40.600704 kubelet[3172]: E0120 01:45:40.599479 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:45:40.637760 kubelet[3172]: E0120 01:45:40.635231 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:45:40.997884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eafd6c9de8f9fefda9d578141b490fae0568a90531ee22766b24ec0c33d980f7-rootfs.mount: Deactivated successfully.
Jan 20 01:45:41.119734 kubelet[3172]: E0120 01:45:41.118721 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:45:41.865774 kubelet[3172]: E0120 01:45:41.853712 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:45:41.996251 containerd[1566]: time="2026-01-20T01:45:41.984723811Z" level=info msg="CreateContainer within sandbox \"0a935ef85f57e78f2aef5cce09a0bb63845b492fcd61406824f6732b436ea445\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 20 01:45:42.622938 kubelet[3172]: E0120 01:45:42.622747 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:45:42.682569 containerd[1566]: time="2026-01-20T01:45:42.682271291Z" level=info msg="Container ec681c95b3727e3a4a14a445ebbff012c68a6a73de94413929fcbd21f728ea4d: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:45:43.110940 containerd[1566]: time="2026-01-20T01:45:43.107923205Z" level=info msg="CreateContainer within sandbox \"0a935ef85f57e78f2aef5cce09a0bb63845b492fcd61406824f6732b436ea445\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ec681c95b3727e3a4a14a445ebbff012c68a6a73de94413929fcbd21f728ea4d\""
Jan 20 01:45:43.167842 containerd[1566]: time="2026-01-20T01:45:43.156293873Z" level=info msg="StartContainer for \"ec681c95b3727e3a4a14a445ebbff012c68a6a73de94413929fcbd21f728ea4d\""
Jan 20 01:45:43.179626 containerd[1566]: time="2026-01-20T01:45:43.171851034Z" level=info msg="connecting to shim ec681c95b3727e3a4a14a445ebbff012c68a6a73de94413929fcbd21f728ea4d" address="unix:///run/containerd/s/dfb8d7dd4ee233340179093185a006edf0e9183464c09dfde50f5d83ec8e507c" protocol=ttrpc version=3
Jan 20 01:45:44.403014 systemd[1]: Started cri-containerd-ec681c95b3727e3a4a14a445ebbff012c68a6a73de94413929fcbd21f728ea4d.scope - libcontainer container ec681c95b3727e3a4a14a445ebbff012c68a6a73de94413929fcbd21f728ea4d.
Jan 20 01:45:44.580756 kubelet[3172]: E0120 01:45:44.572037 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:45:45.808186 containerd[1566]: time="2026-01-20T01:45:45.784775537Z" level=info msg="StartContainer for \"ec681c95b3727e3a4a14a445ebbff012c68a6a73de94413929fcbd21f728ea4d\" returns successfully"
Jan 20 01:45:45.883676 systemd[1]: cri-containerd-ec681c95b3727e3a4a14a445ebbff012c68a6a73de94413929fcbd21f728ea4d.scope: Deactivated successfully.
Jan 20 01:45:45.941500 containerd[1566]: time="2026-01-20T01:45:45.941009940Z" level=info msg="received container exit event container_id:\"ec681c95b3727e3a4a14a445ebbff012c68a6a73de94413929fcbd21f728ea4d\" id:\"ec681c95b3727e3a4a14a445ebbff012c68a6a73de94413929fcbd21f728ea4d\" pid:7316 exited_at:{seconds:1768873545 nanos:939011444}"
Jan 20 01:45:45.947553 kubelet[3172]: E0120 01:45:45.946179 3172 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9a484a4_8bda_48a9_8d5e_7e22d8e8ed26.slice/cri-containerd-ec681c95b3727e3a4a14a445ebbff012c68a6a73de94413929fcbd21f728ea4d.scope\": RecentStats: unable to find data in memory cache]"
Jan 20 01:45:46.126885 kubelet[3172]: E0120 01:45:46.123633 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:45:46.336141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec681c95b3727e3a4a14a445ebbff012c68a6a73de94413929fcbd21f728ea4d-rootfs.mount: Deactivated successfully.
Jan 20 01:45:46.686525 kubelet[3172]: E0120 01:45:46.686456 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:45:47.008528 kubelet[3172]: E0120 01:45:46.982029 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:45:47.422604 containerd[1566]: time="2026-01-20T01:45:47.406946081Z" level=info msg="CreateContainer within sandbox \"0a935ef85f57e78f2aef5cce09a0bb63845b492fcd61406824f6732b436ea445\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 20 01:45:47.821633 containerd[1566]: time="2026-01-20T01:45:47.820462718Z" level=info msg="Container c75f27ba00fa889032dfdf1d00792857222755407438f58c17a3af17edad4924: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:45:47.913482 containerd[1566]: time="2026-01-20T01:45:47.912905752Z" level=info msg="CreateContainer within sandbox \"0a935ef85f57e78f2aef5cce09a0bb63845b492fcd61406824f6732b436ea445\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c75f27ba00fa889032dfdf1d00792857222755407438f58c17a3af17edad4924\""
Jan 20 01:45:47.934591 containerd[1566]: time="2026-01-20T01:45:47.934530838Z" level=info msg="StartContainer for \"c75f27ba00fa889032dfdf1d00792857222755407438f58c17a3af17edad4924\""
Jan 20 01:45:48.090510 containerd[1566]: time="2026-01-20T01:45:48.072667676Z" level=info msg="connecting to shim c75f27ba00fa889032dfdf1d00792857222755407438f58c17a3af17edad4924" address="unix:///run/containerd/s/dfb8d7dd4ee233340179093185a006edf0e9183464c09dfde50f5d83ec8e507c" protocol=ttrpc version=3
Jan 20 01:45:48.575202 kubelet[3172]: E0120 01:45:48.575141 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:45:48.701709 systemd[1]: Started cri-containerd-c75f27ba00fa889032dfdf1d00792857222755407438f58c17a3af17edad4924.scope - libcontainer container c75f27ba00fa889032dfdf1d00792857222755407438f58c17a3af17edad4924.
Jan 20 01:45:49.662733 systemd[1]: cri-containerd-c75f27ba00fa889032dfdf1d00792857222755407438f58c17a3af17edad4924.scope: Deactivated successfully.
Jan 20 01:45:49.759973 containerd[1566]: time="2026-01-20T01:45:49.759098712Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9a484a4_8bda_48a9_8d5e_7e22d8e8ed26.slice/cri-containerd-c75f27ba00fa889032dfdf1d00792857222755407438f58c17a3af17edad4924.scope/memory.events\": no such file or directory"
Jan 20 01:45:49.834572 containerd[1566]: time="2026-01-20T01:45:49.833139750Z" level=info msg="received container exit event container_id:\"c75f27ba00fa889032dfdf1d00792857222755407438f58c17a3af17edad4924\" id:\"c75f27ba00fa889032dfdf1d00792857222755407438f58c17a3af17edad4924\" pid:7356 exited_at:{seconds:1768873549 nanos:758799269}"
Jan 20 01:45:53.893884 containerd[1566]: time="2026-01-20T01:45:53.891765143Z" level=error msg="get state for c75f27ba00fa889032dfdf1d00792857222755407438f58c17a3af17edad4924" error="context deadline exceeded"
Jan 20 01:45:53.893884 containerd[1566]: time="2026-01-20T01:45:53.892590747Z" level=warning msg="unknown status" status=0
Jan 20 01:45:54.035549 kubelet[3172]: E0120 01:45:53.991758 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:45:54.181282 kubelet[3172]: E0120 01:45:54.177634 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:45:54.633545 containerd[1566]: time="2026-01-20T01:45:54.614124542Z" level=info msg="StartContainer for \"c75f27ba00fa889032dfdf1d00792857222755407438f58c17a3af17edad4924\" returns successfully"
Jan 20 01:45:55.472265 kubelet[3172]: E0120 01:45:55.451120 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.682s"
Jan 20 01:45:55.750655 kubelet[3172]: E0120 01:45:55.740086 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:45:55.795089 containerd[1566]: time="2026-01-20T01:45:55.794897630Z" level=error msg="ttrpc: received message on inactive stream" stream=25
Jan 20 01:45:56.579255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c75f27ba00fa889032dfdf1d00792857222755407438f58c17a3af17edad4924-rootfs.mount: Deactivated successfully.
Jan 20 01:45:57.000492 kubelet[3172]: E0120 01:45:57.000268 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:45:57.168482 containerd[1566]: time="2026-01-20T01:45:57.151794521Z" level=error msg="collecting metrics for c75f27ba00fa889032dfdf1d00792857222755407438f58c17a3af17edad4924" error="ttrpc: closed"
Jan 20 01:45:57.741196 containerd[1566]: time="2026-01-20T01:45:57.739695051Z" level=info msg="StopPodSandbox for \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\""
Jan 20 01:45:57.741196 containerd[1566]: time="2026-01-20T01:45:57.740057390Z" level=info msg="TearDown network for sandbox \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\" successfully"
Jan 20 01:45:57.744611 containerd[1566]: time="2026-01-20T01:45:57.742862106Z" level=info msg="StopPodSandbox for \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\" returns successfully"
Jan 20 01:45:57.756218 containerd[1566]: time="2026-01-20T01:45:57.756168124Z" level=info msg="RemovePodSandbox for \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\""
Jan 20 01:45:57.756839 containerd[1566]: time="2026-01-20T01:45:57.756810919Z" level=info msg="Forcibly stopping sandbox \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\""
Jan 20 01:45:57.776546 containerd[1566]: time="2026-01-20T01:45:57.774670511Z" level=info msg="TearDown network for sandbox \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\" successfully"
Jan 20 01:45:57.792802 containerd[1566]: time="2026-01-20T01:45:57.791871808Z" level=info msg="Ensure that sandbox a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c in task-service has been cleanup successfully"
Jan 20 01:45:57.888725 containerd[1566]: time="2026-01-20T01:45:57.888661611Z" level=info msg="RemovePodSandbox \"a1840d29f21086f1f8b580044aef43a7e5d6ec910933c7c02b2f9db2018c996c\" returns successfully"
Jan 20 01:45:57.895049 containerd[1566]: time="2026-01-20T01:45:57.895000647Z" level=info msg="StopPodSandbox for \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\""
Jan 20 01:45:57.909685 containerd[1566]: time="2026-01-20T01:45:57.898884586Z" level=info msg="TearDown network for sandbox \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" successfully"
Jan 20 01:45:57.909685 containerd[1566]: time="2026-01-20T01:45:57.908042989Z" level=info msg="StopPodSandbox for \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" returns successfully"
Jan 20 01:45:57.917194 containerd[1566]: time="2026-01-20T01:45:57.915274548Z" level=info msg="RemovePodSandbox for \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\""
Jan 20 01:45:57.917194 containerd[1566]: time="2026-01-20T01:45:57.915494073Z" level=info msg="Forcibly stopping sandbox \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\""
Jan 20 01:45:57.917194 containerd[1566]: time="2026-01-20T01:45:57.915758251Z" level=info msg="TearDown network for sandbox \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" successfully"
Jan 20 01:45:57.924146 containerd[1566]: time="2026-01-20T01:45:57.923822854Z" level=info msg="Ensure that sandbox 4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71 in task-service has been cleanup successfully"
Jan 20 01:45:57.982412 containerd[1566]: time="2026-01-20T01:45:57.979230594Z" level=info msg="RemovePodSandbox \"4cee1f446d99697969055103525cbf1775eb6730974687348783ab3044333f71\" returns successfully"
Jan 20 01:45:58.075169 kubelet[3172]: E0120 01:45:58.072493 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:45:58.197645 containerd[1566]: time="2026-01-20T01:45:58.182727124Z" level=info msg="CreateContainer within sandbox \"0a935ef85f57e78f2aef5cce09a0bb63845b492fcd61406824f6732b436ea445\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 20 01:45:58.623532 kubelet[3172]: E0120 01:45:58.618284 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:45:58.636293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount477900925.mount: Deactivated successfully.
Jan 20 01:45:58.670566 containerd[1566]: time="2026-01-20T01:45:58.669740511Z" level=info msg="Container 6afc946c80e8d6cbc099631f7ff851f3721b660fdaac2c99c117f3019c63ce1a: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:45:58.937524 containerd[1566]: time="2026-01-20T01:45:58.913032680Z" level=info msg="CreateContainer within sandbox \"0a935ef85f57e78f2aef5cce09a0bb63845b492fcd61406824f6732b436ea445\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6afc946c80e8d6cbc099631f7ff851f3721b660fdaac2c99c117f3019c63ce1a\""
Jan 20 01:45:58.937524 containerd[1566]: time="2026-01-20T01:45:58.936798461Z" level=info msg="StartContainer for \"6afc946c80e8d6cbc099631f7ff851f3721b660fdaac2c99c117f3019c63ce1a\""
Jan 20 01:45:58.962265 containerd[1566]: time="2026-01-20T01:45:58.962112913Z" level=info msg="connecting to shim 6afc946c80e8d6cbc099631f7ff851f3721b660fdaac2c99c117f3019c63ce1a" address="unix:///run/containerd/s/dfb8d7dd4ee233340179093185a006edf0e9183464c09dfde50f5d83ec8e507c" protocol=ttrpc version=3
Jan 20 01:45:59.302680 kubelet[3172]: E0120 01:45:59.288678 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:45:59.505835 systemd[1]: Started cri-containerd-6afc946c80e8d6cbc099631f7ff851f3721b660fdaac2c99c117f3019c63ce1a.scope - libcontainer container 6afc946c80e8d6cbc099631f7ff851f3721b660fdaac2c99c117f3019c63ce1a.
Jan 20 01:46:00.591204 kubelet[3172]: E0120 01:46:00.591134 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:46:01.846094 containerd[1566]: time="2026-01-20T01:46:01.842052916Z" level=info msg="StartContainer for \"6afc946c80e8d6cbc099631f7ff851f3721b660fdaac2c99c117f3019c63ce1a\" returns successfully"
Jan 20 01:46:02.590524 kubelet[3172]: E0120 01:46:02.590257 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:46:04.342073 kubelet[3172]: E0120 01:46:04.342009 3172 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 01:46:04.573763 kubelet[3172]: E0120 01:46:04.573699 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:46:05.564567 kubelet[3172]: E0120 01:46:05.553627 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:46:06.106558 kubelet[3172]: I0120 01:46:06.096269 3172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5srrf" podStartSLOduration=47.096164752 podStartE2EDuration="47.096164752s" podCreationTimestamp="2026-01-20 01:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:46:06.089754509 +0000 UTC m=+1407.787958080" watchObservedRunningTime="2026-01-20 01:46:06.096164752 +0000 UTC m=+1407.794368333"
Jan 20 01:46:06.578531 kubelet[3172]: E0120 01:46:06.574753 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:46:06.611617 kubelet[3172]: E0120 01:46:06.611254 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:46:08.595958 kubelet[3172]: E0120 01:46:08.590617 3172 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-6grwm" podUID="83193cf1-85b9-4f97-a265-e5c5cec01484"
Jan 20 01:46:10.578413 kubelet[3172]: E0120 01:46:10.578289 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:46:11.842999 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jan 20 01:46:14.681568 kubelet[3172]: E0120 01:46:14.650647 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:46:16.585627 kubelet[3172]: E0120 01:46:16.585534 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:46:21.749847 kubelet[3172]: E0120 01:46:21.745837 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:46:39.914193 kubelet[3172]: E0120 01:46:39.852859 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:46:40.027618 kubelet[3172]: E0120 01:46:40.027563 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.801s"
Jan 20 01:46:41.223081 systemd[1]: cri-containerd-822d568f1a056c4f5be0bf08993d308fe75c7194e601769ab5c41085d8de36e0.scope: Deactivated successfully.
Jan 20 01:46:42.313058 systemd[1]: cri-containerd-822d568f1a056c4f5be0bf08993d308fe75c7194e601769ab5c41085d8de36e0.scope: Consumed 14.062s CPU time, 61.3M memory peak, 2.3M read from disk.
Jan 20 01:46:51.546504 sshd[7185]: Connection closed by 10.0.0.1 port 55192
Jan 20 01:46:51.498082 sshd-session[7154]: pam_unix(sshd:session): session closed for user core
Jan 20 01:46:51.888538 systemd[1]: sshd@113-10.0.0.15:22-10.0.0.1:55192.service: Deactivated successfully.
Jan 20 01:46:51.932993 systemd[1]: session-114.scope: Deactivated successfully.
Jan 20 01:46:51.944942 systemd[1]: session-114.scope: Consumed 2.892s CPU time, 28.5M memory peak.
Jan 20 01:46:52.010364 containerd[1566]: time="2026-01-20T01:46:51.987134347Z" level=info msg="received container exit event container_id:\"822d568f1a056c4f5be0bf08993d308fe75c7194e601769ab5c41085d8de36e0\" id:\"822d568f1a056c4f5be0bf08993d308fe75c7194e601769ab5c41085d8de36e0\" pid:6686 exit_status:1 exited_at:{seconds:1768873611 nanos:943144183}"
Jan 20 01:46:51.996770 systemd-logind[1552]: Session 114 logged out. Waiting for processes to exit.
Jan 20 01:46:52.810475 systemd-logind[1552]: Removed session 114.
Jan 20 01:46:52.977997 kubelet[3172]: E0120 01:46:52.972412 3172 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.944s"
Jan 20 01:46:52.985105 kubelet[3172]: E0120 01:46:52.984971 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:46:52.996156 kubelet[3172]: E0120 01:46:52.991653 3172 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"