Jan 20 02:23:39.480084 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:14:52 -00 2026
Jan 20 02:23:39.480121 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 02:23:39.480137 kernel: BIOS-provided physical RAM map:
Jan 20 02:23:39.480147 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 02:23:39.480155 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 02:23:39.480165 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 02:23:39.480175 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 20 02:23:39.480184 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 20 02:23:39.480194 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 02:23:39.480203 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 20 02:23:39.480212 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 02:23:39.480225 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 02:23:39.480235 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 02:23:39.480244 kernel: NX (Execute Disable) protection: active
Jan 20 02:23:39.480255 kernel: APIC: Static calls initialized
Jan 20 02:23:39.480265 kernel: SMBIOS 2.8 present.
Jan 20 02:23:39.480279 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 20 02:23:39.481006 kernel: DMI: Memory slots populated: 1/1
Jan 20 02:23:39.481024 kernel: Hypervisor detected: KVM
Jan 20 02:23:39.481034 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 02:23:39.481045 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 02:23:39.481054 kernel: kvm-clock: using sched offset of 42188134464 cycles
Jan 20 02:23:39.481066 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 02:23:39.481077 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 02:23:39.481087 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 02:23:39.481097 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 02:23:39.481113 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 02:23:39.481123 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 02:23:39.481135 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 02:23:39.481146 kernel: Using GB pages for direct mapping
Jan 20 02:23:39.481155 kernel: ACPI: Early table checksum verification disabled
Jan 20 02:23:39.481164 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 20 02:23:39.481172 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:23:39.481182 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:23:39.481194 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:23:39.481207 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 20 02:23:39.481217 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:23:39.481229 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:23:39.481238 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:23:39.481247 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:23:39.481264 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 20 02:23:39.481277 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 20 02:23:39.481353 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 20 02:23:39.481366 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 20 02:23:39.481377 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 20 02:23:39.481388 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 20 02:23:39.481399 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 20 02:23:39.481409 kernel: No NUMA configuration found
Jan 20 02:23:39.481536 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 20 02:23:39.481552 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 20 02:23:39.481564 kernel: Zone ranges:
Jan 20 02:23:39.481575 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 02:23:39.481586 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 20 02:23:39.481596 kernel: Normal empty
Jan 20 02:23:39.481607 kernel: Device empty
Jan 20 02:23:39.481619 kernel: Movable zone start for each node
Jan 20 02:23:39.481630 kernel: Early memory node ranges
Jan 20 02:23:39.481641 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 02:23:39.481655 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 20 02:23:39.481666 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 20 02:23:39.481677 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 02:23:39.481688 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 02:23:39.481699 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 20 02:23:39.481709 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 02:23:39.481720 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 02:23:39.481731 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 02:23:39.481742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 02:23:39.481756 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 02:23:39.481767 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 02:23:39.481778 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 02:23:39.481789 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 02:23:39.481800 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 02:23:39.481811 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 02:23:39.481822 kernel: TSC deadline timer available
Jan 20 02:23:39.481833 kernel: CPU topo: Max. logical packages: 1
Jan 20 02:23:39.481844 kernel: CPU topo: Max. logical dies: 1
Jan 20 02:23:39.481858 kernel: CPU topo: Max. dies per package: 1
Jan 20 02:23:39.481868 kernel: CPU topo: Max. threads per core: 1
Jan 20 02:23:39.488363 kernel: CPU topo: Num. cores per package: 4
Jan 20 02:23:39.488407 kernel: CPU topo: Num. threads per package: 4
Jan 20 02:23:39.488534 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 02:23:39.488549 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 02:23:39.488560 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 02:23:39.488571 kernel: kvm-guest: setup PV sched yield
Jan 20 02:23:39.488582 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 20 02:23:39.488604 kernel: Booting paravirtualized kernel on KVM
Jan 20 02:23:39.488616 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 02:23:39.488627 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 02:23:39.488638 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 02:23:39.488649 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 02:23:39.488660 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 02:23:39.488671 kernel: kvm-guest: PV spinlocks enabled
Jan 20 02:23:39.488682 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 02:23:39.488695 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 02:23:39.488711 kernel: random: crng init done
Jan 20 02:23:39.488722 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 02:23:39.488733 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 02:23:39.488744 kernel: Fallback order for Node 0: 0
Jan 20 02:23:39.488755 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 20 02:23:39.488765 kernel: Policy zone: DMA32
Jan 20 02:23:39.488776 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 02:23:39.488787 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 02:23:39.488798 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 20 02:23:39.488813 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 02:23:39.488823 kernel: Dynamic Preempt: voluntary
Jan 20 02:23:39.488834 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 02:23:39.488847 kernel: rcu: RCU event tracing is enabled.
Jan 20 02:23:39.488858 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 02:23:39.490678 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 02:23:39.490744 kernel: Rude variant of Tasks RCU enabled.
Jan 20 02:23:39.490757 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 02:23:39.490769 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 02:23:39.490787 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 02:23:39.490799 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:23:39.490810 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:23:39.490821 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:23:39.490832 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 02:23:39.490844 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 02:23:39.490867 kernel: Console: colour VGA+ 80x25
Jan 20 02:23:39.490882 kernel: printk: legacy console [ttyS0] enabled
Jan 20 02:23:39.490894 kernel: ACPI: Core revision 20240827
Jan 20 02:23:39.490906 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 02:23:39.490917 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 02:23:39.490929 kernel: x2apic enabled
Jan 20 02:23:39.490944 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 02:23:39.490956 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 02:23:39.490968 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 02:23:39.490979 kernel: kvm-guest: setup PV IPIs
Jan 20 02:23:39.490994 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 02:23:39.491006 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 02:23:39.491018 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 02:23:39.491030 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 02:23:39.491041 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 02:23:39.491053 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 02:23:39.491065 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 02:23:39.491077 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 02:23:39.491089 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 02:23:39.491104 kernel: Speculative Store Bypass: Vulnerable
Jan 20 02:23:39.491116 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 02:23:39.491130 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 02:23:39.491144 kernel: active return thunk: srso_alias_return_thunk
Jan 20 02:23:39.491156 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 02:23:39.491170 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 02:23:39.491181 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 02:23:39.491193 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 02:23:39.491205 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 02:23:39.491220 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 02:23:39.491231 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 02:23:39.491242 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 02:23:39.491254 kernel: Freeing SMP alternatives memory: 32K
Jan 20 02:23:39.491267 kernel: pid_max: default: 32768 minimum: 301
Jan 20 02:23:39.491279 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 02:23:39.491357 kernel: landlock: Up and running.
Jan 20 02:23:39.491370 kernel: SELinux: Initializing.
Jan 20 02:23:39.491382 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 02:23:39.491399 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 02:23:39.491531 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 02:23:39.491549 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 02:23:39.491562 kernel: signal: max sigframe size: 1776
Jan 20 02:23:39.491573 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 02:23:39.491583 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 02:23:39.491593 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 02:23:39.491606 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 02:23:39.491625 kernel: smp: Bringing up secondary CPUs ...
Jan 20 02:23:39.491637 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 02:23:39.491647 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 02:23:39.491656 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 02:23:39.491669 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 02:23:39.491682 kernel: Memory: 2420724K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46204K init, 2556K bss, 145096K reserved, 0K cma-reserved)
Jan 20 02:23:39.491692 kernel: devtmpfs: initialized
Jan 20 02:23:39.491702 kernel: x86/mm: Memory block size: 128MB
Jan 20 02:23:39.491716 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 02:23:39.491733 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 02:23:39.491745 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 02:23:39.491755 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 02:23:39.491766 kernel: audit: initializing netlink subsys (disabled)
Jan 20 02:23:39.491779 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 02:23:39.491791 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 02:23:39.491803 kernel: audit: type=2000 audit(1768875772.247:1): state=initialized audit_enabled=0 res=1
Jan 20 02:23:39.491812 kernel: cpuidle: using governor menu
Jan 20 02:23:39.491822 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 02:23:39.491840 kernel: dca service started, version 1.12.1
Jan 20 02:23:39.491852 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 20 02:23:39.491864 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 02:23:39.491874 kernel: PCI: Using configuration type 1 for base access
Jan 20 02:23:39.491884 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 02:23:39.491897 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 02:23:39.491909 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 02:23:39.491921 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 02:23:39.491931 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 02:23:39.491947 kernel: ACPI: Added _OSI(Module Device)
Jan 20 02:23:39.491959 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 02:23:39.491971 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 02:23:39.491982 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 02:23:39.491992 kernel: ACPI: Interpreter enabled
Jan 20 02:23:39.492003 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 02:23:39.492015 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 02:23:39.492027 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 02:23:39.492040 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 02:23:39.492054 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 02:23:39.492066 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 02:23:39.497156 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 02:23:39.498584 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 02:23:39.498870 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 02:23:39.498891 kernel: PCI host bridge to bus 0000:00
Jan 20 02:23:39.499282 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 02:23:39.500845 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 02:23:39.501046 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 02:23:39.503798 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 20 02:23:39.503992 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 02:23:39.504180 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 20 02:23:39.511195 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 02:23:39.513665 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 02:23:39.514045 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 02:23:39.514259 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 20 02:23:39.514659 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 20 02:23:39.514876 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 20 02:23:39.515083 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 02:23:39.515357 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 40039 usecs
Jan 20 02:23:39.516023 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 02:23:39.519352 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 20 02:23:39.519724 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 20 02:23:39.519918 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 20 02:23:39.520366 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 02:23:39.520675 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 20 02:23:39.520887 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 20 02:23:39.521097 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 20 02:23:39.521623 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 02:23:39.521840 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 20 02:23:39.522050 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 20 02:23:39.522273 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 20 02:23:39.524800 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 20 02:23:39.525225 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 02:23:39.525617 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 02:23:39.525804 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 25390 usecs
Jan 20 02:23:39.526107 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 02:23:39.526360 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 20 02:23:39.526679 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 20 02:23:39.527083 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 02:23:39.527363 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 20 02:23:39.527389 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 02:23:39.527402 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 02:23:39.527518 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 02:23:39.527533 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 02:23:39.527544 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 02:23:39.527556 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 02:23:39.527567 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 02:23:39.527578 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 02:23:39.527595 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 02:23:39.527606 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 02:23:39.527618 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 02:23:39.527628 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 02:23:39.527640 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 02:23:39.527652 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 02:23:39.527663 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 02:23:39.527675 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 02:23:39.527686 kernel: iommu: Default domain type: Translated
Jan 20 02:23:39.527701 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 02:23:39.527713 kernel: PCI: Using ACPI for IRQ routing
Jan 20 02:23:39.527724 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 02:23:39.527735 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 02:23:39.527747 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 20 02:23:39.527952 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 02:23:39.528148 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 02:23:39.529808 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 02:23:39.529829 kernel: vgaarb: loaded
Jan 20 02:23:39.529848 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 02:23:39.529860 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 02:23:39.529872 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 02:23:39.529884 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 02:23:39.529896 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 02:23:39.529908 kernel: pnp: PnP ACPI init
Jan 20 02:23:39.530909 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 02:23:39.530931 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 02:23:39.530950 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 02:23:39.530962 kernel: NET: Registered PF_INET protocol family
Jan 20 02:23:39.530974 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 02:23:39.530986 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 02:23:39.530997 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 02:23:39.531009 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 02:23:39.531021 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 02:23:39.531032 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 02:23:39.531044 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 02:23:39.531059 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 02:23:39.531071 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 02:23:39.531082 kernel: NET: Registered PF_XDP protocol family
Jan 20 02:23:39.531273 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 02:23:39.531693 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 02:23:39.531885 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 02:23:39.532060 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 20 02:23:39.532231 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 02:23:39.532621 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 20 02:23:39.532642 kernel: PCI: CLS 0 bytes, default 64
Jan 20 02:23:39.532655 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 02:23:39.532665 kernel: Initialise system trusted keyrings
Jan 20 02:23:39.532675 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 02:23:39.532689 kernel: Key type asymmetric registered
Jan 20 02:23:39.532700 kernel: Asymmetric key parser 'x509' registered
Jan 20 02:23:39.532712 kernel: hrtimer: interrupt took 6010889 ns
Jan 20 02:23:39.532724 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 02:23:39.532740 kernel: io scheduler mq-deadline registered
Jan 20 02:23:39.532751 kernel: io scheduler kyber registered
Jan 20 02:23:39.532763 kernel: io scheduler bfq registered
Jan 20 02:23:39.532775 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 02:23:39.532788 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 02:23:39.532799 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 02:23:39.532809 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 02:23:39.532822 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 02:23:39.532834 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 02:23:39.532852 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 02:23:39.532862 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 02:23:39.532872 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 02:23:39.536199 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 02:23:39.536226 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 02:23:39.536580 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 02:23:39.536760 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T02:23:35 UTC (1768875815)
Jan 20 02:23:39.536936 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 20 02:23:39.536959 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 02:23:39.536971 kernel: NET: Registered PF_INET6 protocol family
Jan 20 02:23:39.536982 kernel: Segment Routing with IPv6
Jan 20 02:23:39.536993 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 02:23:39.537004 kernel: NET: Registered PF_PACKET protocol family
Jan 20 02:23:39.537016 kernel: Key type dns_resolver registered
Jan 20 02:23:39.537027 kernel: IPI shorthand broadcast: enabled
Jan 20 02:23:39.537038 kernel: sched_clock: Marking stable (35309003730, 4946432678)->(45433819604, -5178383196)
Jan 20 02:23:39.537049 kernel: registered taskstats version 1
Jan 20 02:23:39.537063 kernel: Loading compiled-in X.509 certificates
Jan 20 02:23:39.537074 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 5eaf2083485884e476a8ac33c4b07b82eff139e9'
Jan 20 02:23:39.537085 kernel: Demotion targets for Node 0: null
Jan 20 02:23:39.537096 kernel: Key type .fscrypt registered
Jan 20 02:23:39.537107 kernel: Key type fscrypt-provisioning registered
Jan 20 02:23:39.537118 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 02:23:39.537131 kernel: ima: Allocated hash algorithm: sha1
Jan 20 02:23:39.537145 kernel: ima: No architecture policies found
Jan 20 02:23:39.537158 kernel: clk: Disabling unused clocks
Jan 20 02:23:39.537176 kernel: Warning: unable to open an initial console.
Jan 20 02:23:39.537187 kernel: Freeing unused kernel image (initmem) memory: 46204K
Jan 20 02:23:39.537199 kernel: Write protecting the kernel read-only data: 40960k
Jan 20 02:23:39.537210 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 20 02:23:39.537221 kernel: Run /init as init process
Jan 20 02:23:39.537232 kernel: with arguments:
Jan 20 02:23:39.537243 kernel: /init
Jan 20 02:23:39.537254 kernel: with environment:
Jan 20 02:23:39.537264 kernel: HOME=/
Jan 20 02:23:39.537278 kernel: TERM=linux
Jan 20 02:23:39.537353 systemd[1]: Successfully made /usr/ read-only.
Jan 20 02:23:39.537370 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 02:23:39.537382 systemd[1]: Detected virtualization kvm.
Jan 20 02:23:39.537394 systemd[1]: Detected architecture x86-64.
Jan 20 02:23:39.537405 systemd[1]: Running in initrd.
Jan 20 02:23:39.537505 systemd[1]: No hostname configured, using default hostname.
Jan 20 02:23:39.537526 systemd[1]: Hostname set to .
Jan 20 02:23:39.537551 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 02:23:39.537566 systemd[1]: Queued start job for default target initrd.target.
Jan 20 02:23:39.537578 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 02:23:39.537590 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 02:23:39.537603 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 02:23:39.537618 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 02:23:39.537631 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 02:23:39.537644 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 02:23:39.537657 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 02:23:39.537669 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 02:23:39.537681 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 02:23:39.537693 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 02:23:39.537709 systemd[1]: Reached target paths.target - Path Units.
Jan 20 02:23:39.537720 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 02:23:39.537732 systemd[1]: Reached target swap.target - Swaps.
Jan 20 02:23:39.537744 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 02:23:39.537756 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 02:23:39.537768 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 02:23:39.537780 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 02:23:39.537792 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 20 02:23:39.537807 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 02:23:39.537819 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 02:23:39.537834 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 02:23:39.537846 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 02:23:39.537858 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 02:23:39.537870 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 02:23:39.537882 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 02:23:39.537894 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 20 02:23:39.537906 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 02:23:39.537922 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 02:23:39.537934 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 02:23:39.537997 systemd-journald[203]: Collecting audit messages is disabled.
Jan 20 02:23:39.538031 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:23:39.538045 systemd-journald[203]: Journal started
Jan 20 02:23:39.538070 systemd-journald[203]: Runtime Journal (/run/log/journal/18a2a04e0d39435aa43823e74a400f22) is 6M, max 48.3M, 42.2M free.
Jan 20 02:23:39.566273 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 02:23:39.597885 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 02:23:39.610690 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 02:23:39.611000 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 02:23:39.732824 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 02:23:39.795091 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 02:23:39.863630 systemd-modules-load[205]: Inserted module 'overlay'
Jan 20 02:23:40.818762 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 02:23:40.818818 kernel: Bridge firewalling registered
Jan 20 02:23:39.938818 systemd-tmpfiles[213]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 20 02:23:39.965996 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 02:23:40.415290 systemd-modules-load[205]: Inserted module 'br_netfilter'
Jan 20 02:23:40.829848 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 02:23:40.836257 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 02:23:40.857887 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 02:23:40.874064 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 02:23:41.156097 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:23:41.211686 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 02:23:41.234680 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 02:23:41.345854 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 02:23:41.371644 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 02:23:41.415954 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 02:23:41.457673 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 02:23:41.602840 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 02:23:41.931779 systemd-resolved[235]: Positive Trust Anchors:
Jan 20 02:23:41.931831 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 02:23:41.931875 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 02:23:41.976916 systemd-resolved[235]: Defaulting to hostname 'linux'.
Jan 20 02:23:42.016254 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 02:23:42.044047 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 02:23:43.084399 kernel: SCSI subsystem initialized
Jan 20 02:23:43.159147 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 02:23:43.270101 kernel: iscsi: registered transport (tcp)
Jan 20 02:23:43.455261 kernel: iscsi: registered transport (qla4xxx)
Jan 20 02:23:43.456002 kernel: QLogic iSCSI HBA Driver
Jan 20 02:23:43.918928 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 02:23:44.096067 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 02:23:44.119957 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 02:23:44.849103 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 02:23:44.951278 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 02:23:45.823714 kernel: raid6: avx2x4 gen() 8288 MB/s
Jan 20 02:23:45.876651 kernel: raid6: avx2x2 gen() 9086 MB/s
Jan 20 02:23:45.933714 kernel: raid6: avx2x1 gen() 5465 MB/s
Jan 20 02:23:45.934155 kernel: raid6: using algorithm avx2x2 gen() 9086 MB/s
Jan 20 02:23:45.985842 kernel: raid6: .... xor() 3522 MB/s, rmw enabled
Jan 20 02:23:45.986275 kernel: raid6: using avx2x2 recovery algorithm
Jan 20 02:23:46.217810 kernel: xor: automatically using best checksumming function avx
Jan 20 02:23:48.432351 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 02:23:48.569255 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 02:23:48.630100 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 02:23:48.927007 systemd-udevd[453]: Using default interface naming scheme 'v255'.
Jan 20 02:23:48.978338 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 02:23:49.023569 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 02:23:49.319575 dracut-pre-trigger[455]: rd.md=0: removing MD RAID activation
Jan 20 02:23:49.731832 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 02:23:49.833105 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 02:23:50.770164 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 02:23:50.812817 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 02:23:51.699228 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 02:23:51.700243 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:23:51.816855 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:23:51.977004 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:23:51.979249 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 20 02:23:52.266954 kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 02:23:52.267590 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 20 02:23:52.495324 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 20 02:23:52.670835 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 02:23:52.671614 kernel: GPT:9289727 != 19775487
Jan 20 02:23:52.698699 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 02:23:52.698788 kernel: GPT:9289727 != 19775487
Jan 20 02:23:52.726659 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 02:23:52.815195 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 02:23:54.833182 kernel: libata version 3.00 loaded.
Jan 20 02:23:55.214139 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 20 02:23:55.528034 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 20 02:23:57.315941 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 02:23:57.316599 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 02:23:57.316619 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 20 02:23:57.316840 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 20 02:23:57.317043 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 02:23:57.317240 kernel: scsi host0: ahci
Jan 20 02:23:57.395942 kernel: scsi host1: ahci
Jan 20 02:23:57.396339 kernel: scsi host2: ahci
Jan 20 02:23:57.490093 kernel: scsi host3: ahci
Jan 20 02:23:57.490332 kernel: scsi host4: ahci
Jan 20 02:23:57.500348 kernel: scsi host5: ahci
Jan 20 02:23:57.529301 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31 lpm-pol 1
Jan 20 02:23:57.529329 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31 lpm-pol 1
Jan 20 02:23:57.529359 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31 lpm-pol 1
Jan 20 02:23:57.529373 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31 lpm-pol 1
Jan 20 02:23:57.529387 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31 lpm-pol 1
Jan 20 02:23:57.529401 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31 lpm-pol 1
Jan 20 02:23:57.529582 kernel: AES CTR mode by8 optimization enabled
Jan 20 02:23:57.529602 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 20 02:23:57.529617 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 20 02:23:57.529631 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 02:23:57.529646 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 02:23:57.529693 kernel: ata3.00: applying bridge limits
Jan 20 02:23:57.529708 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 02:23:57.529722 kernel: ata3.00: configured for UDMA/100
Jan 20 02:23:57.529736 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 20 02:23:57.529750 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 20 02:23:57.529764 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 20 02:23:57.529778 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 20 02:23:57.529792 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 20 02:23:57.530057 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 02:23:57.530293 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 02:23:57.524571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:23:57.690939 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 20 02:23:58.000344 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 02:23:58.490396 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 20 02:23:58.929645 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 20 02:23:58.983641 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 20 02:23:59.305971 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 20 02:23:59.520071 disk-uuid[633]: Primary Header is updated.
Jan 20 02:23:59.520071 disk-uuid[633]: Secondary Entries is updated.
Jan 20 02:23:59.520071 disk-uuid[633]: Secondary Header is updated.
Jan 20 02:23:59.611212 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 02:24:00.820138 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 02:24:00.874654 disk-uuid[634]: The operation has completed successfully.
Jan 20 02:24:01.271976 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 20 02:24:01.272204 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 20 02:24:01.482882 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 20 02:24:01.639295 sh[655]: Success
Jan 20 02:24:01.830287 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 02:24:01.867344 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 02:24:01.919180 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 02:24:01.919269 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 02:24:01.929256 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 02:24:02.356851 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 02:24:02.590855 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 02:24:02.590941 kernel: device-mapper: uevent: version 1.0.3
Jan 20 02:24:02.635974 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 20 02:24:03.101156 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 20 02:24:03.820207 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 02:24:03.841338 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 20 02:24:03.968025 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 20 02:24:04.101892 kernel: BTRFS: device fsid 1cad4abe-82cb-4052-9906-9dfb1f3e3340 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (680)
Jan 20 02:24:04.101952 kernel: BTRFS info (device dm-0): first mount of filesystem 1cad4abe-82cb-4052-9906-9dfb1f3e3340
Jan 20 02:24:04.101970 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:24:04.405153 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 02:24:04.405923 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 20 02:24:04.439057 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 20 02:24:04.482909 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 02:24:04.522103 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 20 02:24:04.574818 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 20 02:24:04.633921 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 20 02:24:05.035095 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (717)
Jan 20 02:24:05.070317 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 02:24:05.070394 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:24:05.212672 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 02:24:05.212753 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 02:24:05.353583 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 02:24:05.414903 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 20 02:24:05.539974 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 20 02:24:08.316038 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1300610427 wd_nsec: 1300609815
Jan 20 02:24:09.251899 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 02:24:09.303407 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 02:24:09.963310 systemd-networkd[854]: lo: Link UP
Jan 20 02:24:09.963383 systemd-networkd[854]: lo: Gained carrier
Jan 20 02:24:09.978977 ignition[780]: Ignition 2.22.0
Jan 20 02:24:09.984808 systemd-networkd[854]: Enumeration completed
Jan 20 02:24:09.979064 ignition[780]: Stage: fetch-offline
Jan 20 02:24:09.985722 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 02:24:09.979340 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jan 20 02:24:10.006331 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 02:24:09.979354 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:24:10.006338 systemd-networkd[854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 02:24:09.991096 ignition[780]: parsed url from cmdline: ""
Jan 20 02:24:10.029187 systemd[1]: Reached target network.target - Network.
Jan 20 02:24:09.991105 ignition[780]: no config URL provided
Jan 20 02:24:10.048042 systemd-networkd[854]: eth0: Link UP
Jan 20 02:24:09.991183 ignition[780]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 02:24:10.067148 systemd-networkd[854]: eth0: Gained carrier
Jan 20 02:24:09.991204 ignition[780]: no config at "/usr/lib/ignition/user.ign"
Jan 20 02:24:10.067175 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 02:24:09.991308 ignition[780]: op(1): [started] loading QEMU firmware config module
Jan 20 02:24:10.683282 systemd-networkd[854]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 02:24:09.991316 ignition[780]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 20 02:24:10.273795 ignition[780]: op(1): [finished] loading QEMU firmware config module
Jan 20 02:24:10.273840 ignition[780]: QEMU firmware config was not found. Ignoring...
Jan 20 02:24:11.278641 systemd-networkd[854]: eth0: Gained IPv6LL
Jan 20 02:24:12.425785 ignition[780]: parsing config with SHA512: cd681ce821b6690c36b869d72b2cd1e8ed5aa534d9df8231921e4de097202ca5507f35072cd5c2832cc5551132afd91ddc3ee37a87081f19f2281a7cbe444290
Jan 20 02:24:12.480843 unknown[780]: fetched base config from "system"
Jan 20 02:24:12.481700 ignition[780]: fetch-offline: fetch-offline passed
Jan 20 02:24:12.480857 unknown[780]: fetched user config from "qemu"
Jan 20 02:24:12.481869 ignition[780]: Ignition finished successfully
Jan 20 02:24:12.500764 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 02:24:12.634831 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 20 02:24:12.656745 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 20 02:24:13.896646 ignition[862]: Ignition 2.22.0
Jan 20 02:24:13.896666 ignition[862]: Stage: kargs
Jan 20 02:24:13.897167 ignition[862]: no configs at "/usr/lib/ignition/base.d"
Jan 20 02:24:13.897183 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:24:13.969736 ignition[862]: kargs: kargs passed
Jan 20 02:24:13.970098 ignition[862]: Ignition finished successfully
Jan 20 02:24:14.117778 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 20 02:24:14.272248 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 20 02:24:15.733725 ignition[870]: Ignition 2.22.0
Jan 20 02:24:15.733879 ignition[870]: Stage: disks
Jan 20 02:24:15.734141 ignition[870]: no configs at "/usr/lib/ignition/base.d"
Jan 20 02:24:15.734217 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:24:15.784397 ignition[870]: disks: disks passed
Jan 20 02:24:15.872481 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 20 02:24:15.784548 ignition[870]: Ignition finished successfully
Jan 20 02:24:15.934824 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 20 02:24:15.963724 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 02:24:16.189938 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 02:24:16.208346 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 02:24:16.208640 systemd[1]: Reached target basic.target - Basic System.
Jan 20 02:24:16.233918 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 20 02:24:17.281538 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 20 02:24:17.628726 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 20 02:24:17.766223 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 20 02:24:19.988802 kernel: EXT4-fs (vda9): mounted filesystem d87587c2-84ee-4a64-a55e-c6773c94f548 r/w with ordered data mode. Quota mode: none.
Jan 20 02:24:19.998018 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 20 02:24:20.016884 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 20 02:24:20.038936 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 02:24:20.183747 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 20 02:24:20.232249 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 20 02:24:20.232517 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 20 02:24:20.232579 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 02:24:20.439910 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (889)
Jan 20 02:24:20.473870 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 02:24:20.491046 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 20 02:24:20.530715 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:24:20.531698 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 02:24:20.620945 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 02:24:20.621379 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 02:24:20.632329 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 02:24:20.973823 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory
Jan 20 02:24:21.074538 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory
Jan 20 02:24:21.117764 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory
Jan 20 02:24:21.200947 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 20 02:24:22.683563 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 02:24:22.739930 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 02:24:22.790203 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 02:24:22.984134 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 02:24:23.022395 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 02:24:23.384913 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 02:24:23.685890 ignition[1002]: INFO : Ignition 2.22.0 Jan 20 02:24:23.720961 ignition[1002]: INFO : Stage: mount Jan 20 02:24:23.720961 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 02:24:23.720961 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:24:23.779928 ignition[1002]: INFO : mount: mount passed Jan 20 02:24:23.779928 ignition[1002]: INFO : Ignition finished successfully Jan 20 02:24:23.804851 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 02:24:23.917200 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 02:24:24.078967 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 02:24:24.293538 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015) Jan 20 02:24:24.326076 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 02:24:24.355919 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 02:24:24.434608 kernel: BTRFS info (device vda6): turning on async discard Jan 20 02:24:24.434767 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 02:24:24.465401 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 02:24:25.047118 ignition[1031]: INFO : Ignition 2.22.0 Jan 20 02:24:25.084773 ignition[1031]: INFO : Stage: files Jan 20 02:24:25.084773 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 02:24:25.084773 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:24:25.132021 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping Jan 20 02:24:25.167208 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 02:24:25.167208 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 02:24:25.230209 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 02:24:25.278279 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 02:24:25.319712 unknown[1031]: wrote ssh authorized keys file for user: core Jan 20 02:24:25.365533 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 02:24:25.409142 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 02:24:25.470516 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 20 02:24:25.880900 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 02:24:27.934503 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 02:24:27.934503 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 20 02:24:27.934503 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 20 02:24:28.390887 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 20 02:24:32.436590 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
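The files stage above adds SSH keys for "core" and fetches the helm tarball over HTTPS. A hedged sketch of the kind of Ignition-style config that could produce ops like op(2) and op(3); field names follow the v3 spec as best recalled here and should be checked against the Ignition docs, and the SSH key is a placeholder:

import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {"users": [{
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"],
    }]},
    "storage": {"files": [{
        "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",  # target path from the log
        "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
    }]},
}
print(json.dumps(config, indent=2))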
[finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 20 02:24:32.436590 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 20 02:24:32.589402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 02:24:32.589402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 02:24:32.589402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 02:24:32.589402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 02:24:32.589402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 02:24:32.589402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 02:24:32.589402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 02:24:32.589402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 02:24:32.589402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 02:24:32.589402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 02:24:33.058604 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 02:24:33.058604 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 02:24:33.058604 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 20 02:24:33.478774 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 20 02:24:51.265685 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 02:24:51.331358 ignition[1031]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 20 02:24:51.331358 ignition[1031]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 02:24:51.469547 ignition[1031]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 02:24:51.469547 ignition[1031]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 20 02:24:51.538183 ignition[1031]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 20 02:24:51.538183 ignition[1031]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" 
Jan 20 02:24:51.538183 ignition[1031]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 02:24:51.538183 ignition[1031]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 20 02:24:51.538183 ignition[1031]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 02:24:52.143215 ignition[1031]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 02:24:52.218004 ignition[1031]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 02:24:52.218004 ignition[1031]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 20 02:24:52.336107 ignition[1031]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 20 02:24:52.336107 ignition[1031]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 02:24:52.336107 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 02:24:52.336107 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 02:24:52.336107 ignition[1031]: INFO : files: files passed Jan 20 02:24:52.336107 ignition[1031]: INFO : Ignition finished successfully Jan 20 02:24:52.322544 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 02:24:52.408286 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 02:24:52.840337 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 02:24:53.002770 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 02:24:53.026590 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 02:24:53.065147 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 02:24:53.160520 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 02:24:53.160520 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 02:24:53.144219 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 02:24:53.182125 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 02:24:53.349014 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 02:24:53.431209 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 02:24:53.918735 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 02:24:53.927088 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 02:24:53.970043 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 02:24:54.154616 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 02:24:54.172309 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 02:24:54.311963 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
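Ops op(10) through op(12) above apply systemd presets inside /sysroot: coreos-metadata.service is disabled by removing its enablement symlinks, prepare-helm.service is enabled. A rough sketch of what that amounts to on disk (paths illustrative; Ignition relies on systemd's own preset logic rather than code like this):

from pathlib import Path

def apply_preset(root: Path, unit: str, enabled: bool,
                 wanted_by: str = "multi-user.target") -> None:
    # Enablement is a symlink in <target>.wants/ pointing at the unit file.
    link = root / "etc/systemd/system" / f"{wanted_by}.wants" / unit
    if enabled:
        link.parent.mkdir(parents=True, exist_ok=True)
        if not link.is_symlink():
            link.symlink_to(Path("/etc/systemd/system") / unit)
    elif link.is_symlink():
        link.unlink()  # "removing enablement symlink(s)"

apply_preset(Path("/sysroot"), "prepare-helm.service", enabled=True)
apply_preset(Path("/sysroot"), "coreos-metadata.service", enabled=False)

The wanted_by target is an assumption; in reality each unit's [Install] section decides where the symlink lives.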
Jan 20 02:24:54.698620 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 02:24:54.750407 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 02:24:55.076271 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 02:24:55.089077 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 02:24:55.089289 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 02:24:55.089571 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 02:24:55.089781 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 02:24:55.286106 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 02:24:55.401974 systemd[1]: Stopped target basic.target - Basic System. Jan 20 02:24:55.474194 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 02:24:55.498619 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 02:24:55.498799 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 02:24:55.499022 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 02:24:55.499167 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 02:24:55.499302 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 02:24:55.499569 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 02:24:55.499720 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 02:24:55.507585 systemd[1]: Stopped target swap.target - Swaps. Jan 20 02:24:55.530368 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 02:24:55.532737 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 02:24:55.813955 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 02:24:55.814101 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 02:24:55.814199 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 02:24:55.829287 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 02:24:56.018376 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 02:24:56.018700 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 02:24:56.061009 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 02:24:56.061264 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 02:24:56.061694 systemd[1]: Stopped target paths.target - Path Units. Jan 20 02:24:56.061797 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 02:24:56.077321 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 02:24:56.204916 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 02:24:56.205142 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 02:24:56.205303 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 02:24:56.205564 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 02:24:56.205742 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 02:24:56.213011 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 20 02:24:57.371177 ignition[1087]: INFO : Ignition 2.22.0 Jan 20 02:24:57.371177 ignition[1087]: INFO : Stage: umount Jan 20 02:24:57.371177 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 02:24:57.371177 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:24:56.407378 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 02:24:57.671221 ignition[1087]: INFO : umount: umount passed Jan 20 02:24:57.671221 ignition[1087]: INFO : Ignition finished successfully Jan 20 02:24:56.407726 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 02:24:56.410283 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 02:24:56.410643 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 02:24:56.578379 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 02:24:56.642027 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 02:24:56.673163 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 02:24:56.673641 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 02:24:56.675234 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 02:24:56.675537 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 02:24:56.722034 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 02:24:56.722243 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 02:24:57.106209 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 02:24:57.282729 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 02:24:57.288151 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 02:24:57.413360 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 02:24:57.416312 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 02:24:57.467274 systemd[1]: Stopped target network.target - Network. Jan 20 02:24:57.467357 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 02:24:57.469136 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 02:24:57.469257 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 02:24:57.469333 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 02:24:57.471731 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 02:24:57.471818 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 02:24:57.597404 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 02:24:57.603257 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 02:24:57.716922 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 02:24:57.717062 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 02:24:57.717394 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 02:24:57.727199 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 02:24:57.904063 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 02:24:57.906383 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 02:24:57.946634 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 20 02:24:57.952702 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 20 02:24:57.955283 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 02:24:58.030906 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 20 02:24:58.045571 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 02:24:58.045828 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 02:24:58.123774 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 20 02:24:58.131751 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 02:24:58.160225 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 02:24:58.160396 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 02:24:58.228170 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 02:24:58.282119 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 02:24:58.282255 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 02:24:58.338677 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 02:24:58.338802 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 02:24:58.438041 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 02:24:58.438156 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 02:24:58.504559 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 02:24:58.533294 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 20 02:24:58.830741 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 02:24:58.844850 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 02:24:58.885711 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 02:24:58.885951 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 02:24:58.907773 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 02:24:58.907945 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 02:24:58.923931 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 02:24:58.925390 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 02:24:58.946590 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 02:24:58.946720 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 02:24:58.969404 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 02:24:58.972063 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 02:24:59.001000 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 02:24:59.001120 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 02:24:59.047367 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 02:24:59.059109 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 02:24:59.062085 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 02:24:59.328578 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 02:24:59.328691 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 20 02:24:59.799056 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 02:24:59.799168 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 02:25:00.039140 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 02:25:00.039289 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 02:25:00.071301 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 02:25:00.071576 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:25:00.121510 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 20 02:25:00.121667 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 20 02:25:00.121728 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 20 02:25:00.121794 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 20 02:25:00.122623 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 02:25:00.122816 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 02:25:00.268546 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 02:25:00.300228 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 02:25:00.631237 systemd[1]: Switching root. Jan 20 02:25:00.792592 systemd-journald[203]: Journal stopped Jan 20 02:25:17.374219 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Jan 20 02:25:17.374398 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 02:25:17.374535 kernel: SELinux: policy capability open_perms=1 Jan 20 02:25:17.374554 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 02:25:17.374572 kernel: SELinux: policy capability always_check_network=0 Jan 20 02:25:17.374587 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 02:25:17.374603 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 02:25:17.374618 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 02:25:17.374706 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 02:25:17.374723 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 02:25:17.374744 kernel: audit: type=1403 audit(1768875901.824:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 02:25:17.374777 systemd[1]: Successfully loaded SELinux policy in 496.930ms. Jan 20 02:25:17.374809 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 64.841ms. Jan 20 02:25:17.374829 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 02:25:17.374845 systemd[1]: Detected virtualization kvm. Jan 20 02:25:17.374865 systemd[1]: Detected architecture x86-64. Jan 20 02:25:17.374880 systemd[1]: Detected first boot. Jan 20 02:25:17.374896 systemd[1]: Initializing machine ID from VM UUID. Jan 20 02:25:17.376593 zram_generator::config[1131]: No configuration found. 
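The systemd 256.8 banner above encodes compile-time options as +FEATURE / -FEATURE tokens. Splitting them mechanically (string copied from the log line):

features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
            "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ "
            "+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT "
            "+LIBARCHIVE").split()
enabled  = sorted(f[1:] for f in features if f.startswith("+"))
disabled = sorted(f[1:] for f in features if f.startswith("-"))
print(len(enabled), len(disabled))  # 25 enabled, 12 disabled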
Jan 20 02:25:17.376629 kernel: Guest personality initialized and is inactive Jan 20 02:25:17.376652 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 02:25:17.376670 kernel: Initialized host personality Jan 20 02:25:17.376685 kernel: NET: Registered PF_VSOCK protocol family Jan 20 02:25:17.376702 systemd[1]: Populated /etc with preset unit settings. Jan 20 02:25:17.376721 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 20 02:25:17.376737 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 02:25:17.376757 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 02:25:17.376777 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 02:25:17.376797 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 02:25:17.376890 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 02:25:17.376907 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 02:25:17.376997 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 02:25:17.377014 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 02:25:17.377036 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 02:25:17.377053 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 02:25:17.377076 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 02:25:17.377092 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 02:25:17.377107 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 02:25:17.377126 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 02:25:17.377141 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 02:25:17.377160 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 02:25:17.377177 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 02:25:17.377194 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 02:25:17.377215 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 02:25:17.377233 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 02:25:17.377249 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 02:25:17.377266 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 02:25:17.377283 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 02:25:17.377299 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 02:25:17.377317 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 02:25:17.377405 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 02:25:17.377572 systemd[1]: Reached target slices.target - Slice Units. Jan 20 02:25:17.377595 systemd[1]: Reached target swap.target - Swaps. Jan 20 02:25:17.377614 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jan 20 02:25:17.377630 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 02:25:17.377648 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 02:25:17.377664 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 02:25:17.377683 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 02:25:17.377698 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 02:25:17.377717 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 02:25:17.377733 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 02:25:17.377755 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 02:25:17.377772 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 02:25:17.377788 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:25:17.377805 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 02:25:17.377826 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 02:25:17.377844 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 02:25:17.377859 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 02:25:17.377878 systemd[1]: Reached target machines.target - Containers. Jan 20 02:25:17.377894 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 02:25:17.386094 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 02:25:17.386208 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 02:25:17.386226 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 02:25:17.386245 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 02:25:17.386260 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 02:25:17.386278 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 02:25:17.386295 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 02:25:17.386313 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 02:25:17.386339 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 02:25:17.386357 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 02:25:17.386373 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 02:25:17.386393 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 02:25:17.386409 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 02:25:17.386544 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 02:25:17.386562 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 02:25:17.386580 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 20 02:25:17.386596 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 02:25:17.386619 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 02:25:17.386635 kernel: loop: module loaded Jan 20 02:25:17.386654 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 02:25:17.386670 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 02:25:17.386688 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 02:25:17.386773 systemd[1]: Stopped verity-setup.service. Jan 20 02:25:17.386791 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:25:17.386816 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 02:25:17.386833 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 02:25:17.386849 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 02:25:17.386872 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 02:25:17.386889 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 02:25:17.386908 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 02:25:17.386997 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 02:25:17.387016 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 02:25:17.387031 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 02:25:17.387104 systemd-journald[1217]: Collecting audit messages is disabled. Jan 20 02:25:17.387146 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 02:25:17.387166 kernel: fuse: init (API version 7.41) Jan 20 02:25:17.387182 systemd-journald[1217]: Journal started Jan 20 02:25:17.387213 systemd-journald[1217]: Runtime Journal (/run/log/journal/18a2a04e0d39435aa43823e74a400f22) is 6M, max 48.3M, 42.2M free. Jan 20 02:25:11.384557 systemd[1]: Queued start job for default target multi-user.target. Jan 20 02:25:11.479578 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 02:25:11.492304 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 02:25:11.493385 systemd[1]: systemd-journald.service: Consumed 3.731s CPU time. Jan 20 02:25:17.428753 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 02:25:17.473083 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 02:25:17.474289 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 02:25:17.499842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 02:25:17.500625 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 02:25:17.541060 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 02:25:17.562324 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 02:25:17.580720 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 02:25:17.619571 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 02:25:17.653844 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
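The journald startup line above sizes the runtime journal at 6M used against a 48.3M cap with 42.2M free; the figures agree up to display rounding:

# Values from the "Runtime Journal ... is 6M, max 48.3M, 42.2M free" line.
print(f"{48.3 - 6.0:.1f}M free by subtraction vs. 42.2M reported")  # 42.3M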
Jan 20 02:25:17.786725 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 02:25:17.820408 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 02:25:17.867340 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 02:25:17.867561 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 02:25:17.914572 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 02:25:17.963205 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 02:25:17.989982 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 02:25:18.009181 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 02:25:18.013523 kernel: ACPI: bus type drm_connector registered Jan 20 02:25:18.034095 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 02:25:18.065204 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 02:25:18.112299 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 02:25:18.161642 systemd-journald[1217]: Time spent on flushing to /var/log/journal/18a2a04e0d39435aa43823e74a400f22 is 151.485ms for 976 entries. Jan 20 02:25:18.161642 systemd-journald[1217]: System Journal (/var/log/journal/18a2a04e0d39435aa43823e74a400f22) is 8M, max 195.6M, 187.6M free. Jan 20 02:25:18.436124 systemd-journald[1217]: Received client request to flush runtime journal. Jan 20 02:25:18.144580 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 02:25:18.215870 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 02:25:18.252579 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 02:25:18.285320 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 02:25:18.286297 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 02:25:18.326627 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 02:25:18.391795 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 02:25:18.425603 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 02:25:18.515122 kernel: loop0: detected capacity change from 0 to 224512 Jan 20 02:25:18.446833 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 02:25:18.470782 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 02:25:18.519692 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 02:25:18.558071 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 02:25:18.600253 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 02:25:18.663664 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 02:25:18.705283 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 02:25:18.763822 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
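The flush report above is worth a per-entry reading: 151.485 ms for 976 entries works out to roughly 0.16 ms per entry written to persistent storage:

print(f"{151.485 / 976:.3f} ms per entry flushed")  # 0.155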
Jan 20 02:25:18.818794 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 02:25:19.236071 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 02:25:19.275813 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jan 20 02:25:19.276849 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jan 20 02:25:19.324792 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 02:25:19.381539 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 02:25:19.424794 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 02:25:19.532677 kernel: loop1: detected capacity change from 0 to 110984 Jan 20 02:25:19.563281 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 02:25:19.582904 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 02:25:19.747815 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 02:25:19.767064 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 02:25:19.845917 kernel: loop2: detected capacity change from 0 to 128560 Jan 20 02:25:20.028622 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jan 20 02:25:20.028706 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jan 20 02:25:20.070087 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 02:25:20.138777 kernel: loop3: detected capacity change from 0 to 224512 Jan 20 02:25:20.299156 kernel: loop4: detected capacity change from 0 to 110984 Jan 20 02:25:20.495845 kernel: loop5: detected capacity change from 0 to 128560 Jan 20 02:25:20.613775 (sd-merge)[1277]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 20 02:25:20.615036 (sd-merge)[1277]: Merged extensions into '/usr'. Jan 20 02:25:20.645902 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 02:25:20.646278 systemd[1]: Reloading... Jan 20 02:25:20.973538 zram_generator::config[1303]: No configuration found. Jan 20 02:25:21.602568 ldconfig[1243]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 02:25:21.974280 systemd[1]: Reloading finished in 1325 ms. Jan 20 02:25:22.082658 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 02:25:22.111058 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 02:25:22.138147 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 02:25:22.256559 systemd[1]: Starting ensure-sysext.service... Jan 20 02:25:22.289890 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 02:25:22.390325 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 02:25:22.530033 systemd[1]: Reload requested from client PID 1342 ('systemctl') (unit ensure-sysext.service)... Jan 20 02:25:22.530098 systemd[1]: Reloading... Jan 20 02:25:22.650804 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 02:25:22.650858 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
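The "(sd-merge)" lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr. A hedged sketch of the on-disk layout an extension directory needs before such a merge, as understood from the sysext contract (names illustrative; writing under /var/lib/extensions requires root):

from pathlib import Path

ext = Path("/var/lib/extensions/example")             # hypothetical extension
(ext / "usr/bin").mkdir(parents=True, exist_ok=True)  # payload lives under usr/
release = ext / "usr/lib/extension-release.d/extension-release.example"
release.parent.mkdir(parents=True, exist_ok=True)
# ID must match the host's os-release (or be "_any") for the merge to apply.
release.write_text("ID=flatcar\nSYSEXT_LEVEL=1.0\n")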
Jan 20 02:25:22.651790 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 02:25:22.652262 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 02:25:22.667862 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 02:25:22.668576 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Jan 20 02:25:22.668696 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Jan 20 02:25:22.730563 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 02:25:22.730582 systemd-tmpfiles[1343]: Skipping /boot Jan 20 02:25:22.756138 systemd-udevd[1345]: Using default interface naming scheme 'v255'. Jan 20 02:25:22.868761 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 02:25:22.876212 systemd-tmpfiles[1343]: Skipping /boot Jan 20 02:25:23.823235 zram_generator::config[1375]: No configuration found. Jan 20 02:25:25.516888 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 02:25:25.651870 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 20 02:25:25.780714 kernel: ACPI: button: Power Button [PWRF] Jan 20 02:25:26.001743 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 02:25:26.140248 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 02:25:28.039963 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 02:25:28.129676 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 02:25:28.178547 systemd[1]: Reloading finished in 5647 ms. Jan 20 02:25:28.342259 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 02:25:28.365370 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 02:25:28.565910 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 02:25:28.597739 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 02:25:28.701055 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 02:25:28.769075 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 02:25:28.804149 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 02:25:28.919933 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 02:25:29.075320 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 02:25:29.421292 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:25:29.421866 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 02:25:29.773717 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 02:25:29.823790 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 02:25:29.986777 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 02:25:30.004834 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 20 02:25:30.010683 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 02:25:30.010820 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:25:30.054620 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 02:25:30.125883 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 02:25:30.126378 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 02:25:30.464823 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 02:25:30.475393 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 02:25:30.808305 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 02:25:30.835603 systemd[1]: Finished ensure-sysext.service. Jan 20 02:25:30.883792 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 02:25:30.885349 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 02:25:31.117376 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 02:25:31.421403 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:25:31.429753 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 02:25:31.465737 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 02:25:31.575614 augenrules[1498]: No rules Jan 20 02:25:31.595352 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 02:25:31.667697 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 02:25:31.716349 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 02:25:31.717111 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 02:25:31.899841 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 02:25:31.931552 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 02:25:32.032771 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 02:25:32.058953 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:25:32.101723 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 02:25:32.102534 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 02:25:32.175691 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 02:25:32.211320 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 02:25:32.212341 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 02:25:32.322898 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 20 02:25:32.324779 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 02:25:32.386371 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 02:25:32.386866 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 02:25:32.427990 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 02:25:32.913986 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 02:25:32.915341 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 02:25:32.987751 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 02:25:33.088940 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 02:25:34.028612 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 02:25:36.009709 systemd-networkd[1465]: lo: Link UP Jan 20 02:25:36.013178 systemd-networkd[1465]: lo: Gained carrier Jan 20 02:25:36.046620 systemd-networkd[1465]: Enumeration completed Jan 20 02:25:36.066374 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 02:25:36.088772 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 02:25:36.088789 systemd-networkd[1465]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 02:25:36.125006 systemd-networkd[1465]: eth0: Link UP Jan 20 02:25:36.130262 systemd-networkd[1465]: eth0: Gained carrier Jan 20 02:25:36.130307 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 02:25:36.146039 systemd-resolved[1466]: Positive Trust Anchors: Jan 20 02:25:36.146142 systemd-resolved[1466]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 02:25:36.146188 systemd-resolved[1466]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 02:25:36.187862 systemd-resolved[1466]: Defaulting to hostname 'linux'. Jan 20 02:25:36.310984 systemd-networkd[1465]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 02:25:36.319725 systemd-timesyncd[1505]: Network configuration changed, trying to establish connection. Jan 20 02:25:36.844702 systemd-resolved[1466]: Clock change detected. Flushing caches. Jan 20 02:25:36.844882 systemd-timesyncd[1505]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 02:25:36.844959 systemd-timesyncd[1505]: Initial clock synchronization to Tue 2026-01-20 02:25:36.844592 UTC. Jan 20 02:25:36.950895 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
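The DHCP line above grants eth0 the address 10.0.0.100/16 with gateway 10.0.0.1, and timesyncd then reaches an NTP server at that same gateway address. A quick sanity check that the gateway sits inside the granted subnet:

import ipaddress

iface = ipaddress.ip_interface("10.0.0.100/16")   # from the DHCPv4 log line
gateway = ipaddress.ip_address("10.0.0.1")
print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True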
Jan 20 02:25:37.011182 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 02:25:37.055916 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:25:37.103260 systemd[1]: Reached target network.target - Network. Jan 20 02:25:37.136922 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 02:25:37.188791 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 02:25:37.216897 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 02:25:37.258986 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 02:25:37.300644 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 02:25:37.368789 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 02:25:37.404140 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 02:25:37.436212 systemd[1]: Reached target paths.target - Path Units. Jan 20 02:25:37.466567 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 02:25:37.488581 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 02:25:37.515653 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 02:25:37.545691 systemd[1]: Reached target timers.target - Timer Units. Jan 20 02:25:37.597987 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 02:25:37.695645 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 02:25:37.761129 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 02:25:37.825991 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 02:25:37.860263 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 02:25:37.902307 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 02:25:37.922220 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 02:25:38.032531 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 02:25:38.089551 systemd-networkd[1465]: eth0: Gained IPv6LL Jan 20 02:25:38.139142 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 02:25:38.204770 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 02:25:38.246959 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 02:25:38.285288 systemd[1]: Reached target basic.target - Basic System. Jan 20 02:25:38.327721 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 02:25:38.337551 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 02:25:38.399654 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 02:25:38.476618 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 02:25:38.596558 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 02:25:38.762305 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jan 20 02:25:38.802921 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 02:25:38.841721 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 02:25:38.863877 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 02:25:38.907587 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 02:25:39.041778 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 02:25:39.123738 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 02:25:39.275665 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 02:25:39.390959 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Refreshing passwd entry cache Jan 20 02:25:39.407307 oslogin_cache_refresh[1539]: Refreshing passwd entry cache Jan 20 02:25:39.487145 jq[1537]: false Jan 20 02:25:39.516121 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 02:25:39.584873 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 02:25:39.595346 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 02:25:39.643899 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 02:25:39.694931 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Failure getting users, quitting Jan 20 02:25:39.694868 oslogin_cache_refresh[1539]: Failure getting users, quitting Jan 20 02:25:39.701352 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 02:25:39.701352 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Refreshing group entry cache Jan 20 02:25:39.695125 oslogin_cache_refresh[1539]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 02:25:39.697661 oslogin_cache_refresh[1539]: Refreshing group entry cache Jan 20 02:25:39.744333 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Failure getting groups, quitting Jan 20 02:25:39.744333 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 02:25:39.735658 oslogin_cache_refresh[1539]: Failure getting groups, quitting Jan 20 02:25:39.735684 oslogin_cache_refresh[1539]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 02:25:39.795228 extend-filesystems[1538]: Found /dev/vda6 Jan 20 02:25:39.840733 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 02:25:39.896296 extend-filesystems[1538]: Found /dev/vda9 Jan 20 02:25:39.895976 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 02:25:40.035154 extend-filesystems[1538]: Checking size of /dev/vda9 Jan 20 02:25:39.903550 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 02:25:40.193017 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 02:25:40.265742 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 20 02:25:40.288022 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 02:25:40.292691 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 02:25:40.294943 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 02:25:40.431481 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 02:25:40.436494 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 02:25:40.614794 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 02:25:40.617324 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 02:25:41.290733 update_engine[1550]: I20260120 02:25:41.284356 1550 main.cc:92] Flatcar Update Engine starting Jan 20 02:25:41.314164 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 02:25:41.327282 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 02:25:41.471049 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 02:25:41.538041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:25:41.557942 extend-filesystems[1538]: Resized partition /dev/vda9 Jan 20 02:25:41.633038 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 02:25:41.633144 jq[1555]: true Jan 20 02:25:41.625018 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 02:25:41.636887 extend-filesystems[1579]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 02:25:41.632466 dbus-daemon[1535]: [system] SELinux support is enabled Jan 20 02:25:41.663012 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 02:25:41.705923 update_engine[1550]: I20260120 02:25:41.705336 1550 update_check_scheduler.cc:74] Next update check in 5m46s Jan 20 02:25:41.958902 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 02:25:41.958950 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 02:25:41.986603 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 02:25:41.986644 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 02:25:41.991809 tar[1561]: linux-amd64/LICENSE Jan 20 02:25:41.991809 tar[1561]: linux-amd64/helm Jan 20 02:25:42.003948 (ntainerd)[1582]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 02:25:42.010479 systemd[1]: Started update-engine.service - Update Engine. Jan 20 02:25:42.843042 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 02:25:43.180053 jq[1581]: true Jan 20 02:25:43.206274 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 02:25:43.396311 extend-filesystems[1579]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 02:25:43.396311 extend-filesystems[1579]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 02:25:43.396311 extend-filesystems[1579]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jan 20 02:25:43.546959 extend-filesystems[1538]: Resized filesystem in /dev/vda9 Jan 20 02:25:43.464931 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 02:25:43.482665 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 02:25:46.800238 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2846087533 wd_nsec: 2846086227 Jan 20 02:25:46.907794 systemd-logind[1548]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 02:25:46.907845 systemd-logind[1548]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 02:25:47.032024 systemd-logind[1548]: New seat seat0. Jan 20 02:25:47.130957 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 02:25:47.163946 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 02:25:47.526547 sshd_keygen[1565]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 02:25:47.677926 bash[1618]: Updated "/home/core/.ssh/authorized_keys" Jan 20 02:25:48.399768 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 02:25:48.602523 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 02:25:48.684524 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 02:25:48.684929 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 02:25:49.086901 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 02:25:49.478935 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 02:25:49.613831 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 02:25:49.678776 locksmithd[1589]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 02:25:49.680754 systemd[1]: Started sshd@0-10.0.0.100:22-10.0.0.1:47910.service - OpenSSH per-connection server daemon (10.0.0.1:47910). Jan 20 02:25:51.185718 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 02:25:51.190841 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 02:25:51.385906 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 02:25:52.509872 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 02:25:52.615951 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 02:25:52.670954 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 02:25:52.707281 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 02:25:53.506682 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 47910 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:25:53.480933 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:25:55.006721 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 02:25:55.073990 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 02:25:55.581756 systemd-logind[1548]: New session 1 of user core. Jan 20 02:25:56.680974 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 02:25:56.945723 systemd[1]: Starting user@500.service - User Manager for UID 500... 
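The extend-filesystems service above is what grew the root filesystem: resize2fs 1.47.3 took the mounted ext4 filesystem on /dev/vda9 from 553472 to 1864699 4k blocks online. A minimal sketch of the equivalent manual steps, using the device name from this log (ext4 supports online growth, so the filesystem stays mounted):

```sh
# Grow the mounted ext4 filesystem on /dev/vda9 to fill its partition,
# as extend-filesystems.service did above (online resize, no unmount).
resize2fs /dev/vda9

# Verify the new size in 4k blocks (the log reports 1864699).
dumpe2fs -h /dev/vda9 | grep 'Block count'
```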
Jan 20 02:25:57.338671 (systemd)[1652]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 02:25:57.454002 systemd-logind[1548]: New session c1 of user core. Jan 20 02:26:00.341348 containerd[1582]: time="2026-01-20T02:26:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 02:26:00.378535 containerd[1582]: time="2026-01-20T02:26:00.375568294Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 20 02:26:01.093502 containerd[1582]: time="2026-01-20T02:26:01.055999873Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="62.337µs" Jan 20 02:26:01.093502 containerd[1582]: time="2026-01-20T02:26:01.056242555Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 02:26:01.093502 containerd[1582]: time="2026-01-20T02:26:01.059126118Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 02:26:01.093502 containerd[1582]: time="2026-01-20T02:26:01.060044993Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 02:26:01.093502 containerd[1582]: time="2026-01-20T02:26:01.060075710Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 02:26:01.093502 containerd[1582]: time="2026-01-20T02:26:01.060267719Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 02:26:01.093502 containerd[1582]: time="2026-01-20T02:26:01.074968128Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 02:26:01.093502 containerd[1582]: time="2026-01-20T02:26:01.075009635Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 02:26:01.093502 containerd[1582]: time="2026-01-20T02:26:01.081324642Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 02:26:01.093502 containerd[1582]: time="2026-01-20T02:26:01.082090572Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 02:26:01.093502 containerd[1582]: time="2026-01-20T02:26:01.082127491Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 02:26:01.093502 containerd[1582]: time="2026-01-20T02:26:01.082141958Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 02:26:01.063011 systemd[1652]: Queued start job for default target default.target. 
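containerd 2.0.7 logs above that it migrated a version-2 config found at /usr/share/containerd/config.toml, and the warning suggests running `containerd config migrate` to avoid repeating that migration on every start. A sketch using the command the warning names; the output path below is the upstream default and an assumption here, since this image reads its config from the read-only /usr partition and persisting a change may additionally require a systemd drop-in pointing containerd at /etc:

```sh
# Persist the v2 -> v3 config migration that containerd otherwise redoes
# in memory on each start. The /etc path is assumed, not taken from this log.
containerd config migrate > /etc/containerd/config.toml
systemctl restart containerd
```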
Jan 20 02:26:01.098052 containerd[1582]: time="2026-01-20T02:26:01.082651639Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 02:26:01.098052 containerd[1582]: time="2026-01-20T02:26:01.096561423Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 02:26:01.098052 containerd[1582]: time="2026-01-20T02:26:01.096644527Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 02:26:01.098052 containerd[1582]: time="2026-01-20T02:26:01.096661789Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 02:26:01.066033 systemd[1652]: Created slice app.slice - User Application Slice. Jan 20 02:26:01.066064 systemd[1652]: Reached target paths.target - Paths. Jan 20 02:26:01.141690 containerd[1582]: time="2026-01-20T02:26:01.100069831Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 02:26:01.141690 containerd[1582]: time="2026-01-20T02:26:01.101098641Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 02:26:01.141690 containerd[1582]: time="2026-01-20T02:26:01.111642982Z" level=info msg="metadata content store policy set" policy=shared Jan 20 02:26:01.141690 containerd[1582]: time="2026-01-20T02:26:01.127286702Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 02:26:01.141690 containerd[1582]: time="2026-01-20T02:26:01.127611238Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 02:26:01.141690 containerd[1582]: time="2026-01-20T02:26:01.127711394Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 02:26:01.141690 containerd[1582]: time="2026-01-20T02:26:01.127734418Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 02:26:01.141690 containerd[1582]: time="2026-01-20T02:26:01.127754205Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 02:26:01.141690 containerd[1582]: time="2026-01-20T02:26:01.127768471Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 02:26:01.141690 containerd[1582]: time="2026-01-20T02:26:01.127785803Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 02:26:01.141690 containerd[1582]: time="2026-01-20T02:26:01.127865773Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 02:26:01.141690 containerd[1582]: time="2026-01-20T02:26:01.127884858Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 02:26:01.141690 containerd[1582]: time="2026-01-20T02:26:01.127904915Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 02:26:01.141690 containerd[1582]: time="2026-01-20T02:26:01.127918581Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 02:26:01.066660 systemd[1652]: Reached target 
timers.target - Timers. Jan 20 02:26:01.142154 containerd[1582]: time="2026-01-20T02:26:01.127935874Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 02:26:01.142154 containerd[1582]: time="2026-01-20T02:26:01.128288642Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 02:26:01.142154 containerd[1582]: time="2026-01-20T02:26:01.133011898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 02:26:01.142154 containerd[1582]: time="2026-01-20T02:26:01.133071700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 02:26:01.142154 containerd[1582]: time="2026-01-20T02:26:01.133091637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 02:26:01.142154 containerd[1582]: time="2026-01-20T02:26:01.133108318Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 02:26:01.142154 containerd[1582]: time="2026-01-20T02:26:01.133125420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 02:26:01.142154 containerd[1582]: time="2026-01-20T02:26:01.133142682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 02:26:01.142154 containerd[1582]: time="2026-01-20T02:26:01.133256836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 02:26:01.142154 containerd[1582]: time="2026-01-20T02:26:01.133286390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 02:26:01.142154 containerd[1582]: time="2026-01-20T02:26:01.133307260Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 02:26:01.142154 containerd[1582]: time="2026-01-20T02:26:01.133329692Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 02:26:01.142154 containerd[1582]: time="2026-01-20T02:26:01.133683862Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 02:26:01.142154 containerd[1582]: time="2026-01-20T02:26:01.133709430Z" level=info msg="Start snapshots syncer" Jan 20 02:26:01.142154 containerd[1582]: time="2026-01-20T02:26:01.133833261Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 02:26:01.073901 systemd[1652]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 20 02:26:01.149840 containerd[1582]: time="2026-01-20T02:26:01.137823509Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 02:26:01.149840 containerd[1582]: time="2026-01-20T02:26:01.137919608Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 02:26:01.106633 systemd[1652]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Jan 20 02:26:01.395985 containerd[1582]: time="2026-01-20T02:26:01.141859453Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 02:26:01.395985 containerd[1582]: time="2026-01-20T02:26:01.142111783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 02:26:01.395985 containerd[1582]: time="2026-01-20T02:26:01.142145326Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 02:26:01.395985 containerd[1582]: time="2026-01-20T02:26:01.142239592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 02:26:01.395985 containerd[1582]: time="2026-01-20T02:26:01.142260541Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 02:26:01.395985 containerd[1582]: time="2026-01-20T02:26:01.142332635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 02:26:01.395985 containerd[1582]: time="2026-01-20T02:26:01.142349657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 02:26:01.395985 containerd[1582]: time="2026-01-20T02:26:01.142466355Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 02:26:01.395985 containerd[1582]: time="2026-01-20T02:26:01.142559048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 02:26:01.395985 containerd[1582]: time="2026-01-20T02:26:01.142579807Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 02:26:01.395985 containerd[1582]: time="2026-01-20T02:26:01.142594465Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 02:26:01.395985 containerd[1582]: time="2026-01-20T02:26:01.144526490Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 02:26:01.395985 containerd[1582]: time="2026-01-20T02:26:01.144567056Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 02:26:01.395985 containerd[1582]: time="2026-01-20T02:26:01.144583998Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 02:26:01.106819 systemd[1652]: Reached target sockets.target - Sockets. 
Jan 20 02:26:01.427282 containerd[1582]: time="2026-01-20T02:26:01.144597463Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 02:26:01.427282 containerd[1582]: time="2026-01-20T02:26:01.144607972Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 02:26:01.427282 containerd[1582]: time="2026-01-20T02:26:01.144624243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 02:26:01.427282 containerd[1582]: time="2026-01-20T02:26:01.144650783Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 02:26:01.427282 containerd[1582]: time="2026-01-20T02:26:01.144677452Z" level=info msg="runtime interface created" Jan 20 02:26:01.427282 containerd[1582]: time="2026-01-20T02:26:01.144685156Z" level=info msg="created NRI interface" Jan 20 02:26:01.427282 containerd[1582]: time="2026-01-20T02:26:01.144698151Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 02:26:01.427282 containerd[1582]: time="2026-01-20T02:26:01.144717517Z" level=info msg="Connect containerd service" Jan 20 02:26:01.427282 containerd[1582]: time="2026-01-20T02:26:01.144759426Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 02:26:01.107047 systemd[1652]: Reached target basic.target - Basic System. Jan 20 02:26:01.107265 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 02:26:01.110153 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 02:26:01.111107 systemd[1652]: Reached target default.target - Main User Target. Jan 20 02:26:01.111248 systemd[1652]: Startup finished in 2.874s. Jan 20 02:26:01.577343 containerd[1582]: time="2026-01-20T02:26:01.535488603Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 02:26:01.565836 systemd[1]: Started sshd@1-10.0.0.100:22-10.0.0.1:36416.service - OpenSSH per-connection server daemon (10.0.0.1:36416). Jan 20 02:26:03.043560 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 36416 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:26:03.048087 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:03.289997 systemd-logind[1548]: New session 2 of user core. Jan 20 02:26:03.444683 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 02:26:03.704857 kernel: kvm_amd: TSC scaling supported Jan 20 02:26:03.705077 kernel: kvm_amd: Nested Virtualization enabled Jan 20 02:26:03.709598 kernel: kvm_amd: Nested Paging enabled Jan 20 02:26:03.745517 sshd[1684]: Connection closed by 10.0.0.1 port 36416 Jan 20 02:26:03.753125 sshd-session[1670]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:03.769626 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 02:26:03.769694 kernel: kvm_amd: PMU virtualization is disabled Jan 20 02:26:04.147996 systemd[1]: sshd@1-10.0.0.100:22-10.0.0.1:36416.service: Deactivated successfully. Jan 20 02:26:04.290987 systemd[1]: session-2.scope: Deactivated successfully. 
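The containerd error above ("no network config found in /etc/cni/net.d") is expected at this stage: no CNI plugin has installed a network config yet, and the CRI plugin retries once one appears. For illustration only, a minimal bridge conflist of the shape containerd looks for; the name, bridge device, and subnet are placeholders rather than values from this log, and on a real cluster the CNI addon (flannel, calico, ...) writes its own file here:

```sh
# Hypothetical example config; a CNI addon would normally create this.
# All values below are placeholders.
mkdir -p /etc/cni/net.d
cat > /etc/cni/net.d/10-containerd-net.conflist <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
```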
Jan 20 02:26:04.512031 systemd[1]: Started sshd@2-10.0.0.100:22-10.0.0.1:36490.service - OpenSSH per-connection server daemon (10.0.0.1:36490). Jan 20 02:26:04.537931 systemd-logind[1548]: Session 2 logged out. Waiting for processes to exit. Jan 20 02:26:04.547279 systemd-logind[1548]: Removed session 2. Jan 20 02:26:05.443540 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 36490 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:26:05.446569 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:05.455574 tar[1561]: linux-amd64/README.md Jan 20 02:26:05.514867 containerd[1582]: time="2026-01-20T02:26:05.514809226Z" level=info msg="Start subscribing containerd event" Jan 20 02:26:05.516002 containerd[1582]: time="2026-01-20T02:26:05.515770841Z" level=info msg="Start recovering state" Jan 20 02:26:05.517572 systemd-logind[1548]: New session 3 of user core. Jan 20 02:26:05.518732 containerd[1582]: time="2026-01-20T02:26:05.518603889Z" level=info msg="Start event monitor" Jan 20 02:26:05.523159 containerd[1582]: time="2026-01-20T02:26:05.522969502Z" level=info msg="Start cni network conf syncer for default" Jan 20 02:26:05.523159 containerd[1582]: time="2026-01-20T02:26:05.523054280Z" level=info msg="Start streaming server" Jan 20 02:26:05.523159 containerd[1582]: time="2026-01-20T02:26:05.523078546Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 02:26:05.523159 containerd[1582]: time="2026-01-20T02:26:05.523088895Z" level=info msg="runtime interface starting up..." Jan 20 02:26:05.523159 containerd[1582]: time="2026-01-20T02:26:05.523149408Z" level=info msg="starting plugins..." Jan 20 02:26:05.523553 containerd[1582]: time="2026-01-20T02:26:05.523179384Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 02:26:05.525521 containerd[1582]: time="2026-01-20T02:26:05.524072661Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 02:26:05.525521 containerd[1582]: time="2026-01-20T02:26:05.524157119Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 02:26:05.525521 containerd[1582]: time="2026-01-20T02:26:05.524334971Z" level=info msg="containerd successfully booted in 5.195482s" Jan 20 02:26:05.546812 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 02:26:05.555731 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 02:26:05.592341 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 02:26:05.819697 sshd[1701]: Connection closed by 10.0.0.1 port 36490 Jan 20 02:26:05.827020 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:05.888741 systemd[1]: sshd@2-10.0.0.100:22-10.0.0.1:36490.service: Deactivated successfully. Jan 20 02:26:05.946783 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 02:26:05.987702 systemd-logind[1548]: Session 3 logged out. Waiting for processes to exit. Jan 20 02:26:06.003121 systemd-logind[1548]: Removed session 3. Jan 20 02:26:10.514214 kernel: EDAC MC: Ver: 3.0.0 Jan 20 02:26:14.161053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:26:14.175022 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 02:26:14.186338 systemd[1]: Startup finished in 36.390s (kernel) + 1min 24.403s (initrd) + 1min 12.343s (userspace) = 3min 13.137s. 
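The boot-time summary above (36.390s kernel + 1min 24.403s initrd + 1min 12.343s userspace) can be reproduced and broken down per unit after boot with systemd's own tooling, which is the usual next step when a boot this slow needs explaining:

```sh
systemd-analyze                 # same kernel/initrd/userspace split as the log line
systemd-analyze blame           # units sorted by time spent initializing
systemd-analyze critical-chain  # longest dependency chain to the default target
```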
Jan 20 02:26:14.263413 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:26:15.925600 systemd[1]: Started sshd@3-10.0.0.100:22-10.0.0.1:53752.service - OpenSSH per-connection server daemon (10.0.0.1:53752). Jan 20 02:26:16.501532 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 53752 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:26:16.506616 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:16.741285 systemd-logind[1548]: New session 4 of user core. Jan 20 02:26:16.826598 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 02:26:17.162539 sshd[1724]: Connection closed by 10.0.0.1 port 53752 Jan 20 02:26:17.164248 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:17.431575 systemd[1]: sshd@3-10.0.0.100:22-10.0.0.1:53752.service: Deactivated successfully. Jan 20 02:26:17.460076 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 02:26:17.767607 systemd-logind[1548]: Session 4 logged out. Waiting for processes to exit. Jan 20 02:26:17.804132 systemd[1]: Started sshd@4-10.0.0.100:22-10.0.0.1:53760.service - OpenSSH per-connection server daemon (10.0.0.1:53760). Jan 20 02:26:17.816324 systemd-logind[1548]: Removed session 4. Jan 20 02:26:18.719029 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 53760 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:26:18.729471 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:18.802028 systemd-logind[1548]: New session 5 of user core. Jan 20 02:26:18.826974 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 02:26:19.011692 sshd[1733]: Connection closed by 10.0.0.1 port 53760 Jan 20 02:26:19.016327 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:19.077286 systemd[1]: sshd@4-10.0.0.100:22-10.0.0.1:53760.service: Deactivated successfully. Jan 20 02:26:19.087970 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 02:26:19.097778 systemd-logind[1548]: Session 5 logged out. Waiting for processes to exit. Jan 20 02:26:19.119126 systemd[1]: Started sshd@5-10.0.0.100:22-10.0.0.1:53788.service - OpenSSH per-connection server daemon (10.0.0.1:53788). Jan 20 02:26:19.130446 systemd-logind[1548]: Removed session 5. Jan 20 02:26:19.355355 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 53788 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:26:19.371617 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:19.420770 systemd-logind[1548]: New session 6 of user core. Jan 20 02:26:19.439799 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 02:26:19.740192 sshd[1742]: Connection closed by 10.0.0.1 port 53788 Jan 20 02:26:19.727222 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:19.833898 systemd[1]: sshd@5-10.0.0.100:22-10.0.0.1:53788.service: Deactivated successfully. Jan 20 02:26:19.866999 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 02:26:19.888007 systemd-logind[1548]: Session 6 logged out. Waiting for processes to exit. Jan 20 02:26:19.911935 systemd[1]: Started sshd@6-10.0.0.100:22-10.0.0.1:53794.service - OpenSSH per-connection server daemon (10.0.0.1:53794). 
Jan 20 02:26:19.914904 systemd-logind[1548]: Removed session 6. Jan 20 02:26:20.241455 kubelet[1712]: E0120 02:26:20.240854 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:26:20.261902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:26:20.262494 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:26:20.263099 systemd[1]: kubelet.service: Consumed 7.521s CPU time, 268.5M memory peak. Jan 20 02:26:20.400814 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 53794 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:26:20.422440 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:20.495421 systemd-logind[1548]: New session 7 of user core. Jan 20 02:26:20.538896 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 02:26:20.792083 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 02:26:20.801815 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 02:26:20.880712 sudo[1754]: pam_unix(sudo:session): session closed for user root Jan 20 02:26:20.891510 sshd[1753]: Connection closed by 10.0.0.1 port 53794 Jan 20 02:26:20.893530 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:20.934727 systemd[1]: sshd@6-10.0.0.100:22-10.0.0.1:53794.service: Deactivated successfully. Jan 20 02:26:20.942148 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 02:26:20.955796 systemd-logind[1548]: Session 7 logged out. Waiting for processes to exit. Jan 20 02:26:20.969628 systemd[1]: Started sshd@7-10.0.0.100:22-10.0.0.1:53802.service - OpenSSH per-connection server daemon (10.0.0.1:53802). Jan 20 02:26:20.977684 systemd-logind[1548]: Removed session 7. Jan 20 02:26:21.200967 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 53802 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:26:21.206679 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:21.245014 systemd-logind[1548]: New session 8 of user core. Jan 20 02:26:21.272032 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 02:26:21.451136 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 02:26:21.458541 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 02:26:21.752279 sudo[1765]: pam_unix(sudo:session): session closed for user root Jan 20 02:26:21.767060 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 02:26:21.767752 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 02:26:21.836037 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 02:26:22.062154 augenrules[1787]: No rules Jan 20 02:26:22.073537 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 02:26:22.074015 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
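The audit sequence above shows the two default rule files being removed via sudo and audit-rules.service then reloading an empty rule set (augenrules reports "No rules"). A sketch of the equivalent manual steps behind that service restart:

```sh
# Recompile /etc/audit/rules.d/*.rules into the kernel rule set, as the
# audit-rules.service restart above did; with the two files removed this
# loads nothing.
augenrules --load
auditctl -l    # list loaded rules; prints "No rules" here
```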
Jan 20 02:26:22.089507 sudo[1764]: pam_unix(sudo:session): session closed for user root Jan 20 02:26:22.098261 sshd[1763]: Connection closed by 10.0.0.1 port 53802 Jan 20 02:26:22.095174 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:22.128790 systemd[1]: sshd@7-10.0.0.100:22-10.0.0.1:53802.service: Deactivated successfully. Jan 20 02:26:22.146182 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 02:26:22.179509 systemd-logind[1548]: Session 8 logged out. Waiting for processes to exit. Jan 20 02:26:22.209961 systemd[1]: Started sshd@8-10.0.0.100:22-10.0.0.1:53830.service - OpenSSH per-connection server daemon (10.0.0.1:53830). Jan 20 02:26:22.216914 systemd-logind[1548]: Removed session 8. Jan 20 02:26:22.817712 sshd[1796]: Accepted publickey for core from 10.0.0.1 port 53830 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:26:22.895720 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:22.962923 systemd-logind[1548]: New session 9 of user core. Jan 20 02:26:22.984830 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 02:26:23.169059 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 02:26:23.169826 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 02:26:27.102238 update_engine[1550]: I20260120 02:26:27.081826 1550 update_attempter.cc:509] Updating boot flags... Jan 20 02:26:28.326838 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 02:26:28.464118 (dockerd)[1834]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 02:26:30.572271 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 02:26:30.595870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:26:35.551850 dockerd[1834]: time="2026-01-20T02:26:35.545565107Z" level=info msg="Starting up" Jan 20 02:26:35.579972 dockerd[1834]: time="2026-01-20T02:26:35.579696995Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 02:26:36.037632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:26:36.131632 (kubelet)[1859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:26:36.522560 dockerd[1834]: time="2026-01-20T02:26:36.509655138Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 02:26:37.328500 kubelet[1859]: E0120 02:26:37.326462 1859 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:26:37.369769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:26:37.370769 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:26:37.371832 systemd[1]: kubelet.service: Consumed 1.480s CPU time, 109.9M memory peak. Jan 20 02:26:37.399469 dockerd[1834]: time="2026-01-20T02:26:37.396027481Z" level=info msg="Loading containers: start." 
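kubelet.service keeps crash-looping through the rest of this log (restart counters 1 through 12) because /var/lib/kubelet/config.yaml does not exist yet; kubeadm normally writes that file during `kubeadm init` or `kubeadm join`, so the loop is expected on a node that has not joined a cluster. For illustration only, a hand-written minimal KubeletConfiguration of the kind kubeadm would generate; every value below is a placeholder, not this node's real config:

```sh
# Hypothetical sketch: kubeadm generates this file on init/join.
mkdir -p /var/lib/kubelet
cat > /var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # matches SystemdCgroup=true in the containerd CRI config above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failSwapOn: false
EOF
```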
Jan 20 02:26:37.519775 kernel: Initializing XFRM netlink socket Jan 20 02:26:45.362214 systemd-networkd[1465]: docker0: Link UP Jan 20 02:26:45.509663 dockerd[1834]: time="2026-01-20T02:26:45.507611161Z" level=info msg="Loading containers: done." Jan 20 02:26:45.817600 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3595909809-merged.mount: Deactivated successfully. Jan 20 02:26:45.849565 dockerd[1834]: time="2026-01-20T02:26:45.840748256Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 02:26:45.854869 dockerd[1834]: time="2026-01-20T02:26:45.851679685Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 02:26:45.854869 dockerd[1834]: time="2026-01-20T02:26:45.854467265Z" level=info msg="Initializing buildkit" Jan 20 02:26:46.197966 dockerd[1834]: time="2026-01-20T02:26:46.196583353Z" level=info msg="Completed buildkit initialization" Jan 20 02:26:46.285873 dockerd[1834]: time="2026-01-20T02:26:46.285468423Z" level=info msg="Daemon has completed initialization" Jan 20 02:26:46.290018 dockerd[1834]: time="2026-01-20T02:26:46.287557432Z" level=info msg="API listen on /run/docker.sock" Jan 20 02:26:46.315332 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 02:26:47.521892 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 02:26:47.541903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:26:51.814320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:26:51.879688 (kubelet)[2077]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:26:53.020971 kubelet[2077]: E0120 02:26:53.020325 2077 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:26:53.031065 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:26:53.031503 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:26:53.039234 systemd[1]: kubelet.service: Consumed 1.365s CPU time, 110.1M memory peak. Jan 20 02:26:56.773676 containerd[1582]: time="2026-01-20T02:26:56.765647970Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 02:27:00.198539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1419154770.mount: Deactivated successfully. Jan 20 02:27:03.279782 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 02:27:03.297513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:27:07.293246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
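Earlier in this stretch the Docker daemon (version 28.0.4 per its own log line) completed initialization and bound /run/docker.sock. Two quick ways to confirm it is answering, using only documented interfaces:

```sh
# Ping the engine API over the socket it just bound; prints "OK".
curl --unix-socket /run/docker.sock http://localhost/_ping

# Ask the daemon for its version; expects 28.0.4 per the log above.
docker version --format '{{.Server.Version}}'
```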
Jan 20 02:27:07.377214 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:27:09.874560 kubelet[2124]: E0120 02:27:09.872785 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:27:09.911838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:27:09.912147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:27:09.924496 systemd[1]: kubelet.service: Consumed 1.354s CPU time, 110.7M memory peak. Jan 20 02:27:20.064172 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 02:27:20.126696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:27:21.577819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:27:21.789868 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:27:23.575902 kubelet[2168]: E0120 02:27:23.575184 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:27:23.601701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:27:23.602023 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:27:23.606618 systemd[1]: kubelet.service: Consumed 965ms CPU time, 110.4M memory peak. 
Jan 20 02:27:25.165175 containerd[1582]: time="2026-01-20T02:27:25.163336999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:27:25.187807 containerd[1582]: time="2026-01-20T02:27:25.183052996Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 20 02:27:25.209579 containerd[1582]: time="2026-01-20T02:27:25.209509453Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:27:25.301550 containerd[1582]: time="2026-01-20T02:27:25.295856428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:27:25.383837 containerd[1582]: time="2026-01-20T02:27:25.377697608Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 28.6118073s" Jan 20 02:27:25.383837 containerd[1582]: time="2026-01-20T02:27:25.383445099Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 02:27:25.438674 containerd[1582]: time="2026-01-20T02:27:25.438091312Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 02:27:33.852685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 20 02:27:33.911645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:27:37.514648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:27:37.573517 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:27:38.226823 kubelet[2188]: E0120 02:27:38.221701 2188 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:27:38.234320 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:27:38.235073 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:27:38.238710 systemd[1]: kubelet.service: Consumed 1.060s CPU time, 109.6M memory peak. 
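The kube-apiserver pull above took about 28.6 seconds, and the later control-plane pulls in this log take even longer; on a slow link these sequential pulls dominate node bring-up. They can be pre-pulled into containerd's k8s.io namespace so the kubelet finds them locally (image tag taken from the log):

```sh
# Pre-pull one image into the namespace the CRI plugin uses.
ctr --namespace k8s.io images pull registry.k8s.io/kube-apiserver:v1.32.11

# Or let kubeadm resolve and pull the full control-plane set.
kubeadm config images pull --kubernetes-version v1.32.11
```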
Jan 20 02:27:42.546504 containerd[1582]: time="2026-01-20T02:27:42.542980313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:27:42.561491 containerd[1582]: time="2026-01-20T02:27:42.561265381Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 20 02:27:42.574896 containerd[1582]: time="2026-01-20T02:27:42.574522523Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:27:42.602868 containerd[1582]: time="2026-01-20T02:27:42.601509531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:27:42.619110 containerd[1582]: time="2026-01-20T02:27:42.612792604Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 17.170379232s" Jan 20 02:27:42.619110 containerd[1582]: time="2026-01-20T02:27:42.618871058Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 02:27:42.630991 containerd[1582]: time="2026-01-20T02:27:42.630599068Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 02:27:48.299815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 20 02:27:48.386488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:27:53.123594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:27:53.217184 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:27:55.322968 kubelet[2204]: E0120 02:27:55.322527 2204 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:27:55.389207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:27:55.389657 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:27:55.395013 systemd[1]: kubelet.service: Consumed 2.086s CPU time, 110.8M memory peak. Jan 20 02:28:05.751103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 20 02:28:05.777308 containerd[1582]: time="2026-01-20T02:28:05.760340640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:28:05.790613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 02:28:05.799542 containerd[1582]: time="2026-01-20T02:28:05.799272532Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 20 02:28:05.809474 containerd[1582]: time="2026-01-20T02:28:05.807593405Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:28:05.893120 containerd[1582]: time="2026-01-20T02:28:05.893033214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:28:05.915778 containerd[1582]: time="2026-01-20T02:28:05.915650231Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 23.284958572s" Jan 20 02:28:05.916201 containerd[1582]: time="2026-01-20T02:28:05.916171526Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 02:28:05.983899 containerd[1582]: time="2026-01-20T02:28:05.982533737Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 02:28:08.527037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:28:09.518529 (kubelet)[2225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:28:12.099320 kubelet[2225]: E0120 02:28:12.079930 2225 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:28:12.167076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:28:12.170808 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:28:12.174156 systemd[1]: kubelet.service: Consumed 1.315s CPU time, 110.7M memory peak. Jan 20 02:28:20.201029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1592685068.mount: Deactivated successfully. Jan 20 02:28:22.278542 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 20 02:28:22.316727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:28:25.313709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 02:28:25.372925 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:28:27.929650 kubelet[2250]: E0120 02:28:27.916168 2250 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:28:27.978344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:28:27.978743 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:28:27.992142 systemd[1]: kubelet.service: Consumed 1.666s CPU time, 108.6M memory peak. Jan 20 02:28:30.840979 containerd[1582]: time="2026-01-20T02:28:30.840566370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:28:30.853505 containerd[1582]: time="2026-01-20T02:28:30.853335521Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 20 02:28:30.870447 containerd[1582]: time="2026-01-20T02:28:30.868615828Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:28:30.898426 containerd[1582]: time="2026-01-20T02:28:30.898034374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:28:30.910913 containerd[1582]: time="2026-01-20T02:28:30.907057780Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 24.92441716s" Jan 20 02:28:30.910913 containerd[1582]: time="2026-01-20T02:28:30.907240893Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 02:28:30.954313 containerd[1582]: time="2026-01-20T02:28:30.953518632Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 02:28:32.270629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2260074750.mount: Deactivated successfully. Jan 20 02:28:38.016344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 20 02:28:38.033114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:28:39.764854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 02:28:39.812681 (kubelet)[2319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:28:40.483497 kubelet[2319]: E0120 02:28:40.481050 2319 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:28:40.500249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:28:40.500854 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:28:40.501999 systemd[1]: kubelet.service: Consumed 622ms CPU time, 109.8M memory peak. Jan 20 02:28:41.768187 containerd[1582]: time="2026-01-20T02:28:41.766122651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:28:41.771320 containerd[1582]: time="2026-01-20T02:28:41.771229337Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 20 02:28:41.777693 containerd[1582]: time="2026-01-20T02:28:41.777584819Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:28:41.803123 containerd[1582]: time="2026-01-20T02:28:41.800640105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:28:41.808836 containerd[1582]: time="2026-01-20T02:28:41.806183106Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 10.852603172s" Jan 20 02:28:41.809535 containerd[1582]: time="2026-01-20T02:28:41.809307996Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 02:28:41.822239 containerd[1582]: time="2026-01-20T02:28:41.821599106Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 02:28:44.225745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2096398770.mount: Deactivated successfully. 
Jan 20 02:28:45.666888 containerd[1582]: time="2026-01-20T02:28:45.656116702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:28:45.681759 containerd[1582]: time="2026-01-20T02:28:45.671706617Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 20 02:28:45.685555 containerd[1582]: time="2026-01-20T02:28:45.685025884Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:28:45.700811 containerd[1582]: time="2026-01-20T02:28:45.698214671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:28:45.704135 containerd[1582]: time="2026-01-20T02:28:45.703271728Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 3.881610919s" Jan 20 02:28:45.712519 containerd[1582]: time="2026-01-20T02:28:45.706696249Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 02:28:45.726490 containerd[1582]: time="2026-01-20T02:28:45.724294458Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 02:28:48.435215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1256094934.mount: Deactivated successfully. Jan 20 02:28:50.713763 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 20 02:28:50.760763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:28:55.538477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:28:55.622769 (kubelet)[2368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:28:56.256894 kubelet[2368]: E0120 02:28:56.256140 2368 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:28:56.309640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:28:56.309962 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:28:56.315236 systemd[1]: kubelet.service: Consumed 1.084s CPU time, 110.3M memory peak. Jan 20 02:29:06.581475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 20 02:29:06.629880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:29:08.877323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
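The restart counters 9, 10 and 11 land at 02:28:38, 02:28:50 and 02:29:06, so systemd is re-queuing the unit roughly every 12–16 seconds: the configured restart delay plus however long each attempt takes to start, fail, and be reaped. A small sketch computing that spacing from the timestamps (the journal omits the year; 2026 is assumed from the containerd timestamps):

```python
from datetime import datetime

# Restart-job timestamps taken from the log above.
stamps = ["Jan 20 02:28:38.016344", "Jan 20 02:28:50.713763", "Jan 20 02:29:06.581475"]
times = [datetime.strptime(f"2026 {s}", "%Y %b %d %H:%M:%S.%f") for s in stamps]
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(gaps)  # [12.697419, 15.867712]
```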
Jan 20 02:29:09.012565 (kubelet)[2410]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:29:09.652141 kubelet[2410]: E0120 02:29:09.647722 2410 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:29:09.671125 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:29:09.673184 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:29:09.677239 systemd[1]: kubelet.service: Consumed 670ms CPU time, 108.9M memory peak. Jan 20 02:29:19.825618 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 20 02:29:20.147621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:29:22.319952 containerd[1582]: time="2026-01-20T02:29:22.316208861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:29:22.352711 containerd[1582]: time="2026-01-20T02:29:22.352567454Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 20 02:29:22.369249 containerd[1582]: time="2026-01-20T02:29:22.368585047Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:29:22.426918 containerd[1582]: time="2026-01-20T02:29:22.425716688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:29:22.436961 containerd[1582]: time="2026-01-20T02:29:22.436893312Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 36.712323613s" Jan 20 02:29:22.437321 containerd[1582]: time="2026-01-20T02:29:22.437282136Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 02:29:23.603736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:29:23.698202 (kubelet)[2435]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:29:24.290024 kubelet[2435]: E0120 02:29:24.289957 2435 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:29:24.314966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:29:24.315284 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
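The etcd pull that completes above is by far the slowest of the four control-plane images (57.7 MB in ~36.7 s), which is why it spans several kubelet restart cycles. Aggregating all four pulls reported so far (they ran sequentially per the timestamps) gives a rough picture of the link to registry.k8s.io:

```python
# (image, bytes read, pull seconds) as reported by containerd above.
pulls = [
    ("kube-proxy:v1.32.11", 31_161_899, 24.92441716),
    ("coredns:v1.11.3",     18_565_241, 10.852603172),
    ("pause:3.10",              321_138,  3.881610919),
    ("etcd:3.5.16-0",        57_682_056, 36.712323613),
]
total_bytes = sum(b for _, b, _ in pulls)
total_secs = sum(s for _, _, s in pulls)
print(f"{total_bytes/1e6:.1f} MB in {total_secs:.1f} s "
      f"(~{total_bytes/total_secs/1e6:.2f} MB/s average)")
# 107.7 MB in 76.4 s (~1.41 MB/s average)
```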
Jan 20 02:29:24.316116 systemd[1]: kubelet.service: Consumed 819ms CPU time, 109M memory peak. Jan 20 02:29:34.516255 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 20 02:29:34.542913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:29:36.729031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:29:36.838216 (kubelet)[2469]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:29:39.589739 kubelet[2469]: E0120 02:29:39.586075 2469 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:29:39.652895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:29:39.669214 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:29:39.698546 systemd[1]: kubelet.service: Consumed 665ms CPU time, 110.2M memory peak. Jan 20 02:29:51.606687 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 20 02:29:51.714933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:29:53.410675 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 02:29:53.410850 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 02:29:53.418696 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:29:53.508685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:29:53.976801 systemd[1]: Reload requested from client PID 2489 ('systemctl') (unit session-9.scope)... Jan 20 02:29:53.976885 systemd[1]: Reloading... Jan 20 02:29:55.281269 zram_generator::config[2532]: No configuration found. Jan 20 02:29:57.478910 systemd[1]: Reloading finished in 3492 ms. Jan 20 02:29:58.033920 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 02:29:58.035891 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 02:29:58.069736 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:29:58.074640 systemd[1]: kubelet.service: Consumed 616ms CPU time, 98.3M memory peak. Jan 20 02:29:58.169895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:29:59.932025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:29:59.990226 (kubelet)[2580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 02:30:01.130332 kubelet[2580]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 02:30:01.130332 kubelet[2580]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 02:30:01.130332 kubelet[2580]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 02:30:01.130332 kubelet[2580]: I0120 02:30:01.129155 2580 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 02:30:04.001102 kubelet[2580]: I0120 02:30:03.994774 2580 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 02:30:04.001102 kubelet[2580]: I0120 02:30:03.998484 2580 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 02:30:04.020601 kubelet[2580]: I0120 02:30:04.005698 2580 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 02:30:04.288322 kubelet[2580]: E0120 02:30:04.287538 2580 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:04.293637 kubelet[2580]: I0120 02:30:04.293258 2580 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 02:30:04.532104 kubelet[2580]: I0120 02:30:04.530737 2580 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 02:30:04.602582 kubelet[2580]: I0120 02:30:04.601593 2580 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 02:30:04.620174 kubelet[2580]: I0120 02:30:04.616724 2580 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 02:30:04.620174 kubelet[2580]: I0120 02:30:04.617152 2580 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 02:30:04.620174 kubelet[2580]: I0120 02:30:04.617799 
2580 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 02:30:04.620174 kubelet[2580]: I0120 02:30:04.617820 2580 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 02:30:04.624212 kubelet[2580]: I0120 02:30:04.618455 2580 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:30:04.660279 kubelet[2580]: I0120 02:30:04.656233 2580 kubelet.go:446] "Attempting to sync node with API server" Jan 20 02:30:04.660279 kubelet[2580]: I0120 02:30:04.659946 2580 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 02:30:04.664659 kubelet[2580]: I0120 02:30:04.662199 2580 kubelet.go:352] "Adding apiserver pod source" Jan 20 02:30:04.664659 kubelet[2580]: I0120 02:30:04.662269 2580 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 02:30:04.681549 kubelet[2580]: W0120 02:30:04.674484 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:04.681549 kubelet[2580]: E0120 02:30:04.674628 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:04.684154 kubelet[2580]: W0120 02:30:04.683643 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:04.684154 kubelet[2580]: E0120 02:30:04.683726 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:04.701578 kubelet[2580]: I0120 02:30:04.698023 2580 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 02:30:04.701578 kubelet[2580]: I0120 02:30:04.698873 2580 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 02:30:04.701578 kubelet[2580]: W0120 02:30:04.699148 2580 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
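The Node Config dump above encodes the kubelet's default hard-eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A sketch re-expressing them and evaluating one signal (helper name hypothetical):

```python
# Hard-eviction thresholds as dumped in the Node Config above.
# Percentages apply to filesystem capacity; memory.available is absolute.
EVICTION_HARD = {
    "memory.available": "100Mi",
    "nodefs.available": "10%",
    "nodefs.inodesFree": "5%",
    "imagefs.available": "15%",
    "imagefs.inodesFree": "5%",
}

def breaches(signal: str, observed_free: float, capacity: float) -> bool:
    """True if the observed free amount is below the configured threshold."""
    raw = EVICTION_HARD[signal]
    if raw.endswith("%"):
        return observed_free < capacity * float(raw[:-1]) / 100.0
    assert raw.endswith("Mi")
    return observed_free < float(raw[:-2]) * 1024 * 1024

# 80 MiB of free memory is below the 100Mi threshold -> eviction signal fires.
print(breaches("memory.available", 80 * 1024 * 1024, 0))  # True
```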
Jan 20 02:30:04.721615 kubelet[2580]: I0120 02:30:04.720724 2580 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 02:30:04.721615 kubelet[2580]: I0120 02:30:04.720855 2580 server.go:1287] "Started kubelet" Jan 20 02:30:04.724697 kubelet[2580]: I0120 02:30:04.724568 2580 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 02:30:04.734010 kubelet[2580]: I0120 02:30:04.725969 2580 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 02:30:04.734010 kubelet[2580]: I0120 02:30:04.726639 2580 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 02:30:04.746306 kubelet[2580]: I0120 02:30:04.745258 2580 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 02:30:04.751041 kubelet[2580]: I0120 02:30:04.750764 2580 server.go:479] "Adding debug handlers to kubelet server" Jan 20 02:30:04.760195 kubelet[2580]: I0120 02:30:04.758543 2580 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 02:30:05.006504 kubelet[2580]: I0120 02:30:04.969782 2580 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 02:30:05.006504 kubelet[2580]: E0120 02:30:04.996293 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:30:05.006504 kubelet[2580]: I0120 02:30:05.003245 2580 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 02:30:05.006504 kubelet[2580]: I0120 02:30:05.003674 2580 reconciler.go:26] "Reconciler: start to sync state" Jan 20 02:30:05.156830 kubelet[2580]: W0120 02:30:05.144805 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:05.188633 kubelet[2580]: E0120 02:30:05.188219 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:05.207302 kubelet[2580]: E0120 02:30:05.207266 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:30:05.216474 kubelet[2580]: E0120 02:30:05.199744 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="200ms" Jan 20 02:30:05.230775 kubelet[2580]: I0120 02:30:05.222107 2580 factory.go:221] Registration of the systemd container factory successfully Jan 20 02:30:05.230775 kubelet[2580]: I0120 02:30:05.222313 2580 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 02:30:05.454154 kubelet[2580]: E0120 02:30:05.451623 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:30:05.558515 kubelet[2580]: E0120 02:30:05.555685 2580 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="400ms" Jan 20 02:30:05.558515 kubelet[2580]: E0120 02:30:05.207674 2580 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.100:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4f894a068e26 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:30:04.72081975 +0000 UTC m=+4.449888032,LastTimestamp:2026-01-20 02:30:04.72081975 +0000 UTC m=+4.449888032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:30:05.567517 kubelet[2580]: W0120 02:30:05.560678 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:05.567517 kubelet[2580]: E0120 02:30:05.560824 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:05.593062 kubelet[2580]: E0120 02:30:05.590184 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:30:05.595061 kubelet[2580]: E0120 02:30:05.594192 2580 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 02:30:05.621815 kubelet[2580]: I0120 02:30:05.620558 2580 factory.go:221] Registration of the containerd container factory successfully Jan 20 02:30:05.699275 kubelet[2580]: E0120 02:30:05.693019 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:30:05.889054 kubelet[2580]: E0120 02:30:05.881250 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:30:05.983646 kubelet[2580]: E0120 02:30:05.983567 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="800ms" Jan 20 02:30:06.306457 kubelet[2580]: E0120 02:30:06.204777 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:30:06.306457 kubelet[2580]: W0120 02:30:06.218785 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:06.306457 kubelet[2580]: E0120 02:30:06.293791 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:06.323840 kubelet[2580]: E0120 02:30:06.319174 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:30:06.519333 kubelet[2580]: E0120 02:30:06.516570 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:30:06.523784 kubelet[2580]: W0120 02:30:06.408236 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:06.533088 kubelet[2580]: E0120 02:30:06.530068 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:06.596698 kubelet[2580]: E0120 02:30:06.586716 2580 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:06.659527 kubelet[2580]: E0120 02:30:06.658556 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:30:06.760186 kubelet[2580]: E0120 02:30:06.759989 2580 kubelet_node_status.go:466] "Error getting the current 
node from lister" err="node \"localhost\" not found" Jan 20 02:30:06.809776 kubelet[2580]: I0120 02:30:06.808993 2580 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 02:30:06.809776 kubelet[2580]: I0120 02:30:06.809027 2580 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 02:30:06.809776 kubelet[2580]: I0120 02:30:06.809059 2580 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:30:06.821666 kubelet[2580]: E0120 02:30:06.813827 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="1.6s" Jan 20 02:30:06.879150 kubelet[2580]: I0120 02:30:06.859825 2580 policy_none.go:49] "None policy: Start" Jan 20 02:30:06.879150 kubelet[2580]: I0120 02:30:06.860000 2580 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 02:30:06.879150 kubelet[2580]: I0120 02:30:06.860075 2580 state_mem.go:35] "Initializing new in-memory state store" Jan 20 02:30:06.879150 kubelet[2580]: E0120 02:30:06.870830 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:30:06.961634 kubelet[2580]: I0120 02:30:06.961574 2580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 02:30:06.972134 kubelet[2580]: E0120 02:30:06.972099 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:30:06.994667 kubelet[2580]: I0120 02:30:06.994345 2580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 02:30:07.001490 kubelet[2580]: I0120 02:30:07.001462 2580 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 02:30:07.002797 kubelet[2580]: I0120 02:30:07.002774 2580 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 02:30:07.010514 kubelet[2580]: I0120 02:30:07.010487 2580 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 02:30:07.016008 kubelet[2580]: E0120 02:30:07.010811 2580 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 02:30:07.017115 kubelet[2580]: W0120 02:30:07.017082 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:07.027751 kubelet[2580]: E0120 02:30:07.027684 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:07.099487 kubelet[2580]: E0120 02:30:07.099321 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:30:07.121952 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 20 02:30:07.144611 kubelet[2580]: E0120 02:30:07.143979 2580 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 02:30:07.580748 kubelet[2580]: E0120 02:30:07.579587 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:30:07.580748 kubelet[2580]: E0120 02:30:07.580738 2580 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:30:07.658694 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 02:30:07.691210 kubelet[2580]: E0120 02:30:07.685724 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:30:07.711787 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 02:30:07.772952 kubelet[2580]: I0120 02:30:07.766929 2580 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 02:30:07.772952 kubelet[2580]: I0120 02:30:07.767679 2580 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 02:30:07.772952 kubelet[2580]: I0120 02:30:07.767700 2580 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 02:30:07.772952 kubelet[2580]: I0120 02:30:07.771760 2580 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 02:30:07.816105 kubelet[2580]: E0120 02:30:07.811107 2580 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 02:30:07.816105 kubelet[2580]: E0120 02:30:07.811347 2580 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:30:07.950821 kubelet[2580]: I0120 02:30:07.949036 2580 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:30:07.959125 kubelet[2580]: E0120 02:30:07.958504 2580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jan 20 02:30:08.101026 kubelet[2580]: I0120 02:30:08.100694 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5915019a198048ec3eea369ccf32d44a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5915019a198048ec3eea369ccf32d44a\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:30:08.115697 kubelet[2580]: I0120 02:30:08.111305 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5915019a198048ec3eea369ccf32d44a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5915019a198048ec3eea369ccf32d44a\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:30:08.115697 kubelet[2580]: I0120 02:30:08.111560 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5915019a198048ec3eea369ccf32d44a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5915019a198048ec3eea369ccf32d44a\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:30:08.194922 kubelet[2580]: I0120 02:30:08.194815 2580 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:30:08.256989 kubelet[2580]: E0120 02:30:08.245340 2580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jan 20 02:30:08.256989 kubelet[2580]: I0120 02:30:08.246718 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:30:08.256989 kubelet[2580]: I0120 02:30:08.250904 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:30:08.256989 kubelet[2580]: I0120 02:30:08.251126 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:30:08.256989 kubelet[2580]: I0120 02:30:08.251151 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:30:08.261275 kubelet[2580]: I0120 02:30:08.251235 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:30:08.261275 kubelet[2580]: I0120 02:30:08.251259 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 02:30:08.289186 systemd[1]: Created slice kubepods-burstable-pod5915019a198048ec3eea369ccf32d44a.slice - libcontainer container kubepods-burstable-pod5915019a198048ec3eea369ccf32d44a.slice. Jan 20 02:30:08.336357 kubelet[2580]: E0120 02:30:08.336310 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:30:08.346555 kubelet[2580]: E0120 02:30:08.342795 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:08.371131 kubelet[2580]: W0120 02:30:08.365778 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:08.371604 kubelet[2580]: E0120 02:30:08.371345 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:08.480216 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. 
Jan 20 02:30:08.492595 kubelet[2580]: E0120 02:30:08.491599 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="3.2s" Jan 20 02:30:08.493492 containerd[1582]: time="2026-01-20T02:30:08.493274173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5915019a198048ec3eea369ccf32d44a,Namespace:kube-system,Attempt:0,}" Jan 20 02:30:08.657264 kubelet[2580]: W0120 02:30:08.643665 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:08.657264 kubelet[2580]: E0120 02:30:08.652317 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:30:08.674068 kubelet[2580]: W0120 02:30:08.664221 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:08.674068 kubelet[2580]: E0120 02:30:08.664286 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:08.674068 kubelet[2580]: E0120 02:30:08.669089 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:08.674558 containerd[1582]: time="2026-01-20T02:30:08.671064891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 20 02:30:08.674921 kubelet[2580]: E0120 02:30:08.643816 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:08.688358 kubelet[2580]: I0120 02:30:08.688319 2580 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:30:08.712682 kubelet[2580]: E0120 02:30:08.712511 2580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jan 20 02:30:08.732763 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. 
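The recurring "Nameserver limits exceeded" warnings mean the host resolv.conf lists more than three nameservers; the kubelet keeps the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8, per the "applied nameserver line") and warns about the rest. A simplified sketch of that truncation:

```python
# Keep only the first three nameservers, as the kubelet does when
# building pod resolv.conf; extra entries trigger the warning above.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    servers = [
        line.split()[1]
        for line in resolv_conf_text.splitlines()
        if line.startswith("nameserver") and len(line.split()) > 1
    ]
    return servers[:MAX_NAMESERVERS]

sample = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
print(applied_nameservers(sample))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```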
Jan 20 02:30:08.756641 kubelet[2580]: E0120 02:30:08.755671 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:30:08.756641 kubelet[2580]: E0120 02:30:08.756157 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:08.761812 containerd[1582]: time="2026-01-20T02:30:08.761274295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 20 02:30:09.184527 kubelet[2580]: E0120 02:30:09.177751 2580 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.100:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4f894a068e26 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:30:04.72081975 +0000 UTC m=+4.449888032,LastTimestamp:2026-01-20 02:30:04.72081975 +0000 UTC m=+4.449888032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:30:09.606804 kubelet[2580]: W0120 02:30:09.606523 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:09.606804 kubelet[2580]: E0120 02:30:09.606717 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:09.931960 kubelet[2580]: I0120 02:30:09.926502 2580 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:30:09.971030 kubelet[2580]: E0120 02:30:09.959780 2580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jan 20 02:30:10.344771 containerd[1582]: time="2026-01-20T02:30:10.344694655Z" level=info msg="connecting to shim a3de901ed30117134596b1b25869d5b46bca8341e7904d8ee5038cd291b7ecd7" address="unix:///run/containerd/s/61e3162b6d1e4508fe64ce7e201878b9bd01b81bf1036a2063a5c1f4911b1a64" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:30:10.456058 kubelet[2580]: W0120 02:30:10.455152 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:10.456058 kubelet[2580]: E0120 02:30:10.455317 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:10.499052 containerd[1582]: time="2026-01-20T02:30:10.498743016Z" level=info msg="connecting to shim 21ff4eff6911f16655a3fa80f311d972c83b75a0c2643ac02e58ce9276a9a0b1" address="unix:///run/containerd/s/d4cd72ef33465427891f2c4e6222467b366be89a169ed5c7c3c5f56089797b10" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:30:10.521728 containerd[1582]: time="2026-01-20T02:30:10.521561966Z" level=info msg="connecting to shim 26c7912681f5e24bc62bdea7724fb16ee3b7e9dab2194782aa96528ea7432dc3" address="unix:///run/containerd/s/c9b7f07b7b484bb7f7d55c81487d29cb32d44fac5c37f30d19d18b557b203a9d" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:30:10.903688 kubelet[2580]: E0120 02:30:10.898094 2580 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:12.539718 kubelet[2580]: E0120 02:30:12.526553 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="6.4s" Jan 20 02:30:12.661929 kubelet[2580]: I0120 02:30:12.661882 2580 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:30:12.674830 kubelet[2580]: E0120 02:30:12.674144 2580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jan 20 02:30:12.796078 systemd[1]: Started cri-containerd-a3de901ed30117134596b1b25869d5b46bca8341e7904d8ee5038cd291b7ecd7.scope - libcontainer container a3de901ed30117134596b1b25869d5b46bca8341e7904d8ee5038cd291b7ecd7. 
Jan 20 02:30:14.172453 kubelet[2580]: W0120 02:30:14.169733 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:14.172453 kubelet[2580]: E0120 02:30:14.170234 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:14.172453 kubelet[2580]: W0120 02:30:14.172167 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:14.172453 kubelet[2580]: E0120 02:30:14.172242 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:14.383153 systemd[1]: Started cri-containerd-26c7912681f5e24bc62bdea7724fb16ee3b7e9dab2194782aa96528ea7432dc3.scope - libcontainer container 26c7912681f5e24bc62bdea7724fb16ee3b7e9dab2194782aa96528ea7432dc3. Jan 20 02:30:14.931959 containerd[1582]: time="2026-01-20T02:30:14.928477063Z" level=error msg="get state for a3de901ed30117134596b1b25869d5b46bca8341e7904d8ee5038cd291b7ecd7" error="context deadline exceeded" Jan 20 02:30:14.931959 containerd[1582]: time="2026-01-20T02:30:14.928879252Z" level=warning msg="unknown status" status=0 Jan 20 02:30:14.965574 kubelet[2580]: W0120 02:30:14.962826 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:14.965574 kubelet[2580]: E0120 02:30:14.962978 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:15.052482 kubelet[2580]: W0120 02:30:15.034834 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:15.052482 kubelet[2580]: E0120 02:30:15.034921 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:15.960109 kubelet[2580]: I0120 02:30:15.959960 2580 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:30:15.967353 systemd[1]: Started 
cri-containerd-21ff4eff6911f16655a3fa80f311d972c83b75a0c2643ac02e58ce9276a9a0b1.scope - libcontainer container 21ff4eff6911f16655a3fa80f311d972c83b75a0c2643ac02e58ce9276a9a0b1. Jan 20 02:30:15.989802 kubelet[2580]: E0120 02:30:15.986097 2580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jan 20 02:30:17.104520 containerd[1582]: time="2026-01-20T02:30:17.104042875Z" level=error msg="get state for a3de901ed30117134596b1b25869d5b46bca8341e7904d8ee5038cd291b7ecd7" error="context deadline exceeded" Jan 20 02:30:17.127550 containerd[1582]: time="2026-01-20T02:30:17.127495975Z" level=warning msg="unknown status" status=0 Jan 20 02:30:17.235603 containerd[1582]: time="2026-01-20T02:30:17.233845639Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 02:30:17.235603 containerd[1582]: time="2026-01-20T02:30:17.233916730Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Jan 20 02:30:18.596238 kubelet[2580]: E0120 02:30:18.594574 2580 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:30:19.291279 kubelet[2580]: E0120 02:30:19.286291 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="7s" Jan 20 02:30:19.479183 kubelet[2580]: E0120 02:30:19.477760 2580 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.100:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4f894a068e26 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:30:04.72081975 +0000 UTC m=+4.449888032,LastTimestamp:2026-01-20 02:30:04.72081975 +0000 UTC m=+4.449888032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:30:19.914497 kubelet[2580]: E0120 02:30:19.914005 2580 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:20.327450 containerd[1582]: time="2026-01-20T02:30:20.327208806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5915019a198048ec3eea369ccf32d44a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3de901ed30117134596b1b25869d5b46bca8341e7904d8ee5038cd291b7ecd7\"" Jan 20 02:30:20.370993 kubelet[2580]: E0120 02:30:20.370824 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:20.446789 containerd[1582]: time="2026-01-20T02:30:20.430530745Z" level=info 
msg="CreateContainer within sandbox \"a3de901ed30117134596b1b25869d5b46bca8341e7904d8ee5038cd291b7ecd7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 02:30:21.154788 containerd[1582]: time="2026-01-20T02:30:21.154732341Z" level=info msg="Container af156bc1d1970554372dfca6d09f0bf944e80e6414e83a3b5fa1f0e3770deb29: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:30:21.200488 containerd[1582]: time="2026-01-20T02:30:21.194854806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"26c7912681f5e24bc62bdea7724fb16ee3b7e9dab2194782aa96528ea7432dc3\"" Jan 20 02:30:21.200713 kubelet[2580]: E0120 02:30:21.196326 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:21.277936 containerd[1582]: time="2026-01-20T02:30:21.276532719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"21ff4eff6911f16655a3fa80f311d972c83b75a0c2643ac02e58ce9276a9a0b1\"" Jan 20 02:30:21.302750 containerd[1582]: time="2026-01-20T02:30:21.300469919Z" level=info msg="CreateContainer within sandbox \"26c7912681f5e24bc62bdea7724fb16ee3b7e9dab2194782aa96528ea7432dc3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 02:30:21.318836 kubelet[2580]: E0120 02:30:21.315206 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:21.319157 containerd[1582]: time="2026-01-20T02:30:21.316712779Z" level=info msg="CreateContainer within sandbox \"a3de901ed30117134596b1b25869d5b46bca8341e7904d8ee5038cd291b7ecd7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"af156bc1d1970554372dfca6d09f0bf944e80e6414e83a3b5fa1f0e3770deb29\"" Jan 20 02:30:21.319157 containerd[1582]: time="2026-01-20T02:30:21.318307590Z" level=info msg="StartContainer for \"af156bc1d1970554372dfca6d09f0bf944e80e6414e83a3b5fa1f0e3770deb29\"" Jan 20 02:30:21.367125 containerd[1582]: time="2026-01-20T02:30:21.364340827Z" level=info msg="connecting to shim af156bc1d1970554372dfca6d09f0bf944e80e6414e83a3b5fa1f0e3770deb29" address="unix:///run/containerd/s/61e3162b6d1e4508fe64ce7e201878b9bd01b81bf1036a2063a5c1f4911b1a64" protocol=ttrpc version=3 Jan 20 02:30:21.418950 containerd[1582]: time="2026-01-20T02:30:21.418152710Z" level=info msg="CreateContainer within sandbox \"21ff4eff6911f16655a3fa80f311d972c83b75a0c2643ac02e58ce9276a9a0b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 02:30:21.678182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2411976842.mount: Deactivated successfully. 
Jan 20 02:30:21.705013 containerd[1582]: time="2026-01-20T02:30:21.695721968Z" level=info msg="Container b3ee7466acc31409e4cc6cb3400486bb8fe7288812a08c9b5e0f57bbe96435b0: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:30:21.706175 containerd[1582]: time="2026-01-20T02:30:21.706127741Z" level=info msg="Container 36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:30:21.768036 systemd[1]: Started cri-containerd-af156bc1d1970554372dfca6d09f0bf944e80e6414e83a3b5fa1f0e3770deb29.scope - libcontainer container af156bc1d1970554372dfca6d09f0bf944e80e6414e83a3b5fa1f0e3770deb29. Jan 20 02:30:21.855548 containerd[1582]: time="2026-01-20T02:30:21.855488859Z" level=info msg="CreateContainer within sandbox \"26c7912681f5e24bc62bdea7724fb16ee3b7e9dab2194782aa96528ea7432dc3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b3ee7466acc31409e4cc6cb3400486bb8fe7288812a08c9b5e0f57bbe96435b0\"" Jan 20 02:30:21.868041 containerd[1582]: time="2026-01-20T02:30:21.865441988Z" level=info msg="CreateContainer within sandbox \"21ff4eff6911f16655a3fa80f311d972c83b75a0c2643ac02e58ce9276a9a0b1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457\"" Jan 20 02:30:21.897297 containerd[1582]: time="2026-01-20T02:30:21.875883329Z" level=info msg="StartContainer for \"b3ee7466acc31409e4cc6cb3400486bb8fe7288812a08c9b5e0f57bbe96435b0\"" Jan 20 02:30:21.954488 containerd[1582]: time="2026-01-20T02:30:21.934353389Z" level=info msg="StartContainer for \"36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457\"" Jan 20 02:30:21.976156 containerd[1582]: time="2026-01-20T02:30:21.971253072Z" level=info msg="connecting to shim b3ee7466acc31409e4cc6cb3400486bb8fe7288812a08c9b5e0f57bbe96435b0" address="unix:///run/containerd/s/c9b7f07b7b484bb7f7d55c81487d29cb32d44fac5c37f30d19d18b557b203a9d" protocol=ttrpc version=3 Jan 20 02:30:22.011583 containerd[1582]: time="2026-01-20T02:30:22.011524339Z" level=info msg="connecting to shim 36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457" address="unix:///run/containerd/s/d4cd72ef33465427891f2c4e6222467b366be89a169ed5c7c3c5f56089797b10" protocol=ttrpc version=3 Jan 20 02:30:22.020791 kubelet[2580]: W0120 02:30:22.011341 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:22.020791 kubelet[2580]: E0120 02:30:22.019290 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:22.407213 kubelet[2580]: I0120 02:30:22.399917 2580 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:30:22.407213 kubelet[2580]: E0120 02:30:22.407100 2580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jan 20 02:30:22.428859 systemd[1]: Started cri-containerd-b3ee7466acc31409e4cc6cb3400486bb8fe7288812a08c9b5e0f57bbe96435b0.scope - libcontainer 
container b3ee7466acc31409e4cc6cb3400486bb8fe7288812a08c9b5e0f57bbe96435b0. Jan 20 02:30:22.605578 systemd[1]: Started cri-containerd-36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457.scope - libcontainer container 36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457. Jan 20 02:30:22.989997 containerd[1582]: time="2026-01-20T02:30:22.989946282Z" level=info msg="StartContainer for \"af156bc1d1970554372dfca6d09f0bf944e80e6414e83a3b5fa1f0e3770deb29\" returns successfully" Jan 20 02:30:23.399995 containerd[1582]: time="2026-01-20T02:30:23.395570958Z" level=info msg="StartContainer for \"b3ee7466acc31409e4cc6cb3400486bb8fe7288812a08c9b5e0f57bbe96435b0\" returns successfully" Jan 20 02:30:24.017457 containerd[1582]: time="2026-01-20T02:30:24.016464052Z" level=info msg="StartContainer for \"36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457\" returns successfully" Jan 20 02:30:24.459188 kubelet[2580]: E0120 02:30:24.400176 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:30:24.523868 kubelet[2580]: E0120 02:30:24.443287 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:24.686419 kubelet[2580]: W0120 02:30:24.685109 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:24.686419 kubelet[2580]: E0120 02:30:24.685276 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:24.782811 kubelet[2580]: W0120 02:30:24.782632 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 20 02:30:24.782811 kubelet[2580]: E0120 02:30:24.782705 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:30:25.021175 kubelet[2580]: E0120 02:30:24.996195 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:30:25.108507 kubelet[2580]: E0120 02:30:25.098469 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:25.497158 kubelet[2580]: E0120 02:30:25.451192 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:30:25.497158 kubelet[2580]: E0120 02:30:25.483536 2580 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:26.328144 kubelet[2580]: E0120 02:30:26.327953 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="7s" Jan 20 02:30:26.497782 kubelet[2580]: E0120 02:30:26.481912 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:30:26.501301 kubelet[2580]: E0120 02:30:26.501226 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:26.543079 kubelet[2580]: E0120 02:30:26.543041 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:30:26.565041 kubelet[2580]: E0120 02:30:26.553162 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:26.565041 kubelet[2580]: E0120 02:30:26.562100 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:30:26.637832 kubelet[2580]: E0120 02:30:26.620842 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:27.592322 kubelet[2580]: E0120 02:30:27.585770 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:30:27.613108 kubelet[2580]: E0120 02:30:27.594101 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:27.613108 kubelet[2580]: E0120 02:30:27.603860 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:30:27.613108 kubelet[2580]: E0120 02:30:27.604186 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:27.613108 kubelet[2580]: E0120 02:30:27.604951 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:30:27.613108 kubelet[2580]: E0120 02:30:27.605113 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:28.632309 kubelet[2580]: E0120 02:30:28.631909 2580 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:30:28.668119 kubelet[2580]: E0120 02:30:28.668036 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Jan 20 02:30:28.671482 kubelet[2580]: E0120 02:30:28.668745 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:29.420721 kubelet[2580]: I0120 02:30:29.417778 2580 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:30:32.548638 kubelet[2580]: E0120 02:30:32.547617 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:30:32.548638 kubelet[2580]: E0120 02:30:32.548198 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:37.567721 kubelet[2580]: W0120 02:30:37.545772 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 02:30:37.567721 kubelet[2580]: E0120 02:30:37.545876 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 02:30:38.639354 kubelet[2580]: E0120 02:30:38.638754 2580 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:30:39.490578 kubelet[2580]: E0120 02:30:39.485734 2580 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.100:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188c4f894a068e26 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:30:04.72081975 +0000 UTC m=+4.449888032,LastTimestamp:2026-01-20 02:30:04.72081975 +0000 UTC m=+4.449888032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:30:39.490578 kubelet[2580]: E0120 02:30:39.489491 2580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 02:30:43.404452 kubelet[2580]: E0120 02:30:43.381059 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Jan 20 02:30:45.892033 kubelet[2580]: W0120 02:30:45.890850 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 02:30:45.892033 kubelet[2580]: E0120 02:30:45.891670 2580 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 02:30:46.270099 kubelet[2580]: E0120 02:30:46.257064 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:30:46.270099 kubelet[2580]: E0120 02:30:46.257516 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:46.503487 kubelet[2580]: I0120 02:30:46.503052 2580 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:30:47.023934 kubelet[2580]: E0120 02:30:47.014937 2580 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 02:30:47.052325 kubelet[2580]: E0120 02:30:47.052262 2580 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Jan 20 02:30:48.667828 kubelet[2580]: E0120 02:30:48.664178 2580 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:30:51.114984 kubelet[2580]: W0120 02:30:51.070944 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 02:30:51.266750 kubelet[2580]: E0120 02:30:51.251843 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 02:30:56.112762 kubelet[2580]: E0120 02:30:56.111920 2580 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4f894a068e26 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:30:04.72081975 +0000 UTC m=+4.449888032,LastTimestamp:2026-01-20 02:30:04.72081975 +0000 UTC m=+4.449888032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:30:56.177230 kubelet[2580]: I0120 02:30:56.134268 2580 apiserver.go:52] "Watching apiserver" Jan 20 02:30:56.177230 kubelet[2580]: I0120 02:30:56.157530 2580 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 02:30:56.177230 kubelet[2580]: E0120 02:30:56.159166 2580 kubelet_node_status.go:548] "Error 
updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 02:30:56.210749 kubelet[2580]: I0120 02:30:56.205571 2580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 02:30:56.623205 kubelet[2580]: I0120 02:30:56.507733 2580 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 02:30:56.623205 kubelet[2580]: E0120 02:30:56.562067 2580 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4f897e14d8af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:30:05.594171567 +0000 UTC m=+5.323239870,LastTimestamp:2026-01-20 02:30:05.594171567 +0000 UTC m=+5.323239870,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:30:56.623205 kubelet[2580]: E0120 02:30:56.572355 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="7s" Jan 20 02:30:56.719237 kubelet[2580]: I0120 02:30:56.691474 2580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 02:30:56.852277 kubelet[2580]: I0120 02:30:56.851089 2580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 02:30:56.867323 kubelet[2580]: E0120 02:30:56.865729 2580 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 02:30:56.867323 kubelet[2580]: E0120 02:30:56.866071 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:57.068848 kubelet[2580]: I0120 02:30:57.068648 2580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 02:30:57.076591 kubelet[2580]: E0120 02:30:57.076559 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:57.214353 kubelet[2580]: E0120 02:30:57.214200 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:57.506637 kubelet[2580]: E0120 02:30:57.424056 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:07.571951 kubelet[2580]: I0120 02:31:07.570523 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=11.570346546 podStartE2EDuration="11.570346546s" podCreationTimestamp="2026-01-20 02:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:31:00.832598289 +0000 UTC 
m=+60.561666602" watchObservedRunningTime="2026-01-20 02:31:07.570346546 +0000 UTC m=+67.299414828" Jan 20 02:31:08.319215 kubelet[2580]: I0120 02:31:08.305720 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=12.30566931 podStartE2EDuration="12.30566931s" podCreationTimestamp="2026-01-20 02:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:31:07.590253359 +0000 UTC m=+67.319321651" watchObservedRunningTime="2026-01-20 02:31:08.30566931 +0000 UTC m=+68.034737592" Jan 20 02:31:08.319215 kubelet[2580]: I0120 02:31:08.319160 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=11.319129533 podStartE2EDuration="11.319129533s" podCreationTimestamp="2026-01-20 02:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:31:08.287621374 +0000 UTC m=+68.016689667" watchObservedRunningTime="2026-01-20 02:31:08.319129533 +0000 UTC m=+68.048197815" Jan 20 02:31:28.178629 update_engine[1550]: I20260120 02:31:28.176635 1550 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 20 02:31:28.178629 update_engine[1550]: I20260120 02:31:28.177139 1550 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 20 02:31:28.189227 update_engine[1550]: I20260120 02:31:28.187083 1550 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 20 02:31:28.205871 update_engine[1550]: I20260120 02:31:28.194165 1550 omaha_request_params.cc:62] Current group set to stable Jan 20 02:31:28.205871 update_engine[1550]: I20260120 02:31:28.195165 1550 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 20 02:31:28.205871 update_engine[1550]: I20260120 02:31:28.195189 1550 update_attempter.cc:643] Scheduling an action processor start. Jan 20 02:31:28.205871 update_engine[1550]: I20260120 02:31:28.195263 1550 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 02:31:28.205871 update_engine[1550]: I20260120 02:31:28.195680 1550 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 20 02:31:28.205871 update_engine[1550]: I20260120 02:31:28.195990 1550 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 02:31:28.205871 update_engine[1550]: I20260120 02:31:28.196009 1550 omaha_request_action.cc:272] Request: Jan 20 02:31:28.205871 update_engine[1550]: I20260120 02:31:28.196069 1550 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:31:28.205871 update_engine[1550]: I20260120 02:31:28.203205 1550 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:31:28.205871 update_engine[1550]: I20260120 02:31:28.205207 1550 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 20 02:31:28.353901 update_engine[1550]: E20260120 02:31:28.339926 1550 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:31:28.353901 update_engine[1550]: I20260120 02:31:28.340233 1550 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 20 02:31:28.428167 locksmithd[1589]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 20 02:31:32.634029 kubelet[2580]: E0120 02:31:32.623907 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:33.483084 kubelet[2580]: E0120 02:31:33.478683 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:36.394227 systemd[1]: Reload requested from client PID 2875 ('systemctl') (unit session-9.scope)... Jan 20 02:31:36.423180 systemd[1]: Reloading... Jan 20 02:31:39.062879 update_engine[1550]: I20260120 02:31:39.062806 1550 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:31:39.064590 update_engine[1550]: I20260120 02:31:39.064550 1550 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:31:39.067599 update_engine[1550]: I20260120 02:31:39.067556 1550 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:31:39.109018 update_engine[1550]: E20260120 02:31:39.108847 1550 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:31:39.109018 update_engine[1550]: I20260120 02:31:39.108971 1550 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 20 02:31:40.206990 kubelet[2580]: E0120 02:31:40.206838 2580 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.177s" Jan 20 02:31:40.386053 zram_generator::config[2921]: No configuration found. Jan 20 02:31:42.561459 kubelet[2580]: E0120 02:31:42.500097 2580 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.482s" Jan 20 02:31:42.953576 kubelet[2580]: E0120 02:31:42.952924 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:48.333064 systemd[1]: Reloading finished in 11881 ms. Jan 20 02:31:48.889913 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:31:49.065791 update_engine[1550]: I20260120 02:31:49.064337 1550 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:31:49.065791 update_engine[1550]: I20260120 02:31:49.064513 1550 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:31:49.071535 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 02:31:49.093699 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:31:49.093801 systemd[1]: kubelet.service: Consumed 14.678s CPU time, 139M memory peak. Jan 20 02:31:49.112995 update_engine[1550]: I20260120 02:31:49.112905 1550 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:31:49.145505 update_engine[1550]: E20260120 02:31:49.138655 1550 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:31:49.145505 update_engine[1550]: I20260120 02:31:49.138839 1550 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 20 02:31:49.205320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:31:55.856891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:31:56.058935 (kubelet)[2962]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 02:31:57.820623 kubelet[2962]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 02:31:57.820623 kubelet[2962]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 02:31:57.820623 kubelet[2962]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 02:31:57.820623 kubelet[2962]: I0120 02:31:57.797481 2962 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 02:31:58.045089 kubelet[2962]: I0120 02:31:58.037163 2962 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 02:31:58.045089 kubelet[2962]: I0120 02:31:58.037216 2962 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 02:31:58.095566 kubelet[2962]: I0120 02:31:58.090723 2962 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 02:31:58.211042 kubelet[2962]: I0120 02:31:58.210834 2962 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 02:31:58.242152 kubelet[2962]: I0120 02:31:58.235823 2962 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 02:31:58.456040 kubelet[2962]: I0120 02:31:58.420744 2962 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 02:31:58.666345 kubelet[2962]: I0120 02:31:58.617881 2962 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 02:31:58.666345 kubelet[2962]: I0120 02:31:58.618575 2962 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 02:31:58.676129 kubelet[2962]: I0120 02:31:58.618624 2962 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 02:31:58.676875 kubelet[2962]: I0120 02:31:58.676841 2962 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 02:31:58.686047 kubelet[2962]: I0120 02:31:58.679608 2962 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 02:31:58.686047 kubelet[2962]: I0120 02:31:58.679715 2962 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:31:58.686047 kubelet[2962]: I0120 02:31:58.680047 2962 kubelet.go:446] "Attempting to sync node with API server" Jan 20 02:31:58.686047 kubelet[2962]: I0120 02:31:58.680081 2962 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 02:31:58.686047 kubelet[2962]: I0120 02:31:58.680133 2962 kubelet.go:352] "Adding apiserver pod source" Jan 20 02:31:58.686047 kubelet[2962]: I0120 02:31:58.680159 2962 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 02:31:58.917276 kubelet[2962]: I0120 02:31:58.893681 2962 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 02:31:58.917276 kubelet[2962]: I0120 02:31:58.894501 2962 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 02:31:58.917276 kubelet[2962]: I0120 02:31:58.913700 2962 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 02:31:58.917276 kubelet[2962]: I0120 02:31:58.913754 2962 server.go:1287] "Started kubelet" Jan 20 02:31:58.990777 kubelet[2962]: I0120 02:31:58.958649 2962 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 02:31:59.008018 kubelet[2962]: I0120 02:31:59.002465 2962 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 02:31:59.046563 kubelet[2962]: I0120 02:31:59.035842 2962 server.go:479] "Adding debug handlers to kubelet server" Jan 20 02:31:59.060824 update_engine[1550]: I20260120 02:31:59.057689 1550 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:31:59.061807 update_engine[1550]: I20260120 02:31:59.061761 1550 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:31:59.077353 update_engine[1550]: I20260120 02:31:59.077203 1550 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:31:59.095742 kubelet[2962]: I0120 02:31:59.091800 2962 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 02:31:59.208316 update_engine[1550]: E20260120 02:31:59.193264 1550 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:31:59.208722 kubelet[2962]: I0120 02:31:59.201509 2962 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 02:31:59.223460 update_engine[1550]: I20260120 02:31:59.208994 1550 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:31:59.223460 update_engine[1550]: I20260120 02:31:59.209049 1550 omaha_request_action.cc:617] Omaha request response: Jan 20 02:31:59.223460 update_engine[1550]: E20260120 02:31:59.209177 1550 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 20 02:31:59.223460 update_engine[1550]: I20260120 02:31:59.209323 1550 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 20 02:31:59.223460 update_engine[1550]: I20260120 02:31:59.209340 1550 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:31:59.223460 update_engine[1550]: I20260120 02:31:59.209352 1550 update_attempter.cc:306] Processing Done. Jan 20 02:31:59.337354 update_engine[1550]: E20260120 02:31:59.209875 1550 update_attempter.cc:619] Update failed. Jan 20 02:31:59.337354 update_engine[1550]: I20260120 02:31:59.262593 1550 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 20 02:31:59.337354 update_engine[1550]: I20260120 02:31:59.262634 1550 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 20 02:31:59.337354 update_engine[1550]: I20260120 02:31:59.262647 1550 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 20 02:31:59.337354 update_engine[1550]: I20260120 02:31:59.262789 1550 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 02:31:59.337354 update_engine[1550]: I20260120 02:31:59.262868 1550 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 02:31:59.337354 update_engine[1550]: I20260120 02:31:59.262879 1550 omaha_request_action.cc:272] Request: Jan 20 02:31:59.337354 update_engine[1550]: I20260120 02:31:59.262951 1550 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:31:59.337354 update_engine[1550]: I20260120 02:31:59.262998 1550 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:31:59.498745 kubelet[2962]: I0120 02:31:59.268853 2962 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 02:31:59.499081 update_engine[1550]: I20260120 02:31:59.446213 1550 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:31:59.499081 update_engine[1550]: E20260120 02:31:59.482756 1550 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:31:59.499081 update_engine[1550]: I20260120 02:31:59.482977 1550 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:31:59.499081 update_engine[1550]: I20260120 02:31:59.482996 1550 omaha_request_action.cc:617] Omaha request response: Jan 20 02:31:59.499081 update_engine[1550]: I20260120 02:31:59.483068 1550 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:31:59.499081 update_engine[1550]: I20260120 02:31:59.483084 1550 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:31:59.499081 update_engine[1550]: I20260120 02:31:59.483096 1550 update_attempter.cc:306] Processing Done. Jan 20 02:31:59.499081 update_engine[1550]: I20260120 02:31:59.483109 1550 update_attempter.cc:310] Error event sent.
Jan 20 02:31:59.499081 update_engine[1550]: I20260120 02:31:59.483122 1550 update_check_scheduler.cc:74] Next update check in 46m13s Jan 20 02:31:59.523696 kubelet[2962]: I0120 02:31:59.523646 2962 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 02:31:59.524333 kubelet[2962]: E0120 02:31:59.524295 2962 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:31:59.538665 kubelet[2962]: I0120 02:31:59.538623 2962 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 02:31:59.673291 kubelet[2962]: I0120 02:31:59.661766 2962 reconciler.go:26] "Reconciler: start to sync state" Jan 20 02:31:59.696849 kubelet[2962]: E0120 02:31:59.687721 2962 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:31:59.707779 locksmithd[1589]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 20 02:31:59.721703 locksmithd[1589]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 20 02:31:59.793194 kubelet[2962]: E0120 02:31:59.793144 2962 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 02:31:59.843084 kubelet[2962]: I0120 02:31:59.833127 2962 factory.go:221] Registration of the systemd container factory successfully Jan 20 02:31:59.843084 kubelet[2962]: I0120 02:31:59.833632 2962 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 02:32:00.554198 kubelet[2962]: I0120 02:32:00.554157 2962 factory.go:221] Registration of the containerd container factory successfully Jan 20 02:32:00.740206 kubelet[2962]: I0120 02:32:00.723711 2962 apiserver.go:52] "Watching apiserver" Jan 20 02:32:00.829618 kubelet[2962]: I0120 02:32:00.827847 2962 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 02:32:00.977513 kubelet[2962]: I0120 02:32:00.977315 2962 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 02:32:00.980342 kubelet[2962]: I0120 02:32:00.980310 2962 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 02:32:00.981599 kubelet[2962]: I0120 02:32:00.981568 2962 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 02:32:00.981713 kubelet[2962]: I0120 02:32:00.981698 2962 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 02:32:00.984509 kubelet[2962]: E0120 02:32:00.984328 2962 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 02:32:01.085853 kubelet[2962]: E0120 02:32:01.084852 2962 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:32:01.301940 kubelet[2962]: E0120 02:32:01.290173 2962 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:32:01.712345 kubelet[2962]: E0120 02:32:01.694789 2962 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:32:02.286054 kubelet[2962]: I0120 02:32:02.283784 2962 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 02:32:02.286054 kubelet[2962]: I0120 02:32:02.283933 2962 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 02:32:02.286054 kubelet[2962]: I0120 02:32:02.283978 2962 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:32:02.286054 kubelet[2962]: I0120 02:32:02.284243 2962 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 02:32:02.286054 kubelet[2962]: I0120 02:32:02.284282 2962 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 02:32:02.286054 kubelet[2962]: I0120 02:32:02.284317 2962 policy_none.go:49] "None policy: Start" Jan 20 02:32:02.286054 kubelet[2962]: I0120 02:32:02.284335 2962 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 02:32:02.286054 kubelet[2962]: I0120 02:32:02.284355 2962 state_mem.go:35] "Initializing new in-memory state store" Jan 20 02:32:02.365257 kubelet[2962]: I0120 02:32:02.292910 2962 state_mem.go:75] "Updated machine memory state" Jan 20 02:32:02.396618 sudo[2997]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 20 02:32:02.426689 sudo[2997]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 20 02:32:02.505234 kubelet[2962]: E0120 02:32:02.505120 2962 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:32:02.611947 kubelet[2962]: I0120 02:32:02.544644 2962 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 02:32:02.730236 kubelet[2962]: I0120 02:32:02.729353 2962 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 02:32:02.770798 kubelet[2962]: I0120 02:32:02.769740 2962 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 02:32:02.783750 kubelet[2962]: I0120 02:32:02.771122 2962 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 02:32:02.957086 kubelet[2962]: I0120 02:32:02.946708 2962 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 02:32:02.971247 containerd[1582]: time="2026-01-20T02:32:02.969480960Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 20 02:32:03.073133 kubelet[2962]: I0120 02:32:03.072558 2962 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 02:32:03.199013 kubelet[2962]: E0120 02:32:03.149804 2962 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 02:32:03.488285 kubelet[2962]: I0120 02:32:03.482970 2962 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:32:03.916508 kubelet[2962]: I0120 02:32:03.916125 2962 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 02:32:03.952128 kubelet[2962]: I0120 02:32:03.927630 2962 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 02:32:04.122579 kubelet[2962]: I0120 02:32:04.122531 2962 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 02:32:04.488609 kubelet[2962]: I0120 02:32:04.468912 2962 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 02:32:04.900485 kubelet[2962]: I0120 02:32:04.889340 2962 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 02:32:04.975586 kubelet[2962]: I0120 02:32:04.975122 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5915019a198048ec3eea369ccf32d44a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5915019a198048ec3eea369ccf32d44a\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:32:04.975586 kubelet[2962]: I0120 02:32:04.975184 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:32:04.975586 kubelet[2962]: I0120 02:32:04.975221 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:32:04.975586 kubelet[2962]: I0120 02:32:04.975246 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:32:04.975586 kubelet[2962]: I0120 02:32:04.975268 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 02:32:05.090334 kubelet[2962]: I0120 02:32:04.975286 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5915019a198048ec3eea369ccf32d44a-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"5915019a198048ec3eea369ccf32d44a\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:32:05.090334 kubelet[2962]: I0120 02:32:04.975305 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5915019a198048ec3eea369ccf32d44a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5915019a198048ec3eea369ccf32d44a\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:32:05.090334 kubelet[2962]: I0120 02:32:04.975325 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:32:05.090334 kubelet[2962]: I0120 02:32:04.975345 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:32:05.210559 kubelet[2962]: E0120 02:32:05.210291 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:06.045940 kubelet[2962]: E0120 02:32:05.964947 2962 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 20 02:32:06.045940 kubelet[2962]: E0120 02:32:05.978313 2962 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 02:32:06.045940 kubelet[2962]: E0120 02:32:05.985004 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:06.099103 kubelet[2962]: E0120 02:32:06.050236 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:06.133099 kubelet[2962]: E0120 02:32:06.122325 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:06.167277 kubelet[2962]: E0120 02:32:06.155163 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:06.257174 kubelet[2962]: I0120 02:32:06.253038 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a83d5d5-1926-4c47-81e6-492ee8ec8802-xtables-lock\") pod \"kube-proxy-qk7wq\" (UID: \"5a83d5d5-1926-4c47-81e6-492ee8ec8802\") " pod="kube-system/kube-proxy-qk7wq" Jan 20 02:32:06.257174 kubelet[2962]: I0120 02:32:06.253159 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zh66\" 
(UniqueName: \"kubernetes.io/projected/5a83d5d5-1926-4c47-81e6-492ee8ec8802-kube-api-access-7zh66\") pod \"kube-proxy-qk7wq\" (UID: \"5a83d5d5-1926-4c47-81e6-492ee8ec8802\") " pod="kube-system/kube-proxy-qk7wq" Jan 20 02:32:06.257174 kubelet[2962]: I0120 02:32:06.253195 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5a83d5d5-1926-4c47-81e6-492ee8ec8802-kube-proxy\") pod \"kube-proxy-qk7wq\" (UID: \"5a83d5d5-1926-4c47-81e6-492ee8ec8802\") " pod="kube-system/kube-proxy-qk7wq" Jan 20 02:32:06.257174 kubelet[2962]: I0120 02:32:06.253228 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a83d5d5-1926-4c47-81e6-492ee8ec8802-lib-modules\") pod \"kube-proxy-qk7wq\" (UID: \"5a83d5d5-1926-4c47-81e6-492ee8ec8802\") " pod="kube-system/kube-proxy-qk7wq" Jan 20 02:32:06.571280 systemd[1]: Created slice kubepods-besteffort-pod5a83d5d5_1926_4c47_81e6_492ee8ec8802.slice - libcontainer container kubepods-besteffort-pod5a83d5d5_1926_4c47_81e6_492ee8ec8802.slice. Jan 20 02:32:07.047835 kubelet[2962]: E0120 02:32:07.040631 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:07.084853 containerd[1582]: time="2026-01-20T02:32:07.084357267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qk7wq,Uid:5a83d5d5-1926-4c47-81e6-492ee8ec8802,Namespace:kube-system,Attempt:0,}" Jan 20 02:32:07.223100 kubelet[2962]: E0120 02:32:07.221553 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:07.223100 kubelet[2962]: E0120 02:32:07.223017 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:07.229731 kubelet[2962]: E0120 02:32:07.229601 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:08.275031 kubelet[2962]: E0120 02:32:08.265461 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:08.658117 containerd[1582]: time="2026-01-20T02:32:08.655131728Z" level=info msg="connecting to shim a5c28bd0650c94151b26960cfbfbe368568f0c1cb964bd3a84b239ef18d9b18e" address="unix:///run/containerd/s/b4a5319c76c80a97211de596f9b3013da360af03f341c556b55639e514f48c23" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:32:09.986224 systemd[1]: Started cri-containerd-a5c28bd0650c94151b26960cfbfbe368568f0c1cb964bd3a84b239ef18d9b18e.scope - libcontainer container a5c28bd0650c94151b26960cfbfbe368568f0c1cb964bd3a84b239ef18d9b18e. 
Jan 20 02:32:10.548899 kubelet[2962]: E0120 02:32:10.499958 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:11.329806 kubelet[2962]: E0120 02:32:11.313159 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:11.798631 containerd[1582]: time="2026-01-20T02:32:11.798575157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qk7wq,Uid:5a83d5d5-1926-4c47-81e6-492ee8ec8802,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5c28bd0650c94151b26960cfbfbe368568f0c1cb964bd3a84b239ef18d9b18e\"" Jan 20 02:32:11.877180 kubelet[2962]: E0120 02:32:11.873998 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:11.879000 sudo[2997]: pam_unix(sudo:session): session closed for user root Jan 20 02:32:12.033951 containerd[1582]: time="2026-01-20T02:32:12.031250518Z" level=info msg="CreateContainer within sandbox \"a5c28bd0650c94151b26960cfbfbe368568f0c1cb964bd3a84b239ef18d9b18e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 02:32:12.683830 kubelet[2962]: E0120 02:32:12.678266 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:12.710925 containerd[1582]: time="2026-01-20T02:32:12.705932955Z" level=info msg="Container 743f08211d202c02fd7532f7592347162028315c5cbc03bc57edb25de528cf8b: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:32:12.722777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1890391776.mount: Deactivated successfully. Jan 20 02:32:12.903280 containerd[1582]: time="2026-01-20T02:32:12.902152140Z" level=info msg="CreateContainer within sandbox \"a5c28bd0650c94151b26960cfbfbe368568f0c1cb964bd3a84b239ef18d9b18e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"743f08211d202c02fd7532f7592347162028315c5cbc03bc57edb25de528cf8b\"" Jan 20 02:32:12.908467 containerd[1582]: time="2026-01-20T02:32:12.905794185Z" level=info msg="StartContainer for \"743f08211d202c02fd7532f7592347162028315c5cbc03bc57edb25de528cf8b\"" Jan 20 02:32:12.939612 containerd[1582]: time="2026-01-20T02:32:12.935599467Z" level=info msg="connecting to shim 743f08211d202c02fd7532f7592347162028315c5cbc03bc57edb25de528cf8b" address="unix:///run/containerd/s/b4a5319c76c80a97211de596f9b3013da360af03f341c556b55639e514f48c23" protocol=ttrpc version=3 Jan 20 02:32:13.578063 systemd[1]: Started cri-containerd-743f08211d202c02fd7532f7592347162028315c5cbc03bc57edb25de528cf8b.scope - libcontainer container 743f08211d202c02fd7532f7592347162028315c5cbc03bc57edb25de528cf8b. 
Jan 20 02:32:13.809237 kubelet[2962]: E0120 02:32:13.806223 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:15.437069 containerd[1582]: time="2026-01-20T02:32:15.436091081Z" level=info msg="StartContainer for \"743f08211d202c02fd7532f7592347162028315c5cbc03bc57edb25de528cf8b\" returns successfully" Jan 20 02:32:15.622015 kubelet[2962]: E0120 02:32:15.618214 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:15.946732 kubelet[2962]: E0120 02:32:15.946324 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:15.954827 kubelet[2962]: E0120 02:32:15.953250 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:16.626345 kubelet[2962]: I0120 02:32:16.621941 2962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qk7wq" podStartSLOduration=14.621912508 podStartE2EDuration="14.621912508s" podCreationTimestamp="2026-01-20 02:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:32:16.612874636 +0000 UTC m=+20.339652288" watchObservedRunningTime="2026-01-20 02:32:16.621912508 +0000 UTC m=+20.348690160" Jan 20 02:32:16.971126 kubelet[2962]: E0120 02:32:16.970124 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:18.804188 kubelet[2962]: I0120 02:32:18.788216 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b072ca2a-6aa0-458a-88fb-0b9971bf97b9-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wrz6w\" (UID: \"b072ca2a-6aa0-458a-88fb-0b9971bf97b9\") " pod="kube-system/cilium-operator-6c4d7847fc-wrz6w" Jan 20 02:32:18.804188 kubelet[2962]: I0120 02:32:18.788323 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkm2g\" (UniqueName: \"kubernetes.io/projected/b072ca2a-6aa0-458a-88fb-0b9971bf97b9-kube-api-access-vkm2g\") pod \"cilium-operator-6c4d7847fc-wrz6w\" (UID: \"b072ca2a-6aa0-458a-88fb-0b9971bf97b9\") " pod="kube-system/cilium-operator-6c4d7847fc-wrz6w" Jan 20 02:32:18.892194 systemd[1]: Created slice kubepods-besteffort-podb072ca2a_6aa0_458a_88fb_0b9971bf97b9.slice - libcontainer container kubepods-besteffort-podb072ca2a_6aa0_458a_88fb_0b9971bf97b9.slice. 
Jan 20 02:32:19.573460 kubelet[2962]: I0120 02:32:19.550971 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-cilium-run\") pod \"cilium-f54ms\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " pod="kube-system/cilium-f54ms"
Jan 20 02:32:19.573460 kubelet[2962]: I0120 02:32:19.551079 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-xtables-lock\") pod \"cilium-f54ms\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " pod="kube-system/cilium-f54ms"
Jan 20 02:32:19.573460 kubelet[2962]: I0120 02:32:19.551113 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-bpf-maps\") pod \"cilium-f54ms\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " pod="kube-system/cilium-f54ms"
Jan 20 02:32:19.573460 kubelet[2962]: I0120 02:32:19.551138 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-cilium-cgroup\") pod \"cilium-f54ms\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " pod="kube-system/cilium-f54ms"
Jan 20 02:32:19.573460 kubelet[2962]: I0120 02:32:19.551163 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-host-proc-sys-net\") pod \"cilium-f54ms\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " pod="kube-system/cilium-f54ms"
Jan 20 02:32:19.573460 kubelet[2962]: I0120 02:32:19.551186 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-host-proc-sys-kernel\") pod \"cilium-f54ms\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " pod="kube-system/cilium-f54ms"
Jan 20 02:32:19.573943 kubelet[2962]: I0120 02:32:19.551211 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnq6c\" (UniqueName: \"kubernetes.io/projected/edb7e671-ac61-412f-b840-58c11df66d8f-kube-api-access-dnq6c\") pod \"cilium-f54ms\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " pod="kube-system/cilium-f54ms"
Jan 20 02:32:19.573943 kubelet[2962]: I0120 02:32:19.551278 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edb7e671-ac61-412f-b840-58c11df66d8f-hubble-tls\") pod \"cilium-f54ms\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " pod="kube-system/cilium-f54ms"
Jan 20 02:32:19.573943 kubelet[2962]: I0120 02:32:19.551313 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-hostproc\") pod \"cilium-f54ms\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " pod="kube-system/cilium-f54ms"
Jan 20 02:32:19.573943 kubelet[2962]: I0120 02:32:19.551339 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-etc-cni-netd\") pod \"cilium-f54ms\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " pod="kube-system/cilium-f54ms"
Jan 20 02:32:19.573943 kubelet[2962]: I0120 02:32:19.551458 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edb7e671-ac61-412f-b840-58c11df66d8f-clustermesh-secrets\") pod \"cilium-f54ms\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " pod="kube-system/cilium-f54ms"
Jan 20 02:32:19.607457 kubelet[2962]: I0120 02:32:19.551494 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-cni-path\") pod \"cilium-f54ms\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " pod="kube-system/cilium-f54ms"
Jan 20 02:32:19.607457 kubelet[2962]: I0120 02:32:19.599928 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edb7e671-ac61-412f-b840-58c11df66d8f-cilium-config-path\") pod \"cilium-f54ms\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " pod="kube-system/cilium-f54ms"
Jan 20 02:32:19.607457 kubelet[2962]: I0120 02:32:19.599980 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-lib-modules\") pod \"cilium-f54ms\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " pod="kube-system/cilium-f54ms"
Jan 20 02:32:19.805980 systemd[1]: Created slice kubepods-burstable-podedb7e671_ac61_412f_b840_58c11df66d8f.slice - libcontainer container kubepods-burstable-podedb7e671_ac61_412f_b840_58c11df66d8f.slice.
Jan 20 02:32:20.003333 kubelet[2962]: E0120 02:32:19.997073 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:20.052003 containerd[1582]: time="2026-01-20T02:32:20.051945454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wrz6w,Uid:b072ca2a-6aa0-458a-88fb-0b9971bf97b9,Namespace:kube-system,Attempt:0,}"
Jan 20 02:32:20.605015 kubelet[2962]: E0120 02:32:20.604965 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:20.628778 containerd[1582]: time="2026-01-20T02:32:20.625081771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f54ms,Uid:edb7e671-ac61-412f-b840-58c11df66d8f,Namespace:kube-system,Attempt:0,}"
Jan 20 02:32:20.634782 containerd[1582]: time="2026-01-20T02:32:20.630095026Z" level=info msg="connecting to shim 10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3" address="unix:///run/containerd/s/93bf0fb31df29d22ff4aa43116e9bc2f6ddfc35876e8b2f6feb6dc3f8250e7d1" namespace=k8s.io protocol=ttrpc version=3
Jan 20 02:32:21.397593 containerd[1582]: time="2026-01-20T02:32:21.343910799Z" level=info msg="connecting to shim a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26" address="unix:///run/containerd/s/56f86f5eab3e1a379278d4a66df36a58b80058f498217db38f94897f9a3f813b" namespace=k8s.io protocol=ttrpc version=3
Jan 20 02:32:22.022103 systemd[1]: Started cri-containerd-10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3.scope - libcontainer container 10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3.
Jan 20 02:32:22.200582 systemd[1]: Started cri-containerd-a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26.scope - libcontainer container a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26.
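The reconciler entries at 02:32:19 above enumerate every mount the cilium-f54ms pod gets: host paths for BPF maps, cgroups, CNI state, and the kernel/proc trees, plus the projected service-account token, hubble-tls, and the clustermesh secret. Pulling those volume/pod pairs out of a journal slice mechanically is a one-regex job; a sketch against the quoting style used in these entries (the pattern is illustrative, not a general journald parser):

    # Sketch: extract (volume, pod) pairs from the reconciler_common.go entries above.
    import re

    ATTACH_RE = re.compile(r'started for volume \\"([^"\\]+)\\".*?pod="([^"]+)"')

    def attached_volumes(journal_lines):
        for line in journal_lines:
            m = ATTACH_RE.search(line)
            if m:
                yield m.group(1), m.group(2)

    sample = ('... started for volume \\"bpf-maps\\" (UniqueName: ...) '
              'pod="kube-system/cilium-f54ms"')
    print(list(attached_volumes([sample])))  # [('bpf-maps', 'kube-system/cilium-f54ms')]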
Jan 20 02:32:23.082951 containerd[1582]: time="2026-01-20T02:32:23.064321544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f54ms,Uid:edb7e671-ac61-412f-b840-58c11df66d8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\""
Jan 20 02:32:23.122041 kubelet[2962]: E0120 02:32:23.121996 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:23.254969 containerd[1582]: time="2026-01-20T02:32:23.246785050Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 20 02:32:23.445205 containerd[1582]: time="2026-01-20T02:32:23.444184334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wrz6w,Uid:b072ca2a-6aa0-458a-88fb-0b9971bf97b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\""
Jan 20 02:32:23.449021 kubelet[2962]: E0120 02:32:23.448982 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:42.902682 kernel: sched: DL replenish lagged too much
Jan 20 02:32:43.513287 systemd[1]: cri-containerd-36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457.scope: Deactivated successfully.
Jan 20 02:32:43.581978 systemd[1]: cri-containerd-36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457.scope: Consumed 7.392s CPU time, 19.8M memory peak.
Jan 20 02:32:43.763553 systemd[1]: cri-containerd-b3ee7466acc31409e4cc6cb3400486bb8fe7288812a08c9b5e0f57bbe96435b0.scope: Deactivated successfully.
Jan 20 02:32:43.783805 systemd[1]: cri-containerd-b3ee7466acc31409e4cc6cb3400486bb8fe7288812a08c9b5e0f57bbe96435b0.scope: Consumed 10.950s CPU time, 47M memory peak.
Jan 20 02:32:44.417641 containerd[1582]: time="2026-01-20T02:32:44.417526410Z" level=info msg="received container exit event container_id:\"36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457\" id:\"36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457\" pid:2814 exit_status:1 exited_at:{seconds:1768876363 nanos:565288557}"
Jan 20 02:32:44.758344 kubelet[2962]: E0120 02:32:44.749958 2962 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="19.547s"
Jan 20 02:32:44.817142 kubelet[2962]: E0120 02:32:44.801576 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:45.245314 containerd[1582]: time="2026-01-20T02:32:45.216177141Z" level=info msg="received container exit event container_id:\"b3ee7466acc31409e4cc6cb3400486bb8fe7288812a08c9b5e0f57bbe96435b0\" id:\"b3ee7466acc31409e4cc6cb3400486bb8fe7288812a08c9b5e0f57bbe96435b0\" pid:2806 exit_status:1 exited_at:{seconds:1768876365 nanos:204596310}"
Jan 20 02:32:47.313695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457-rootfs.mount: Deactivated successfully.
Jan 20 02:32:47.727616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3ee7466acc31409e4cc6cb3400486bb8fe7288812a08c9b5e0f57bbe96435b0-rootfs.mount: Deactivated successfully.
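The exit events above carry exited_at as a raw Unix timestamp; converting it is a handy sanity check when correlating containerd events with journal time. Here 1768876363 lands at 02:32:43Z, exactly when systemd deactivated the matching scope:

    # Sketch: render the exit event's exited_at {seconds:...} as UTC wall time.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1768876363, tz=timezone.utc))
    # 2026-01-20 02:32:43+00:00 -- matches the scope deactivation logged above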
Jan 20 02:32:47.866766 kubelet[2962]: I0120 02:32:47.859343 2962 scope.go:117] "RemoveContainer" containerID="36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457"
Jan 20 02:32:47.866766 kubelet[2962]: E0120 02:32:47.859628 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:48.505677 containerd[1582]: time="2026-01-20T02:32:48.434890855Z" level=info msg="CreateContainer within sandbox \"21ff4eff6911f16655a3fa80f311d972c83b75a0c2643ac02e58ce9276a9a0b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 20 02:32:49.765702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2221592988.mount: Deactivated successfully.
Jan 20 02:32:49.783188 kubelet[2962]: I0120 02:32:49.773546 2962 scope.go:117] "RemoveContainer" containerID="b3ee7466acc31409e4cc6cb3400486bb8fe7288812a08c9b5e0f57bbe96435b0"
Jan 20 02:32:49.783188 kubelet[2962]: E0120 02:32:49.777550 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:49.892081 containerd[1582]: time="2026-01-20T02:32:49.878227024Z" level=info msg="CreateContainer within sandbox \"26c7912681f5e24bc62bdea7724fb16ee3b7e9dab2194782aa96528ea7432dc3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 20 02:32:49.995434 containerd[1582]: time="2026-01-20T02:32:49.987934944Z" level=info msg="Container edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:32:50.128001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3715718486.mount: Deactivated successfully.
Jan 20 02:32:50.428344 containerd[1582]: time="2026-01-20T02:32:50.411486035Z" level=info msg="CreateContainer within sandbox \"21ff4eff6911f16655a3fa80f311d972c83b75a0c2643ac02e58ce9276a9a0b1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53\""
Jan 20 02:32:50.500501 containerd[1582]: time="2026-01-20T02:32:50.472325995Z" level=info msg="StartContainer for \"edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53\""
Jan 20 02:32:50.631716 containerd[1582]: time="2026-01-20T02:32:50.618253843Z" level=info msg="connecting to shim edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53" address="unix:///run/containerd/s/d4cd72ef33465427891f2c4e6222467b366be89a169ed5c7c3c5f56089797b10" protocol=ttrpc version=3
Jan 20 02:32:50.775005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3640438260.mount: Deactivated successfully.
Jan 20 02:32:51.012224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2156606125.mount: Deactivated successfully.
Jan 20 02:32:51.055909 containerd[1582]: time="2026-01-20T02:32:51.054228975Z" level=info msg="Container b6251eaacc76c5fef651af58722641cf320439e001b6f2551782e45a20b40766: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:32:51.310962 systemd[1]: Started cri-containerd-edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53.scope - libcontainer container edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53.
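The tmpmount units that keep flashing by (var-lib-containerd-tmpmounts-containerd\x2dmountNNNN.mount) are just systemd's path escaping: "-" separates path components and a literal dash becomes \x2d. A rough sketch of the reverse mapping, good enough for the unit names in this log; systemd's full escaping has more cases, so treat it as illustrative:

    # Sketch: undo systemd mount-unit escaping for the tmpmount names in this log.
    def unescape_unit_path(unit):
        name = unit[:-len(".mount")] if unit.endswith(".mount") else unit
        parts = name.split("-")  # '-' separates path components
        decoded = [p.encode("ascii").decode("unicode_escape") for p in parts]  # '\x2d' -> '-'
        return "/" + "/".join(decoded)

    print(unescape_unit_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount1890391776.mount"))
    # /var/lib/containerd/tmpmounts/containerd-mount1890391776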
Jan 20 02:32:52.710427 containerd[1582]: time="2026-01-20T02:32:52.699775273Z" level=info msg="CreateContainer within sandbox \"26c7912681f5e24bc62bdea7724fb16ee3b7e9dab2194782aa96528ea7432dc3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b6251eaacc76c5fef651af58722641cf320439e001b6f2551782e45a20b40766\""
Jan 20 02:32:52.917739 containerd[1582]: time="2026-01-20T02:32:52.889801492Z" level=info msg="StartContainer for \"b6251eaacc76c5fef651af58722641cf320439e001b6f2551782e45a20b40766\""
Jan 20 02:32:52.965234 containerd[1582]: time="2026-01-20T02:32:52.919229381Z" level=info msg="connecting to shim b6251eaacc76c5fef651af58722641cf320439e001b6f2551782e45a20b40766" address="unix:///run/containerd/s/c9b7f07b7b484bb7f7d55c81487d29cb32d44fac5c37f30d19d18b557b203a9d" protocol=ttrpc version=3
Jan 20 02:32:54.159707 containerd[1582]: time="2026-01-20T02:32:54.159477106Z" level=error msg="get state for edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53" error="context deadline exceeded"
Jan 20 02:32:54.160636 containerd[1582]: time="2026-01-20T02:32:54.160475722Z" level=warning msg="unknown status" status=0
Jan 20 02:32:54.456059 kubelet[2962]: E0120 02:32:54.434215 2962 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.432s"
Jan 20 02:32:55.976579 systemd[1]: Started cri-containerd-b6251eaacc76c5fef651af58722641cf320439e001b6f2551782e45a20b40766.scope - libcontainer container b6251eaacc76c5fef651af58722641cf320439e001b6f2551782e45a20b40766.
Jan 20 02:32:56.029786 containerd[1582]: time="2026-01-20T02:32:56.012749343Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Jan 20 02:32:58.009582 containerd[1582]: time="2026-01-20T02:32:58.009333663Z" level=info msg="StartContainer for \"edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53\" returns successfully"
Jan 20 02:32:58.251620 containerd[1582]: time="2026-01-20T02:32:58.251470702Z" level=error msg="get state for b6251eaacc76c5fef651af58722641cf320439e001b6f2551782e45a20b40766" error="context deadline exceeded"
Jan 20 02:32:58.251620 containerd[1582]: time="2026-01-20T02:32:58.251522347Z" level=warning msg="unknown status" status=0
Jan 20 02:32:58.430077 containerd[1582]: time="2026-01-20T02:32:58.427445451Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Jan 20 02:33:00.351545 containerd[1582]: time="2026-01-20T02:33:00.350134240Z" level=info msg="StartContainer for \"b6251eaacc76c5fef651af58722641cf320439e001b6f2551782e45a20b40766\" returns successfully"
Jan 20 02:33:59.577704 kubelet[2962]: E0120 02:33:59.577568 2962 kubelet_node_status.go:460] "Node not becoming ready in time after startup"
Jan 20 02:34:02.126693 kubelet[2962]: E0120 02:34:02.126336 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:34:07.206244 kubelet[2962]: E0120 02:34:07.206184 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:34:12.497700 kubelet[2962]: E0120 02:34:12.497131 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:34:17.558226 kubelet[2962]: E0120 02:34:17.558147 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:34:19.704470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752228365.mount: Deactivated successfully.
Jan 20 02:34:22.587953 kubelet[2962]: E0120 02:34:22.587007 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:34:27.604589 kubelet[2962]: E0120 02:34:27.602302 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:34:32.616033 kubelet[2962]: E0120 02:34:32.604147 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:34:37.647892 kubelet[2962]: E0120 02:34:37.647753 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:34:42.681258 kubelet[2962]: E0120 02:34:42.674732 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:34:47.679102 kubelet[2962]: E0120 02:34:47.678942 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:34:52.690619 kubelet[2962]: E0120 02:34:52.690562 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:34:57.762164 kubelet[2962]: E0120 02:34:57.762100 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:35:02.768067 kubelet[2962]: E0120 02:35:02.768007 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:35:07.818984 kubelet[2962]: E0120 02:35:07.818901 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:35:13.163062 kubelet[2962]: E0120 02:35:13.160026 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:35:18.176536 kubelet[2962]: E0120 02:35:18.173027 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:35:20.329097 containerd[1582]: time="2026-01-20T02:35:20.328863488Z" level=warning msg="container event discarded" container=a3de901ed30117134596b1b25869d5b46bca8341e7904d8ee5038cd291b7ecd7 type=CONTAINER_CREATED_EVENT
Jan 20 02:35:20.362082 containerd[1582]: time="2026-01-20T02:35:20.361909185Z" level=warning msg="container event discarded" container=a3de901ed30117134596b1b25869d5b46bca8341e7904d8ee5038cd291b7ecd7 type=CONTAINER_STARTED_EVENT
Jan 20 02:35:21.215563 containerd[1582]: time="2026-01-20T02:35:21.210548567Z" level=warning msg="container event discarded" container=26c7912681f5e24bc62bdea7724fb16ee3b7e9dab2194782aa96528ea7432dc3 type=CONTAINER_CREATED_EVENT
Jan 20 02:35:21.215563 containerd[1582]: time="2026-01-20T02:35:21.210617710Z" level=warning msg="container event discarded" container=26c7912681f5e24bc62bdea7724fb16ee3b7e9dab2194782aa96528ea7432dc3 type=CONTAINER_STARTED_EVENT
Jan 20 02:35:21.290789 containerd[1582]: time="2026-01-20T02:35:21.290599510Z" level=warning msg="container event discarded" container=21ff4eff6911f16655a3fa80f311d972c83b75a0c2643ac02e58ce9276a9a0b1 type=CONTAINER_CREATED_EVENT
Jan 20 02:35:21.290789 containerd[1582]: time="2026-01-20T02:35:21.290731882Z" level=warning msg="container event discarded" container=21ff4eff6911f16655a3fa80f311d972c83b75a0c2643ac02e58ce9276a9a0b1 type=CONTAINER_STARTED_EVENT
Jan 20 02:35:21.290789 containerd[1582]: time="2026-01-20T02:35:21.290749165Z" level=warning msg="container event discarded" container=af156bc1d1970554372dfca6d09f0bf944e80e6414e83a3b5fa1f0e3770deb29 type=CONTAINER_CREATED_EVENT
Jan 20 02:35:21.859864 containerd[1582]: time="2026-01-20T02:35:21.859329152Z" level=warning msg="container event discarded" container=b3ee7466acc31409e4cc6cb3400486bb8fe7288812a08c9b5e0f57bbe96435b0 type=CONTAINER_CREATED_EVENT
Jan 20 02:35:21.859864 containerd[1582]: time="2026-01-20T02:35:21.859499737Z" level=warning msg="container event discarded" container=36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457 type=CONTAINER_CREATED_EVENT
Jan 20 02:35:23.016541 containerd[1582]: time="2026-01-20T02:35:23.002655388Z" level=warning msg="container event discarded" container=af156bc1d1970554372dfca6d09f0bf944e80e6414e83a3b5fa1f0e3770deb29 type=CONTAINER_STARTED_EVENT
Jan 20 02:35:23.197879 kubelet[2962]: E0120 02:35:23.197772 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:35:23.390785 containerd[1582]: time="2026-01-20T02:35:23.386447807Z" level=warning msg="container event discarded" container=b3ee7466acc31409e4cc6cb3400486bb8fe7288812a08c9b5e0f57bbe96435b0 type=CONTAINER_STARTED_EVENT
Jan 20 02:35:23.821078 containerd[1582]: time="2026-01-20T02:35:23.819815809Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:35:23.844746 containerd[1582]: time="2026-01-20T02:35:23.832703710Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 20 02:35:23.855633 containerd[1582]: time="2026-01-20T02:35:23.854667286Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:35:23.884293 containerd[1582]: time="2026-01-20T02:35:23.882026505Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 3m0.635163781s"
Jan 20 02:35:23.884293 containerd[1582]: time="2026-01-20T02:35:23.882099955Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 20 02:35:23.924527 containerd[1582]: time="2026-01-20T02:35:23.924122458Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 20 02:35:23.933581 containerd[1582]: time="2026-01-20T02:35:23.932714873Z" level=info msg="CreateContainer within sandbox \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 20 02:35:24.063115 containerd[1582]: time="2026-01-20T02:35:24.036722473Z" level=warning msg="container event discarded" container=36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457 type=CONTAINER_STARTED_EVENT
Jan 20 02:35:24.235858 containerd[1582]: time="2026-01-20T02:35:24.227870396Z" level=info msg="Container 788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:35:24.386016 containerd[1582]: time="2026-01-20T02:35:24.382563303Z" level=info msg="CreateContainer within sandbox \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048\""
Jan 20 02:35:24.388573 containerd[1582]: time="2026-01-20T02:35:24.388330381Z" level=info msg="StartContainer for \"788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048\""
Jan 20 02:35:24.404623 containerd[1582]: time="2026-01-20T02:35:24.394933914Z" level=info msg="connecting to shim 788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048" address="unix:///run/containerd/s/56f86f5eab3e1a379278d4a66df36a58b80058f498217db38f94897f9a3f813b" protocol=ttrpc version=3
Jan 20 02:35:25.396336 systemd[1]: Started cri-containerd-788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048.scope - libcontainer container 788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048.
Jan 20 02:35:26.634218 containerd[1582]: time="2026-01-20T02:35:26.627574567Z" level=info msg="StartContainer for \"788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048\" returns successfully"
Jan 20 02:35:26.736810 systemd[1]: cri-containerd-788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048.scope: Deactivated successfully.
Jan 20 02:35:26.862158 containerd[1582]: time="2026-01-20T02:35:26.861929138Z" level=info msg="received container exit event container_id:\"788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048\" id:\"788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048\" pid:3505 exited_at:{seconds:1768876526 nanos:850737223}"
Jan 20 02:35:27.540304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048-rootfs.mount: Deactivated successfully.
Jan 20 02:35:28.172765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1110376374.mount: Deactivated successfully.
Jan 20 02:35:28.203115 kubelet[2962]: E0120 02:35:28.201564 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:35:28.789217 containerd[1582]: time="2026-01-20T02:35:28.786230703Z" level=info msg="CreateContainer within sandbox \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 20 02:35:28.992047 containerd[1582]: time="2026-01-20T02:35:28.991998431Z" level=info msg="Container e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:35:29.117198 containerd[1582]: time="2026-01-20T02:35:29.116043345Z" level=info msg="CreateContainer within sandbox \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7\""
Jan 20 02:35:29.130036 containerd[1582]: time="2026-01-20T02:35:29.125617448Z" level=info msg="StartContainer for \"e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7\""
Jan 20 02:35:29.130036 containerd[1582]: time="2026-01-20T02:35:29.127170534Z" level=info msg="connecting to shim e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7" address="unix:///run/containerd/s/56f86f5eab3e1a379278d4a66df36a58b80058f498217db38f94897f9a3f813b" protocol=ttrpc version=3
Jan 20 02:35:29.535874 systemd[1]: Started cri-containerd-e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7.scope - libcontainer container e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7.
Jan 20 02:35:31.559062 containerd[1582]: time="2026-01-20T02:35:31.559007959Z" level=info msg="StartContainer for \"e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7\" returns successfully"
Jan 20 02:35:31.714760 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 02:35:31.715348 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 02:35:31.748882 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 20 02:35:31.772046 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 02:35:31.852630 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 02:35:31.875074 systemd[1]: cri-containerd-e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7.scope: Deactivated successfully.
Jan 20 02:35:32.234231 containerd[1582]: time="2026-01-20T02:35:32.182100330Z" level=info msg="received container exit event container_id:\"e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7\" id:\"e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7\" pid:3559 exited_at:{seconds:1768876531 nanos:855820937}"
Jan 20 02:35:32.899125 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 02:35:33.294931 kubelet[2962]: E0120 02:35:33.294087 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:35:34.691068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7-rootfs.mount: Deactivated successfully.
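The cilium image pull that completed at 02:35:23 above is worth quantifying: 166,730,503 bytes read in 3m0.635s is under 1 MB/s, which fits the other signs of a starved node in this boot (the 19.547s housekeeping overrun, the context-deadline and inactive-stream warnings while starting containers). The arithmetic, with both figures copied from the log:

    # Sketch: effective pull rate for quay.io/cilium/cilium, from the two log entries.
    bytes_read = 166_730_503            # "bytes read" at 02:35:23.844746
    duration_s = 3 * 60 + 0.635163781   # "in 3m0.635163781s" from the Pulled entry
    print(f"{bytes_read / duration_s / 1e6:.2f} MB/s")  # ~0.92 MB/s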
Jan 20 02:35:36.379175 containerd[1582]: time="2026-01-20T02:35:36.379045921Z" level=info msg="CreateContainer within sandbox \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 20 02:35:36.702188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount793802709.mount: Deactivated successfully.
Jan 20 02:35:36.766614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719830549.mount: Deactivated successfully.
Jan 20 02:35:36.849045 containerd[1582]: time="2026-01-20T02:35:36.801357212Z" level=info msg="Container ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:35:37.314112 containerd[1582]: time="2026-01-20T02:35:37.314052523Z" level=info msg="CreateContainer within sandbox \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3\""
Jan 20 02:35:37.388875 containerd[1582]: time="2026-01-20T02:35:37.376702915Z" level=info msg="StartContainer for \"ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3\""
Jan 20 02:35:37.419600 containerd[1582]: time="2026-01-20T02:35:37.419545336Z" level=info msg="connecting to shim ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3" address="unix:///run/containerd/s/56f86f5eab3e1a379278d4a66df36a58b80058f498217db38f94897f9a3f813b" protocol=ttrpc version=3
Jan 20 02:35:38.018295 systemd[1]: Started cri-containerd-ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3.scope - libcontainer container ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3.
Jan 20 02:35:38.336771 kubelet[2962]: E0120 02:35:38.331238 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:35:39.903597 systemd[1]: cri-containerd-ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3.scope: Deactivated successfully.
Jan 20 02:35:39.932946 containerd[1582]: time="2026-01-20T02:35:39.931026698Z" level=info msg="StartContainer for \"ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3\" returns successfully"
Jan 20 02:35:39.932946 containerd[1582]: time="2026-01-20T02:35:39.931213183Z" level=info msg="received container exit event container_id:\"ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3\" id:\"ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3\" pid:3610 exited_at:{seconds:1768876539 nanos:928203957}"
Jan 20 02:35:40.278771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3-rootfs.mount: Deactivated successfully.
Jan 20 02:35:41.571550 containerd[1582]: time="2026-01-20T02:35:41.536711167Z" level=info msg="CreateContainer within sandbox \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 20 02:35:42.328417 containerd[1582]: time="2026-01-20T02:35:42.325332030Z" level=info msg="Container 42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:35:42.542663 containerd[1582]: time="2026-01-20T02:35:42.534042848Z" level=info msg="CreateContainer within sandbox \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b\""
Jan 20 02:35:42.599720 containerd[1582]: time="2026-01-20T02:35:42.587832504Z" level=info msg="StartContainer for \"42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b\""
Jan 20 02:35:42.599720 containerd[1582]: time="2026-01-20T02:35:42.589783238Z" level=info msg="connecting to shim 42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b" address="unix:///run/containerd/s/56f86f5eab3e1a379278d4a66df36a58b80058f498217db38f94897f9a3f813b" protocol=ttrpc version=3
Jan 20 02:35:43.179199 systemd[1]: Started cri-containerd-42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b.scope - libcontainer container 42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b.
Jan 20 02:35:43.480614 kubelet[2962]: E0120 02:35:43.432861 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:35:44.604939 systemd[1]: cri-containerd-42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b.scope: Deactivated successfully.
Jan 20 02:35:44.714951 containerd[1582]: time="2026-01-20T02:35:44.656904242Z" level=info msg="received container exit event container_id:\"42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b\" id:\"42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b\" pid:3652 exited_at:{seconds:1768876544 nanos:656545061}"
Jan 20 02:35:44.815786 containerd[1582]: time="2026-01-20T02:35:44.787030827Z" level=info msg="StartContainer for \"42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b\" returns successfully"
Jan 20 02:35:45.889931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b-rootfs.mount: Deactivated successfully.
Jan 20 02:35:46.794524 containerd[1582]: time="2026-01-20T02:35:46.792544498Z" level=info msg="CreateContainer within sandbox \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 20 02:35:47.658802 containerd[1582]: time="2026-01-20T02:35:47.644078708Z" level=info msg="Container bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:35:47.761688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2846365037.mount: Deactivated successfully.
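At this point four of the five cilium-f54ms containers have run to completion in strict order, each with the same CreateContainer, StartContainer, scope deactivation, rootfs unmount cycle, and the long-lived cilium-agent container is about to start. A sketch that recovers that order from the CreateContainer entries, matching the &ContainerMetadata{...} literal used in them (illustrative; feed it only the "CreateContainer ... for container" lines to avoid duplicates from the "returns container id" entries):

    # Sketch: recover the cilium-f54ms container order from CreateContainer entries.
    import re

    NAME_RE = re.compile(r"&ContainerMetadata\{Name:([^,]+),Attempt:\d+,\}")

    def container_sequence(journal_lines):
        return [m.group(1) for m in map(NAME_RE.search, journal_lines) if m]

    # Against this journal the CreateContainer entries yield, in order:
    # mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, cilium-agent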
Jan 20 02:35:47.971575 containerd[1582]: time="2026-01-20T02:35:47.937886676Z" level=info msg="CreateContainer within sandbox \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60\""
Jan 20 02:35:48.015719 containerd[1582]: time="2026-01-20T02:35:47.994797690Z" level=info msg="StartContainer for \"bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60\""
Jan 20 02:35:48.124790 containerd[1582]: time="2026-01-20T02:35:48.117651795Z" level=info msg="connecting to shim bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60" address="unix:///run/containerd/s/56f86f5eab3e1a379278d4a66df36a58b80058f498217db38f94897f9a3f813b" protocol=ttrpc version=3
Jan 20 02:35:48.937780 kubelet[2962]: E0120 02:35:48.924826 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:35:49.409030 systemd[1]: Started cri-containerd-bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60.scope - libcontainer container bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60.
Jan 20 02:35:51.251280 containerd[1582]: time="2026-01-20T02:35:51.250210128Z" level=info msg="StartContainer for \"bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60\" returns successfully"
Jan 20 02:35:54.452937 kubelet[2962]: I0120 02:35:54.452044 2962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f54ms" podStartSLOduration=35.786452626 podStartE2EDuration="3m36.452020059s" podCreationTimestamp="2026-01-20 02:32:18 +0000 UTC" firstStartedPulling="2026-01-20 02:32:23.234199343 +0000 UTC m=+26.960976995" lastFinishedPulling="2026-01-20 02:35:23.899766776 +0000 UTC m=+207.626544428" observedRunningTime="2026-01-20 02:35:54.451167024 +0000 UTC m=+238.177944696" watchObservedRunningTime="2026-01-20 02:35:54.452020059 +0000 UTC m=+238.178797711"
Jan 20 02:35:58.495881 containerd[1582]: time="2026-01-20T02:35:58.487789075Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:35:58.521524 containerd[1582]: time="2026-01-20T02:35:58.510788795Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 20 02:35:58.549449 containerd[1582]: time="2026-01-20T02:35:58.539538392Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:35:58.552061 containerd[1582]: time="2026-01-20T02:35:58.552008239Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 34.627746575s"
Jan 20 02:35:58.552244 containerd[1582]: time="2026-01-20T02:35:58.552216052Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 20 02:35:58.614877 containerd[1582]: time="2026-01-20T02:35:58.609533363Z" level=info msg="CreateContainer within sandbox \"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 20 02:35:58.852559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3529268662.mount: Deactivated successfully.
Jan 20 02:35:58.961014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1656839819.mount: Deactivated successfully.
Jan 20 02:35:59.049834 containerd[1582]: time="2026-01-20T02:35:59.049061063Z" level=info msg="Container 92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:35:59.268313 containerd[1582]: time="2026-01-20T02:35:59.266005173Z" level=info msg="CreateContainer within sandbox \"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7\""
Jan 20 02:35:59.357687 containerd[1582]: time="2026-01-20T02:35:59.354705827Z" level=info msg="StartContainer for \"92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7\""
Jan 20 02:35:59.427048 containerd[1582]: time="2026-01-20T02:35:59.378173367Z" level=info msg="connecting to shim 92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7" address="unix:///run/containerd/s/93bf0fb31df29d22ff4aa43116e9bc2f6ddfc35876e8b2f6feb6dc3f8250e7d1" protocol=ttrpc version=3
Jan 20 02:36:00.393754 systemd[1]: Started cri-containerd-92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7.scope - libcontainer container 92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7.
Jan 20 02:36:02.145980 containerd[1582]: time="2026-01-20T02:36:02.144046004Z" level=info msg="StartContainer for \"92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7\" returns successfully"
Jan 20 02:36:03.602010 kubelet[2962]: I0120 02:36:03.580975 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c425765-80c0-4c88-9b91-ddc8edfbcbe1-config-volume\") pod \"coredns-668d6bf9bc-87d86\" (UID: \"0c425765-80c0-4c88-9b91-ddc8edfbcbe1\") " pod="kube-system/coredns-668d6bf9bc-87d86"
Jan 20 02:36:03.602010 kubelet[2962]: I0120 02:36:03.581158 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm5s7\" (UniqueName: \"kubernetes.io/projected/0c425765-80c0-4c88-9b91-ddc8edfbcbe1-kube-api-access-qm5s7\") pod \"coredns-668d6bf9bc-87d86\" (UID: \"0c425765-80c0-4c88-9b91-ddc8edfbcbe1\") " pod="kube-system/coredns-668d6bf9bc-87d86"
Jan 20 02:36:03.602010 kubelet[2962]: I0120 02:36:03.581209 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e05144e9-5a86-48de-95c8-50e7065113f1-config-volume\") pod \"coredns-668d6bf9bc-4pc6f\" (UID: \"e05144e9-5a86-48de-95c8-50e7065113f1\") " pod="kube-system/coredns-668d6bf9bc-4pc6f"
Jan 20 02:36:03.602010 kubelet[2962]: I0120 02:36:03.581246 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvk8m\" (UniqueName: \"kubernetes.io/projected/e05144e9-5a86-48de-95c8-50e7065113f1-kube-api-access-mvk8m\") pod \"coredns-668d6bf9bc-4pc6f\" (UID: \"e05144e9-5a86-48de-95c8-50e7065113f1\") " pod="kube-system/coredns-668d6bf9bc-4pc6f"
Jan 20 02:36:03.615551 kubelet[2962]: I0120 02:36:03.615116 2962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wrz6w" podStartSLOduration=10.488548026 podStartE2EDuration="3m45.615086864s" podCreationTimestamp="2026-01-20 02:32:18 +0000 UTC" firstStartedPulling="2026-01-20 02:32:23.451720347 +0000 UTC m=+27.178497999" lastFinishedPulling="2026-01-20 02:35:58.578259185 +0000 UTC m=+242.305036837" observedRunningTime="2026-01-20 02:36:03.572527139 +0000 UTC m=+247.299304801" watchObservedRunningTime="2026-01-20 02:36:03.615086864 +0000 UTC m=+247.341864516"
Jan 20 02:36:04.126056 systemd[1]: Created slice kubepods-burstable-pod0c425765_80c0_4c88_9b91_ddc8edfbcbe1.slice - libcontainer container kubepods-burstable-pod0c425765_80c0_4c88_9b91_ddc8edfbcbe1.slice.
Jan 20 02:36:04.734256 containerd[1582]: time="2026-01-20T02:36:04.734057006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-87d86,Uid:0c425765-80c0-4c88-9b91-ddc8edfbcbe1,Namespace:kube-system,Attempt:0,}"
Jan 20 02:36:04.872698 systemd[1]: Created slice kubepods-burstable-pode05144e9_5a86_48de_95c8_50e7065113f1.slice - libcontainer container kubepods-burstable-pode05144e9_5a86_48de_95c8_50e7065113f1.slice.
Jan 20 02:36:05.050352 containerd[1582]: time="2026-01-20T02:36:05.050303164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4pc6f,Uid:e05144e9-5a86-48de-95c8-50e7065113f1,Namespace:kube-system,Attempt:0,}"
Jan 20 02:36:20.922776 systemd-networkd[1465]: cilium_host: Link UP
Jan 20 02:36:20.937121 systemd-networkd[1465]: cilium_net: Link UP
Jan 20 02:36:20.943826 systemd-networkd[1465]: cilium_net: Gained carrier
Jan 20 02:36:20.944171 systemd-networkd[1465]: cilium_host: Gained carrier
Jan 20 02:36:21.155508 systemd-networkd[1465]: cilium_host: Gained IPv6LL
Jan 20 02:36:21.825960 systemd-networkd[1465]: cilium_net: Gained IPv6LL
Jan 20 02:36:24.457585 systemd-networkd[1465]: cilium_vxlan: Link UP
Jan 20 02:36:24.457602 systemd-networkd[1465]: cilium_vxlan: Gained carrier
Jan 20 02:36:26.268556 kernel: NET: Registered PF_ALG protocol family
Jan 20 02:36:26.309091 systemd-networkd[1465]: cilium_vxlan: Gained IPv6LL
Jan 20 02:36:38.062184 kubelet[2962]: E0120 02:36:38.016706 2962 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.926s"
Jan 20 02:36:39.721941 kubelet[2962]: E0120 02:36:39.721891 2962 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.654s"
Jan 20 02:36:39.909004 kubelet[2962]: E0120 02:36:39.903682 2962 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43410->127.0.0.1:38351: write tcp 127.0.0.1:43410->127.0.0.1:38351: write: broken pipe
Jan 20 02:36:44.776015 systemd-networkd[1465]: lxc_health: Link UP
Jan 20 02:36:44.895651 systemd-networkd[1465]: lxc_health: Gained carrier
Jan 20 02:36:46.943355 systemd-networkd[1465]: lxc_health: Gained IPv6LL
Jan 20 02:36:47.325486 containerd[1582]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Jan 20 02:36:47.389665 systemd[1]: run-netns-cni\x2d1013f965\x2d1fc7\x2de3e6\x2d7218\x2d0306f8d7e0dd.mount: Deactivated successfully.
Jan 20 02:36:47.512269 containerd[1582]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Jan 20 02:36:47.506297 systemd[1]: run-netns-cni\x2d778d8776\x2d75f4\x2d479c\x2d8389\x2dbff56b62120f.mount: Deactivated successfully.
Jan 20 02:36:47.571784 containerd[1582]: time="2026-01-20T02:36:47.540158386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4pc6f,Uid:e05144e9-5a86-48de-95c8-50e7065113f1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"99f8f8d97bbbb1a79e638e03503cf6631d0ffb9a0f76c5c53f31fada56177098\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?"
Jan 20 02:36:47.572298 containerd[1582]: time="2026-01-20T02:36:47.540543531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-87d86,Uid:0c425765-80c0-4c88-9b91-ddc8edfbcbe1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"731e093dba20831b5f2576989e1a373c9ef5723a9cdc04cd251c03c7d3604216\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?"
Jan 20 02:36:47.583683 kubelet[2962]: E0120 02:36:47.572651 2962 log.go:32] "RunPodSandbox from runtime service failed" err=<
Jan 20 02:36:47.583683 kubelet[2962]: rpc error: code = Unknown desc = failed to setup network for sandbox "99f8f8d97bbbb1a79e638e03503cf6631d0ffb9a0f76c5c53f31fada56177098": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Jan 20 02:36:47.583683 kubelet[2962]: Is the agent running?
Jan 20 02:36:47.583683 kubelet[2962]: >
Jan 20 02:36:47.583683 kubelet[2962]: E0120 02:36:47.572740 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Jan 20 02:36:47.583683 kubelet[2962]: rpc error: code = Unknown desc = failed to setup network for sandbox "99f8f8d97bbbb1a79e638e03503cf6631d0ffb9a0f76c5c53f31fada56177098": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Jan 20 02:36:47.583683 kubelet[2962]: Is the agent running?
Jan 20 02:36:47.583683 kubelet[2962]: > pod="kube-system/coredns-668d6bf9bc-4pc6f"
Jan 20 02:36:47.583683 kubelet[2962]: E0120 02:36:47.572767 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err=<
Jan 20 02:36:47.583683 kubelet[2962]: rpc error: code = Unknown desc = failed to setup network for sandbox "99f8f8d97bbbb1a79e638e03503cf6631d0ffb9a0f76c5c53f31fada56177098": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Jan 20 02:36:47.583683 kubelet[2962]: Is the agent running?
Jan 20 02:36:47.583683 kubelet[2962]: > pod="kube-system/coredns-668d6bf9bc-4pc6f"
Jan 20 02:36:47.585681 kubelet[2962]: E0120 02:36:47.572825 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4pc6f_kube-system(e05144e9-5a86-48de-95c8-50e7065113f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4pc6f_kube-system(e05144e9-5a86-48de-95c8-50e7065113f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99f8f8d97bbbb1a79e638e03503cf6631d0ffb9a0f76c5c53f31fada56177098\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="kube-system/coredns-668d6bf9bc-4pc6f" podUID="e05144e9-5a86-48de-95c8-50e7065113f1"
Jan 20 02:36:47.585681 kubelet[2962]: E0120 02:36:47.576207 2962 log.go:32] "RunPodSandbox from runtime service failed" err=<
Jan 20 02:36:47.585681 kubelet[2962]: rpc error: code = Unknown desc = failed to setup network for sandbox "731e093dba20831b5f2576989e1a373c9ef5723a9cdc04cd251c03c7d3604216": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Jan 20 02:36:47.585681 kubelet[2962]: Is the agent running?
Jan 20 02:36:47.585681 kubelet[2962]: >
Jan 20 02:36:47.585681 kubelet[2962]: E0120 02:36:47.576260 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Jan 20 02:36:47.585681 kubelet[2962]: rpc error: code = Unknown desc = failed to setup network for sandbox "731e093dba20831b5f2576989e1a373c9ef5723a9cdc04cd251c03c7d3604216": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Jan 20 02:36:47.585681 kubelet[2962]: Is the agent running?
Jan 20 02:36:47.585681 kubelet[2962]: > pod="kube-system/coredns-668d6bf9bc-87d86"
Jan 20 02:36:47.590077 kubelet[2962]: E0120 02:36:47.576287 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err=<
Jan 20 02:36:47.590077 kubelet[2962]: rpc error: code = Unknown desc = failed to setup network for sandbox "731e093dba20831b5f2576989e1a373c9ef5723a9cdc04cd251c03c7d3604216": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Jan 20 02:36:47.590077 kubelet[2962]: Is the agent running?
Jan 20 02:36:47.590077 kubelet[2962]: > pod="kube-system/coredns-668d6bf9bc-87d86"
Jan 20 02:36:47.590077 kubelet[2962]: E0120 02:36:47.576331 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-87d86_kube-system(0c425765-80c0-4c88-9b91-ddc8edfbcbe1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-87d86_kube-system(0c425765-80c0-4c88-9b91-ddc8edfbcbe1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"731e093dba20831b5f2576989e1a373c9ef5723a9cdc04cd251c03c7d3604216\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="kube-system/coredns-668d6bf9bc-87d86" podUID="0c425765-80c0-4c88-9b91-ddc8edfbcbe1"
Jan 20 02:36:53.371991 kubelet[2962]: I0120 02:36:53.371807 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvk8m\" (UniqueName: \"kubernetes.io/projected/e05144e9-5a86-48de-95c8-50e7065113f1-kube-api-access-mvk8m\") pod \"e05144e9-5a86-48de-95c8-50e7065113f1\" (UID: \"e05144e9-5a86-48de-95c8-50e7065113f1\") "
Jan 20 02:36:53.371991 kubelet[2962]: I0120 02:36:53.371969 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e05144e9-5a86-48de-95c8-50e7065113f1-config-volume\") pod \"e05144e9-5a86-48de-95c8-50e7065113f1\" (UID: \"e05144e9-5a86-48de-95c8-50e7065113f1\") "
Jan 20 02:36:53.407823 kubelet[2962]: I0120 02:36:53.407502 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e05144e9-5a86-48de-95c8-50e7065113f1-config-volume" (OuterVolumeSpecName: "config-volume") pod "e05144e9-5a86-48de-95c8-50e7065113f1" (UID: "e05144e9-5a86-48de-95c8-50e7065113f1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 02:36:53.504022 systemd[1]: var-lib-kubelet-pods-e05144e9\x2d5a86\x2d48de\x2d95c8\x2d50e7065113f1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmvk8m.mount: Deactivated successfully.
Jan 20 02:36:53.530483 kubelet[2962]: I0120 02:36:53.527810 2962 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e05144e9-5a86-48de-95c8-50e7065113f1-config-volume\") on node \"localhost\" DevicePath \"\""
Jan 20 02:36:53.530823 kubelet[2962]: I0120 02:36:53.530778 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e05144e9-5a86-48de-95c8-50e7065113f1-kube-api-access-mvk8m" (OuterVolumeSpecName: "kube-api-access-mvk8m") pod "e05144e9-5a86-48de-95c8-50e7065113f1" (UID: "e05144e9-5a86-48de-95c8-50e7065113f1"). InnerVolumeSpecName "kube-api-access-mvk8m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 02:36:53.667856 kubelet[2962]: I0120 02:36:53.662187 2962 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mvk8m\" (UniqueName: \"kubernetes.io/projected/e05144e9-5a86-48de-95c8-50e7065113f1-kube-api-access-mvk8m\") on node \"localhost\" DevicePath \"\""
Jan 20 02:36:53.810513 kubelet[2962]: I0120 02:36:53.764562 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c425765-80c0-4c88-9b91-ddc8edfbcbe1-config-volume\") pod \"0c425765-80c0-4c88-9b91-ddc8edfbcbe1\" (UID: \"0c425765-80c0-4c88-9b91-ddc8edfbcbe1\") "
Jan 20 02:36:53.810513 kubelet[2962]: I0120 02:36:53.773338 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm5s7\" (UniqueName: \"kubernetes.io/projected/0c425765-80c0-4c88-9b91-ddc8edfbcbe1-kube-api-access-qm5s7\") pod \"0c425765-80c0-4c88-9b91-ddc8edfbcbe1\" (UID: \"0c425765-80c0-4c88-9b91-ddc8edfbcbe1\") "
Jan 20 02:36:53.810513 kubelet[2962]: I0120 02:36:53.773531 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8ttt\" (UniqueName: \"kubernetes.io/projected/fb86ee63-29f5-4877-9cb3-729ab02899ee-kube-api-access-n8ttt\") pod \"coredns-668d6bf9bc-lrwrw\" (UID: \"fb86ee63-29f5-4877-9cb3-729ab02899ee\") " pod="kube-system/coredns-668d6bf9bc-lrwrw"
Jan 20 02:36:53.810513 kubelet[2962]: I0120 02:36:53.773581 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb86ee63-29f5-4877-9cb3-729ab02899ee-config-volume\") pod \"coredns-668d6bf9bc-lrwrw\" (UID: \"fb86ee63-29f5-4877-9cb3-729ab02899ee\") " pod="kube-system/coredns-668d6bf9bc-lrwrw"
Jan 20 02:36:53.810513 kubelet[2962]: I0120 02:36:53.774250 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c425765-80c0-4c88-9b91-ddc8edfbcbe1-config-volume" (OuterVolumeSpecName: "config-volume") pod "0c425765-80c0-4c88-9b91-ddc8edfbcbe1" (UID: "0c425765-80c0-4c88-9b91-ddc8edfbcbe1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 02:36:53.913483 kubelet[2962]: I0120 02:36:53.900868 2962 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c425765-80c0-4c88-9b91-ddc8edfbcbe1-config-volume\") on node \"localhost\" DevicePath \"\""
Jan 20 02:36:54.008964 kubelet[2962]: I0120 02:36:54.004047 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c425765-80c0-4c88-9b91-ddc8edfbcbe1-kube-api-access-qm5s7" (OuterVolumeSpecName: "kube-api-access-qm5s7") pod "0c425765-80c0-4c88-9b91-ddc8edfbcbe1" (UID: "0c425765-80c0-4c88-9b91-ddc8edfbcbe1"). InnerVolumeSpecName "kube-api-access-qm5s7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 02:36:54.026349 kubelet[2962]: I0120 02:36:54.024526 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm5s7\" (UniqueName: \"kubernetes.io/projected/0c425765-80c0-4c88-9b91-ddc8edfbcbe1-kube-api-access-qm5s7\") pod \"0c425765-80c0-4c88-9b91-ddc8edfbcbe1\" (UID: \"0c425765-80c0-4c88-9b91-ddc8edfbcbe1\") "
Jan 20 02:36:54.026349 kubelet[2962]: W0120 02:36:54.024811 2962 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0c425765-80c0-4c88-9b91-ddc8edfbcbe1/volumes/kubernetes.io~projected/kube-api-access-qm5s7
Jan 20 02:36:54.026349 kubelet[2962]: I0120 02:36:54.024843 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c425765-80c0-4c88-9b91-ddc8edfbcbe1-kube-api-access-qm5s7" (OuterVolumeSpecName: "kube-api-access-qm5s7") pod "0c425765-80c0-4c88-9b91-ddc8edfbcbe1" (UID: "0c425765-80c0-4c88-9b91-ddc8edfbcbe1"). InnerVolumeSpecName "kube-api-access-qm5s7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 02:36:54.035604 systemd[1]: var-lib-kubelet-pods-0c425765\x2d80c0\x2d4c88\x2d9b91\x2dddc8edfbcbe1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqm5s7.mount: Deactivated successfully.
Jan 20 02:36:54.068627 systemd[1]: Removed slice kubepods-burstable-pode05144e9_5a86_48de_95c8_50e7065113f1.slice - libcontainer container kubepods-burstable-pode05144e9_5a86_48de_95c8_50e7065113f1.slice.
Jan 20 02:36:54.181339 kubelet[2962]: I0120 02:36:54.178871 2962 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qm5s7\" (UniqueName: \"kubernetes.io/projected/0c425765-80c0-4c88-9b91-ddc8edfbcbe1-kube-api-access-qm5s7\") on node \"localhost\" DevicePath \"\""
Jan 20 02:36:54.190297 systemd[1]: Created slice kubepods-burstable-podfb86ee63_29f5_4877_9cb3_729ab02899ee.slice - libcontainer container kubepods-burstable-podfb86ee63_29f5_4877_9cb3_729ab02899ee.slice.
Jan 20 02:36:54.429607 kubelet[2962]: I0120 02:36:54.428967 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88602cb5-391e-4d2c-ad06-ebba549d3258-config-volume\") pod \"coredns-668d6bf9bc-rjnvk\" (UID: \"88602cb5-391e-4d2c-ad06-ebba549d3258\") " pod="kube-system/coredns-668d6bf9bc-rjnvk"
Jan 20 02:36:54.429607 kubelet[2962]: I0120 02:36:54.429024 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk7fv\" (UniqueName: \"kubernetes.io/projected/88602cb5-391e-4d2c-ad06-ebba549d3258-kube-api-access-vk7fv\") pod \"coredns-668d6bf9bc-rjnvk\" (UID: \"88602cb5-391e-4d2c-ad06-ebba549d3258\") " pod="kube-system/coredns-668d6bf9bc-rjnvk"
Jan 20 02:36:54.981056 containerd[1582]: time="2026-01-20T02:36:54.887900820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lrwrw,Uid:fb86ee63-29f5-4877-9cb3-729ab02899ee,Namespace:kube-system,Attempt:0,}"
Jan 20 02:36:55.104969 systemd[1]: Created slice kubepods-burstable-pod88602cb5_391e_4d2c_ad06_ebba549d3258.slice - libcontainer container kubepods-burstable-pod88602cb5_391e_4d2c_ad06_ebba549d3258.slice.
Jan 20 02:36:55.166182 systemd[1]: Removed slice kubepods-burstable-pod0c425765_80c0_4c88_9b91_ddc8edfbcbe1.slice - libcontainer container kubepods-burstable-pod0c425765_80c0_4c88_9b91_ddc8edfbcbe1.slice.
Jan 20 02:36:59.281912 systemd-networkd[1465]: lxc9a6ae2f6178f: Link UP
Jan 20 02:36:59.337125 containerd[1582]: time="2026-01-20T02:36:59.336113024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rjnvk,Uid:88602cb5-391e-4d2c-ad06-ebba549d3258,Namespace:kube-system,Attempt:0,}"
Jan 20 02:36:59.470621 kernel: eth0: renamed from tmp20d30
Jan 20 02:36:59.516722 systemd-networkd[1465]: lxc9a6ae2f6178f: Gained carrier
Jan 20 02:37:01.068647 systemd-networkd[1465]: lxcd26e340c940d: Link UP
Jan 20 02:37:01.118804 systemd-networkd[1465]: lxc9a6ae2f6178f: Gained IPv6LL
Jan 20 02:37:01.359925 kernel: eth0: renamed from tmp70e05
Jan 20 02:37:01.396627 systemd-networkd[1465]: lxcd26e340c940d: Gained carrier
Jan 20 02:37:01.508992 kubelet[2962]: I0120 02:37:01.489863 2962 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c425765-80c0-4c88-9b91-ddc8edfbcbe1" path="/var/lib/kubelet/pods/0c425765-80c0-4c88-9b91-ddc8edfbcbe1/volumes"
Jan 20 02:37:01.511846 kubelet[2962]: I0120 02:37:01.511637 2962 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e05144e9-5a86-48de-95c8-50e7065113f1" path="/var/lib/kubelet/pods/e05144e9-5a86-48de-95c8-50e7065113f1/volumes"
Jan 20 02:37:03.038586 systemd-networkd[1465]: lxcd26e340c940d: Gained IPv6LL
Jan 20 02:37:09.664801 sudo[1800]: pam_unix(sudo:session): session closed for user root
Jan 20 02:37:09.694621 sshd[1799]: Connection closed by 10.0.0.1 port 53830
Jan 20 02:37:09.697949 sshd-session[1796]: pam_unix(sshd:session): session closed for user core
Jan 20 02:37:09.792785 systemd[1]: sshd@8-10.0.0.100:22-10.0.0.1:53830.service: Deactivated successfully.
Jan 20 02:37:09.808641 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 02:37:09.809021 systemd[1]: session-9.scope: Consumed 24.977s CPU time, 226.1M memory peak.
Jan 20 02:37:09.831462 systemd-logind[1548]: Session 9 logged out. Waiting for processes to exit.
Jan 20 02:37:09.879635 systemd-logind[1548]: Removed session 9.
Jan 20 02:37:11.813197 containerd[1582]: time="2026-01-20T02:37:11.812763090Z" level=warning msg="container event discarded" container=a5c28bd0650c94151b26960cfbfbe368568f0c1cb964bd3a84b239ef18d9b18e type=CONTAINER_CREATED_EVENT
Jan 20 02:37:11.813197 containerd[1582]: time="2026-01-20T02:37:11.812843039Z" level=warning msg="container event discarded" container=a5c28bd0650c94151b26960cfbfbe368568f0c1cb964bd3a84b239ef18d9b18e type=CONTAINER_STARTED_EVENT
Jan 20 02:37:12.903634 containerd[1582]: time="2026-01-20T02:37:12.903539759Z" level=warning msg="container event discarded" container=743f08211d202c02fd7532f7592347162028315c5cbc03bc57edb25de528cf8b type=CONTAINER_CREATED_EVENT
Jan 20 02:37:15.429681 containerd[1582]: time="2026-01-20T02:37:15.429194539Z" level=warning msg="container event discarded" container=743f08211d202c02fd7532f7592347162028315c5cbc03bc57edb25de528cf8b type=CONTAINER_STARTED_EVENT
Jan 20 02:37:23.075013 containerd[1582]: time="2026-01-20T02:37:23.074846526Z" level=warning msg="container event discarded" container=a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26 type=CONTAINER_CREATED_EVENT
Jan 20 02:37:23.075013 containerd[1582]: time="2026-01-20T02:37:23.074909113Z" level=warning msg="container event discarded" container=a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26 type=CONTAINER_STARTED_EVENT
Jan 20 02:37:23.458343 containerd[1582]: time="2026-01-20T02:37:23.455915235Z" level=warning msg="container event discarded" container=10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3 type=CONTAINER_CREATED_EVENT
Jan 20 02:37:23.458343 containerd[1582]: time="2026-01-20T02:37:23.456046771Z" level=warning msg="container event discarded" container=10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3 type=CONTAINER_STARTED_EVENT
Jan 20 02:37:36.680248 containerd[1582]: time="2026-01-20T02:37:36.680172248Z" level=info msg="connecting to shim 70e05b8e21c39d3ddc971c94112f2646900987093716f13ff90632ae45e9e7f7" address="unix:///run/containerd/s/dc174b5867f2f71dae4a761647d3755ab0a440f2f54c3c438f61d0f45497d6dc" namespace=k8s.io protocol=ttrpc version=3
Jan 20 02:37:37.518985 containerd[1582]: time="2026-01-20T02:37:37.517115260Z" level=info msg="connecting to shim 20d305f3f9ce9734a8b0d51c1d8703c64ccd8f57b793007dd8b7309843fb20c2" address="unix:///run/containerd/s/ecf995358a199df37431562c1a3f8d114bd4567d4c7dee996383345055b01b1c" namespace=k8s.io protocol=ttrpc version=3
Jan 20 02:37:37.719203 systemd[1]: Started cri-containerd-70e05b8e21c39d3ddc971c94112f2646900987093716f13ff90632ae45e9e7f7.scope - libcontainer container 70e05b8e21c39d3ddc971c94112f2646900987093716f13ff90632ae45e9e7f7.
Jan 20 02:37:38.623621 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 20 02:37:38.809300 systemd[1]: Started cri-containerd-20d305f3f9ce9734a8b0d51c1d8703c64ccd8f57b793007dd8b7309843fb20c2.scope - libcontainer container 20d305f3f9ce9734a8b0d51c1d8703c64ccd8f57b793007dd8b7309843fb20c2.
Jan 20 02:37:40.049083 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 20 02:37:41.083176 containerd[1582]: time="2026-01-20T02:37:41.080514924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rjnvk,Uid:88602cb5-391e-4d2c-ad06-ebba549d3258,Namespace:kube-system,Attempt:0,} returns sandbox id \"70e05b8e21c39d3ddc971c94112f2646900987093716f13ff90632ae45e9e7f7\""
Jan 20 02:37:41.504977 containerd[1582]: time="2026-01-20T02:37:41.487911086Z" level=info msg="CreateContainer within sandbox \"70e05b8e21c39d3ddc971c94112f2646900987093716f13ff90632ae45e9e7f7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 20 02:37:41.921598 containerd[1582]: time="2026-01-20T02:37:41.921544631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lrwrw,Uid:fb86ee63-29f5-4877-9cb3-729ab02899ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"20d305f3f9ce9734a8b0d51c1d8703c64ccd8f57b793007dd8b7309843fb20c2\""
Jan 20 02:37:42.003897 containerd[1582]: time="2026-01-20T02:37:42.003770658Z" level=info msg="CreateContainer within sandbox \"20d305f3f9ce9734a8b0d51c1d8703c64ccd8f57b793007dd8b7309843fb20c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 20 02:37:42.419525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount143230613.mount: Deactivated successfully.
Jan 20 02:37:42.566542 containerd[1582]: time="2026-01-20T02:37:42.559333838Z" level=info msg="Container 290f528615da515e8367eae661a04334172adefe2afe044c21c9a010eafebbf6: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:37:42.613751 containerd[1582]: time="2026-01-20T02:37:42.608603167Z" level=info msg="Container 2b042af7075df2be9d2366cb737a293392f4a166475e5f09d9f5c4383ece71b9: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:37:42.810596 containerd[1582]: time="2026-01-20T02:37:42.810536582Z" level=info msg="CreateContainer within sandbox \"70e05b8e21c39d3ddc971c94112f2646900987093716f13ff90632ae45e9e7f7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"290f528615da515e8367eae661a04334172adefe2afe044c21c9a010eafebbf6\""
Jan 20 02:37:42.951164 containerd[1582]: time="2026-01-20T02:37:42.947097967Z" level=info msg="StartContainer for \"290f528615da515e8367eae661a04334172adefe2afe044c21c9a010eafebbf6\""
Jan 20 02:37:42.979578 containerd[1582]: time="2026-01-20T02:37:42.979521699Z" level=info msg="connecting to shim 290f528615da515e8367eae661a04334172adefe2afe044c21c9a010eafebbf6" address="unix:///run/containerd/s/dc174b5867f2f71dae4a761647d3755ab0a440f2f54c3c438f61d0f45497d6dc" protocol=ttrpc version=3
Jan 20 02:37:43.486766 containerd[1582]: time="2026-01-20T02:37:43.486708465Z" level=info msg="CreateContainer within sandbox \"20d305f3f9ce9734a8b0d51c1d8703c64ccd8f57b793007dd8b7309843fb20c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b042af7075df2be9d2366cb737a293392f4a166475e5f09d9f5c4383ece71b9\""
Jan 20 02:37:43.507268 containerd[1582]: time="2026-01-20T02:37:43.504654840Z" level=info msg="StartContainer for \"2b042af7075df2be9d2366cb737a293392f4a166475e5f09d9f5c4383ece71b9\""
Jan 20 02:37:43.537568 containerd[1582]: time="2026-01-20T02:37:43.532718741Z" level=info msg="connecting to shim 2b042af7075df2be9d2366cb737a293392f4a166475e5f09d9f5c4383ece71b9" address="unix:///run/containerd/s/ecf995358a199df37431562c1a3f8d114bd4567d4c7dee996383345055b01b1c" protocol=ttrpc version=3
Jan 20 02:37:43.829729 systemd[1]: Started cri-containerd-290f528615da515e8367eae661a04334172adefe2afe044c21c9a010eafebbf6.scope - libcontainer container 290f528615da515e8367eae661a04334172adefe2afe044c21c9a010eafebbf6.
Jan 20 02:37:44.386685 systemd[1]: Started cri-containerd-2b042af7075df2be9d2366cb737a293392f4a166475e5f09d9f5c4383ece71b9.scope - libcontainer container 2b042af7075df2be9d2366cb737a293392f4a166475e5f09d9f5c4383ece71b9.
Jan 20 02:37:45.716658 containerd[1582]: time="2026-01-20T02:37:45.716601811Z" level=info msg="StartContainer for \"290f528615da515e8367eae661a04334172adefe2afe044c21c9a010eafebbf6\" returns successfully"
Jan 20 02:37:46.075926 containerd[1582]: time="2026-01-20T02:37:46.071816858Z" level=info msg="StartContainer for \"2b042af7075df2be9d2366cb737a293392f4a166475e5f09d9f5c4383ece71b9\" returns successfully"
Jan 20 02:37:46.572996 kubelet[2962]: I0120 02:37:46.567780 2962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rjnvk" podStartSLOduration=52.567754527 podStartE2EDuration="52.567754527s" podCreationTimestamp="2026-01-20 02:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:37:46.50856397 +0000 UTC m=+350.235341632" watchObservedRunningTime="2026-01-20 02:37:46.567754527 +0000 UTC m=+350.294532199"
Jan 20 02:37:47.169955 kubelet[2962]: I0120 02:37:47.157639 2962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lrwrw" podStartSLOduration=54.157608594 podStartE2EDuration="54.157608594s" podCreationTimestamp="2026-01-20 02:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:37:47.115605028 +0000 UTC m=+350.842382700" watchObservedRunningTime="2026-01-20 02:37:47.157608594 +0000 UTC m=+350.884386246"
Jan 20 02:37:47.748051 containerd[1582]: time="2026-01-20T02:37:47.747842819Z" level=warning msg="container event discarded" container=36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457 type=CONTAINER_STOPPED_EVENT
Jan 20 02:37:48.479828 containerd[1582]: time="2026-01-20T02:37:48.474856283Z" level=warning msg="container event discarded" container=b3ee7466acc31409e4cc6cb3400486bb8fe7288812a08c9b5e0f57bbe96435b0 type=CONTAINER_STOPPED_EVENT
Jan 20 02:37:50.403625 containerd[1582]: time="2026-01-20T02:37:50.402350392Z" level=warning msg="container event discarded" container=edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53 type=CONTAINER_CREATED_EVENT
Jan 20 02:37:51.345188 containerd[1582]: time="2026-01-20T02:37:51.345070321Z" level=warning msg="container event discarded" container=b6251eaacc76c5fef651af58722641cf320439e001b6f2551782e45a20b40766 type=CONTAINER_CREATED_EVENT
Jan 20 02:37:56.217610 kubelet[2962]: E0120 02:37:56.210746 2962 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.122s"
Jan 20 02:37:57.807721 containerd[1582]: time="2026-01-20T02:37:57.787655145Z" level=warning msg="container event discarded" container=edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53 type=CONTAINER_STARTED_EVENT
Jan 20 02:38:00.362936 containerd[1582]: time="2026-01-20T02:38:00.227714122Z" level=warning msg="container event discarded" container=b6251eaacc76c5fef651af58722641cf320439e001b6f2551782e45a20b40766 type=CONTAINER_STARTED_EVENT
Jan 20 02:38:19.393580 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Jan 20 02:38:22.098695 systemd[1]: cri-containerd-edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53.scope: Deactivated successfully.
Jan 20 02:38:22.136746 systemd[1]: cri-containerd-edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53.scope: Consumed 13.906s CPU time, 24.1M memory peak, 736K read from disk.
Jan 20 02:38:22.219958 systemd[1]: cri-containerd-92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7.scope: Deactivated successfully.
Jan 20 02:38:22.281114 systemd[1]: cri-containerd-92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7.scope: Consumed 2.377s CPU time, 31.5M memory peak, 4K written to disk.
Jan 20 02:38:22.769805 containerd[1582]: time="2026-01-20T02:38:22.759902982Z" level=info msg="received container exit event container_id:\"edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53\" id:\"edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53\" pid:3396 exit_status:1 exited_at:{seconds:1768876702 nanos:624285589}"
Jan 20 02:38:22.804481 containerd[1582]: time="2026-01-20T02:38:22.804024880Z" level=info msg="received container exit event container_id:\"92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7\" id:\"92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7\" pid:3829 exit_status:1 exited_at:{seconds:1768876702 nanos:632979061}"
Jan 20 02:38:22.886817 systemd-tmpfiles[4819]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 20 02:38:22.886853 systemd-tmpfiles[4819]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 20 02:38:22.887906 systemd-tmpfiles[4819]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 02:38:22.947559 systemd-tmpfiles[4819]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 20 02:38:22.996004 systemd-tmpfiles[4819]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 20 02:38:22.997124 systemd-tmpfiles[4819]: ACLs are not supported, ignoring.
Jan 20 02:38:23.017620 systemd-tmpfiles[4819]: ACLs are not supported, ignoring.
Jan 20 02:38:23.303282 systemd-tmpfiles[4819]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 02:38:23.310782 systemd-tmpfiles[4819]: Skipping /boot
Jan 20 02:38:23.530554 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 20 02:38:23.571293 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Jan 20 02:38:23.717792 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 20 02:38:24.122503 kubelet[2962]: E0120 02:38:24.111883 2962 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="25.069s"
Jan 20 02:38:25.276048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7-rootfs.mount: Deactivated successfully.
Jan 20 02:38:25.974064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53-rootfs.mount: Deactivated successfully.
Jan 20 02:38:26.231040 kubelet[2962]: I0120 02:38:26.226599 2962 scope.go:117] "RemoveContainer" containerID="92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7"
Jan 20 02:38:26.334079 kubelet[2962]: I0120 02:38:26.327338 2962 scope.go:117] "RemoveContainer" containerID="36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457"
Jan 20 02:38:26.334079 kubelet[2962]: I0120 02:38:26.328007 2962 scope.go:117] "RemoveContainer" containerID="edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53"
Jan 20 02:38:26.357574 kubelet[2962]: E0120 02:38:26.355627 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(0b8273f45c576ca70f8db6fe540c065c)\"" pod="kube-system/kube-scheduler-localhost" podUID="0b8273f45c576ca70f8db6fe540c065c"
Jan 20 02:38:26.389488 containerd[1582]: time="2026-01-20T02:38:26.385947643Z" level=info msg="CreateContainer within sandbox \"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}"
Jan 20 02:38:26.435891 containerd[1582]: time="2026-01-20T02:38:26.435651559Z" level=info msg="RemoveContainer for \"36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457\""
Jan 20 02:38:26.849156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount937796775.mount: Deactivated successfully.
Jan 20 02:38:26.893110 containerd[1582]: time="2026-01-20T02:38:26.891988903Z" level=info msg="RemoveContainer for \"36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457\" returns successfully"
Jan 20 02:38:26.920591 containerd[1582]: time="2026-01-20T02:38:26.919699361Z" level=info msg="Container bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:38:27.115558 containerd[1582]: time="2026-01-20T02:38:27.111953179Z" level=info msg="CreateContainer within sandbox \"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274\""
Jan 20 02:38:27.128934 containerd[1582]: time="2026-01-20T02:38:27.124067089Z" level=info msg="StartContainer for \"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274\""
Jan 20 02:38:27.148671 containerd[1582]: time="2026-01-20T02:38:27.145029150Z" level=info msg="connecting to shim bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274" address="unix:///run/containerd/s/93bf0fb31df29d22ff4aa43116e9bc2f6ddfc35876e8b2f6feb6dc3f8250e7d1" protocol=ttrpc version=3
Jan 20 02:38:27.323822 systemd[1]: Started cri-containerd-bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274.scope - libcontainer container bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274.
Jan 20 02:38:27.829077 containerd[1582]: time="2026-01-20T02:38:27.828805301Z" level=info msg="StartContainer for \"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274\" returns successfully"
Jan 20 02:38:32.339553 kubelet[2962]: I0120 02:38:32.334490 2962 scope.go:117] "RemoveContainer" containerID="edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53"
Jan 20 02:38:32.353275 kubelet[2962]: E0120 02:38:32.349892 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(0b8273f45c576ca70f8db6fe540c065c)\"" pod="kube-system/kube-scheduler-localhost" podUID="0b8273f45c576ca70f8db6fe540c065c"
Jan 20 02:38:45.989793 kubelet[2962]: I0120 02:38:45.989199 2962 scope.go:117] "RemoveContainer" containerID="edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53"
Jan 20 02:38:46.047191 containerd[1582]: time="2026-01-20T02:38:46.041183492Z" level=info msg="CreateContainer within sandbox \"21ff4eff6911f16655a3fa80f311d972c83b75a0c2643ac02e58ce9276a9a0b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}"
Jan 20 02:38:46.586163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2452758488.mount: Deactivated successfully.
Jan 20 02:38:46.603815 containerd[1582]: time="2026-01-20T02:38:46.588128533Z" level=info msg="Container 9646d628fe808a0b2420c18628d03354735f41ceb528fe981c43c40728fba689: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:38:46.782606 containerd[1582]: time="2026-01-20T02:38:46.779796755Z" level=info msg="CreateContainer within sandbox \"21ff4eff6911f16655a3fa80f311d972c83b75a0c2643ac02e58ce9276a9a0b1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"9646d628fe808a0b2420c18628d03354735f41ceb528fe981c43c40728fba689\""
Jan 20 02:38:46.789665 containerd[1582]: time="2026-01-20T02:38:46.784255204Z" level=info msg="StartContainer for \"9646d628fe808a0b2420c18628d03354735f41ceb528fe981c43c40728fba689\""
Jan 20 02:38:46.792108 containerd[1582]: time="2026-01-20T02:38:46.790509532Z" level=info msg="connecting to shim 9646d628fe808a0b2420c18628d03354735f41ceb528fe981c43c40728fba689" address="unix:///run/containerd/s/d4cd72ef33465427891f2c4e6222467b366be89a169ed5c7c3c5f56089797b10" protocol=ttrpc version=3
Jan 20 02:38:47.127853 systemd[1]: Started cri-containerd-9646d628fe808a0b2420c18628d03354735f41ceb528fe981c43c40728fba689.scope - libcontainer container 9646d628fe808a0b2420c18628d03354735f41ceb528fe981c43c40728fba689.
Jan 20 02:38:48.399146 containerd[1582]: time="2026-01-20T02:38:48.393969663Z" level=info msg="StartContainer for \"9646d628fe808a0b2420c18628d03354735f41ceb528fe981c43c40728fba689\" returns successfully"
Jan 20 02:39:30.007125 kubelet[2962]: E0120 02:39:30.007071 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:39:32.001692 kubelet[2962]: E0120 02:39:31.994054 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:39:43.015745 kubelet[2962]: E0120 02:39:42.999337 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:39:43.015745 kubelet[2962]: E0120 02:39:43.013314 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:39:47.997643 kubelet[2962]: E0120 02:39:47.986682 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:40:20.003074 kubelet[2962]: E0120 02:40:20.000078 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:40:23.018634 kubelet[2962]: E0120 02:40:23.013153 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:40:24.009251 kubelet[2962]: E0120 02:40:24.008033 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:40:28.276349 containerd[1582]: time="2026-01-20T02:40:28.185812696Z" level=warning msg="container event discarded" container=788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048 type=CONTAINER_CREATED_EVENT
Jan 20 02:40:28.276349 containerd[1582]: time="2026-01-20T02:40:28.273893075Z" level=warning msg="container event discarded" container=788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048 type=CONTAINER_STARTED_EVENT
Jan 20 02:40:28.276349 containerd[1582]: time="2026-01-20T02:40:28.275532771Z" level=warning msg="container event discarded" container=788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048 type=CONTAINER_STOPPED_EVENT
Jan 20 02:40:33.687592 containerd[1582]: time="2026-01-20T02:40:33.630185432Z" level=warning msg="container event discarded" container=e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7 type=CONTAINER_CREATED_EVENT
Jan 20 02:40:35.062748 kubelet[2962]: E0120 02:40:35.062338 2962 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.81s"
Jan 20 02:40:35.900111 containerd[1582]: time="2026-01-20T02:40:35.895627279Z" level=warning msg="container event discarded" container=e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7 type=CONTAINER_STARTED_EVENT
Jan 20 02:40:35.900111 containerd[1582]: time="2026-01-20T02:40:35.895784641Z" level=warning msg="container event discarded" container=e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7 type=CONTAINER_STOPPED_EVENT
Jan 20 02:40:37.268858 containerd[1582]: time="2026-01-20T02:40:37.268016211Z" level=warning msg="container event discarded" container=ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3 type=CONTAINER_CREATED_EVENT
Jan 20 02:40:39.932185 containerd[1582]: time="2026-01-20T02:40:39.931924000Z" level=warning msg="container event discarded" container=ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3 type=CONTAINER_STARTED_EVENT
Jan 20 02:40:40.560737 containerd[1582]: time="2026-01-20T02:40:40.560633884Z" level=warning msg="container event discarded" container=ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3 type=CONTAINER_STOPPED_EVENT
Jan 20 02:40:42.542569 containerd[1582]: time="2026-01-20T02:40:42.534149088Z" level=warning msg="container event discarded" container=42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b type=CONTAINER_CREATED_EVENT
Jan 20 02:40:44.714176 containerd[1582]: time="2026-01-20T02:40:44.708798151Z" level=warning msg="container event discarded" container=42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b type=CONTAINER_STARTED_EVENT
Jan 20 02:40:46.433636 containerd[1582]: time="2026-01-20T02:40:46.431074516Z" level=warning msg="container event discarded" container=42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b type=CONTAINER_STOPPED_EVENT
Jan 20 02:40:47.021841 kubelet[2962]: E0120 02:40:47.018990 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:40:47.927757 containerd[1582]: time="2026-01-20T02:40:47.927655373Z" level=warning msg="container event discarded" container=bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60 type=CONTAINER_CREATED_EVENT
Jan 20 02:40:47.989752 kubelet[2962]: E0120 02:40:47.988849 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:40:51.273488 containerd[1582]: time="2026-01-20T02:40:51.269841330Z" level=warning msg="container event discarded" container=bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60 type=CONTAINER_STARTED_EVENT
Jan 20 02:41:02.022704 containerd[1582]: time="2026-01-20T02:41:01.897045918Z" level=warning msg="container event discarded" container=92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7 type=CONTAINER_CREATED_EVENT
Jan 20 02:41:02.173073 containerd[1582]: time="2026-01-20T02:41:02.098761522Z" level=warning msg="container event discarded" container=92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7 type=CONTAINER_STARTED_EVENT
Jan 20 02:41:02.598635 kubelet[2962]: E0120 02:41:02.592874 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:41:05.989711 kubelet[2962]: E0120 02:41:05.985888 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:41:17.007762 kubelet[2962]: E0120 02:41:17.002141 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:41:28.020522 kubelet[2962]: E0120 02:41:28.007843 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:41:39.989478 kubelet[2962]: E0120 02:41:39.989080 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:41:40.992870 kubelet[2962]: E0120 02:41:40.984982 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:42:12.209336 kubelet[2962]: E0120 02:42:12.207334 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:42:12.998647 kubelet[2962]: E0120 02:42:12.994282 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:42:19.004525 kubelet[2962]: E0120 02:42:19.002982 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:42:19.991589 kubelet[2962]: E0120 02:42:19.990190 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:42:25.007510 kubelet[2962]: E0120 02:42:25.007274 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:42:36.003023 kubelet[2962]: E0120 02:42:35.990112 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:42:41.093641 containerd[1582]: time="2026-01-20T02:42:41.092552421Z" level=warning msg="container event discarded" container=70e05b8e21c39d3ddc971c94112f2646900987093716f13ff90632ae45e9e7f7 type=CONTAINER_CREATED_EVENT
Jan 20 02:42:41.093641 containerd[1582]: time="2026-01-20T02:42:41.092824228Z" level=warning msg="container event discarded" container=70e05b8e21c39d3ddc971c94112f2646900987093716f13ff90632ae45e9e7f7 type=CONTAINER_STARTED_EVENT
Jan 20 02:42:41.942547 containerd[1582]: time="2026-01-20T02:42:41.935074042Z" level=warning msg="container event discarded" container=20d305f3f9ce9734a8b0d51c1d8703c64ccd8f57b793007dd8b7309843fb20c2 type=CONTAINER_CREATED_EVENT
Jan 20 02:42:41.942547 containerd[1582]: time="2026-01-20T02:42:41.940316358Z" level=warning msg="container event discarded" container=20d305f3f9ce9734a8b0d51c1d8703c64ccd8f57b793007dd8b7309843fb20c2 type=CONTAINER_STARTED_EVENT
Jan 20 02:42:42.005013 kubelet[2962]: E0120 02:42:41.997711 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:42:42.805580 containerd[1582]: time="2026-01-20T02:42:42.805486041Z" level=warning msg="container event discarded" container=290f528615da515e8367eae661a04334172adefe2afe044c21c9a010eafebbf6 type=CONTAINER_CREATED_EVENT
Jan 20 02:42:43.380579 containerd[1582]: time="2026-01-20T02:42:43.379821340Z" level=warning msg="container event discarded" container=2b042af7075df2be9d2366cb737a293392f4a166475e5f09d9f5c4383ece71b9 type=CONTAINER_CREATED_EVENT
Jan 20 02:42:45.608339 containerd[1582]: time="2026-01-20T02:42:45.606745234Z" level=warning msg="container event discarded" container=290f528615da515e8367eae661a04334172adefe2afe044c21c9a010eafebbf6 type=CONTAINER_STARTED_EVENT
Jan 20 02:42:46.007236 containerd[1582]: time="2026-01-20T02:42:46.006980571Z" level=warning msg="container event discarded" container=2b042af7075df2be9d2366cb737a293392f4a166475e5f09d9f5c4383ece71b9 type=CONTAINER_STARTED_EVENT
Jan 20 02:43:03.995920 kubelet[2962]: E0120 02:43:03.992081 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:43:11.306193 systemd[1]: Started sshd@9-10.0.0.100:22-10.0.0.1:47566.service - OpenSSH per-connection server daemon (10.0.0.1:47566).
Jan 20 02:43:12.093305 sshd[4965]: Accepted publickey for core from 10.0.0.1 port 47566 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:43:12.104875 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:43:12.162526 systemd-logind[1548]: New session 10 of user core.
Jan 20 02:43:12.176016 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 20 02:43:13.345514 sshd[4968]: Connection closed by 10.0.0.1 port 47566
Jan 20 02:43:13.345147 sshd-session[4965]: pam_unix(sshd:session): session closed for user core
Jan 20 02:43:13.367831 systemd[1]: sshd@9-10.0.0.100:22-10.0.0.1:47566.service: Deactivated successfully.
Jan 20 02:43:13.383186 systemd[1]: session-10.scope: Deactivated successfully.
Jan 20 02:43:13.397921 systemd-logind[1548]: Session 10 logged out. Waiting for processes to exit.
Jan 20 02:43:13.422348 systemd-logind[1548]: Removed session 10.
Jan 20 02:43:18.447779 systemd[1]: Started sshd@10-10.0.0.100:22-10.0.0.1:54098.service - OpenSSH per-connection server daemon (10.0.0.1:54098).
Jan 20 02:43:18.901929 sshd[4989]: Accepted publickey for core from 10.0.0.1 port 54098 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:43:18.911613 sshd-session[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:43:18.994817 systemd-logind[1548]: New session 11 of user core.
Jan 20 02:43:19.073933 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 20 02:43:20.261331 sshd[4992]: Connection closed by 10.0.0.1 port 54098
Jan 20 02:43:20.262617 sshd-session[4989]: pam_unix(sshd:session): session closed for user core
Jan 20 02:43:20.306132 systemd[1]: sshd@10-10.0.0.100:22-10.0.0.1:54098.service: Deactivated successfully.
Jan 20 02:43:20.343998 systemd[1]: session-11.scope: Deactivated successfully.
Jan 20 02:43:20.358481 systemd-logind[1548]: Session 11 logged out. Waiting for processes to exit.
Jan 20 02:43:20.393914 systemd-logind[1548]: Removed session 11.
Jan 20 02:43:24.997611 kubelet[2962]: E0120 02:43:24.990936 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:43:25.327063 systemd[1]: Started sshd@11-10.0.0.100:22-10.0.0.1:58932.service - OpenSSH per-connection server daemon (10.0.0.1:58932).
Jan 20 02:43:25.704859 sshd[5011]: Accepted publickey for core from 10.0.0.1 port 58932 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:43:25.721071 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:43:25.797849 systemd-logind[1548]: New session 12 of user core.
Jan 20 02:43:25.815998 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 20 02:43:25.922728 containerd[1582]: time="2026-01-20T02:43:25.922604426Z" level=warning msg="container event discarded" container=92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7 type=CONTAINER_STOPPED_EVENT
Jan 20 02:43:26.111671 containerd[1582]: time="2026-01-20T02:43:26.111565839Z" level=warning msg="container event discarded" container=edd8b101444d5e0907db4d92789267d5f3d572234ef84f6c2ee47d97a7b14c53 type=CONTAINER_STOPPED_EVENT
Jan 20 02:43:26.696825 sshd[5014]: Connection closed by 10.0.0.1 port 58932
Jan 20 02:43:26.696089 sshd-session[5011]: pam_unix(sshd:session): session closed for user core
Jan 20 02:43:26.755748 systemd[1]: sshd@11-10.0.0.100:22-10.0.0.1:58932.service: Deactivated successfully.
Jan 20 02:43:26.777884 systemd[1]: session-12.scope: Deactivated successfully.
Jan 20 02:43:26.796655 systemd-logind[1548]: Session 12 logged out. Waiting for processes to exit.
Jan 20 02:43:26.832960 systemd-logind[1548]: Removed session 12.
Jan 20 02:43:26.927849 containerd[1582]: time="2026-01-20T02:43:26.927598098Z" level=warning msg="container event discarded" container=36f4c01cc78e57f0ef0d943c43e078d1fb2db8bbbccf14872e877be55a7c7457 type=CONTAINER_DELETED_EVENT
Jan 20 02:43:26.995709 kubelet[2962]: E0120 02:43:26.994746 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:43:27.100133 containerd[1582]: time="2026-01-20T02:43:27.100040234Z" level=warning msg="container event discarded" container=bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274 type=CONTAINER_CREATED_EVENT
Jan 20 02:43:27.828506 containerd[1582]: time="2026-01-20T02:43:27.827003862Z" level=warning msg="container event discarded" container=bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274 type=CONTAINER_STARTED_EVENT
Jan 20 02:43:27.993775 kubelet[2962]: E0120 02:43:27.985880 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:43:31.814948 systemd[1]: Started sshd@12-10.0.0.100:22-10.0.0.1:58948.service - OpenSSH per-connection server daemon (10.0.0.1:58948).
Jan 20 02:43:32.113120 sshd[5029]: Accepted publickey for core from 10.0.0.1 port 58948 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:43:32.124724 sshd-session[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:43:32.149689 systemd-logind[1548]: New session 13 of user core.
Jan 20 02:43:32.201600 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 20 02:43:33.001568 sshd[5032]: Connection closed by 10.0.0.1 port 58948
Jan 20 02:43:33.004798 sshd-session[5029]: pam_unix(sshd:session): session closed for user core
Jan 20 02:43:33.026056 systemd[1]: sshd@12-10.0.0.100:22-10.0.0.1:58948.service: Deactivated successfully.
Jan 20 02:43:33.064189 systemd[1]: session-13.scope: Deactivated successfully.
Jan 20 02:43:33.095012 systemd-logind[1548]: Session 13 logged out. Waiting for processes to exit.
Jan 20 02:43:33.112971 systemd-logind[1548]: Removed session 13.
Jan 20 02:43:35.024934 kubelet[2962]: E0120 02:43:35.021191 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:43:37.991081 kubelet[2962]: E0120 02:43:37.985873 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:43:38.110948 systemd[1]: Started sshd@13-10.0.0.100:22-10.0.0.1:50740.service - OpenSSH per-connection server daemon (10.0.0.1:50740).
Jan 20 02:43:38.884792 sshd[5047]: Accepted publickey for core from 10.0.0.1 port 50740 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:43:38.902349 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:43:38.967626 systemd-logind[1548]: New session 14 of user core.
Jan 20 02:43:39.026771 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 20 02:43:40.629073 sshd[5051]: Connection closed by 10.0.0.1 port 50740
Jan 20 02:43:40.641145 sshd-session[5047]: pam_unix(sshd:session): session closed for user core
Jan 20 02:43:40.703677 systemd[1]: sshd@13-10.0.0.100:22-10.0.0.1:50740.service: Deactivated successfully.
Jan 20 02:43:40.746061 systemd[1]: session-14.scope: Deactivated successfully.
Jan 20 02:43:40.780014 systemd-logind[1548]: Session 14 logged out. Waiting for processes to exit.
Jan 20 02:43:40.809868 systemd-logind[1548]: Removed session 14.
Jan 20 02:43:45.680155 systemd[1]: Started sshd@14-10.0.0.100:22-10.0.0.1:48872.service - OpenSSH per-connection server daemon (10.0.0.1:48872).
Jan 20 02:43:45.976900 sshd[5067]: Accepted publickey for core from 10.0.0.1 port 48872 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:43:45.994208 sshd-session[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:43:46.031617 systemd-logind[1548]: New session 15 of user core.
Jan 20 02:43:46.076836 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 20 02:43:46.776763 containerd[1582]: time="2026-01-20T02:43:46.774711192Z" level=warning msg="container event discarded" container=9646d628fe808a0b2420c18628d03354735f41ceb528fe981c43c40728fba689 type=CONTAINER_CREATED_EVENT
Jan 20 02:43:46.803756 sshd[5070]: Connection closed by 10.0.0.1 port 48872
Jan 20 02:43:46.806110 sshd-session[5067]: pam_unix(sshd:session): session closed for user core
Jan 20 02:43:46.835128 systemd[1]: sshd@14-10.0.0.100:22-10.0.0.1:48872.service: Deactivated successfully.
Jan 20 02:43:46.856052 systemd[1]: session-15.scope: Deactivated successfully.
Jan 20 02:43:46.883631 systemd-logind[1548]: Session 15 logged out. Waiting for processes to exit.
Jan 20 02:43:46.903052 systemd-logind[1548]: Removed session 15.
Jan 20 02:43:48.383831 containerd[1582]: time="2026-01-20T02:43:48.382024964Z" level=warning msg="container event discarded" container=9646d628fe808a0b2420c18628d03354735f41ceb528fe981c43c40728fba689 type=CONTAINER_STARTED_EVENT
Jan 20 02:43:48.998518 kubelet[2962]: E0120 02:43:48.995563 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:43:51.022492 kubelet[2962]: E0120 02:43:51.021965 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:43:51.902887 systemd[1]: Started sshd@15-10.0.0.100:22-10.0.0.1:48888.service - OpenSSH per-connection server daemon (10.0.0.1:48888).
Jan 20 02:43:52.456550 sshd[5086]: Accepted publickey for core from 10.0.0.1 port 48888 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:43:52.471356 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:43:52.554800 systemd-logind[1548]: New session 16 of user core.
Jan 20 02:43:52.591971 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 20 02:43:54.167530 sshd[5089]: Connection closed by 10.0.0.1 port 48888
Jan 20 02:43:54.171614 sshd-session[5086]: pam_unix(sshd:session): session closed for user core
Jan 20 02:43:54.193492 systemd[1]: sshd@15-10.0.0.100:22-10.0.0.1:48888.service: Deactivated successfully.
Jan 20 02:43:54.199941 systemd[1]: session-16.scope: Deactivated successfully.
Jan 20 02:43:54.225887 systemd-logind[1548]: Session 16 logged out. Waiting for processes to exit.
Jan 20 02:43:54.244832 systemd-logind[1548]: Removed session 16.
Jan 20 02:43:59.236237 systemd[1]: Started sshd@16-10.0.0.100:22-10.0.0.1:43116.service - OpenSSH per-connection server daemon (10.0.0.1:43116).
Jan 20 02:43:59.692724 sshd[5104]: Accepted publickey for core from 10.0.0.1 port 43116 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:43:59.696490 sshd-session[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:43:59.768184 systemd-logind[1548]: New session 17 of user core.
Jan 20 02:43:59.794687 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 20 02:44:00.708867 sshd[5107]: Connection closed by 10.0.0.1 port 43116
Jan 20 02:44:00.705857 sshd-session[5104]: pam_unix(sshd:session): session closed for user core
Jan 20 02:44:00.762810 systemd[1]: sshd@16-10.0.0.100:22-10.0.0.1:43116.service: Deactivated successfully.
Jan 20 02:44:00.795135 systemd[1]: session-17.scope: Deactivated successfully.
Jan 20 02:44:00.848490 systemd-logind[1548]: Session 17 logged out. Waiting for processes to exit.
Jan 20 02:44:00.866743 systemd-logind[1548]: Removed session 17.
Jan 20 02:44:05.958180 systemd[1]: Started sshd@17-10.0.0.100:22-10.0.0.1:35994.service - OpenSSH per-connection server daemon (10.0.0.1:35994).
Jan 20 02:44:06.396257 sshd[5125]: Accepted publickey for core from 10.0.0.1 port 35994 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:44:06.409068 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:44:06.451110 systemd-logind[1548]: New session 18 of user core.
Jan 20 02:44:06.462872 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 20 02:44:07.575518 sshd[5128]: Connection closed by 10.0.0.1 port 35994
Jan 20 02:44:07.579822 sshd-session[5125]: pam_unix(sshd:session): session closed for user core
Jan 20 02:44:07.619759 systemd[1]: sshd@17-10.0.0.100:22-10.0.0.1:35994.service: Deactivated successfully.
Jan 20 02:44:07.664217 systemd[1]: session-18.scope: Deactivated successfully.
Jan 20 02:44:07.695099 systemd-logind[1548]: Session 18 logged out. Waiting for processes to exit.
Jan 20 02:44:07.731247 systemd-logind[1548]: Removed session 18.
Jan 20 02:44:12.708743 systemd[1]: Started sshd@18-10.0.0.100:22-10.0.0.1:36000.service - OpenSSH per-connection server daemon (10.0.0.1:36000).
Jan 20 02:44:13.149926 sshd[5143]: Accepted publickey for core from 10.0.0.1 port 36000 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:44:13.170252 sshd-session[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:44:13.281663 systemd-logind[1548]: New session 19 of user core.
Jan 20 02:44:13.323877 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 20 02:44:14.163962 sshd[5146]: Connection closed by 10.0.0.1 port 36000
Jan 20 02:44:14.167742 sshd-session[5143]: pam_unix(sshd:session): session closed for user core
Jan 20 02:44:14.191805 systemd[1]: sshd@18-10.0.0.100:22-10.0.0.1:36000.service: Deactivated successfully.
Jan 20 02:44:14.200072 systemd[1]: session-19.scope: Deactivated successfully.
Jan 20 02:44:14.218539 systemd-logind[1548]: Session 19 logged out. Waiting for processes to exit.
Jan 20 02:44:14.225664 systemd-logind[1548]: Removed session 19.
Jan 20 02:44:19.211882 systemd[1]: Started sshd@19-10.0.0.100:22-10.0.0.1:39420.service - OpenSSH per-connection server daemon (10.0.0.1:39420).
Jan 20 02:44:19.466734 sshd[5160]: Accepted publickey for core from 10.0.0.1 port 39420 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:44:19.479531 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:44:19.516739 systemd-logind[1548]: New session 20 of user core.
Jan 20 02:44:19.548690 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 20 02:44:20.093713 sshd[5163]: Connection closed by 10.0.0.1 port 39420
Jan 20 02:44:20.097865 sshd-session[5160]: pam_unix(sshd:session): session closed for user core
Jan 20 02:44:20.116045 systemd[1]: sshd@19-10.0.0.100:22-10.0.0.1:39420.service: Deactivated successfully.
Jan 20 02:44:20.128108 systemd[1]: session-20.scope: Deactivated successfully.
Jan 20 02:44:20.144939 systemd-logind[1548]: Session 20 logged out. Waiting for processes to exit.
Jan 20 02:44:20.167137 systemd-logind[1548]: Removed session 20.
Jan 20 02:44:27.211512 kubelet[2962]: E0120 02:44:27.211344 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:44:27.329926 systemd[1]: Started sshd@20-10.0.0.100:22-10.0.0.1:55866.service - OpenSSH per-connection server daemon (10.0.0.1:55866).
Jan 20 02:44:30.551859 sshd[5179]: Accepted publickey for core from 10.0.0.1 port 55866 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:44:30.574237 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:44:30.598756 systemd-logind[1548]: New session 21 of user core.
Jan 20 02:44:30.620844 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 20 02:44:30.839769 kubelet[2962]: E0120 02:44:30.831037 2962 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.216s"
Jan 20 02:44:32.287872 kubelet[2962]: E0120 02:44:32.284095 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:44:33.500985 kubelet[2962]: E0120 02:44:33.492256 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:44:34.276462 sshd[5182]: Connection closed by 10.0.0.1 port 55866
Jan 20 02:44:34.280573 sshd-session[5179]: pam_unix(sshd:session): session closed for user core
Jan 20 02:44:34.313137 systemd[1]: sshd@20-10.0.0.100:22-10.0.0.1:55866.service: Deactivated successfully.
Jan 20 02:44:34.321222 systemd-logind[1548]: Session 21 logged out. Waiting for processes to exit.
Jan 20 02:44:34.354195 systemd[1]: session-21.scope: Deactivated successfully.
Jan 20 02:44:34.395237 systemd-logind[1548]: Removed session 21.
Jan 20 02:44:36.992027 kubelet[2962]: E0120 02:44:36.991519 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:44:39.353972 systemd[1]: Started sshd@21-10.0.0.100:22-10.0.0.1:33896.service - OpenSSH per-connection server daemon (10.0.0.1:33896).
Jan 20 02:44:40.019194 sshd[5200]: Accepted publickey for core from 10.0.0.1 port 33896 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:44:40.023103 sshd-session[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:44:40.069221 systemd-logind[1548]: New session 22 of user core.
Jan 20 02:44:40.086756 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 20 02:44:40.964122 sshd[5204]: Connection closed by 10.0.0.1 port 33896
Jan 20 02:44:40.972737 sshd-session[5200]: pam_unix(sshd:session): session closed for user core
Jan 20 02:44:40.993621 systemd[1]: sshd@21-10.0.0.100:22-10.0.0.1:33896.service: Deactivated successfully.
Jan 20 02:44:41.009103 systemd[1]: session-22.scope: Deactivated successfully.
Jan 20 02:44:41.042505 systemd-logind[1548]: Session 22 logged out. Waiting for processes to exit.
Jan 20 02:44:41.065471 systemd-logind[1548]: Removed session 22.
Jan 20 02:44:46.094189 systemd[1]: Started sshd@22-10.0.0.100:22-10.0.0.1:35342.service - OpenSSH per-connection server daemon (10.0.0.1:35342).
Jan 20 02:44:46.471788 sshd[5229]: Accepted publickey for core from 10.0.0.1 port 35342 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:44:46.486051 sshd-session[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:44:46.518757 systemd-logind[1548]: New session 23 of user core.
Jan 20 02:44:46.535822 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 20 02:44:46.994868 kubelet[2962]: E0120 02:44:46.994641 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:44:47.215864 sshd[5232]: Connection closed by 10.0.0.1 port 35342
Jan 20 02:44:47.227966 sshd-session[5229]: pam_unix(sshd:session): session closed for user core
Jan 20 02:44:47.262264 systemd[1]: sshd@22-10.0.0.100:22-10.0.0.1:35342.service: Deactivated successfully.
Jan 20 02:44:47.288112 systemd[1]: session-23.scope: Deactivated successfully.
Jan 20 02:44:47.301021 systemd-logind[1548]: Session 23 logged out. Waiting for processes to exit.
Jan 20 02:44:47.312171 systemd-logind[1548]: Removed session 23.
Jan 20 02:44:47.986589 kubelet[2962]: E0120 02:44:47.985038 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:44:52.262202 systemd[1]: Started sshd@23-10.0.0.100:22-10.0.0.1:35348.service - OpenSSH per-connection server daemon (10.0.0.1:35348).
Jan 20 02:44:52.760724 sshd[5248]: Accepted publickey for core from 10.0.0.1 port 35348 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:44:52.769822 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:44:52.826846 systemd-logind[1548]: New session 24 of user core.
Jan 20 02:44:52.870185 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 20 02:44:53.870243 sshd[5251]: Connection closed by 10.0.0.1 port 35348
Jan 20 02:44:53.878100 sshd-session[5248]: pam_unix(sshd:session): session closed for user core
Jan 20 02:44:53.909863 systemd[1]: sshd@23-10.0.0.100:22-10.0.0.1:35348.service: Deactivated successfully.
Jan 20 02:44:53.931170 systemd[1]: session-24.scope: Deactivated successfully.
Jan 20 02:44:53.964776 systemd-logind[1548]: Session 24 logged out. Waiting for processes to exit.
Jan 20 02:44:53.984701 systemd-logind[1548]: Removed session 24.
Jan 20 02:44:58.943069 systemd[1]: Started sshd@24-10.0.0.100:22-10.0.0.1:56022.service - OpenSSH per-connection server daemon (10.0.0.1:56022).
Jan 20 02:44:59.375463 sshd[5271]: Accepted publickey for core from 10.0.0.1 port 56022 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:44:59.389705 sshd-session[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:44:59.428958 systemd-logind[1548]: New session 25 of user core.
Jan 20 02:44:59.444835 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 20 02:45:00.171532 sshd[5274]: Connection closed by 10.0.0.1 port 56022
Jan 20 02:45:00.173931 sshd-session[5271]: pam_unix(sshd:session): session closed for user core
Jan 20 02:45:00.208332 systemd[1]: sshd@24-10.0.0.100:22-10.0.0.1:56022.service: Deactivated successfully.
Jan 20 02:45:00.236785 systemd[1]: session-25.scope: Deactivated successfully.
Jan 20 02:45:00.255007 systemd-logind[1548]: Session 25 logged out. Waiting for processes to exit.
Jan 20 02:45:00.294865 systemd-logind[1548]: Removed session 25.
Jan 20 02:45:05.246218 systemd[1]: Started sshd@25-10.0.0.100:22-10.0.0.1:47532.service - OpenSSH per-connection server daemon (10.0.0.1:47532).
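A note on the recurring sshd@N-...service units: each connection in this log gets its own transient service (sshd@17-10.0.0.100:22-10.0.0.1:35994.service and so on) because sshd is socket-activated per connection here rather than running as one long-lived listener; systemd accepts on port 22 and spawns an sshd instance per peer, which is why every session is bracketed by a Started/Deactivated pair. The usual unit pair looks roughly like this (a sketch; exact unit names and the sshd path vary by distribution):

# sshd.socket -- systemd listens and accepts each connection itself
[Socket]
ListenStream=22
Accept=yes

# sshd@.service -- one instance per accepted connection, sshd in inetd mode
[Service]
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket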
Jan 20 02:45:05.620474 sshd[5291]: Accepted publickey for core from 10.0.0.1 port 47532 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:45:05.632830 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:45:05.681491 systemd-logind[1548]: New session 26 of user core.
Jan 20 02:45:05.699005 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 20 02:45:06.277633 sshd[5294]: Connection closed by 10.0.0.1 port 47532
Jan 20 02:45:06.274509 sshd-session[5291]: pam_unix(sshd:session): session closed for user core
Jan 20 02:45:06.296020 systemd-logind[1548]: Session 26 logged out. Waiting for processes to exit.
Jan 20 02:45:06.316529 systemd[1]: sshd@25-10.0.0.100:22-10.0.0.1:47532.service: Deactivated successfully.
Jan 20 02:45:06.328009 systemd[1]: session-26.scope: Deactivated successfully.
Jan 20 02:45:06.346092 systemd-logind[1548]: Removed session 26.
Jan 20 02:45:07.989503 kubelet[2962]: E0120 02:45:07.988807 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:45:09.001546 kubelet[2962]: E0120 02:45:08.999628 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:45:11.332163 systemd[1]: Started sshd@26-10.0.0.100:22-10.0.0.1:47540.service - OpenSSH per-connection server daemon (10.0.0.1:47540).
Jan 20 02:45:11.683144 sshd[5311]: Accepted publickey for core from 10.0.0.1 port 47540 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:45:11.708653 sshd-session[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:45:11.777957 systemd-logind[1548]: New session 27 of user core.
Jan 20 02:45:11.802856 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 20 02:45:12.657259 sshd[5314]: Connection closed by 10.0.0.1 port 47540
Jan 20 02:45:12.666902 sshd-session[5311]: pam_unix(sshd:session): session closed for user core
Jan 20 02:45:12.704338 systemd[1]: sshd@26-10.0.0.100:22-10.0.0.1:47540.service: Deactivated successfully.
Jan 20 02:45:12.708712 systemd[1]: session-27.scope: Deactivated successfully.
Jan 20 02:45:12.730031 systemd-logind[1548]: Session 27 logged out. Waiting for processes to exit.
Jan 20 02:45:12.742615 systemd-logind[1548]: Removed session 27.
Jan 20 02:45:17.902315 systemd[1]: Started sshd@27-10.0.0.100:22-10.0.0.1:35026.service - OpenSSH per-connection server daemon (10.0.0.1:35026).
Jan 20 02:45:18.634234 sshd[5332]: Accepted publickey for core from 10.0.0.1 port 35026 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:45:18.628988 sshd-session[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:45:18.675041 systemd-logind[1548]: New session 28 of user core.
Jan 20 02:45:18.708982 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 20 02:45:19.687750 sshd[5335]: Connection closed by 10.0.0.1 port 35026
Jan 20 02:45:19.685136 sshd-session[5332]: pam_unix(sshd:session): session closed for user core
Jan 20 02:45:19.709337 systemd[1]: sshd@27-10.0.0.100:22-10.0.0.1:35026.service: Deactivated successfully.
Jan 20 02:45:19.740016 systemd[1]: session-28.scope: Deactivated successfully.
Jan 20 02:45:19.775620 systemd-logind[1548]: Session 28 logged out. Waiting for processes to exit.
Jan 20 02:45:19.789589 systemd-logind[1548]: Removed session 28.
Jan 20 02:45:24.837272 systemd[1]: Started sshd@28-10.0.0.100:22-10.0.0.1:40022.service - OpenSSH per-connection server daemon (10.0.0.1:40022).
Jan 20 02:45:25.586839 sshd[5354]: Accepted publickey for core from 10.0.0.1 port 40022 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:45:25.635177 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:45:25.700266 systemd-logind[1548]: New session 29 of user core.
Jan 20 02:45:25.763185 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 20 02:45:27.126506 sshd[5357]: Connection closed by 10.0.0.1 port 40022
Jan 20 02:45:27.131062 sshd-session[5354]: pam_unix(sshd:session): session closed for user core
Jan 20 02:45:27.179766 systemd[1]: sshd@28-10.0.0.100:22-10.0.0.1:40022.service: Deactivated successfully.
Jan 20 02:45:27.225892 systemd[1]: session-29.scope: Deactivated successfully.
Jan 20 02:45:27.266722 systemd-logind[1548]: Session 29 logged out. Waiting for processes to exit.
Jan 20 02:45:27.296032 systemd-logind[1548]: Removed session 29.
Jan 20 02:45:32.211188 systemd[1]: Started sshd@29-10.0.0.100:22-10.0.0.1:40038.service - OpenSSH per-connection server daemon (10.0.0.1:40038).
Jan 20 02:45:32.723275 sshd[5372]: Accepted publickey for core from 10.0.0.1 port 40038 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:45:32.738054 sshd-session[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:45:32.773543 systemd-logind[1548]: New session 30 of user core.
Jan 20 02:45:32.806058 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 20 02:45:33.908988 sshd[5375]: Connection closed by 10.0.0.1 port 40038
Jan 20 02:45:33.911812 sshd-session[5372]: pam_unix(sshd:session): session closed for user core
Jan 20 02:45:33.976016 systemd[1]: sshd@29-10.0.0.100:22-10.0.0.1:40038.service: Deactivated successfully.
Jan 20 02:45:34.018239 systemd[1]: session-30.scope: Deactivated successfully.
Jan 20 02:45:34.023324 systemd-logind[1548]: Session 30 logged out. Waiting for processes to exit.
Jan 20 02:45:34.042318 systemd-logind[1548]: Removed session 30.
Jan 20 02:45:34.998064 kubelet[2962]: E0120 02:45:34.993599 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:45:39.017251 systemd[1]: Started sshd@30-10.0.0.100:22-10.0.0.1:36648.service - OpenSSH per-connection server daemon (10.0.0.1:36648).
Jan 20 02:45:39.062163 kubelet[2962]: E0120 02:45:39.033527 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:45:39.602971 sshd[5390]: Accepted publickey for core from 10.0.0.1 port 36648 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:45:39.608074 sshd-session[5390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:45:39.697283 systemd-logind[1548]: New session 31 of user core.
Jan 20 02:45:39.788601 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 20 02:45:41.218741 sshd[5393]: Connection closed by 10.0.0.1 port 36648
Jan 20 02:45:41.210253 sshd-session[5390]: pam_unix(sshd:session): session closed for user core
Jan 20 02:45:41.319931 systemd[1]: sshd@30-10.0.0.100:22-10.0.0.1:36648.service: Deactivated successfully.
Jan 20 02:45:41.357224 systemd[1]: session-31.scope: Deactivated successfully.
Jan 20 02:45:41.376342 systemd-logind[1548]: Session 31 logged out. Waiting for processes to exit.
Jan 20 02:45:41.411317 systemd[1]: Started sshd@31-10.0.0.100:22-10.0.0.1:36656.service - OpenSSH per-connection server daemon (10.0.0.1:36656).
Jan 20 02:45:41.446229 systemd-logind[1548]: Removed session 31.
Jan 20 02:45:41.734088 sshd[5408]: Accepted publickey for core from 10.0.0.1 port 36656 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:45:41.746016 sshd-session[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:45:41.836533 systemd-logind[1548]: New session 32 of user core.
Jan 20 02:45:41.881733 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 20 02:45:41.988547 kubelet[2962]: E0120 02:45:41.985976 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:45:43.436580 sshd[5411]: Connection closed by 10.0.0.1 port 36656
Jan 20 02:45:43.452123 sshd-session[5408]: pam_unix(sshd:session): session closed for user core
Jan 20 02:45:43.507335 systemd[1]: sshd@31-10.0.0.100:22-10.0.0.1:36656.service: Deactivated successfully.
Jan 20 02:45:43.517507 systemd[1]: session-32.scope: Deactivated successfully.
Jan 20 02:45:43.536686 systemd-logind[1548]: Session 32 logged out. Waiting for processes to exit.
Jan 20 02:45:43.566170 systemd[1]: Started sshd@32-10.0.0.100:22-10.0.0.1:36672.service - OpenSSH per-connection server daemon (10.0.0.1:36672).
Jan 20 02:45:43.582101 systemd-logind[1548]: Removed session 32.
Jan 20 02:45:44.533358 sshd[5424]: Accepted publickey for core from 10.0.0.1 port 36672 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:45:44.545779 sshd-session[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:45:44.614608 systemd-logind[1548]: New session 33 of user core.
Jan 20 02:45:44.632984 systemd[1]: Started session-33.scope - Session 33 of User core.
Jan 20 02:45:45.379195 sshd[5427]: Connection closed by 10.0.0.1 port 36672
Jan 20 02:45:45.383521 sshd-session[5424]: pam_unix(sshd:session): session closed for user core
Jan 20 02:45:45.395053 systemd[1]: sshd@32-10.0.0.100:22-10.0.0.1:36672.service: Deactivated successfully.
Jan 20 02:45:45.411278 systemd[1]: session-33.scope: Deactivated successfully.
Jan 20 02:45:45.420785 systemd-logind[1548]: Session 33 logged out. Waiting for processes to exit.
Jan 20 02:45:45.446341 systemd-logind[1548]: Removed session 33.
Jan 20 02:45:47.985672 kubelet[2962]: E0120 02:45:47.985225 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:45:50.521647 systemd[1]: Started sshd@33-10.0.0.100:22-10.0.0.1:36748.service - OpenSSH per-connection server daemon (10.0.0.1:36748).
Jan 20 02:45:50.918817 sshd[5442]: Accepted publickey for core from 10.0.0.1 port 36748 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:45:50.935022 sshd-session[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:45:51.028997 systemd-logind[1548]: New session 34 of user core.
Jan 20 02:45:51.057112 systemd[1]: Started session-34.scope - Session 34 of User core.
Jan 20 02:45:55.170592 sshd[5448]: Connection closed by 10.0.0.1 port 36748
Jan 20 02:45:55.148101 sshd-session[5442]: pam_unix(sshd:session): session closed for user core
Jan 20 02:45:55.205015 systemd[1]: sshd@33-10.0.0.100:22-10.0.0.1:36748.service: Deactivated successfully.
Jan 20 02:45:55.235776 systemd[1]: session-34.scope: Deactivated successfully.
Jan 20 02:45:55.282770 systemd-logind[1548]: Session 34 logged out. Waiting for processes to exit.
Jan 20 02:45:55.291569 systemd-logind[1548]: Removed session 34.
Jan 20 02:46:00.235633 systemd[1]: Started sshd@34-10.0.0.100:22-10.0.0.1:49272.service - OpenSSH per-connection server daemon (10.0.0.1:49272).
Jan 20 02:46:00.579654 sshd[5464]: Accepted publickey for core from 10.0.0.1 port 49272 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:46:00.591498 sshd-session[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:46:00.646333 systemd-logind[1548]: New session 35 of user core.
Jan 20 02:46:00.680046 systemd[1]: Started session-35.scope - Session 35 of User core.
Jan 20 02:46:01.578778 sshd[5467]: Connection closed by 10.0.0.1 port 49272
Jan 20 02:46:01.585898 sshd-session[5464]: pam_unix(sshd:session): session closed for user core
Jan 20 02:46:01.615624 systemd[1]: sshd@34-10.0.0.100:22-10.0.0.1:49272.service: Deactivated successfully.
Jan 20 02:46:01.634298 systemd[1]: session-35.scope: Deactivated successfully.
Jan 20 02:46:01.671799 systemd-logind[1548]: Session 35 logged out. Waiting for processes to exit.
Jan 20 02:46:01.693034 systemd-logind[1548]: Removed session 35.
Jan 20 02:46:05.013546 kubelet[2962]: E0120 02:46:05.012810 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:46:06.692138 systemd[1]: Started sshd@35-10.0.0.100:22-10.0.0.1:34056.service - OpenSSH per-connection server daemon (10.0.0.1:34056).
Jan 20 02:46:07.245267 sshd[5482]: Accepted publickey for core from 10.0.0.1 port 34056 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:46:07.259966 sshd-session[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:46:07.311955 systemd-logind[1548]: New session 36 of user core.
Jan 20 02:46:07.380661 systemd[1]: Started session-36.scope - Session 36 of User core.
Jan 20 02:46:09.168480 sshd[5485]: Connection closed by 10.0.0.1 port 34056
Jan 20 02:46:09.180787 sshd-session[5482]: pam_unix(sshd:session): session closed for user core
Jan 20 02:46:09.277845 systemd[1]: sshd@35-10.0.0.100:22-10.0.0.1:34056.service: Deactivated successfully.
Jan 20 02:46:09.318806 systemd[1]: session-36.scope: Deactivated successfully.
Jan 20 02:46:09.331231 systemd-logind[1548]: Session 36 logged out. Waiting for processes to exit.
Jan 20 02:46:09.382163 systemd-logind[1548]: Removed session 36.
Jan 20 02:46:11.993870 kubelet[2962]: E0120 02:46:11.991138 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:46:14.266947 systemd[1]: Started sshd@36-10.0.0.100:22-10.0.0.1:34064.service - OpenSSH per-connection server daemon (10.0.0.1:34064).
Jan 20 02:46:14.698318 sshd[5498]: Accepted publickey for core from 10.0.0.1 port 34064 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:46:14.700154 sshd-session[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:46:14.755001 systemd-logind[1548]: New session 37 of user core.
Jan 20 02:46:14.802265 systemd[1]: Started session-37.scope - Session 37 of User core.
Jan 20 02:46:15.696721 sshd[5501]: Connection closed by 10.0.0.1 port 34064
Jan 20 02:46:15.700733 sshd-session[5498]: pam_unix(sshd:session): session closed for user core
Jan 20 02:46:15.772320 systemd[1]: sshd@36-10.0.0.100:22-10.0.0.1:34064.service: Deactivated successfully.
Jan 20 02:46:15.793181 systemd[1]: session-37.scope: Deactivated successfully.
Jan 20 02:46:15.811127 systemd-logind[1548]: Session 37 logged out. Waiting for processes to exit.
Jan 20 02:46:15.829216 systemd-logind[1548]: Removed session 37.
Jan 20 02:46:19.000221 kubelet[2962]: E0120 02:46:18.999919 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:46:20.834988 systemd[1]: Started sshd@37-10.0.0.100:22-10.0.0.1:39980.service - OpenSSH per-connection server daemon (10.0.0.1:39980).
Jan 20 02:46:21.260708 sshd[5516]: Accepted publickey for core from 10.0.0.1 port 39980 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:46:21.267517 sshd-session[5516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:46:21.303796 systemd-logind[1548]: New session 38 of user core.
Jan 20 02:46:21.329891 systemd[1]: Started session-38.scope - Session 38 of User core.
Jan 20 02:46:22.229597 sshd[5519]: Connection closed by 10.0.0.1 port 39980
Jan 20 02:46:22.232868 sshd-session[5516]: pam_unix(sshd:session): session closed for user core
Jan 20 02:46:22.287010 systemd[1]: sshd@37-10.0.0.100:22-10.0.0.1:39980.service: Deactivated successfully.
Jan 20 02:46:22.333871 systemd[1]: session-38.scope: Deactivated successfully.
Jan 20 02:46:22.404350 systemd-logind[1548]: Session 38 logged out. Waiting for processes to exit.
Jan 20 02:46:22.447692 systemd-logind[1548]: Removed session 38.
Jan 20 02:46:22.992702 kubelet[2962]: E0120 02:46:22.990238 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:46:27.362638 systemd[1]: Started sshd@38-10.0.0.100:22-10.0.0.1:57300.service - OpenSSH per-connection server daemon (10.0.0.1:57300).
Jan 20 02:46:28.224993 sshd[5532]: Accepted publickey for core from 10.0.0.1 port 57300 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:46:28.238593 sshd-session[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:46:28.308832 systemd-logind[1548]: New session 39 of user core.
Jan 20 02:46:28.332933 systemd[1]: Started session-39.scope - Session 39 of User core.
Jan 20 02:46:29.377585 sshd[5535]: Connection closed by 10.0.0.1 port 57300
Jan 20 02:46:29.378725 sshd-session[5532]: pam_unix(sshd:session): session closed for user core
Jan 20 02:46:29.415352 systemd[1]: sshd@38-10.0.0.100:22-10.0.0.1:57300.service: Deactivated successfully.
Jan 20 02:46:29.435850 systemd[1]: session-39.scope: Deactivated successfully.
Jan 20 02:46:29.469269 systemd-logind[1548]: Session 39 logged out. Waiting for processes to exit.
Jan 20 02:46:29.484038 systemd-logind[1548]: Removed session 39.
Jan 20 02:46:34.458887 systemd[1]: Started sshd@39-10.0.0.100:22-10.0.0.1:43678.service - OpenSSH per-connection server daemon (10.0.0.1:43678).
Jan 20 02:46:34.816490 sshd[5548]: Accepted publickey for core from 10.0.0.1 port 43678 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:46:34.826542 sshd-session[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:46:34.875759 systemd-logind[1548]: New session 40 of user core.
Jan 20 02:46:34.892798 systemd[1]: Started session-40.scope - Session 40 of User core.
Jan 20 02:46:35.418320 sshd[5551]: Connection closed by 10.0.0.1 port 43678
Jan 20 02:46:35.425495 sshd-session[5548]: pam_unix(sshd:session): session closed for user core
Jan 20 02:46:35.454958 systemd[1]: sshd@39-10.0.0.100:22-10.0.0.1:43678.service: Deactivated successfully.
Jan 20 02:46:35.474720 systemd[1]: session-40.scope: Deactivated successfully.
Jan 20 02:46:35.490647 systemd-logind[1548]: Session 40 logged out. Waiting for processes to exit.
Jan 20 02:46:35.503222 systemd-logind[1548]: Removed session 40.
Jan 20 02:46:40.521786 systemd[1]: Started sshd@40-10.0.0.100:22-10.0.0.1:43692.service - OpenSSH per-connection server daemon (10.0.0.1:43692).
Jan 20 02:46:40.867759 sshd[5564]: Accepted publickey for core from 10.0.0.1 port 43692 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:46:40.871567 sshd-session[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:46:40.920097 systemd-logind[1548]: New session 41 of user core.
Jan 20 02:46:40.950565 systemd[1]: Started session-41.scope - Session 41 of User core.
Jan 20 02:46:41.692206 sshd[5567]: Connection closed by 10.0.0.1 port 43692
Jan 20 02:46:41.693868 sshd-session[5564]: pam_unix(sshd:session): session closed for user core
Jan 20 02:46:41.732512 systemd[1]: sshd@40-10.0.0.100:22-10.0.0.1:43692.service: Deactivated successfully.
Jan 20 02:46:41.755678 systemd[1]: session-41.scope: Deactivated successfully.
Jan 20 02:46:41.770642 systemd-logind[1548]: Session 41 logged out. Waiting for processes to exit.
Jan 20 02:46:41.791706 systemd-logind[1548]: Removed session 41.
Jan 20 02:46:45.993499 kubelet[2962]: E0120 02:46:45.985591 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:46:46.788531 systemd[1]: Started sshd@41-10.0.0.100:22-10.0.0.1:51290.service - OpenSSH per-connection server daemon (10.0.0.1:51290).
Jan 20 02:46:47.246752 sshd[5582]: Accepted publickey for core from 10.0.0.1 port 51290 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:46:47.276771 sshd-session[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:46:47.346719 systemd-logind[1548]: New session 42 of user core.
Jan 20 02:46:47.433086 systemd[1]: Started session-42.scope - Session 42 of User core.
Jan 20 02:46:49.644537 sshd[5585]: Connection closed by 10.0.0.1 port 51290
Jan 20 02:46:49.646718 sshd-session[5582]: pam_unix(sshd:session): session closed for user core
Jan 20 02:46:49.698895 systemd[1]: sshd@41-10.0.0.100:22-10.0.0.1:51290.service: Deactivated successfully.
Jan 20 02:46:49.745633 systemd[1]: session-42.scope: Deactivated successfully.
Jan 20 02:46:49.790876 systemd-logind[1548]: Session 42 logged out. Waiting for processes to exit.
Jan 20 02:46:49.794331 systemd-logind[1548]: Removed session 42.
Jan 20 02:46:53.007585 kubelet[2962]: E0120 02:46:53.005829 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:46:54.757328 systemd[1]: Started sshd@42-10.0.0.100:22-10.0.0.1:52134.service - OpenSSH per-connection server daemon (10.0.0.1:52134).
Jan 20 02:46:55.231170 sshd[5601]: Accepted publickey for core from 10.0.0.1 port 52134 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:46:55.252348 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:46:55.345589 systemd-logind[1548]: New session 43 of user core.
Jan 20 02:46:55.383579 systemd[1]: Started session-43.scope - Session 43 of User core.
Jan 20 02:46:56.511658 sshd[5604]: Connection closed by 10.0.0.1 port 52134
Jan 20 02:46:56.553955 sshd-session[5601]: pam_unix(sshd:session): session closed for user core
Jan 20 02:46:56.601358 systemd[1]: sshd@42-10.0.0.100:22-10.0.0.1:52134.service: Deactivated successfully.
Jan 20 02:46:56.655782 systemd[1]: session-43.scope: Deactivated successfully.
Jan 20 02:46:56.725199 systemd-logind[1548]: Session 43 logged out. Waiting for processes to exit.
Jan 20 02:46:56.877767 systemd-logind[1548]: Removed session 43.
Jan 20 02:47:01.585796 systemd[1]: Started sshd@43-10.0.0.100:22-10.0.0.1:52148.service - OpenSSH per-connection server daemon (10.0.0.1:52148).
Jan 20 02:47:02.051205 sshd[5619]: Accepted publickey for core from 10.0.0.1 port 52148 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:47:02.049017 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:47:02.096319 systemd-logind[1548]: New session 44 of user core.
Jan 20 02:47:02.143978 systemd[1]: Started session-44.scope - Session 44 of User core.
Jan 20 02:47:03.023523 sshd[5622]: Connection closed by 10.0.0.1 port 52148
Jan 20 02:47:03.023096 sshd-session[5619]: pam_unix(sshd:session): session closed for user core
Jan 20 02:47:03.092789 systemd[1]: sshd@43-10.0.0.100:22-10.0.0.1:52148.service: Deactivated successfully.
Jan 20 02:47:03.125756 systemd[1]: session-44.scope: Deactivated successfully.
Jan 20 02:47:03.142695 systemd-logind[1548]: Session 44 logged out. Waiting for processes to exit.
Jan 20 02:47:03.158561 systemd-logind[1548]: Removed session 44.
Jan 20 02:47:08.231893 systemd[1]: Started sshd@44-10.0.0.100:22-10.0.0.1:57464.service - OpenSSH per-connection server daemon (10.0.0.1:57464).
Jan 20 02:47:08.762709 sshd[5635]: Accepted publickey for core from 10.0.0.1 port 57464 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:47:08.776754 sshd-session[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:47:08.846170 systemd-logind[1548]: New session 45 of user core.
Jan 20 02:47:08.872062 systemd[1]: Started session-45.scope - Session 45 of User core.
Jan 20 02:47:10.201522 sshd[5638]: Connection closed by 10.0.0.1 port 57464
Jan 20 02:47:10.203727 sshd-session[5635]: pam_unix(sshd:session): session closed for user core
Jan 20 02:47:10.254727 systemd[1]: sshd@44-10.0.0.100:22-10.0.0.1:57464.service: Deactivated successfully.
Jan 20 02:47:10.301190 systemd[1]: session-45.scope: Deactivated successfully.
Jan 20 02:47:10.314494 systemd-logind[1548]: Session 45 logged out. Waiting for processes to exit.
Jan 20 02:47:10.355759 systemd-logind[1548]: Removed session 45.
Jan 20 02:47:10.994720 kubelet[2962]: E0120 02:47:10.992062 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:47:11.002921 kubelet[2962]: E0120 02:47:10.997100 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:47:14.990496 kubelet[2962]: E0120 02:47:14.986190 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:47:15.321721 systemd[1]: Started sshd@45-10.0.0.100:22-10.0.0.1:45752.service - OpenSSH per-connection server daemon (10.0.0.1:45752).
Jan 20 02:47:15.894802 sshd[5651]: Accepted publickey for core from 10.0.0.1 port 45752 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:47:15.906958 sshd-session[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:47:16.101178 systemd-logind[1548]: New session 46 of user core.
Jan 20 02:47:16.165838 systemd[1]: Started session-46.scope - Session 46 of User core.
Jan 20 02:47:17.415825 sshd[5654]: Connection closed by 10.0.0.1 port 45752
Jan 20 02:47:17.420206 sshd-session[5651]: pam_unix(sshd:session): session closed for user core
Jan 20 02:47:17.469131 systemd-logind[1548]: Session 46 logged out. Waiting for processes to exit.
Jan 20 02:47:17.490823 systemd[1]: sshd@45-10.0.0.100:22-10.0.0.1:45752.service: Deactivated successfully.
Jan 20 02:47:17.559894 systemd[1]: session-46.scope: Deactivated successfully.
Jan 20 02:47:17.618528 systemd-logind[1548]: Removed session 46.
Jan 20 02:47:22.456123 systemd[1]: Started sshd@46-10.0.0.100:22-10.0.0.1:45754.service - OpenSSH per-connection server daemon (10.0.0.1:45754).
Jan 20 02:47:22.894236 sshd[5670]: Accepted publickey for core from 10.0.0.1 port 45754 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:47:22.922595 sshd-session[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:47:22.980098 systemd-logind[1548]: New session 47 of user core.
Jan 20 02:47:23.024835 systemd[1]: Started session-47.scope - Session 47 of User core.
Jan 20 02:47:24.173020 sshd[5673]: Connection closed by 10.0.0.1 port 45754
Jan 20 02:47:24.177807 sshd-session[5670]: pam_unix(sshd:session): session closed for user core
Jan 20 02:47:24.236872 systemd-logind[1548]: Session 47 logged out. Waiting for processes to exit.
Jan 20 02:47:24.253107 systemd[1]: sshd@46-10.0.0.100:22-10.0.0.1:45754.service: Deactivated successfully.
Jan 20 02:47:24.301280 systemd[1]: session-47.scope: Deactivated successfully.
Jan 20 02:47:24.346235 systemd-logind[1548]: Removed session 47.
Jan 20 02:47:29.261996 systemd[1]: Started sshd@47-10.0.0.100:22-10.0.0.1:39616.service - OpenSSH per-connection server daemon (10.0.0.1:39616).
Jan 20 02:47:29.704284 sshd[5686]: Accepted publickey for core from 10.0.0.1 port 39616 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:47:29.714056 sshd-session[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:47:29.799526 systemd-logind[1548]: New session 48 of user core.
Jan 20 02:47:29.818651 systemd[1]: Started session-48.scope - Session 48 of User core.
Jan 20 02:47:31.008916 kubelet[2962]: E0120 02:47:30.996675 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:47:31.369651 sshd[5689]: Connection closed by 10.0.0.1 port 39616
Jan 20 02:47:31.377801 sshd-session[5686]: pam_unix(sshd:session): session closed for user core
Jan 20 02:47:31.420021 systemd[1]: sshd@47-10.0.0.100:22-10.0.0.1:39616.service: Deactivated successfully.
Jan 20 02:47:31.461916 systemd[1]: session-48.scope: Deactivated successfully.
Jan 20 02:47:31.477312 systemd-logind[1548]: Session 48 logged out. Waiting for processes to exit.
Jan 20 02:47:31.520761 systemd-logind[1548]: Removed session 48.
Jan 20 02:47:36.025965 kubelet[2962]: E0120 02:47:36.012666 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:47:36.481998 systemd[1]: Started sshd@48-10.0.0.100:22-10.0.0.1:47274.service - OpenSSH per-connection server daemon (10.0.0.1:47274).
Jan 20 02:47:37.238633 sshd[5703]: Accepted publickey for core from 10.0.0.1 port 47274 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:47:37.281989 sshd-session[5703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:47:37.360461 systemd-logind[1548]: New session 49 of user core.
Jan 20 02:47:37.393335 systemd[1]: Started session-49.scope - Session 49 of User core.
Jan 20 02:47:39.069968 sshd[5706]: Connection closed by 10.0.0.1 port 47274
Jan 20 02:47:39.071980 sshd-session[5703]: pam_unix(sshd:session): session closed for user core
Jan 20 02:47:39.159254 systemd[1]: sshd@48-10.0.0.100:22-10.0.0.1:47274.service: Deactivated successfully.
Jan 20 02:47:39.202080 systemd[1]: session-49.scope: Deactivated successfully.
Jan 20 02:47:39.259869 systemd-logind[1548]: Session 49 logged out. Waiting for processes to exit.
Jan 20 02:47:39.287829 systemd-logind[1548]: Removed session 49.
Jan 20 02:47:44.147270 systemd[1]: Started sshd@49-10.0.0.100:22-10.0.0.1:47282.service - OpenSSH per-connection server daemon (10.0.0.1:47282).
Jan 20 02:47:44.480648 sshd[5719]: Accepted publickey for core from 10.0.0.1 port 47282 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:47:44.502154 sshd-session[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:47:44.545037 systemd-logind[1548]: New session 50 of user core.
Jan 20 02:47:44.579813 systemd[1]: Started session-50.scope - Session 50 of User core.
Jan 20 02:47:45.515578 sshd[5722]: Connection closed by 10.0.0.1 port 47282
Jan 20 02:47:45.527813 sshd-session[5719]: pam_unix(sshd:session): session closed for user core
Jan 20 02:47:45.563999 systemd[1]: sshd@49-10.0.0.100:22-10.0.0.1:47282.service: Deactivated successfully.
Jan 20 02:47:45.589058 systemd[1]: session-50.scope: Deactivated successfully.
Jan 20 02:47:45.612107 systemd-logind[1548]: Session 50 logged out. Waiting for processes to exit.
Jan 20 02:47:45.635856 systemd-logind[1548]: Removed session 50.
Jan 20 02:47:50.618944 systemd[1]: Started sshd@50-10.0.0.100:22-10.0.0.1:38886.service - OpenSSH per-connection server daemon (10.0.0.1:38886).
Jan 20 02:47:51.113059 kubelet[2962]: E0120 02:47:51.107862 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:47:51.442791 sshd[5738]: Accepted publickey for core from 10.0.0.1 port 38886 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:47:51.476609 sshd-session[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:47:51.562715 systemd-logind[1548]: New session 51 of user core.
Jan 20 02:47:51.606675 systemd[1]: Started session-51.scope - Session 51 of User core.
Jan 20 02:47:52.539686 sshd[5741]: Connection closed by 10.0.0.1 port 38886
Jan 20 02:47:52.541761 sshd-session[5738]: pam_unix(sshd:session): session closed for user core
Jan 20 02:47:52.576821 systemd-logind[1548]: Session 51 logged out. Waiting for processes to exit.
Jan 20 02:47:52.591239 systemd[1]: sshd@50-10.0.0.100:22-10.0.0.1:38886.service: Deactivated successfully.
Jan 20 02:47:52.641719 systemd[1]: session-51.scope: Deactivated successfully.
Jan 20 02:47:52.660691 systemd-logind[1548]: Removed session 51.
Jan 20 02:47:57.647029 systemd[1]: Started sshd@51-10.0.0.100:22-10.0.0.1:38410.service - OpenSSH per-connection server daemon (10.0.0.1:38410).
Jan 20 02:47:58.306766 sshd[5754]: Accepted publickey for core from 10.0.0.1 port 38410 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:47:58.331026 sshd-session[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:47:58.439742 systemd-logind[1548]: New session 52 of user core.
Jan 20 02:47:58.491229 systemd[1]: Started session-52.scope - Session 52 of User core.
Jan 20 02:47:59.814709 sshd[5757]: Connection closed by 10.0.0.1 port 38410
Jan 20 02:47:59.815952 sshd-session[5754]: pam_unix(sshd:session): session closed for user core
Jan 20 02:47:59.853954 systemd[1]: sshd@51-10.0.0.100:22-10.0.0.1:38410.service: Deactivated successfully.
Jan 20 02:47:59.878096 systemd[1]: session-52.scope: Deactivated successfully.
Jan 20 02:47:59.900738 systemd-logind[1548]: Session 52 logged out. Waiting for processes to exit.
Jan 20 02:47:59.929109 systemd-logind[1548]: Removed session 52.
Jan 20 02:47:59.996730 kubelet[2962]: E0120 02:47:59.990122 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:04.891056 systemd[1]: Started sshd@52-10.0.0.100:22-10.0.0.1:56026.service - OpenSSH per-connection server daemon (10.0.0.1:56026).
Jan 20 02:48:05.499073 sshd[5773]: Accepted publickey for core from 10.0.0.1 port 56026 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:48:05.508212 sshd-session[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:05.629289 systemd-logind[1548]: New session 53 of user core.
Jan 20 02:48:05.681685 systemd[1]: Started session-53.scope - Session 53 of User core.
Jan 20 02:48:07.008299 kubelet[2962]: E0120 02:48:07.000065 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:07.245636 sshd[5776]: Connection closed by 10.0.0.1 port 56026
Jan 20 02:48:07.253913 sshd-session[5773]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:07.318281 systemd[1]: sshd@52-10.0.0.100:22-10.0.0.1:56026.service: Deactivated successfully.
Jan 20 02:48:07.381086 systemd[1]: session-53.scope: Deactivated successfully.
Jan 20 02:48:07.477967 systemd-logind[1548]: Session 53 logged out. Waiting for processes to exit.
Jan 20 02:48:07.555232 systemd-logind[1548]: Removed session 53.
Jan 20 02:48:12.337182 systemd[1]: Started sshd@53-10.0.0.100:22-10.0.0.1:56040.service - OpenSSH per-connection server daemon (10.0.0.1:56040).
Jan 20 02:48:12.925171 sshd[5789]: Accepted publickey for core from 10.0.0.1 port 56040 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:48:12.948218 sshd-session[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:13.022102 systemd-logind[1548]: New session 54 of user core.
Jan 20 02:48:13.050743 systemd[1]: Started session-54.scope - Session 54 of User core.
Jan 20 02:48:14.040148 sshd[5792]: Connection closed by 10.0.0.1 port 56040
Jan 20 02:48:14.045601 sshd-session[5789]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:14.085781 systemd[1]: sshd@53-10.0.0.100:22-10.0.0.1:56040.service: Deactivated successfully.
Jan 20 02:48:14.106060 systemd[1]: session-54.scope: Deactivated successfully.
Jan 20 02:48:14.127622 systemd-logind[1548]: Session 54 logged out. Waiting for processes to exit.
Jan 20 02:48:14.140617 systemd-logind[1548]: Removed session 54.
Jan 20 02:48:16.009581 kubelet[2962]: E0120 02:48:16.006688 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:16.017740 kubelet[2962]: E0120 02:48:16.015670 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:17.989504 kubelet[2962]: E0120 02:48:17.985942 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:19.166796 systemd[1]: Started sshd@54-10.0.0.100:22-10.0.0.1:50442.service - OpenSSH per-connection server daemon (10.0.0.1:50442).
Jan 20 02:48:20.287998 sshd[5805]: Accepted publickey for core from 10.0.0.1 port 50442 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:48:20.291164 sshd-session[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:20.336976 systemd-logind[1548]: New session 55 of user core.
Jan 20 02:48:20.374599 systemd[1]: Started session-55.scope - Session 55 of User core.
Jan 20 02:48:21.508934 sshd[5810]: Connection closed by 10.0.0.1 port 50442
Jan 20 02:48:21.515779 sshd-session[5805]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:21.538303 systemd[1]: sshd@54-10.0.0.100:22-10.0.0.1:50442.service: Deactivated successfully.
Jan 20 02:48:21.589724 systemd[1]: session-55.scope: Deactivated successfully.
Jan 20 02:48:21.604253 systemd-logind[1548]: Session 55 logged out. Waiting for processes to exit.
Jan 20 02:48:21.632939 systemd-logind[1548]: Removed session 55.
Jan 20 02:48:26.573981 systemd[1]: Started sshd@55-10.0.0.100:22-10.0.0.1:56284.service - OpenSSH per-connection server daemon (10.0.0.1:56284).
Jan 20 02:48:26.850135 sshd[5824]: Accepted publickey for core from 10.0.0.1 port 56284 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:48:26.867196 sshd-session[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:26.920001 systemd-logind[1548]: New session 56 of user core.
Jan 20 02:48:26.938810 systemd[1]: Started session-56.scope - Session 56 of User core.
Jan 20 02:48:27.807835 sshd[5827]: Connection closed by 10.0.0.1 port 56284
Jan 20 02:48:27.809944 sshd-session[5824]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:27.903243 systemd[1]: sshd@55-10.0.0.100:22-10.0.0.1:56284.service: Deactivated successfully.
Jan 20 02:48:27.938008 systemd[1]: session-56.scope: Deactivated successfully.
Jan 20 02:48:27.964684 systemd-logind[1548]: Session 56 logged out. Waiting for processes to exit.
Jan 20 02:48:28.037801 systemd-logind[1548]: Removed session 56.
Jan 20 02:48:39.577309 systemd[1]: Started sshd@56-10.0.0.100:22-10.0.0.1:56290.service - OpenSSH per-connection server daemon (10.0.0.1:56290).
Jan 20 02:48:40.410746 sshd[5841]: Accepted publickey for core from 10.0.0.1 port 56290 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:48:40.433892 sshd-session[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:40.543646 systemd-logind[1548]: New session 57 of user core.
Jan 20 02:48:40.586902 systemd[1]: Started session-57.scope - Session 57 of User core.
Jan 20 02:48:41.677850 sshd[5845]: Connection closed by 10.0.0.1 port 56290
Jan 20 02:48:41.683161 sshd-session[5841]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:41.758082 systemd[1]: sshd@56-10.0.0.100:22-10.0.0.1:56290.service: Deactivated successfully.
Jan 20 02:48:42.079797 systemd[1]: session-57.scope: Deactivated successfully.
Jan 20 02:48:42.107598 kubelet[2962]: E0120 02:48:42.101270 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:42.137709 systemd-logind[1548]: Session 57 logged out. Waiting for processes to exit.
Jan 20 02:48:42.183240 systemd[1]: Started sshd@57-10.0.0.100:22-10.0.0.1:39144.service - OpenSSH per-connection server daemon (10.0.0.1:39144).
Jan 20 02:48:42.269803 systemd-logind[1548]: Removed session 57.
Jan 20 02:48:43.076689 sshd[5859]: Accepted publickey for core from 10.0.0.1 port 39144 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:48:43.104263 sshd-session[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:43.204712 systemd-logind[1548]: New session 58 of user core.
Jan 20 02:48:43.228889 systemd[1]: Started session-58.scope - Session 58 of User core.
Jan 20 02:48:46.789993 sshd[5862]: Connection closed by 10.0.0.1 port 39144
Jan 20 02:48:46.798594 sshd-session[5859]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:46.844964 systemd[1]: sshd@57-10.0.0.100:22-10.0.0.1:39144.service: Deactivated successfully.
Jan 20 02:48:46.862295 systemd[1]: session-58.scope: Deactivated successfully.
Jan 20 02:48:46.869117 systemd-logind[1548]: Session 58 logged out. Waiting for processes to exit.
Jan 20 02:48:46.895100 systemd[1]: Started sshd@58-10.0.0.100:22-10.0.0.1:54952.service - OpenSSH per-connection server daemon (10.0.0.1:54952).
Jan 20 02:48:46.925262 systemd-logind[1548]: Removed session 58.
Jan 20 02:48:47.723037 sshd[5875]: Accepted publickey for core from 10.0.0.1 port 54952 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:48:47.719751 sshd-session[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:47.845029 systemd-logind[1548]: New session 59 of user core.
Jan 20 02:48:47.889352 systemd[1]: Started session-59.scope - Session 59 of User core.
Jan 20 02:48:47.994254 kubelet[2962]: E0120 02:48:47.993278 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:48:54.038620 sshd[5878]: Connection closed by 10.0.0.1 port 54952
Jan 20 02:48:54.043093 sshd-session[5875]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:54.127976 systemd[1]: sshd@58-10.0.0.100:22-10.0.0.1:54952.service: Deactivated successfully.
Jan 20 02:48:54.180705 systemd[1]: session-59.scope: Deactivated successfully.
Jan 20 02:48:54.181179 systemd[1]: session-59.scope: Consumed 1.177s CPU time, 44.5M memory peak.
Jan 20 02:48:54.214198 systemd-logind[1548]: Session 59 logged out. Waiting for processes to exit.
Jan 20 02:48:54.261131 systemd[1]: Started sshd@59-10.0.0.100:22-10.0.0.1:54960.service - OpenSSH per-connection server daemon (10.0.0.1:54960).
Jan 20 02:48:54.325725 systemd-logind[1548]: Removed session 59.
Jan 20 02:48:55.254208 sshd[5903]: Accepted publickey for core from 10.0.0.1 port 54960 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:48:55.261825 sshd-session[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:55.341196 systemd-logind[1548]: New session 60 of user core.
Jan 20 02:48:55.378343 systemd[1]: Started session-60.scope - Session 60 of User core.
Jan 20 02:48:57.285998 sshd[5906]: Connection closed by 10.0.0.1 port 54960
Jan 20 02:48:57.297960 sshd-session[5903]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:57.411655 systemd[1]: sshd@59-10.0.0.100:22-10.0.0.1:54960.service: Deactivated successfully.
Jan 20 02:48:57.435674 systemd[1]: session-60.scope: Deactivated successfully.
Jan 20 02:48:57.450864 systemd-logind[1548]: Session 60 logged out. Waiting for processes to exit.
Jan 20 02:48:57.499043 systemd[1]: Started sshd@60-10.0.0.100:22-10.0.0.1:52852.service - OpenSSH per-connection server daemon (10.0.0.1:52852).
Jan 20 02:48:57.502092 systemd-logind[1548]: Removed session 60.
Jan 20 02:48:57.941168 sshd[5918]: Accepted publickey for core from 10.0.0.1 port 52852 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:48:57.944058 sshd-session[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:48:57.981607 systemd-logind[1548]: New session 61 of user core.
Jan 20 02:48:58.025858 systemd[1]: Started session-61.scope - Session 61 of User core.
Jan 20 02:48:58.877320 sshd[5921]: Connection closed by 10.0.0.1 port 52852
Jan 20 02:48:58.883909 sshd-session[5918]: pam_unix(sshd:session): session closed for user core
Jan 20 02:48:58.913543 systemd[1]: sshd@60-10.0.0.100:22-10.0.0.1:52852.service: Deactivated successfully.
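session-59.scope above is the only scope in this stretch that exits with a resource report ("Consumed 1.177s CPU time, 44.5M memory peak"): systemd logs that line at unit teardown when CPU and memory accounting data exist for the unit, which this longer-lived session accumulated while the others did negligible work. If such reports are wanted for every unit, accounting can be defaulted in the manager configuration; a sketch using systemd's documented system.conf keys (on cgroup v2 hosts much of this accounting is already enabled):

# /etc/systemd/system.conf
[Manager]
DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes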
Jan 20 02:48:58.929623 systemd[1]: session-61.scope: Deactivated successfully.
Jan 20 02:48:58.934266 systemd-logind[1548]: Session 61 logged out. Waiting for processes to exit.
Jan 20 02:48:58.948318 systemd-logind[1548]: Removed session 61.
Jan 20 02:49:03.944862 systemd[1]: Started sshd@61-10.0.0.100:22-10.0.0.1:52854.service - OpenSSH per-connection server daemon (10.0.0.1:52854).
Jan 20 02:49:04.382922 sshd[5937]: Accepted publickey for core from 10.0.0.1 port 52854 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:49:04.396937 sshd-session[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:04.442698 systemd-logind[1548]: New session 62 of user core.
Jan 20 02:49:04.498115 systemd[1]: Started session-62.scope - Session 62 of User core.
Jan 20 02:49:04.997899 kubelet[2962]: E0120 02:49:04.990351 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:05.765180 sshd[5940]: Connection closed by 10.0.0.1 port 52854
Jan 20 02:49:05.773958 sshd-session[5937]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:05.824671 systemd-logind[1548]: Session 62 logged out. Waiting for processes to exit.
Jan 20 02:49:05.850634 systemd[1]: sshd@61-10.0.0.100:22-10.0.0.1:52854.service: Deactivated successfully.
Jan 20 02:49:05.886313 systemd[1]: session-62.scope: Deactivated successfully.
Jan 20 02:49:05.939871 systemd-logind[1548]: Removed session 62.
Jan 20 02:49:10.931194 systemd[1]: Started sshd@62-10.0.0.100:22-10.0.0.1:34228.service - OpenSSH per-connection server daemon (10.0.0.1:34228).
Jan 20 02:49:11.792559 sshd[5953]: Accepted publickey for core from 10.0.0.1 port 34228 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:49:11.835174 sshd-session[5953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:11.958926 systemd-logind[1548]: New session 63 of user core.
Jan 20 02:49:11.995327 systemd[1]: Started session-63.scope - Session 63 of User core.
Jan 20 02:49:13.235105 sshd[5956]: Connection closed by 10.0.0.1 port 34228
Jan 20 02:49:13.240262 sshd-session[5953]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:13.304550 systemd[1]: sshd@62-10.0.0.100:22-10.0.0.1:34228.service: Deactivated successfully.
Jan 20 02:49:13.358168 systemd[1]: session-63.scope: Deactivated successfully.
Jan 20 02:49:13.434681 systemd-logind[1548]: Session 63 logged out. Waiting for processes to exit.
Jan 20 02:49:13.456877 systemd-logind[1548]: Removed session 63.
Jan 20 02:49:15.995224 kubelet[2962]: E0120 02:49:15.986194 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:16.986915 kubelet[2962]: E0120 02:49:16.986091 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:18.306160 systemd[1]: Started sshd@63-10.0.0.100:22-10.0.0.1:45458.service - OpenSSH per-connection server daemon (10.0.0.1:45458).
Jan 20 02:49:18.677905 sshd[5970]: Accepted publickey for core from 10.0.0.1 port 45458 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:49:18.695166 sshd-session[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:18.801825 systemd-logind[1548]: New session 64 of user core.
Jan 20 02:49:18.817166 systemd[1]: Started session-64.scope - Session 64 of User core.
Jan 20 02:49:19.990125 sshd[5973]: Connection closed by 10.0.0.1 port 45458
Jan 20 02:49:20.006032 sshd-session[5970]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:20.037227 systemd[1]: sshd@63-10.0.0.100:22-10.0.0.1:45458.service: Deactivated successfully.
Jan 20 02:49:20.058984 systemd[1]: session-64.scope: Deactivated successfully.
Jan 20 02:49:20.085887 systemd-logind[1548]: Session 64 logged out. Waiting for processes to exit.
Jan 20 02:49:20.101141 systemd-logind[1548]: Removed session 64.
Jan 20 02:49:25.160250 kubelet[2962]: E0120 02:49:25.160208 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:25.187559 kubelet[2962]: E0120 02:49:25.160686 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:25.226734 systemd[1]: Started sshd@64-10.0.0.100:22-10.0.0.1:34534.service - OpenSSH per-connection server daemon (10.0.0.1:34534).
Jan 20 02:49:25.683582 sshd[5989]: Accepted publickey for core from 10.0.0.1 port 34534 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:49:25.710895 sshd-session[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:25.802061 systemd-logind[1548]: New session 65 of user core.
Jan 20 02:49:25.854798 systemd[1]: Started session-65.scope - Session 65 of User core.
Jan 20 02:49:26.980038 sshd[5992]: Connection closed by 10.0.0.1 port 34534
Jan 20 02:49:26.982318 sshd-session[5989]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:27.042198 systemd-logind[1548]: Session 65 logged out. Waiting for processes to exit.
Jan 20 02:49:27.078261 systemd[1]: sshd@64-10.0.0.100:22-10.0.0.1:34534.service: Deactivated successfully.
Jan 20 02:49:27.115905 systemd[1]: session-65.scope: Deactivated successfully.
Jan 20 02:49:27.182288 systemd-logind[1548]: Removed session 65.
Jan 20 02:49:31.067916 kubelet[2962]: E0120 02:49:31.055816 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:49:32.170578 systemd[1]: Started sshd@65-10.0.0.100:22-10.0.0.1:34548.service - OpenSSH per-connection server daemon (10.0.0.1:34548).
Jan 20 02:49:32.959015 sshd[6006]: Accepted publickey for core from 10.0.0.1 port 34548 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:49:33.011337 sshd-session[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:33.197136 systemd-logind[1548]: New session 66 of user core.
Jan 20 02:49:33.272708 systemd[1]: Started session-66.scope - Session 66 of User core.
Jan 20 02:49:35.238256 sshd[6009]: Connection closed by 10.0.0.1 port 34548
Jan 20 02:49:35.227040 sshd-session[6006]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:35.285946 systemd[1]: sshd@65-10.0.0.100:22-10.0.0.1:34548.service: Deactivated successfully.
Jan 20 02:49:35.336014 systemd[1]: session-66.scope: Deactivated successfully.
Jan 20 02:49:35.412884 systemd-logind[1548]: Session 66 logged out. Waiting for processes to exit.
Jan 20 02:49:35.438946 systemd-logind[1548]: Removed session 66.
Jan 20 02:49:40.434851 systemd[1]: Started sshd@66-10.0.0.100:22-10.0.0.1:39486.service - OpenSSH per-connection server daemon (10.0.0.1:39486).
Jan 20 02:49:41.321348 sshd[6023]: Accepted publickey for core from 10.0.0.1 port 39486 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:49:41.357697 sshd-session[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:41.434723 systemd-logind[1548]: New session 67 of user core.
Jan 20 02:49:41.495711 systemd[1]: Started session-67.scope - Session 67 of User core.
Jan 20 02:49:42.539982 sshd[6027]: Connection closed by 10.0.0.1 port 39486
Jan 20 02:49:42.537133 sshd-session[6023]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:42.594874 systemd[1]: sshd@66-10.0.0.100:22-10.0.0.1:39486.service: Deactivated successfully.
Jan 20 02:49:42.631164 systemd[1]: session-67.scope: Deactivated successfully.
Jan 20 02:49:42.664096 systemd-logind[1548]: Session 67 logged out. Waiting for processes to exit.
Jan 20 02:49:42.682725 systemd-logind[1548]: Removed session 67.
Jan 20 02:49:47.673935 systemd[1]: Started sshd@67-10.0.0.100:22-10.0.0.1:35972.service - OpenSSH per-connection server daemon (10.0.0.1:35972).
Jan 20 02:49:48.125640 sshd[6040]: Accepted publickey for core from 10.0.0.1 port 35972 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:49:48.145569 sshd-session[6040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:48.244253 systemd-logind[1548]: New session 68 of user core.
Jan 20 02:49:48.281203 systemd[1]: Started session-68.scope - Session 68 of User core.
Jan 20 02:49:49.322642 sshd[6043]: Connection closed by 10.0.0.1 port 35972
Jan 20 02:49:49.319874 sshd-session[6040]: pam_unix(sshd:session): session closed for user core
Jan 20 02:49:49.373183 systemd[1]: sshd@67-10.0.0.100:22-10.0.0.1:35972.service: Deactivated successfully.
Jan 20 02:49:49.434245 systemd[1]: session-68.scope: Deactivated successfully.
Jan 20 02:49:49.499182 systemd-logind[1548]: Session 68 logged out. Waiting for processes to exit.
Jan 20 02:49:49.539117 systemd-logind[1548]: Removed session 68.
Jan 20 02:49:54.443719 systemd[1]: Started sshd@68-10.0.0.100:22-10.0.0.1:35978.service - OpenSSH per-connection server daemon (10.0.0.1:35978).
Jan 20 02:49:55.256209 sshd[6058]: Accepted publickey for core from 10.0.0.1 port 35978 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:49:55.274995 sshd-session[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:49:55.329752 systemd-logind[1548]: New session 69 of user core.
Jan 20 02:49:55.386714 systemd[1]: Started session-69.scope - Session 69 of User core.
Jan 20 02:49:56.498715 sshd[6061]: Connection closed by 10.0.0.1 port 35978 Jan 20 02:49:56.497951 sshd-session[6058]: pam_unix(sshd:session): session closed for user core Jan 20 02:49:56.538891 systemd[1]: sshd@68-10.0.0.100:22-10.0.0.1:35978.service: Deactivated successfully. Jan 20 02:49:56.556252 systemd[1]: session-69.scope: Deactivated successfully. Jan 20 02:49:56.583098 systemd-logind[1548]: Session 69 logged out. Waiting for processes to exit. Jan 20 02:49:56.596014 systemd-logind[1548]: Removed session 69. Jan 20 02:50:01.637288 systemd[1]: Started sshd@69-10.0.0.100:22-10.0.0.1:38736.service - OpenSSH per-connection server daemon (10.0.0.1:38736). Jan 20 02:50:02.032348 kubelet[2962]: E0120 02:50:02.031819 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:02.544018 sshd[6077]: Accepted publickey for core from 10.0.0.1 port 38736 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:50:02.574761 sshd-session[6077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:50:02.650091 systemd-logind[1548]: New session 70 of user core. Jan 20 02:50:02.694026 systemd[1]: Started session-70.scope - Session 70 of User core. Jan 20 02:50:04.133325 sshd[6080]: Connection closed by 10.0.0.1 port 38736 Jan 20 02:50:04.163285 sshd-session[6077]: pam_unix(sshd:session): session closed for user core Jan 20 02:50:04.262984 systemd[1]: sshd@69-10.0.0.100:22-10.0.0.1:38736.service: Deactivated successfully. Jan 20 02:50:04.284096 systemd-logind[1548]: Session 70 logged out. Waiting for processes to exit. Jan 20 02:50:04.316921 systemd[1]: session-70.scope: Deactivated successfully. Jan 20 02:50:04.371059 systemd-logind[1548]: Removed session 70. Jan 20 02:50:05.999972 kubelet[2962]: E0120 02:50:05.999926 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:09.179201 systemd[1]: Started sshd@70-10.0.0.100:22-10.0.0.1:46996.service - OpenSSH per-connection server daemon (10.0.0.1:46996). Jan 20 02:50:10.060266 sshd[6095]: Accepted publickey for core from 10.0.0.1 port 46996 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:50:10.112845 sshd-session[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:50:10.235151 systemd-logind[1548]: New session 71 of user core. Jan 20 02:50:10.296301 systemd[1]: Started session-71.scope - Session 71 of User core. Jan 20 02:50:11.582063 sshd[6098]: Connection closed by 10.0.0.1 port 46996 Jan 20 02:50:11.581839 sshd-session[6095]: pam_unix(sshd:session): session closed for user core Jan 20 02:50:11.656005 systemd[1]: sshd@70-10.0.0.100:22-10.0.0.1:46996.service: Deactivated successfully. Jan 20 02:50:11.692165 systemd[1]: session-71.scope: Deactivated successfully. Jan 20 02:50:11.731592 systemd-logind[1548]: Session 71 logged out. Waiting for processes to exit. Jan 20 02:50:11.796699 systemd-logind[1548]: Removed session 71. Jan 20 02:50:16.651157 systemd[1]: Started sshd@71-10.0.0.100:22-10.0.0.1:56628.service - OpenSSH per-connection server daemon (10.0.0.1:56628). 
Jan 20 02:50:17.055800 sshd[6112]: Accepted publickey for core from 10.0.0.1 port 56628 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:50:17.062223 sshd-session[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:50:17.115637 systemd-logind[1548]: New session 72 of user core. Jan 20 02:50:17.149113 systemd[1]: Started session-72.scope - Session 72 of User core. Jan 20 02:50:17.960932 sshd[6115]: Connection closed by 10.0.0.1 port 56628 Jan 20 02:50:17.962766 sshd-session[6112]: pam_unix(sshd:session): session closed for user core Jan 20 02:50:18.016727 systemd[1]: sshd@71-10.0.0.100:22-10.0.0.1:56628.service: Deactivated successfully. Jan 20 02:50:18.068298 systemd[1]: session-72.scope: Deactivated successfully. Jan 20 02:50:18.093265 systemd-logind[1548]: Session 72 logged out. Waiting for processes to exit. Jan 20 02:50:18.115537 systemd-logind[1548]: Removed session 72. Jan 20 02:50:21.012909 kubelet[2962]: E0120 02:50:21.012607 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:23.086927 systemd[1]: Started sshd@72-10.0.0.100:22-10.0.0.1:56638.service - OpenSSH per-connection server daemon (10.0.0.1:56638). Jan 20 02:50:23.538200 sshd[6130]: Accepted publickey for core from 10.0.0.1 port 56638 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:50:23.556131 sshd-session[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:50:23.613093 systemd-logind[1548]: New session 73 of user core. Jan 20 02:50:23.634168 systemd[1]: Started session-73.scope - Session 73 of User core. Jan 20 02:50:24.536718 sshd[6133]: Connection closed by 10.0.0.1 port 56638 Jan 20 02:50:24.541150 sshd-session[6130]: pam_unix(sshd:session): session closed for user core Jan 20 02:50:24.617305 systemd[1]: sshd@72-10.0.0.100:22-10.0.0.1:56638.service: Deactivated successfully. Jan 20 02:50:24.674268 systemd[1]: session-73.scope: Deactivated successfully. Jan 20 02:50:24.776681 systemd-logind[1548]: Session 73 logged out. Waiting for processes to exit. Jan 20 02:50:24.804240 systemd-logind[1548]: Removed session 73. Jan 20 02:50:29.625971 systemd[1]: Started sshd@73-10.0.0.100:22-10.0.0.1:45300.service - OpenSSH per-connection server daemon (10.0.0.1:45300). Jan 20 02:50:30.470219 sshd[6148]: Accepted publickey for core from 10.0.0.1 port 45300 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:50:30.486099 sshd-session[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:50:30.524924 systemd-logind[1548]: New session 74 of user core. Jan 20 02:50:30.548853 systemd[1]: Started session-74.scope - Session 74 of User core. Jan 20 02:50:31.401772 sshd[6151]: Connection closed by 10.0.0.1 port 45300 Jan 20 02:50:31.404070 sshd-session[6148]: pam_unix(sshd:session): session closed for user core Jan 20 02:50:31.446687 systemd[1]: sshd@73-10.0.0.100:22-10.0.0.1:45300.service: Deactivated successfully. Jan 20 02:50:31.479514 systemd[1]: session-74.scope: Deactivated successfully. Jan 20 02:50:31.498492 systemd-logind[1548]: Session 74 logged out. Waiting for processes to exit. Jan 20 02:50:31.520289 systemd-logind[1548]: Removed session 74. 
Jan 20 02:50:33.992076 kubelet[2962]: E0120 02:50:33.991233 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:34.006582 kubelet[2962]: E0120 02:50:33.994968 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:36.541943 systemd[1]: Started sshd@74-10.0.0.100:22-10.0.0.1:46422.service - OpenSSH per-connection server daemon (10.0.0.1:46422). Jan 20 02:50:37.009056 kubelet[2962]: E0120 02:50:37.007933 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:37.369671 sshd[6165]: Accepted publickey for core from 10.0.0.1 port 46422 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:50:37.371323 sshd-session[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:50:37.469258 systemd-logind[1548]: New session 75 of user core. Jan 20 02:50:37.514350 systemd[1]: Started session-75.scope - Session 75 of User core. Jan 20 02:50:38.903676 sshd[6168]: Connection closed by 10.0.0.1 port 46422 Jan 20 02:50:38.901843 sshd-session[6165]: pam_unix(sshd:session): session closed for user core Jan 20 02:50:38.947881 systemd[1]: sshd@74-10.0.0.100:22-10.0.0.1:46422.service: Deactivated successfully. Jan 20 02:50:38.983183 systemd[1]: session-75.scope: Deactivated successfully. Jan 20 02:50:39.043183 systemd-logind[1548]: Session 75 logged out. Waiting for processes to exit. Jan 20 02:50:39.066991 systemd-logind[1548]: Removed session 75. Jan 20 02:50:42.003816 kubelet[2962]: E0120 02:50:42.003248 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:43.984636 systemd[1]: Started sshd@75-10.0.0.100:22-10.0.0.1:46424.service - OpenSSH per-connection server daemon (10.0.0.1:46424). Jan 20 02:50:43.996185 kubelet[2962]: E0120 02:50:43.995703 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:50:44.363520 sshd[6183]: Accepted publickey for core from 10.0.0.1 port 46424 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:50:44.379329 sshd-session[6183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:50:44.455163 systemd-logind[1548]: New session 76 of user core. Jan 20 02:50:44.474869 systemd[1]: Started session-76.scope - Session 76 of User core. Jan 20 02:50:45.541541 sshd[6186]: Connection closed by 10.0.0.1 port 46424 Jan 20 02:50:45.557662 sshd-session[6183]: pam_unix(sshd:session): session closed for user core Jan 20 02:50:45.604336 systemd[1]: sshd@75-10.0.0.100:22-10.0.0.1:46424.service: Deactivated successfully. Jan 20 02:50:45.645254 systemd[1]: session-76.scope: Deactivated successfully. Jan 20 02:50:45.676353 systemd-logind[1548]: Session 76 logged out. Waiting for processes to exit. Jan 20 02:50:45.706086 systemd-logind[1548]: Removed session 76. Jan 20 02:50:50.617962 systemd[1]: Started sshd@76-10.0.0.100:22-10.0.0.1:56662.service - OpenSSH per-connection server daemon (10.0.0.1:56662). 
Jan 20 02:50:50.984220 sshd[6202]: Accepted publickey for core from 10.0.0.1 port 56662 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:50:50.994848 sshd-session[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:50:51.027096 systemd-logind[1548]: New session 77 of user core. Jan 20 02:50:51.065119 systemd[1]: Started session-77.scope - Session 77 of User core. Jan 20 02:50:52.120246 sshd[6205]: Connection closed by 10.0.0.1 port 56662 Jan 20 02:50:52.124882 sshd-session[6202]: pam_unix(sshd:session): session closed for user core Jan 20 02:50:52.158302 systemd[1]: sshd@76-10.0.0.100:22-10.0.0.1:56662.service: Deactivated successfully. Jan 20 02:50:52.196148 systemd[1]: session-77.scope: Deactivated successfully. Jan 20 02:50:52.248347 systemd-logind[1548]: Session 77 logged out. Waiting for processes to exit. Jan 20 02:50:52.277795 systemd-logind[1548]: Removed session 77. Jan 20 02:50:57.164922 systemd[1]: Started sshd@77-10.0.0.100:22-10.0.0.1:39602.service - OpenSSH per-connection server daemon (10.0.0.1:39602). Jan 20 02:50:57.606487 sshd[6218]: Accepted publickey for core from 10.0.0.1 port 39602 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:50:57.619095 sshd-session[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:50:57.644204 systemd-logind[1548]: New session 78 of user core. Jan 20 02:50:57.684277 systemd[1]: Started session-78.scope - Session 78 of User core. Jan 20 02:50:58.325599 sshd[6221]: Connection closed by 10.0.0.1 port 39602 Jan 20 02:50:58.334892 sshd-session[6218]: pam_unix(sshd:session): session closed for user core Jan 20 02:50:58.373027 systemd[1]: sshd@77-10.0.0.100:22-10.0.0.1:39602.service: Deactivated successfully. Jan 20 02:50:58.392240 systemd[1]: session-78.scope: Deactivated successfully. Jan 20 02:50:58.418792 systemd-logind[1548]: Session 78 logged out. Waiting for processes to exit. Jan 20 02:50:58.445764 systemd-logind[1548]: Removed session 78. Jan 20 02:51:03.412186 systemd[1]: Started sshd@78-10.0.0.100:22-10.0.0.1:39606.service - OpenSSH per-connection server daemon (10.0.0.1:39606). Jan 20 02:51:03.814707 sshd[6236]: Accepted publickey for core from 10.0.0.1 port 39606 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:51:03.819534 sshd-session[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:51:03.858762 systemd-logind[1548]: New session 79 of user core. Jan 20 02:51:03.914087 systemd[1]: Started session-79.scope - Session 79 of User core. Jan 20 02:51:04.807521 sshd[6239]: Connection closed by 10.0.0.1 port 39606 Jan 20 02:51:04.810164 sshd-session[6236]: pam_unix(sshd:session): session closed for user core Jan 20 02:51:04.823997 systemd[1]: sshd@78-10.0.0.100:22-10.0.0.1:39606.service: Deactivated successfully. Jan 20 02:51:04.830906 systemd[1]: session-79.scope: Deactivated successfully. Jan 20 02:51:04.839176 systemd-logind[1548]: Session 79 logged out. Waiting for processes to exit. Jan 20 02:51:04.868270 systemd-logind[1548]: Removed session 79. Jan 20 02:51:05.037509 kubelet[2962]: E0120 02:51:05.037223 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:51:09.859875 systemd[1]: Started sshd@79-10.0.0.100:22-10.0.0.1:44188.service - OpenSSH per-connection server daemon (10.0.0.1:44188). 
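Every cycle in this run follows the same systemd/sshd choreography: a per-connection unit named sshd@<seq>-<local>:22-<peer>:<port>.service accepts the connection, pam_unix and systemd-logind open session-<n>.scope, and both are deactivated when the client disconnects. The unit name encodes both endpoints, which makes sessions easy to correlate mechanically; an illustrative parser (the regex and field names are this sketch's assumptions):

    import re

    # Matches per-connection unit names like
    # "sshd@76-10.0.0.100:22-10.0.0.1:56662.service" seen throughout this log.
    UNIT_RE = re.compile(
        r"sshd@(?P<seq>\d+)-(?P<laddr>[\d.]+):(?P<lport>\d+)-"
        r"(?P<raddr>[\d.]+):(?P<rport>\d+)\.service")

    def parse_sshd_unit(name):
        m = UNIT_RE.fullmatch(name)
        if m is None:
            raise ValueError("not a per-connection sshd unit: %r" % name)
        return m.groupdict()

    print(parse_sshd_unit("sshd@76-10.0.0.100:22-10.0.0.1:56662.service"))
    # {'seq': '76', 'laddr': '10.0.0.100', 'lport': '22',
    #  'raddr': '10.0.0.1', 'rport': '56662'}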
Jan 20 02:51:10.157694 sshd[6254]: Accepted publickey for core from 10.0.0.1 port 44188 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:51:10.169885 sshd-session[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:51:10.224121 systemd-logind[1548]: New session 80 of user core. Jan 20 02:51:10.245290 systemd[1]: Started session-80.scope - Session 80 of User core. Jan 20 02:51:10.967196 sshd[6257]: Connection closed by 10.0.0.1 port 44188 Jan 20 02:51:10.982843 sshd-session[6254]: pam_unix(sshd:session): session closed for user core Jan 20 02:51:11.044166 systemd[1]: sshd@79-10.0.0.100:22-10.0.0.1:44188.service: Deactivated successfully. Jan 20 02:51:11.054870 systemd[1]: session-80.scope: Deactivated successfully. Jan 20 02:51:11.066056 systemd-logind[1548]: Session 80 logged out. Waiting for processes to exit. Jan 20 02:51:11.077896 systemd[1]: Started sshd@80-10.0.0.100:22-10.0.0.1:44198.service - OpenSSH per-connection server daemon (10.0.0.1:44198). Jan 20 02:51:11.102969 systemd-logind[1548]: Removed session 80. Jan 20 02:51:11.461028 sshd[6271]: Accepted publickey for core from 10.0.0.1 port 44198 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:51:11.465148 sshd-session[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:51:11.524223 systemd-logind[1548]: New session 81 of user core. Jan 20 02:51:11.542088 systemd[1]: Started session-81.scope - Session 81 of User core. Jan 20 02:51:13.015508 kubelet[2962]: E0120 02:51:13.015029 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:51:21.669926 containerd[1582]: time="2026-01-20T02:51:21.669277883Z" level=info msg="StopContainer for \"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274\" with timeout 30 (s)" Jan 20 02:51:21.723678 containerd[1582]: time="2026-01-20T02:51:21.716939827Z" level=info msg="Stop container \"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274\" with signal terminated" Jan 20 02:51:22.088284 systemd[1]: cri-containerd-bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274.scope: Deactivated successfully. Jan 20 02:51:22.089222 systemd[1]: cri-containerd-bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274.scope: Consumed 5.440s CPU time, 29.7M memory peak, 4K written to disk. Jan 20 02:51:22.144715 containerd[1582]: time="2026-01-20T02:51:22.120990353Z" level=info msg="received container exit event container_id:\"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274\" id:\"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274\" pid:4866 exited_at:{seconds:1768877482 nanos:107167963}" Jan 20 02:51:22.280609 containerd[1582]: time="2026-01-20T02:51:22.280552076Z" level=info msg="StopContainer for \"bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60\" with timeout 2 (s)" Jan 20 02:51:22.310189 containerd[1582]: time="2026-01-20T02:51:22.302945608Z" level=info msg="Stop container \"bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60\" with signal terminated" Jan 20 02:51:22.309966 sshd-session[6271]: pam_unix(sshd:session): session closed for user core Jan 20 02:51:22.310977 sshd[6274]: Connection closed by 10.0.0.1 port 44198 Jan 20 02:51:22.435942 systemd[1]: sshd@80-10.0.0.100:22-10.0.0.1:44198.service: Deactivated successfully. 
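The two StopContainer requests above show CRI's two-phase stop: containerd delivers the container's stop signal (SIGTERM unless the image declares otherwise) and escalates to SIGKILL only if the task is still alive when the grace period runs out (30 s for the first container, 2 s for the second); the cgroup scope is then deactivated, which is where the "Consumed ... CPU time, ... memory peak" accounting comes from. The same terminate-then-kill pattern against a plain subprocess, as a sketch rather than the actual CRI call:

    import subprocess

    def stop_gracefully(proc, timeout):
        """SIGTERM first; escalate to SIGKILL if the grace period expires."""
        proc.terminate()                       # like StopContainer's stop signal
        try:
            return proc.wait(timeout=timeout)  # exited within the grace period
        except subprocess.TimeoutExpired:
            proc.kill()                        # like the post-timeout SIGKILL
            return proc.wait()

    p = subprocess.Popen(["sleep", "60"])
    print("exit status:", stop_gracefully(p, timeout=2))  # -15 (killed by SIGTERM)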
Jan 20 02:51:22.444055 containerd[1582]: time="2026-01-20T02:51:22.443929634Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 02:51:22.458053 systemd[1]: session-81.scope: Deactivated successfully. Jan 20 02:51:22.462253 systemd[1]: session-81.scope: Consumed 1.966s CPU time, 28.6M memory peak. Jan 20 02:51:22.483625 systemd-logind[1548]: Session 81 logged out. Waiting for processes to exit. Jan 20 02:51:22.506109 systemd[1]: Started sshd@81-10.0.0.100:22-10.0.0.1:37486.service - OpenSSH per-connection server daemon (10.0.0.1:37486). Jan 20 02:51:22.560038 systemd-logind[1548]: Removed session 81. Jan 20 02:51:22.821091 systemd-networkd[1465]: lxc_health: Link DOWN Jan 20 02:51:22.822251 systemd-networkd[1465]: lxc_health: Lost carrier Jan 20 02:51:23.238293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274-rootfs.mount: Deactivated successfully. Jan 20 02:51:23.240299 systemd[1]: cri-containerd-bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60.scope: Deactivated successfully. Jan 20 02:51:23.242196 systemd[1]: cri-containerd-bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60.scope: Consumed 41.715s CPU time, 142.3M memory peak, 2.9M read from disk, 13.3M written to disk. Jan 20 02:51:23.247185 containerd[1582]: time="2026-01-20T02:51:23.247142331Z" level=info msg="received container exit event container_id:\"bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60\" id:\"bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60\" pid:3689 exited_at:{seconds:1768877483 nanos:246105388}" Jan 20 02:51:23.524317 sshd[6329]: Accepted publickey for core from 10.0.0.1 port 37486 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:51:23.540270 sshd-session[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:51:23.559596 containerd[1582]: time="2026-01-20T02:51:23.543559013Z" level=info msg="StopContainer for \"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274\" returns successfully" Jan 20 02:51:23.620235 systemd-logind[1548]: New session 82 of user core. Jan 20 02:51:23.664072 containerd[1582]: time="2026-01-20T02:51:23.648987070Z" level=info msg="StopPodSandbox for \"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\"" Jan 20 02:51:23.677772 systemd[1]: Started session-82.scope - Session 82 of User core. 
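The cni reload error at the top of this block is the expected fallout of tearing Cilium down: containerd watches /etc/cni/net.d for changes, and the REMOVE event for 05-cilium.conf leaves the directory with no network config at all, so the CNI plugin reverts to uninitialized and kubelet soon starts reporting "Container runtime network not ready ... cni plugin not initialized" (visible below). A sketch of that reload decision, assuming the conventional config suffixes; this is not containerd's actual code:

    import os

    CNI_DIR = "/etc/cni/net.d"
    CONF_SUFFIXES = (".conf", ".conflist", ".json")

    def reload_cni_config(cni_dir=CNI_DIR):
        """Return the CNI config files, or fail the way the log line above does."""
        try:
            names = sorted(n for n in os.listdir(cni_dir)
                           if n.endswith(CONF_SUFFIXES))
        except FileNotFoundError:
            names = []
        if not names:
            raise RuntimeError("no network config found in %s: "
                               "cni plugin not initialized" % cni_dir)
        return names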
Jan 20 02:51:23.692288 containerd[1582]: time="2026-01-20T02:51:23.678122062Z" level=info msg="Container to stop \"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:51:23.692288 containerd[1582]: time="2026-01-20T02:51:23.678172416Z" level=info msg="Container to stop \"92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:51:23.909001 kubelet[2962]: I0120 02:51:23.896571 2962 scope.go:117] "RemoveContainer" containerID="92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7" Jan 20 02:51:23.920592 containerd[1582]: time="2026-01-20T02:51:23.913853530Z" level=info msg="RemoveContainer for \"92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7\"" Jan 20 02:51:24.095730 systemd[1]: cri-containerd-10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3.scope: Deactivated successfully. Jan 20 02:51:24.198140 containerd[1582]: time="2026-01-20T02:51:24.193892241Z" level=info msg="received sandbox exit event container_id:\"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\" id:\"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\" exit_status:137 exited_at:{seconds:1768877484 nanos:146948361}" monitor_name=podsandbox Jan 20 02:51:24.334946 containerd[1582]: time="2026-01-20T02:51:24.334746083Z" level=info msg="RemoveContainer for \"92f8076feaf44d7a5a2f598adf8976109b44ccf0dad8ad0142d2c4f485ae3ae7\" returns successfully" Jan 20 02:51:24.475598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60-rootfs.mount: Deactivated successfully. 
Jan 20 02:51:24.782555 containerd[1582]: time="2026-01-20T02:51:24.769131213Z" level=info msg="StopContainer for \"bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60\" returns successfully" Jan 20 02:51:24.818264 containerd[1582]: time="2026-01-20T02:51:24.811997755Z" level=info msg="StopPodSandbox for \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\"" Jan 20 02:51:24.818264 containerd[1582]: time="2026-01-20T02:51:24.812173150Z" level=info msg="Container to stop \"e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:51:24.818264 containerd[1582]: time="2026-01-20T02:51:24.812196864Z" level=info msg="Container to stop \"788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:51:24.818264 containerd[1582]: time="2026-01-20T02:51:24.812208806Z" level=info msg="Container to stop \"42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:51:24.818264 containerd[1582]: time="2026-01-20T02:51:24.812222100Z" level=info msg="Container to stop \"bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:51:24.818264 containerd[1582]: time="2026-01-20T02:51:24.812233802Z" level=info msg="Container to stop \"ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:51:25.238900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3-rootfs.mount: Deactivated successfully. Jan 20 02:51:25.253209 systemd[1]: cri-containerd-a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26.scope: Deactivated successfully. Jan 20 02:51:25.320718 containerd[1582]: time="2026-01-20T02:51:25.318697651Z" level=info msg="shim disconnected" id=10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3 namespace=k8s.io Jan 20 02:51:25.320718 containerd[1582]: time="2026-01-20T02:51:25.318740118Z" level=warning msg="cleaning up after shim disconnected" id=10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3 namespace=k8s.io Jan 20 02:51:25.320718 containerd[1582]: time="2026-01-20T02:51:25.318838071Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 02:51:25.350676 containerd[1582]: time="2026-01-20T02:51:25.342929233Z" level=info msg="received sandbox exit event container_id:\"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" id:\"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" exit_status:137 exited_at:{seconds:1768877485 nanos:342248631}" monitor_name=podsandbox Jan 20 02:51:26.047582 kubelet[2962]: E0120 02:51:26.043053 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:51:26.197289 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3-shm.mount: Deactivated successfully. Jan 20 02:51:26.217620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26-rootfs.mount: Deactivated successfully. 
Jan 20 02:51:26.244059 containerd[1582]: time="2026-01-20T02:51:26.244009080Z" level=info msg="TearDown network for sandbox \"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\" successfully" Jan 20 02:51:26.244665 containerd[1582]: time="2026-01-20T02:51:26.244634430Z" level=info msg="StopPodSandbox for \"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\" returns successfully" Jan 20 02:51:26.288836 containerd[1582]: time="2026-01-20T02:51:26.281122097Z" level=info msg="received sandbox container exit event sandbox_id:\"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\" exit_status:137 exited_at:{seconds:1768877484 nanos:146948361}" monitor_name=criService Jan 20 02:51:26.414056 kubelet[2962]: I0120 02:51:26.412612 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b072ca2a-6aa0-458a-88fb-0b9971bf97b9-cilium-config-path\") pod \"b072ca2a-6aa0-458a-88fb-0b9971bf97b9\" (UID: \"b072ca2a-6aa0-458a-88fb-0b9971bf97b9\") " Jan 20 02:51:26.414056 kubelet[2962]: I0120 02:51:26.412769 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkm2g\" (UniqueName: \"kubernetes.io/projected/b072ca2a-6aa0-458a-88fb-0b9971bf97b9-kube-api-access-vkm2g\") pod \"b072ca2a-6aa0-458a-88fb-0b9971bf97b9\" (UID: \"b072ca2a-6aa0-458a-88fb-0b9971bf97b9\") " Jan 20 02:51:26.468584 containerd[1582]: time="2026-01-20T02:51:26.431283048Z" level=info msg="shim disconnected" id=a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26 namespace=k8s.io Jan 20 02:51:26.468584 containerd[1582]: time="2026-01-20T02:51:26.439781407Z" level=warning msg="cleaning up after shim disconnected" id=a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26 namespace=k8s.io Jan 20 02:51:26.468584 containerd[1582]: time="2026-01-20T02:51:26.439817665Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 02:51:26.511003 systemd[1]: var-lib-kubelet-pods-b072ca2a\x2d6aa0\x2d458a\x2d88fb\x2d0b9971bf97b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvkm2g.mount: Deactivated successfully. Jan 20 02:51:26.545284 kubelet[2962]: I0120 02:51:26.545227 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b072ca2a-6aa0-458a-88fb-0b9971bf97b9-kube-api-access-vkm2g" (OuterVolumeSpecName: "kube-api-access-vkm2g") pod "b072ca2a-6aa0-458a-88fb-0b9971bf97b9" (UID: "b072ca2a-6aa0-458a-88fb-0b9971bf97b9"). InnerVolumeSpecName "kube-api-access-vkm2g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 02:51:26.583817 kubelet[2962]: I0120 02:51:26.521308 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b072ca2a-6aa0-458a-88fb-0b9971bf97b9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b072ca2a-6aa0-458a-88fb-0b9971bf97b9" (UID: "b072ca2a-6aa0-458a-88fb-0b9971bf97b9"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 02:51:26.641091 kubelet[2962]: I0120 02:51:26.641014 2962 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b072ca2a-6aa0-458a-88fb-0b9971bf97b9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 20 02:51:26.642809 kubelet[2962]: I0120 02:51:26.641313 2962 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vkm2g\" (UniqueName: \"kubernetes.io/projected/b072ca2a-6aa0-458a-88fb-0b9971bf97b9-kube-api-access-vkm2g\") on node \"localhost\" DevicePath \"\"" Jan 20 02:51:26.888890 containerd[1582]: time="2026-01-20T02:51:26.888704311Z" level=info msg="received sandbox container exit event sandbox_id:\"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" exit_status:137 exited_at:{seconds:1768877485 nanos:342248631}" monitor_name=criService Jan 20 02:51:26.910313 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26-shm.mount: Deactivated successfully. Jan 20 02:51:26.939583 containerd[1582]: time="2026-01-20T02:51:26.930270118Z" level=info msg="TearDown network for sandbox \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" successfully" Jan 20 02:51:26.939583 containerd[1582]: time="2026-01-20T02:51:26.938701152Z" level=info msg="StopPodSandbox for \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" returns successfully" Jan 20 02:51:27.132724 systemd[1]: Removed slice kubepods-besteffort-podb072ca2a_6aa0_458a_88fb_0b9971bf97b9.slice - libcontainer container kubepods-besteffort-podb072ca2a_6aa0_458a_88fb_0b9971bf97b9.slice. Jan 20 02:51:27.133047 systemd[1]: kubepods-besteffort-podb072ca2a_6aa0_458a_88fb_0b9971bf97b9.slice: Consumed 7.965s CPU time, 31.7M memory peak, 8K written to disk. Jan 20 02:51:27.186200 kubelet[2962]: I0120 02:51:27.172315 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-cilium-run\") pod \"edb7e671-ac61-412f-b840-58c11df66d8f\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " Jan 20 02:51:27.186200 kubelet[2962]: I0120 02:51:27.173268 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "edb7e671-ac61-412f-b840-58c11df66d8f" (UID: "edb7e671-ac61-412f-b840-58c11df66d8f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:51:27.186200 kubelet[2962]: I0120 02:51:27.180773 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "edb7e671-ac61-412f-b840-58c11df66d8f" (UID: "edb7e671-ac61-412f-b840-58c11df66d8f"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:51:27.186200 kubelet[2962]: I0120 02:51:27.180719 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-host-proc-sys-net\") pod \"edb7e671-ac61-412f-b840-58c11df66d8f\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " Jan 20 02:51:27.201947 kubelet[2962]: I0120 02:51:27.197498 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-cni-path" (OuterVolumeSpecName: "cni-path") pod "edb7e671-ac61-412f-b840-58c11df66d8f" (UID: "edb7e671-ac61-412f-b840-58c11df66d8f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:51:27.201947 kubelet[2962]: I0120 02:51:27.197649 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-cni-path\") pod \"edb7e671-ac61-412f-b840-58c11df66d8f\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " Jan 20 02:51:27.201947 kubelet[2962]: I0120 02:51:27.197713 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnq6c\" (UniqueName: \"kubernetes.io/projected/edb7e671-ac61-412f-b840-58c11df66d8f-kube-api-access-dnq6c\") pod \"edb7e671-ac61-412f-b840-58c11df66d8f\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " Jan 20 02:51:27.201947 kubelet[2962]: I0120 02:51:27.197742 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-bpf-maps\") pod \"edb7e671-ac61-412f-b840-58c11df66d8f\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " Jan 20 02:51:27.201947 kubelet[2962]: I0120 02:51:27.197762 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-cilium-cgroup\") pod \"edb7e671-ac61-412f-b840-58c11df66d8f\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " Jan 20 02:51:27.201947 kubelet[2962]: I0120 02:51:27.197791 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edb7e671-ac61-412f-b840-58c11df66d8f-cilium-config-path\") pod \"edb7e671-ac61-412f-b840-58c11df66d8f\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " Jan 20 02:51:27.202546 kubelet[2962]: I0120 02:51:27.197813 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-hostproc\") pod \"edb7e671-ac61-412f-b840-58c11df66d8f\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " Jan 20 02:51:27.202546 kubelet[2962]: I0120 02:51:27.197837 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edb7e671-ac61-412f-b840-58c11df66d8f-clustermesh-secrets\") pod \"edb7e671-ac61-412f-b840-58c11df66d8f\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " Jan 20 02:51:27.202546 kubelet[2962]: I0120 02:51:27.197864 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-xtables-lock\") pod 
\"edb7e671-ac61-412f-b840-58c11df66d8f\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " Jan 20 02:51:27.202546 kubelet[2962]: I0120 02:51:27.197957 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-host-proc-sys-kernel\") pod \"edb7e671-ac61-412f-b840-58c11df66d8f\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " Jan 20 02:51:27.202546 kubelet[2962]: I0120 02:51:27.197987 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-etc-cni-netd\") pod \"edb7e671-ac61-412f-b840-58c11df66d8f\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " Jan 20 02:51:27.202546 kubelet[2962]: I0120 02:51:27.198011 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-lib-modules\") pod \"edb7e671-ac61-412f-b840-58c11df66d8f\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " Jan 20 02:51:27.202801 kubelet[2962]: I0120 02:51:27.198035 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edb7e671-ac61-412f-b840-58c11df66d8f-hubble-tls\") pod \"edb7e671-ac61-412f-b840-58c11df66d8f\" (UID: \"edb7e671-ac61-412f-b840-58c11df66d8f\") " Jan 20 02:51:27.202801 kubelet[2962]: I0120 02:51:27.198087 2962 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 20 02:51:27.202801 kubelet[2962]: I0120 02:51:27.198102 2962 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 20 02:51:27.202801 kubelet[2962]: I0120 02:51:27.198119 2962 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 20 02:51:27.203185 kubelet[2962]: I0120 02:51:27.203085 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-hostproc" (OuterVolumeSpecName: "hostproc") pod "edb7e671-ac61-412f-b840-58c11df66d8f" (UID: "edb7e671-ac61-412f-b840-58c11df66d8f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:51:27.207903 kubelet[2962]: I0120 02:51:27.207219 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "edb7e671-ac61-412f-b840-58c11df66d8f" (UID: "edb7e671-ac61-412f-b840-58c11df66d8f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:51:27.208225 kubelet[2962]: I0120 02:51:27.208058 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "edb7e671-ac61-412f-b840-58c11df66d8f" (UID: "edb7e671-ac61-412f-b840-58c11df66d8f"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:51:27.208225 kubelet[2962]: I0120 02:51:27.208111 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "edb7e671-ac61-412f-b840-58c11df66d8f" (UID: "edb7e671-ac61-412f-b840-58c11df66d8f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:51:27.208225 kubelet[2962]: I0120 02:51:27.208142 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "edb7e671-ac61-412f-b840-58c11df66d8f" (UID: "edb7e671-ac61-412f-b840-58c11df66d8f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:51:27.208225 kubelet[2962]: I0120 02:51:27.208166 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "edb7e671-ac61-412f-b840-58c11df66d8f" (UID: "edb7e671-ac61-412f-b840-58c11df66d8f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:51:27.222921 kubelet[2962]: I0120 02:51:27.215634 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "edb7e671-ac61-412f-b840-58c11df66d8f" (UID: "edb7e671-ac61-412f-b840-58c11df66d8f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:51:27.313121 systemd[1]: var-lib-kubelet-pods-edb7e671\x2dac61\x2d412f\x2db840\x2d58c11df66d8f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 20 02:51:27.317799 kubelet[2962]: I0120 02:51:27.316836 2962 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 20 02:51:27.317799 kubelet[2962]: I0120 02:51:27.316876 2962 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 20 02:51:27.317799 kubelet[2962]: I0120 02:51:27.316892 2962 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 20 02:51:27.317799 kubelet[2962]: I0120 02:51:27.316907 2962 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 20 02:51:27.317799 kubelet[2962]: I0120 02:51:27.316919 2962 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 20 02:51:27.317799 kubelet[2962]: I0120 02:51:27.316933 2962 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 20 02:51:27.317799 kubelet[2962]: I0120 02:51:27.316945 2962 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edb7e671-ac61-412f-b840-58c11df66d8f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 20 02:51:27.338895 kubelet[2962]: I0120 02:51:27.337662 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edb7e671-ac61-412f-b840-58c11df66d8f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "edb7e671-ac61-412f-b840-58c11df66d8f" (UID: "edb7e671-ac61-412f-b840-58c11df66d8f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 02:51:27.423059 kubelet[2962]: I0120 02:51:27.415696 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edb7e671-ac61-412f-b840-58c11df66d8f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "edb7e671-ac61-412f-b840-58c11df66d8f" (UID: "edb7e671-ac61-412f-b840-58c11df66d8f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 02:51:27.462130 systemd[1]: var-lib-kubelet-pods-edb7e671\x2dac61\x2d412f\x2db840\x2d58c11df66d8f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddnq6c.mount: Deactivated successfully. Jan 20 02:51:27.462310 systemd[1]: var-lib-kubelet-pods-edb7e671\x2dac61\x2d412f\x2db840\x2d58c11df66d8f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 20 02:51:27.485725 kubelet[2962]: I0120 02:51:27.434261 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edb7e671-ac61-412f-b840-58c11df66d8f-kube-api-access-dnq6c" (OuterVolumeSpecName: "kube-api-access-dnq6c") pod "edb7e671-ac61-412f-b840-58c11df66d8f" (UID: "edb7e671-ac61-412f-b840-58c11df66d8f"). InnerVolumeSpecName "kube-api-access-dnq6c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 02:51:27.485725 kubelet[2962]: I0120 02:51:27.451221 2962 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edb7e671-ac61-412f-b840-58c11df66d8f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 20 02:51:27.623019 kubelet[2962]: I0120 02:51:27.618904 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edb7e671-ac61-412f-b840-58c11df66d8f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "edb7e671-ac61-412f-b840-58c11df66d8f" (UID: "edb7e671-ac61-412f-b840-58c11df66d8f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 02:51:27.630874 kubelet[2962]: I0120 02:51:27.629705 2962 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edb7e671-ac61-412f-b840-58c11df66d8f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 20 02:51:27.630874 kubelet[2962]: I0120 02:51:27.629742 2962 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edb7e671-ac61-412f-b840-58c11df66d8f-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 20 02:51:27.630874 kubelet[2962]: I0120 02:51:27.629769 2962 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dnq6c\" (UniqueName: \"kubernetes.io/projected/edb7e671-ac61-412f-b840-58c11df66d8f-kube-api-access-dnq6c\") on node \"localhost\" DevicePath \"\"" Jan 20 02:51:27.772788 kubelet[2962]: I0120 02:51:27.769613 2962 scope.go:117] "RemoveContainer" containerID="bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60" Jan 20 02:51:28.011680 containerd[1582]: time="2026-01-20T02:51:28.005749032Z" level=info msg="RemoveContainer for \"bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60\"" Jan 20 02:51:28.095158 kubelet[2962]: I0120 02:51:28.095019 2962 scope.go:117] "RemoveContainer" containerID="e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7" Jan 20 02:51:28.112188 systemd[1]: Removed slice kubepods-burstable-podedb7e671_ac61_412f_b840_58c11df66d8f.slice - libcontainer container kubepods-burstable-podedb7e671_ac61_412f_b840_58c11df66d8f.slice. Jan 20 02:51:28.119077 systemd[1]: kubepods-burstable-podedb7e671_ac61_412f_b840_58c11df66d8f.slice: Consumed 42.398s CPU time, 142.6M memory peak, 2.9M read from disk, 13.3M written to disk. 
Jan 20 02:51:28.226806 containerd[1582]: time="2026-01-20T02:51:28.226762947Z" level=info msg="RemoveContainer for \"bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60\" returns successfully" Jan 20 02:51:28.236269 kubelet[2962]: I0120 02:51:28.236222 2962 scope.go:117] "RemoveContainer" containerID="42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b" Jan 20 02:51:28.408238 containerd[1582]: time="2026-01-20T02:51:28.408095511Z" level=info msg="RemoveContainer for \"e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7\"" Jan 20 02:51:28.412256 containerd[1582]: time="2026-01-20T02:51:28.412220985Z" level=info msg="RemoveContainer for \"42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b\"" Jan 20 02:51:28.615007 containerd[1582]: time="2026-01-20T02:51:28.608020079Z" level=info msg="RemoveContainer for \"e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7\" returns successfully" Jan 20 02:51:28.615166 kubelet[2962]: I0120 02:51:28.611675 2962 scope.go:117] "RemoveContainer" containerID="788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048" Jan 20 02:51:28.676745 containerd[1582]: time="2026-01-20T02:51:28.665827393Z" level=info msg="RemoveContainer for \"788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048\"" Jan 20 02:51:28.679099 containerd[1582]: time="2026-01-20T02:51:28.666753230Z" level=info msg="RemoveContainer for \"42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b\" returns successfully" Jan 20 02:51:28.679217 kubelet[2962]: I0120 02:51:28.678116 2962 scope.go:117] "RemoveContainer" containerID="ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3" Jan 20 02:51:28.757687 containerd[1582]: time="2026-01-20T02:51:28.756563628Z" level=info msg="RemoveContainer for \"788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048\" returns successfully" Jan 20 02:51:28.758191 kubelet[2962]: I0120 02:51:28.758153 2962 scope.go:117] "RemoveContainer" containerID="42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b" Jan 20 02:51:28.768841 containerd[1582]: time="2026-01-20T02:51:28.768779767Z" level=error msg="ContainerStatus for \"42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b\": not found" Jan 20 02:51:28.821661 kubelet[2962]: E0120 02:51:28.802226 2962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b\": not found" containerID="42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b" Jan 20 02:51:28.821661 kubelet[2962]: E0120 02:51:28.816292 2962 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b\": rpc error: code = NotFound desc = an error occurred when try to find container \"42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b\": not found" containerID="42eb5fd99bf299019171f289a8c6d00027cfde06fc5783101f8876ad12b3bf0b" Jan 20 02:51:28.821661 kubelet[2962]: I0120 02:51:28.821170 2962 scope.go:117] "RemoveContainer" containerID="bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60" Jan 20 02:51:28.831858 containerd[1582]: time="2026-01-20T02:51:28.831797310Z" level=error 
msg="ContainerStatus for \"bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60\": not found" Jan 20 02:51:28.892627 kubelet[2962]: E0120 02:51:28.880245 2962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60\": not found" containerID="bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60" Jan 20 02:51:28.892627 kubelet[2962]: E0120 02:51:28.880291 2962 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60\": rpc error: code = NotFound desc = an error occurred when try to find container \"bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60\": not found" containerID="bfcd7b1d67f56f8e5a72f637a624d9ccb2cf56c1e58abae14a3efa2cf53d9f60" Jan 20 02:51:28.896794 containerd[1582]: time="2026-01-20T02:51:28.896749722Z" level=info msg="RemoveContainer for \"ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3\"" Jan 20 02:51:28.937267 kubelet[2962]: I0120 02:51:28.911850 2962 scope.go:117] "RemoveContainer" containerID="bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274" Jan 20 02:51:29.038853 containerd[1582]: time="2026-01-20T02:51:29.038804843Z" level=info msg="RemoveContainer for \"ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3\" returns successfully" Jan 20 02:51:29.060195 kubelet[2962]: I0120 02:51:29.059930 2962 scope.go:117] "RemoveContainer" containerID="e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7" Jan 20 02:51:29.071833 containerd[1582]: time="2026-01-20T02:51:29.071771234Z" level=error msg="ContainerStatus for \"e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7\": not found" Jan 20 02:51:29.092260 kubelet[2962]: E0120 02:51:29.092205 2962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7\": not found" containerID="e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7" Jan 20 02:51:29.093680 kubelet[2962]: I0120 02:51:29.092947 2962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7"} err="failed to get container status \"e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"e254107108bfe537b841ec52af8b43cc024633832af529813be1748b63e125b7\": not found" Jan 20 02:51:29.096603 kubelet[2962]: I0120 02:51:29.096578 2962 scope.go:117] "RemoveContainer" containerID="788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048" Jan 20 02:51:29.101092 containerd[1582]: time="2026-01-20T02:51:29.071816876Z" level=info msg="RemoveContainer for \"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274\"" Jan 20 02:51:29.102845 containerd[1582]: time="2026-01-20T02:51:29.102798867Z" level=error msg="ContainerStatus for 
\"788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048\": not found" Jan 20 02:51:29.103709 kubelet[2962]: E0120 02:51:29.103678 2962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048\": not found" containerID="788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048" Jan 20 02:51:29.103853 kubelet[2962]: I0120 02:51:29.103819 2962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048"} err="failed to get container status \"788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048\": rpc error: code = NotFound desc = an error occurred when try to find container \"788e4d194d0eac74dd82b3b49f94514cb44178cb6177cbadaf5998270d967048\": not found" Jan 20 02:51:29.103936 kubelet[2962]: I0120 02:51:29.103919 2962 scope.go:117] "RemoveContainer" containerID="bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274" Jan 20 02:51:29.110890 containerd[1582]: time="2026-01-20T02:51:29.110670433Z" level=info msg="RemoveContainer for \"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274\"" Jan 20 02:51:29.111273 containerd[1582]: time="2026-01-20T02:51:29.111234950Z" level=error msg="RemoveContainer for \"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274\" failed" error="rpc error: code = Unknown desc = failed to set removing state for container \"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274\": container is already in removing state" Jan 20 02:51:29.113673 kubelet[2962]: E0120 02:51:29.113213 2962 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274\": container is already in removing state" containerID="bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274" Jan 20 02:51:29.113673 kubelet[2962]: I0120 02:51:29.113540 2962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274"} err="rpc error: code = Unknown desc = failed to set removing state for container \"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274\": container is already in removing state" Jan 20 02:51:29.116202 kubelet[2962]: I0120 02:51:29.115988 2962 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b072ca2a-6aa0-458a-88fb-0b9971bf97b9" path="/var/lib/kubelet/pods/b072ca2a-6aa0-458a-88fb-0b9971bf97b9/volumes" Jan 20 02:51:29.117613 kubelet[2962]: I0120 02:51:29.117209 2962 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edb7e671-ac61-412f-b840-58c11df66d8f" path="/var/lib/kubelet/pods/edb7e671-ac61-412f-b840-58c11df66d8f/volumes" Jan 20 02:51:29.236126 containerd[1582]: time="2026-01-20T02:51:29.218768702Z" level=info msg="RemoveContainer for \"bc14dc16891fcfcbbcaf46034e0a570a38ae94126b8871952130d02d55d8f274\" returns successfully" Jan 20 02:51:29.259722 kubelet[2962]: I0120 02:51:29.259681 2962 scope.go:117] "RemoveContainer" containerID="ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3" Jan 20 
02:51:29.306007 containerd[1582]: time="2026-01-20T02:51:29.305942568Z" level=error msg="ContainerStatus for \"ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3\": not found" Jan 20 02:51:29.318991 kubelet[2962]: E0120 02:51:29.317980 2962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3\": not found" containerID="ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3" Jan 20 02:51:29.318991 kubelet[2962]: E0120 02:51:29.318033 2962 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3\": not found" containerID="ee968e2231375b1d0717a107ffe43cb6e8fb9b29912afe9bc88bca263c5712c3" Jan 20 02:51:29.339164 containerd[1582]: time="2026-01-20T02:51:29.336773327Z" level=info msg="StopPodSandbox for \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\"" Jan 20 02:51:29.339164 containerd[1582]: time="2026-01-20T02:51:29.336975310Z" level=info msg="TearDown network for sandbox \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" successfully" Jan 20 02:51:29.339164 containerd[1582]: time="2026-01-20T02:51:29.336998002Z" level=info msg="StopPodSandbox for \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" returns successfully" Jan 20 02:51:29.365694 containerd[1582]: time="2026-01-20T02:51:29.349842566Z" level=info msg="RemovePodSandbox for \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\"" Jan 20 02:51:29.365694 containerd[1582]: time="2026-01-20T02:51:29.349911644Z" level=info msg="Forcibly stopping sandbox \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\"" Jan 20 02:51:29.380218 containerd[1582]: time="2026-01-20T02:51:29.368571619Z" level=info msg="TearDown network for sandbox \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" successfully" Jan 20 02:51:29.387116 containerd[1582]: time="2026-01-20T02:51:29.383033764Z" level=info msg="Ensure that sandbox a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26 in task-service has been cleanup successfully" Jan 20 02:51:29.528710 containerd[1582]: time="2026-01-20T02:51:29.526109087Z" level=info msg="RemovePodSandbox \"a9e04e005b444ee52a3c882947dd66c0f22f9595e6dea777ba31fb679d0e5a26\" returns successfully" Jan 20 02:51:29.639799 containerd[1582]: time="2026-01-20T02:51:29.639742369Z" level=info msg="StopPodSandbox for \"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\"" Jan 20 02:51:29.723701 containerd[1582]: time="2026-01-20T02:51:29.723635286Z" level=info msg="TearDown network for sandbox \"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\" successfully" Jan 20 02:51:29.725845 containerd[1582]: time="2026-01-20T02:51:29.725802928Z" level=info msg="StopPodSandbox for \"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\" returns successfully" Jan 20 02:51:29.767792 containerd[1582]: time="2026-01-20T02:51:29.765194546Z" level=info msg="RemovePodSandbox for 
\"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\"" Jan 20 02:51:29.767792 containerd[1582]: time="2026-01-20T02:51:29.765258324Z" level=info msg="Forcibly stopping sandbox \"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\"" Jan 20 02:51:29.767792 containerd[1582]: time="2026-01-20T02:51:29.766632614Z" level=info msg="TearDown network for sandbox \"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\" successfully" Jan 20 02:51:29.781890 containerd[1582]: time="2026-01-20T02:51:29.779257500Z" level=info msg="Ensure that sandbox 10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3 in task-service has been cleanup successfully" Jan 20 02:51:29.877580 containerd[1582]: time="2026-01-20T02:51:29.874939019Z" level=info msg="RemovePodSandbox \"10f0e9bb44182d06f3d52517ce9e63e40f313ffad4e027bcf7bd02be5dd11ef3\" returns successfully" Jan 20 02:51:31.129095 kubelet[2962]: E0120 02:51:31.126766 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:51:32.039821 sshd[6351]: Connection closed by 10.0.0.1 port 37486 Jan 20 02:51:32.025109 sshd-session[6329]: pam_unix(sshd:session): session closed for user core Jan 20 02:51:32.136897 systemd[1]: sshd@81-10.0.0.100:22-10.0.0.1:37486.service: Deactivated successfully. Jan 20 02:51:32.191230 systemd[1]: session-82.scope: Deactivated successfully. Jan 20 02:51:32.192586 systemd[1]: session-82.scope: Consumed 1.502s CPU time, 26.9M memory peak. Jan 20 02:51:32.219030 systemd-logind[1548]: Session 82 logged out. Waiting for processes to exit. Jan 20 02:51:32.319093 systemd[1]: Started sshd@82-10.0.0.100:22-10.0.0.1:56844.service - OpenSSH per-connection server daemon (10.0.0.1:56844). Jan 20 02:51:32.475818 systemd-logind[1548]: Removed session 82. 
Jan 20 02:51:32.642106 kubelet[2962]: I0120 02:51:32.641758 2962 memory_manager.go:355] "RemoveStaleState removing state" podUID="b072ca2a-6aa0-458a-88fb-0b9971bf97b9" containerName="cilium-operator"
Jan 20 02:51:32.646236 kubelet[2962]: I0120 02:51:32.645768 2962 memory_manager.go:355] "RemoveStaleState removing state" podUID="b072ca2a-6aa0-458a-88fb-0b9971bf97b9" containerName="cilium-operator"
Jan 20 02:51:32.676741 kubelet[2962]: I0120 02:51:32.663743 2962 memory_manager.go:355] "RemoveStaleState removing state" podUID="edb7e671-ac61-412f-b840-58c11df66d8f" containerName="cilium-agent"
Jan 20 02:51:32.814620 kubelet[2962]: I0120 02:51:32.805301 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ed55905-772c-43e1-bea3-d3cf6b324443-cilium-cgroup\") pod \"cilium-pwtkj\" (UID: \"7ed55905-772c-43e1-bea3-d3cf6b324443\") " pod="kube-system/cilium-pwtkj"
Jan 20 02:51:32.828536 kubelet[2962]: I0120 02:51:32.814871 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ed55905-772c-43e1-bea3-d3cf6b324443-cni-path\") pod \"cilium-pwtkj\" (UID: \"7ed55905-772c-43e1-bea3-d3cf6b324443\") " pod="kube-system/cilium-pwtkj"
Jan 20 02:51:32.828857 kubelet[2962]: I0120 02:51:32.828817 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ed55905-772c-43e1-bea3-d3cf6b324443-cilium-run\") pod \"cilium-pwtkj\" (UID: \"7ed55905-772c-43e1-bea3-d3cf6b324443\") " pod="kube-system/cilium-pwtkj"
Jan 20 02:51:32.828991 kubelet[2962]: I0120 02:51:32.828968 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7ed55905-772c-43e1-bea3-d3cf6b324443-cilium-ipsec-secrets\") pod \"cilium-pwtkj\" (UID: \"7ed55905-772c-43e1-bea3-d3cf6b324443\") " pod="kube-system/cilium-pwtkj"
Jan 20 02:51:32.829112 kubelet[2962]: I0120 02:51:32.829093 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ed55905-772c-43e1-bea3-d3cf6b324443-host-proc-sys-kernel\") pod \"cilium-pwtkj\" (UID: \"7ed55905-772c-43e1-bea3-d3cf6b324443\") " pod="kube-system/cilium-pwtkj"
Jan 20 02:51:32.829219 kubelet[2962]: I0120 02:51:32.829197 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ed55905-772c-43e1-bea3-d3cf6b324443-etc-cni-netd\") pod \"cilium-pwtkj\" (UID: \"7ed55905-772c-43e1-bea3-d3cf6b324443\") " pod="kube-system/cilium-pwtkj"
Jan 20 02:51:32.842920 kubelet[2962]: I0120 02:51:32.842877 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ed55905-772c-43e1-bea3-d3cf6b324443-hubble-tls\") pod \"cilium-pwtkj\" (UID: \"7ed55905-772c-43e1-bea3-d3cf6b324443\") " pod="kube-system/cilium-pwtkj"
Jan 20 02:51:32.843173 kubelet[2962]: I0120 02:51:32.843144 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ed55905-772c-43e1-bea3-d3cf6b324443-xtables-lock\") pod \"cilium-pwtkj\" (UID: \"7ed55905-772c-43e1-bea3-d3cf6b324443\") " pod="kube-system/cilium-pwtkj"
Jan 20 02:51:32.843296 kubelet[2962]: I0120 02:51:32.843263 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zrxr\" (UniqueName: \"kubernetes.io/projected/7ed55905-772c-43e1-bea3-d3cf6b324443-kube-api-access-7zrxr\") pod \"cilium-pwtkj\" (UID: \"7ed55905-772c-43e1-bea3-d3cf6b324443\") " pod="kube-system/cilium-pwtkj"
Jan 20 02:51:32.858904 kubelet[2962]: I0120 02:51:32.858855 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ed55905-772c-43e1-bea3-d3cf6b324443-bpf-maps\") pod \"cilium-pwtkj\" (UID: \"7ed55905-772c-43e1-bea3-d3cf6b324443\") " pod="kube-system/cilium-pwtkj"
Jan 20 02:51:32.859144 kubelet[2962]: I0120 02:51:32.859116 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ed55905-772c-43e1-bea3-d3cf6b324443-hostproc\") pod \"cilium-pwtkj\" (UID: \"7ed55905-772c-43e1-bea3-d3cf6b324443\") " pod="kube-system/cilium-pwtkj"
Jan 20 02:51:32.859267 kubelet[2962]: I0120 02:51:32.859247 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ed55905-772c-43e1-bea3-d3cf6b324443-clustermesh-secrets\") pod \"cilium-pwtkj\" (UID: \"7ed55905-772c-43e1-bea3-d3cf6b324443\") " pod="kube-system/cilium-pwtkj"
Jan 20 02:51:32.879826 kubelet[2962]: I0120 02:51:32.873794 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ed55905-772c-43e1-bea3-d3cf6b324443-cilium-config-path\") pod \"cilium-pwtkj\" (UID: \"7ed55905-772c-43e1-bea3-d3cf6b324443\") " pod="kube-system/cilium-pwtkj"
Jan 20 02:51:32.879826 kubelet[2962]: I0120 02:51:32.873851 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ed55905-772c-43e1-bea3-d3cf6b324443-host-proc-sys-net\") pod \"cilium-pwtkj\" (UID: \"7ed55905-772c-43e1-bea3-d3cf6b324443\") " pod="kube-system/cilium-pwtkj"
Jan 20 02:51:32.879826 kubelet[2962]: I0120 02:51:32.873878 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ed55905-772c-43e1-bea3-d3cf6b324443-lib-modules\") pod \"cilium-pwtkj\" (UID: \"7ed55905-772c-43e1-bea3-d3cf6b324443\") " pod="kube-system/cilium-pwtkj"
Jan 20 02:51:32.890744 systemd[1]: Created slice kubepods-burstable-pod7ed55905_772c_43e1_bea3_d3cf6b324443.slice - libcontainer container kubepods-burstable-pod7ed55905_772c_43e1_bea3_d3cf6b324443.slice.
Jan 20 02:51:33.304921 sshd[6439]: Accepted publickey for core from 10.0.0.1 port 56844 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:51:33.280260 sshd-session[6439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:51:33.562008 systemd-logind[1548]: New session 83 of user core.
Jan 20 02:51:33.638830 systemd[1]: Started session-83.scope - Session 83 of User core.
Jan 20 02:51:33.740105 kubelet[2962]: E0120 02:51:33.737772 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:33.996492 kubelet[2962]: I0120 02:51:33.982142 2962 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T02:51:33Z","lastTransitionTime":"2026-01-20T02:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 20 02:51:34.072862 containerd[1582]: time="2026-01-20T02:51:33.992840140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pwtkj,Uid:7ed55905-772c-43e1-bea3-d3cf6b324443,Namespace:kube-system,Attempt:0,}"
Jan 20 02:51:34.593872 sshd[6446]: Connection closed by 10.0.0.1 port 56844
Jan 20 02:51:34.620766 sshd-session[6439]: pam_unix(sshd:session): session closed for user core
Jan 20 02:51:34.686195 systemd[1]: sshd@82-10.0.0.100:22-10.0.0.1:56844.service: Deactivated successfully.
Jan 20 02:51:34.723270 systemd[1]: session-83.scope: Deactivated successfully.
Jan 20 02:51:34.765125 systemd-logind[1548]: Session 83 logged out. Waiting for processes to exit.
Jan 20 02:51:34.768135 systemd[1]: Started sshd@83-10.0.0.100:22-10.0.0.1:44202.service - OpenSSH per-connection server daemon (10.0.0.1:44202).
Jan 20 02:51:35.492204 systemd-logind[1548]: Removed session 83.
Jan 20 02:51:35.913665 containerd[1582]: time="2026-01-20T02:51:35.902569224Z" level=info msg="connecting to shim 5c5626599dc976c3d1fa25fec61979de590c2dbf3520ad5c3f9399c0c540734b" address="unix:///run/containerd/s/360e71483ebb1ecbbc6dff14497fc6ebcc3f834191d35c63459d8753c307258e" namespace=k8s.io protocol=ttrpc version=3
Jan 20 02:51:36.107538 sshd[6459]: Accepted publickey for core from 10.0.0.1 port 44202 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:51:36.120922 sshd-session[6459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:51:36.199828 kubelet[2962]: E0120 02:51:36.199567 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:51:36.267960 systemd-logind[1548]: New session 84 of user core.
Jan 20 02:51:36.325628 systemd[1]: Started session-84.scope - Session 84 of User core.
Jan 20 02:51:36.429913 systemd[1]: Started cri-containerd-5c5626599dc976c3d1fa25fec61979de590c2dbf3520ad5c3f9399c0c540734b.scope - libcontainer container 5c5626599dc976c3d1fa25fec61979de590c2dbf3520ad5c3f9399c0c540734b.
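Annotation: the setters.go:602 entry above embeds the node's Ready condition as JSON. A minimal sketch decoding that exact payload with only the standard library; the struct simply mirrors the fields visible in the log.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// nodeCondition mirrors the condition object logged by setters.go:602.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Verbatim payload from the "Node became not ready" journal entry.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T02:51:33Z","lastTransitionTime":"2026-01-20T02:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s since %s (%s)\n", c.Type, c.Status, c.LastTransitionTime, c.Reason)
}
```

The node flips to NotReady here because the CNI plugin is not yet initialized; it recovers once the new cilium-agent pod's lxc_health interface comes up later in the log.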
Jan 20 02:51:36.908581 containerd[1582]: time="2026-01-20T02:51:36.906530157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pwtkj,Uid:7ed55905-772c-43e1-bea3-d3cf6b324443,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c5626599dc976c3d1fa25fec61979de590c2dbf3520ad5c3f9399c0c540734b\""
Jan 20 02:51:36.926028 kubelet[2962]: E0120 02:51:36.917579 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:36.975296 containerd[1582]: time="2026-01-20T02:51:36.974984175Z" level=info msg="CreateContainer within sandbox \"5c5626599dc976c3d1fa25fec61979de590c2dbf3520ad5c3f9399c0c540734b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 20 02:51:37.162858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount713403653.mount: Deactivated successfully.
Jan 20 02:51:37.184991 containerd[1582]: time="2026-01-20T02:51:37.179847154Z" level=info msg="Container 16a969c910b76dfcdc592c82a5b16268d3148df8e90def51e72be5b48de1da7f: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:51:37.220732 containerd[1582]: time="2026-01-20T02:51:37.220578874Z" level=info msg="CreateContainer within sandbox \"5c5626599dc976c3d1fa25fec61979de590c2dbf3520ad5c3f9399c0c540734b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"16a969c910b76dfcdc592c82a5b16268d3148df8e90def51e72be5b48de1da7f\""
Jan 20 02:51:37.236822 containerd[1582]: time="2026-01-20T02:51:37.232562169Z" level=info msg="StartContainer for \"16a969c910b76dfcdc592c82a5b16268d3148df8e90def51e72be5b48de1da7f\""
Jan 20 02:51:37.247681 containerd[1582]: time="2026-01-20T02:51:37.234302909Z" level=info msg="connecting to shim 16a969c910b76dfcdc592c82a5b16268d3148df8e90def51e72be5b48de1da7f" address="unix:///run/containerd/s/360e71483ebb1ecbbc6dff14497fc6ebcc3f834191d35c63459d8753c307258e" protocol=ttrpc version=3
Jan 20 02:51:37.566250 systemd[1]: Started cri-containerd-16a969c910b76dfcdc592c82a5b16268d3148df8e90def51e72be5b48de1da7f.scope - libcontainer container 16a969c910b76dfcdc592c82a5b16268d3148df8e90def51e72be5b48de1da7f.
Jan 20 02:51:38.486640 containerd[1582]: time="2026-01-20T02:51:38.483981125Z" level=info msg="StartContainer for \"16a969c910b76dfcdc592c82a5b16268d3148df8e90def51e72be5b48de1da7f\" returns successfully"
Jan 20 02:51:38.796570 systemd[1]: cri-containerd-16a969c910b76dfcdc592c82a5b16268d3148df8e90def51e72be5b48de1da7f.scope: Deactivated successfully.
Jan 20 02:51:38.812118 containerd[1582]: time="2026-01-20T02:51:38.812070868Z" level=info msg="received container exit event container_id:\"16a969c910b76dfcdc592c82a5b16268d3148df8e90def51e72be5b48de1da7f\" id:\"16a969c910b76dfcdc592c82a5b16268d3148df8e90def51e72be5b48de1da7f\" pid:6522 exited_at:{seconds:1768877498 nanos:811556620}"
Jan 20 02:51:39.162086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16a969c910b76dfcdc592c82a5b16268d3148df8e90def51e72be5b48de1da7f-rootfs.mount: Deactivated successfully.
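Annotation: the exit event above reports exited_at as a raw Unix timestamp (seconds:1768877498 nanos:811556620). A quick sketch confirming it lines up with the journal's wall-clock time:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Convert the protobuf-style {seconds, nanos} pair from the exit
	// event into a UTC wall-clock time.
	exitedAt := time.Unix(1768877498, 811556620).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano))
	// Prints 2026-01-20T02:51:38.81155662Z, matching the
	// "Jan 20 02:51:38.812118" journal entry that carried the event.
}
```

This mount-cgroup container is an init container of the cilium pod: it is expected to run once and exit, which is why the scope deactivation directly follows a successful start.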
Jan 20 02:51:39.305550 kubelet[2962]: E0120 02:51:39.303154 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:40.357564 kubelet[2962]: E0120 02:51:40.353668 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:40.452680 containerd[1582]: time="2026-01-20T02:51:40.445663680Z" level=info msg="CreateContainer within sandbox \"5c5626599dc976c3d1fa25fec61979de590c2dbf3520ad5c3f9399c0c540734b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 20 02:51:40.855517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1488948151.mount: Deactivated successfully.
Jan 20 02:51:40.866527 containerd[1582]: time="2026-01-20T02:51:40.859133834Z" level=info msg="Container dde77cb70b7cd386e92a33710ad7b8cb4a0f39f0cc7679d917d8b56b2ed875c3: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:51:40.986033 containerd[1582]: time="2026-01-20T02:51:40.985626794Z" level=info msg="CreateContainer within sandbox \"5c5626599dc976c3d1fa25fec61979de590c2dbf3520ad5c3f9399c0c540734b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dde77cb70b7cd386e92a33710ad7b8cb4a0f39f0cc7679d917d8b56b2ed875c3\""
Jan 20 02:51:41.000982 containerd[1582]: time="2026-01-20T02:51:40.996628779Z" level=info msg="StartContainer for \"dde77cb70b7cd386e92a33710ad7b8cb4a0f39f0cc7679d917d8b56b2ed875c3\""
Jan 20 02:51:41.010639 containerd[1582]: time="2026-01-20T02:51:41.009260917Z" level=info msg="connecting to shim dde77cb70b7cd386e92a33710ad7b8cb4a0f39f0cc7679d917d8b56b2ed875c3" address="unix:///run/containerd/s/360e71483ebb1ecbbc6dff14497fc6ebcc3f834191d35c63459d8753c307258e" protocol=ttrpc version=3
Jan 20 02:51:41.222647 kubelet[2962]: E0120 02:51:41.221933 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:51:41.340507 systemd[1]: Started cri-containerd-dde77cb70b7cd386e92a33710ad7b8cb4a0f39f0cc7679d917d8b56b2ed875c3.scope - libcontainer container dde77cb70b7cd386e92a33710ad7b8cb4a0f39f0cc7679d917d8b56b2ed875c3.
Jan 20 02:51:42.444589 containerd[1582]: time="2026-01-20T02:51:42.444538347Z" level=info msg="StartContainer for \"dde77cb70b7cd386e92a33710ad7b8cb4a0f39f0cc7679d917d8b56b2ed875c3\" returns successfully"
Jan 20 02:51:42.651053 kubelet[2962]: E0120 02:51:42.649851 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:42.739878 systemd[1]: cri-containerd-dde77cb70b7cd386e92a33710ad7b8cb4a0f39f0cc7679d917d8b56b2ed875c3.scope: Deactivated successfully.
Jan 20 02:51:42.782960 containerd[1582]: time="2026-01-20T02:51:42.781723640Z" level=info msg="received container exit event container_id:\"dde77cb70b7cd386e92a33710ad7b8cb4a0f39f0cc7679d917d8b56b2ed875c3\" id:\"dde77cb70b7cd386e92a33710ad7b8cb4a0f39f0cc7679d917d8b56b2ed875c3\" pid:6568 exited_at:{seconds:1768877502 nanos:760616236}"
Jan 20 02:51:43.492232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dde77cb70b7cd386e92a33710ad7b8cb4a0f39f0cc7679d917d8b56b2ed875c3-rootfs.mount: Deactivated successfully.
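Annotation: the recurring dns.go:153 errors mean the host's resolv.conf lists more nameservers than the kubelet will pass through to pods; it keeps the first three (the same cap as glibc's MAXNS), which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A rough sketch of that truncation, assuming a standard resolv.conf layout:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// glibc honors at most 3 nameservers; the kubelet applies the same cap.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("omitting %v; applied nameserver line is: %s\n",
			servers[maxNameservers:],
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Println("nameservers within limit:", servers)
}
```

The error is therefore a warning about silently dropped resolvers, not a failure: DNS keeps working with the first three entries.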
Jan 20 02:51:43.710599 kubelet[2962]: E0120 02:51:43.709583 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:43.785608 containerd[1582]: time="2026-01-20T02:51:43.785552183Z" level=info msg="CreateContainer within sandbox \"5c5626599dc976c3d1fa25fec61979de590c2dbf3520ad5c3f9399c0c540734b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 20 02:51:44.159787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3128042077.mount: Deactivated successfully.
Jan 20 02:51:44.188931 containerd[1582]: time="2026-01-20T02:51:44.188879060Z" level=info msg="Container 12a7ded8d5445a13f1b27e9970b2b71629da56e2edd37aa868f4ada31e72f842: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:51:44.396649 containerd[1582]: time="2026-01-20T02:51:44.396487274Z" level=info msg="CreateContainer within sandbox \"5c5626599dc976c3d1fa25fec61979de590c2dbf3520ad5c3f9399c0c540734b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"12a7ded8d5445a13f1b27e9970b2b71629da56e2edd37aa868f4ada31e72f842\""
Jan 20 02:51:44.420838 containerd[1582]: time="2026-01-20T02:51:44.405617244Z" level=info msg="StartContainer for \"12a7ded8d5445a13f1b27e9970b2b71629da56e2edd37aa868f4ada31e72f842\""
Jan 20 02:51:44.483604 containerd[1582]: time="2026-01-20T02:51:44.454733449Z" level=info msg="connecting to shim 12a7ded8d5445a13f1b27e9970b2b71629da56e2edd37aa868f4ada31e72f842" address="unix:///run/containerd/s/360e71483ebb1ecbbc6dff14497fc6ebcc3f834191d35c63459d8753c307258e" protocol=ttrpc version=3
Jan 20 02:51:45.098852 kubelet[2962]: E0120 02:51:45.093986 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:45.260107 systemd[1]: Started cri-containerd-12a7ded8d5445a13f1b27e9970b2b71629da56e2edd37aa868f4ada31e72f842.scope - libcontainer container 12a7ded8d5445a13f1b27e9970b2b71629da56e2edd37aa868f4ada31e72f842.
Jan 20 02:51:45.853633 containerd[1582]: time="2026-01-20T02:51:45.847206930Z" level=info msg="StartContainer for \"12a7ded8d5445a13f1b27e9970b2b71629da56e2edd37aa868f4ada31e72f842\" returns successfully"
Jan 20 02:51:45.903044 systemd[1]: cri-containerd-12a7ded8d5445a13f1b27e9970b2b71629da56e2edd37aa868f4ada31e72f842.scope: Deactivated successfully.
Jan 20 02:51:45.941979 containerd[1582]: time="2026-01-20T02:51:45.941926231Z" level=info msg="received container exit event container_id:\"12a7ded8d5445a13f1b27e9970b2b71629da56e2edd37aa868f4ada31e72f842\" id:\"12a7ded8d5445a13f1b27e9970b2b71629da56e2edd37aa868f4ada31e72f842\" pid:6613 exited_at:{seconds:1768877505 nanos:929280515}"
Jan 20 02:51:46.246137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12a7ded8d5445a13f1b27e9970b2b71629da56e2edd37aa868f4ada31e72f842-rootfs.mount: Deactivated successfully.
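Annotation: the "connecting to shim ... namespace=k8s.io" entries come from containerd's CRI plugin, which keeps all Kubernetes-managed containers in the k8s.io namespace. A minimal sketch, assuming the default containerd socket path, that lists the containers in that namespace with the official Go client:

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket on most hosts; adjust if yours differs.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace, matching
	// the namespace=k8s.io field in the shim log entries above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID())
	}
}
```

The long hex IDs printed this way are the same ones that appear throughout the journal (16a969c9..., dde77cb7..., 12a7ded8...).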
Jan 20 02:51:46.276286 kubelet[2962]: E0120 02:51:46.276236 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:51:46.298747 kubelet[2962]: E0120 02:51:46.298212 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:47.373828 kubelet[2962]: E0120 02:51:47.366899 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:47.442778 containerd[1582]: time="2026-01-20T02:51:47.439069969Z" level=info msg="CreateContainer within sandbox \"5c5626599dc976c3d1fa25fec61979de590c2dbf3520ad5c3f9399c0c540734b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 20 02:51:47.764280 containerd[1582]: time="2026-01-20T02:51:47.753736795Z" level=info msg="Container 082cbb337cf5646333ca6ddea790f0592ed8d696cd660c8b85b36fd675a3428e: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:51:47.872738 containerd[1582]: time="2026-01-20T02:51:47.868880693Z" level=info msg="CreateContainer within sandbox \"5c5626599dc976c3d1fa25fec61979de590c2dbf3520ad5c3f9399c0c540734b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"082cbb337cf5646333ca6ddea790f0592ed8d696cd660c8b85b36fd675a3428e\""
Jan 20 02:51:47.884050 containerd[1582]: time="2026-01-20T02:51:47.883994696Z" level=info msg="StartContainer for \"082cbb337cf5646333ca6ddea790f0592ed8d696cd660c8b85b36fd675a3428e\""
Jan 20 02:51:47.901517 containerd[1582]: time="2026-01-20T02:51:47.901114070Z" level=info msg="connecting to shim 082cbb337cf5646333ca6ddea790f0592ed8d696cd660c8b85b36fd675a3428e" address="unix:///run/containerd/s/360e71483ebb1ecbbc6dff14497fc6ebcc3f834191d35c63459d8753c307258e" protocol=ttrpc version=3
Jan 20 02:51:48.323733 systemd[1]: Started cri-containerd-082cbb337cf5646333ca6ddea790f0592ed8d696cd660c8b85b36fd675a3428e.scope - libcontainer container 082cbb337cf5646333ca6ddea790f0592ed8d696cd660c8b85b36fd675a3428e.
Jan 20 02:51:48.989600 kubelet[2962]: E0120 02:51:48.988992 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-lrwrw" podUID="fb86ee63-29f5-4877-9cb3-729ab02899ee"
Jan 20 02:51:49.197172 systemd[1]: cri-containerd-082cbb337cf5646333ca6ddea790f0592ed8d696cd660c8b85b36fd675a3428e.scope: Deactivated successfully.
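Annotation: the transient unit names in this log (e.g. var-lib-containerd-tmpmounts-containerd\x2dmount713403653.mount) use systemd's unit-name escaping: path separators become dashes, and a literal "-" or other special byte becomes a \xXX hex escape. A simplified sketch of that encoding follows; systemd-escape(1) documents the complete rules, and the corner cases here are my approximation.

```go
package main

import "fmt"

// systemdEscapePath applies a simplified version of systemd's path
// escaping: "/" turns into "-", safe characters pass through, and
// everything else (including "-") becomes a \xXX escape.
func systemdEscapePath(path string) string {
	out := ""
	for i, b := range []byte(path) {
		switch {
		case b == '/':
			out += "-"
		case b >= 'a' && b <= 'z', b >= 'A' && b <= 'Z',
			b >= '0' && b <= '9', b == '_', b == '.' && i > 0:
			out += string(b)
		default:
			out += fmt.Sprintf(`\x%02x`, b)
		}
	}
	return out
}

func main() {
	p := "var/lib/containerd/tmpmounts/containerd-mount713403653"
	fmt.Println(systemdEscapePath(p) + ".mount")
	// var-lib-containerd-tmpmounts-containerd\x2dmount713403653.mount
}
```

Reversing the escape recovers the original mount point, which is how the unit names above map back to paths under /var/lib/containerd/tmpmounts.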
Jan 20 02:51:49.372882 containerd[1582]: time="2026-01-20T02:51:49.363687183Z" level=info msg="received container exit event container_id:\"082cbb337cf5646333ca6ddea790f0592ed8d696cd660c8b85b36fd675a3428e\" id:\"082cbb337cf5646333ca6ddea790f0592ed8d696cd660c8b85b36fd675a3428e\" pid:6651 exited_at:{seconds:1768877509 nanos:223180328}"
Jan 20 02:51:49.372882 containerd[1582]: time="2026-01-20T02:51:49.364901667Z" level=info msg="StartContainer for \"082cbb337cf5646333ca6ddea790f0592ed8d696cd660c8b85b36fd675a3428e\" returns successfully"
Jan 20 02:51:50.048714 kubelet[2962]: E0120 02:51:50.048644 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:50.542195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-082cbb337cf5646333ca6ddea790f0592ed8d696cd660c8b85b36fd675a3428e-rootfs.mount: Deactivated successfully.
Jan 20 02:51:50.989829 kubelet[2962]: E0120 02:51:50.987056 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:50.989829 kubelet[2962]: E0120 02:51:50.987876 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-lrwrw" podUID="fb86ee63-29f5-4877-9cb3-729ab02899ee"
Jan 20 02:51:51.197968 kubelet[2962]: E0120 02:51:51.193265 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:51.269610 containerd[1582]: time="2026-01-20T02:51:51.252901788Z" level=info msg="CreateContainer within sandbox \"5c5626599dc976c3d1fa25fec61979de590c2dbf3520ad5c3f9399c0c540734b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 20 02:51:51.357811 kubelet[2962]: E0120 02:51:51.333761 2962 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:51:51.540061 containerd[1582]: time="2026-01-20T02:51:51.528050477Z" level=info msg="Container 5faf036400a354aeb8ca129c63d6cf4b237b6a8b40ef8d4cdb76da1faf5a7683: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:51:51.727838 containerd[1582]: time="2026-01-20T02:51:51.724011794Z" level=info msg="CreateContainer within sandbox \"5c5626599dc976c3d1fa25fec61979de590c2dbf3520ad5c3f9399c0c540734b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5faf036400a354aeb8ca129c63d6cf4b237b6a8b40ef8d4cdb76da1faf5a7683\""
Jan 20 02:51:51.805599 containerd[1582]: time="2026-01-20T02:51:51.793932759Z" level=info msg="StartContainer for \"5faf036400a354aeb8ca129c63d6cf4b237b6a8b40ef8d4cdb76da1faf5a7683\""
Jan 20 02:51:51.833933 containerd[1582]: time="2026-01-20T02:51:51.825120426Z" level=info msg="connecting to shim 5faf036400a354aeb8ca129c63d6cf4b237b6a8b40ef8d4cdb76da1faf5a7683" address="unix:///run/containerd/s/360e71483ebb1ecbbc6dff14497fc6ebcc3f834191d35c63459d8753c307258e" protocol=ttrpc version=3
Jan 20 02:51:52.168607 systemd[1]: Started cri-containerd-5faf036400a354aeb8ca129c63d6cf4b237b6a8b40ef8d4cdb76da1faf5a7683.scope - libcontainer container 5faf036400a354aeb8ca129c63d6cf4b237b6a8b40ef8d4cdb76da1faf5a7683.
Jan 20 02:51:53.075239 kubelet[2962]: E0120 02:51:53.068550 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-lrwrw" podUID="fb86ee63-29f5-4877-9cb3-729ab02899ee"
Jan 20 02:51:53.246663 containerd[1582]: time="2026-01-20T02:51:53.246536873Z" level=info msg="StartContainer for \"5faf036400a354aeb8ca129c63d6cf4b237b6a8b40ef8d4cdb76da1faf5a7683\" returns successfully"
Jan 20 02:51:54.643875 kubelet[2962]: E0120 02:51:54.643159 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:55.018147 kubelet[2962]: I0120 02:51:55.007266 2962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pwtkj" podStartSLOduration=23.007105221 podStartE2EDuration="23.007105221s" podCreationTimestamp="2026-01-20 02:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:51:54.951831373 +0000 UTC m=+1198.678609026" watchObservedRunningTime="2026-01-20 02:51:55.007105221 +0000 UTC m=+1198.733882873"
Jan 20 02:51:55.079845 kubelet[2962]: E0120 02:51:55.070102 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-lrwrw" podUID="fb86ee63-29f5-4877-9cb3-729ab02899ee"
Jan 20 02:51:55.756180 kubelet[2962]: E0120 02:51:55.756133 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:57.054927 kubelet[2962]: E0120 02:51:57.054887 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:59.022206 kubelet[2962]: E0120 02:51:59.011134 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:51:59.894684 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jan 20 02:52:03.771126 kubelet[2962]: E0120 02:52:03.771070 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:52:11.133169 kubelet[2962]: E0120 02:52:11.114171 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:52:19.995862 kubelet[2962]: E0120 02:52:19.988913 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:52:27.249079 systemd-networkd[1465]: lxc_health: Link UP
Jan 20 02:52:27.354760 systemd-networkd[1465]: lxc_health: Gained carrier
Jan 20 02:52:27.794359 kubelet[2962]: E0120 02:52:27.791617 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:52:28.402703 kubelet[2962]: E0120 02:52:28.390781 2962 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:52:28.674520 systemd-networkd[1465]: lxc_health: Gained IPv6LL
Jan 20 02:52:35.986524 sshd[6486]: Connection closed by 10.0.0.1 port 44202
Jan 20 02:52:35.977802 sshd-session[6459]: pam_unix(sshd:session): session closed for user core
Jan 20 02:52:36.044803 systemd[1]: sshd@83-10.0.0.100:22-10.0.0.1:44202.service: Deactivated successfully.
Jan 20 02:52:36.057940 systemd[1]: session-84.scope: Deactivated successfully.
Jan 20 02:52:36.058576 systemd[1]: session-84.scope: Consumed 2.048s CPU time, 28.1M memory peak.
Jan 20 02:52:36.080500 systemd-logind[1548]: Session 84 logged out. Waiting for processes to exit.
Jan 20 02:52:36.103075 systemd-logind[1548]: Removed session 84.
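Annotation: the pod_startup_latency_tracker entry above reports podStartSLOduration=23.007105221 for cilium-pwtkj. That figure is just watchObservedRunningTime minus podCreationTimestamp, both of which the entry records verbatim; a quick check:

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Layout matching the kubelet's logged timestamp format.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2026-01-20 02:51:32 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	observed, err := time.Parse(layout, "2026-01-20 02:51:55.007105221 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(observed.Sub(created).Seconds()) // 23.007105221
}
```

The zeroed firstStartedPulling/lastFinishedPulling fields indicate no image pull contributed to the 23-second startup; the time went to running the init containers traced above. With lxc_health up and carrying IPv6, the CNI is initialized and the earlier NotReady condition clears.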