Jan 20 01:34:30.522888 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:14:52 -00 2026
Jan 20 01:34:30.523028 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 01:34:30.523047 kernel: BIOS-provided physical RAM map:
Jan 20 01:34:30.523067 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 01:34:30.523078 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 01:34:30.523086 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 01:34:30.523099 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 01:34:30.523109 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 01:34:30.523162 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 01:34:30.523176 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 01:34:30.523185 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 20 01:34:30.523196 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 20 01:34:30.523213 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 20 01:34:30.523225 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 20 01:34:30.523236 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 20 01:34:30.523247 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 01:34:30.523297 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 20 01:34:30.523320 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 20 01:34:30.523330 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 20 01:34:30.523341 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 20 01:34:30.523351 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 20 01:34:30.523363 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 01:34:30.523372 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 01:34:30.523384 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 01:34:30.523394 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 01:34:30.523405 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 01:34:30.523461 kernel: NX (Execute Disable) protection: active
Jan 20 01:34:30.523472 kernel: APIC: Static calls initialized
Jan 20 01:34:30.523490 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 20 01:34:30.523500 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 20 01:34:30.523511 kernel: extended physical RAM map:
Jan 20 01:34:30.523522 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 01:34:30.523534 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 01:34:30.523543 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 01:34:30.523555 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 01:34:30.523565 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 01:34:30.523577 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 01:34:30.523587 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 01:34:30.523599 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 20 01:34:30.523616 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 20 01:34:30.523634 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 20 01:34:30.523646 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 20 01:34:30.523656 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 20 01:34:30.523669 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 20 01:34:30.523685 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 20 01:34:30.523697 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 20 01:34:30.523709 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 20 01:34:30.523719 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 01:34:30.523732 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 20 01:34:30.523743 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 20 01:34:30.523756 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 20 01:34:30.523766 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 20 01:34:30.523778 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 20 01:34:30.523789 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 01:34:30.523801 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 01:34:30.523818 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 01:34:30.523829 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 01:34:30.523841 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 01:34:30.523892 kernel: efi: EFI v2.7 by EDK II
Jan 20 01:34:30.523905 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 20 01:34:30.524005 kernel: random: crng init done
Jan 20 01:34:30.524022 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 20 01:34:30.524067 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 20 01:34:30.524079 kernel: secureboot: Secure boot disabled
Jan 20 01:34:30.524091 kernel: SMBIOS 2.8 present.
Jan 20 01:34:30.524103 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 20 01:34:30.524120 kernel: DMI: Memory slots populated: 1/1
Jan 20 01:34:30.524130 kernel: Hypervisor detected: KVM
Jan 20 01:34:30.524141 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 20 01:34:30.524150 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 01:34:30.524161 kernel: kvm-clock: using sched offset of 40287558194 cycles
Jan 20 01:34:30.524172 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 01:34:30.524182 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 01:34:30.524193 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 01:34:30.524203 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 01:34:30.524213 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 20 01:34:30.524224 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 20 01:34:30.524241 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 01:34:30.524251 kernel: Using GB pages for direct mapping
Jan 20 01:34:30.524261 kernel: ACPI: Early table checksum verification disabled
Jan 20 01:34:30.524271 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 20 01:34:30.524281 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 20 01:34:30.524292 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:34:30.524302 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:34:30.524312 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 20 01:34:30.524326 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:34:30.524336 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:34:30.524346 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:34:30.524356 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:34:30.524366 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 20 01:34:30.524376 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 20 01:34:30.524386 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 20 01:34:30.524397 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 20 01:34:30.524407 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 20 01:34:30.531628 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 20 01:34:30.531642 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 20 01:34:30.531653 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 20 01:34:30.531666 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 20 01:34:30.531677 kernel: No NUMA configuration found
Jan 20 01:34:30.531687 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 20 01:34:30.531699 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 20 01:34:30.531710 kernel: Zone ranges:
Jan 20 01:34:30.531723 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 01:34:30.531741 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 20 01:34:30.531752 kernel: Normal empty
Jan 20 01:34:30.531814 kernel: Device empty
Jan 20 01:34:30.531825 kernel: Movable zone start for each node
Jan 20 01:34:30.531836 kernel: Early memory node ranges
Jan 20 01:34:30.531847 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 20 01:34:30.531899 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 20 01:34:30.531913 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 20 01:34:30.531923 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 20 01:34:30.532021 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 20 01:34:30.532033 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 20 01:34:30.532043 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 20 01:34:30.532053 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 20 01:34:30.532064 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 20 01:34:30.532114 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 01:34:30.532145 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 20 01:34:30.532161 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 20 01:34:30.532174 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 01:34:30.532185 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 20 01:34:30.532198 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 20 01:34:30.532210 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 20 01:34:30.532227 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 20 01:34:30.532238 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 20 01:34:30.532250 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 01:34:30.532263 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 01:34:30.532274 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 01:34:30.532289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 01:34:30.532300 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 01:34:30.532310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 01:34:30.532320 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 01:34:30.532330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 01:34:30.532340 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 01:34:30.532350 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 01:34:30.532360 kernel: TSC deadline timer available
Jan 20 01:34:30.532370 kernel: CPU topo: Max. logical packages: 1
Jan 20 01:34:30.532385 kernel: CPU topo: Max. logical dies: 1
Jan 20 01:34:30.532395 kernel: CPU topo: Max. dies per package: 1
Jan 20 01:34:30.532406 kernel: CPU topo: Max. threads per core: 1
Jan 20 01:34:30.532478 kernel: CPU topo: Num. cores per package: 4
Jan 20 01:34:30.532490 kernel: CPU topo: Num. threads per package: 4
Jan 20 01:34:30.532502 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 01:34:30.532514 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 01:34:30.532527 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 01:34:30.532538 kernel: kvm-guest: setup PV sched yield
Jan 20 01:34:30.532556 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 20 01:34:30.532567 kernel: Booting paravirtualized kernel on KVM
Jan 20 01:34:30.532578 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 01:34:30.532590 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 01:34:30.532602 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 01:34:30.532613 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 01:34:30.532627 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 01:34:30.532637 kernel: kvm-guest: PV spinlocks enabled
Jan 20 01:34:30.532649 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 01:34:30.532702 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 01:34:30.532717 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 01:34:30.532730 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 01:34:30.532742 kernel: Fallback order for Node 0: 0
Jan 20 01:34:30.532754 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 20 01:34:30.532766 kernel: Policy zone: DMA32
Jan 20 01:34:30.532778 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 01:34:30.532789 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 01:34:30.532805 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 20 01:34:30.532816 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 01:34:30.532827 kernel: Dynamic Preempt: voluntary
Jan 20 01:34:30.532838 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 01:34:30.532858 kernel: rcu: RCU event tracing is enabled.
Jan 20 01:34:30.532870 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 01:34:30.532882 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 01:34:30.532893 kernel: Rude variant of Tasks RCU enabled.
Jan 20 01:34:30.532904 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 01:34:30.532917 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 01:34:30.533007 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 01:34:30.533052 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 01:34:30.533064 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 01:34:30.533076 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 01:34:30.533088 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 01:34:30.533100 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 01:34:30.533111 kernel: Console: colour dummy device 80x25
Jan 20 01:34:30.533122 kernel: printk: legacy console [ttyS0] enabled
Jan 20 01:34:30.533134 kernel: ACPI: Core revision 20240827
Jan 20 01:34:30.533154 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 01:34:30.533164 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 01:34:30.533176 kernel: x2apic enabled
Jan 20 01:34:30.533186 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 01:34:30.533197 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 01:34:30.533208 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 01:34:30.533218 kernel: kvm-guest: setup PV IPIs
Jan 20 01:34:30.533229 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 01:34:30.533240 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 01:34:30.533255 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 01:34:30.533265 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 01:34:30.533276 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 01:34:30.533286 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 01:34:30.533297 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 01:34:30.533307 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 01:34:30.533318 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 01:34:30.533330 kernel: Speculative Store Bypass: Vulnerable
Jan 20 01:34:30.533347 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 01:34:30.533359 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 01:34:30.539370 kernel: active return thunk: srso_alias_return_thunk
Jan 20 01:34:30.539454 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 01:34:30.539476 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 01:34:30.539490 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 01:34:30.539502 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 01:34:30.539515 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 01:34:30.539527 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 01:34:30.539551 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 01:34:30.539564 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 01:34:30.539576 kernel: Freeing SMP alternatives memory: 32K
Jan 20 01:34:30.539590 kernel: pid_max: default: 32768 minimum: 301
Jan 20 01:34:30.539600 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 01:34:30.539613 kernel: landlock: Up and running.
Jan 20 01:34:30.539625 kernel: SELinux: Initializing.
Jan 20 01:34:30.539638 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:34:30.539650 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:34:30.539670 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 01:34:30.539683 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 01:34:30.539694 kernel: signal: max sigframe size: 1776
Jan 20 01:34:30.539707 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 01:34:30.539720 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 01:34:30.539732 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 01:34:30.539746 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 01:34:30.539758 kernel: smp: Bringing up secondary CPUs ...
Jan 20 01:34:30.539771 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 01:34:30.539789 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 01:34:30.539801 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 01:34:30.539814 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 01:34:30.539828 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46204K init, 2556K bss, 145388K reserved, 0K cma-reserved)
Jan 20 01:34:30.539840 kernel: devtmpfs: initialized
Jan 20 01:34:30.539853 kernel: x86/mm: Memory block size: 128MB
Jan 20 01:34:30.539864 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 20 01:34:30.539877 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 20 01:34:30.539897 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 20 01:34:30.539909 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 20 01:34:30.539922 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 20 01:34:30.540017 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 20 01:34:30.540031 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 01:34:30.540045 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 01:34:30.540056 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 01:34:30.540069 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 01:34:30.540081 kernel: audit: initializing netlink subsys (disabled)
Jan 20 01:34:30.540100 kernel: audit: type=2000 audit(1768872823.443:1): state=initialized audit_enabled=0 res=1
Jan 20 01:34:30.540113 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 01:34:30.540126 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 01:34:30.540138 kernel: cpuidle: using governor menu
Jan 20 01:34:30.540149 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 01:34:30.540163 kernel: dca service started, version 1.12.1
Jan 20 01:34:30.540174 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 20 01:34:30.540187 kernel: PCI: Using configuration type 1 for base access
Jan 20 01:34:30.540199 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 01:34:30.540217 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 01:34:30.540231 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 01:34:30.540243 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 01:34:30.540255 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 01:34:30.540267 kernel: ACPI: Added _OSI(Module Device)
Jan 20 01:34:30.540279 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 01:34:30.540291 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 01:34:30.540303 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 01:34:30.540315 kernel: ACPI: Interpreter enabled
Jan 20 01:34:30.540333 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 01:34:30.540346 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 01:34:30.540359 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 01:34:30.540372 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 01:34:30.540383 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 01:34:30.540396 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 01:34:30.545886 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 01:34:30.546217 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 01:34:30.553594 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 01:34:30.553632 kernel: PCI host bridge to bus 0000:00
Jan 20 01:34:30.554055 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 01:34:30.554256 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 01:34:30.557523 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 01:34:30.562587 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 20 01:34:30.562848 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 20 01:34:30.563160 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 20 01:34:30.563347 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 01:34:30.563722 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 01:34:30.564219 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 01:34:30.571073 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 20 01:34:30.571391 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 20 01:34:30.571710 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 20 01:34:30.572057 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 01:34:30.580243 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 01:34:30.580664 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 20 01:34:30.580882 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 20 01:34:30.581301 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 20 01:34:30.587896 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 01:34:30.588256 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 20 01:34:30.588553 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 20 01:34:30.588796 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 20 01:34:30.589254 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 01:34:30.596306 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 20 01:34:30.596598 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 20 01:34:30.596841 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 20 01:34:30.597139 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 20 01:34:30.598617 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 01:34:30.598826 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 01:34:30.599179 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 01:34:30.599371 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 20 01:34:30.599659 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 20 01:34:30.599903 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 01:34:30.600176 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 20 01:34:30.600193 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 01:34:30.600205 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 01:34:30.600215 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 01:34:30.600227 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 01:34:30.600238 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 01:34:30.600248 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 01:34:30.600266 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 01:34:30.600277 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 01:34:30.600288 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 01:34:30.600299 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 01:34:30.600309 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 01:34:30.600320 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 01:34:30.600332 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 01:34:30.600345 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 01:34:30.600357 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 01:34:30.600373 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 01:34:30.600385 kernel: iommu: Default domain type: Translated
Jan 20 01:34:30.600396 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 01:34:30.602264 kernel: efivars: Registered efivars operations
Jan 20 01:34:30.602291 kernel: PCI: Using ACPI for IRQ routing
Jan 20 01:34:30.602304 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 01:34:30.602318 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 20 01:34:30.602329 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 20 01:34:30.602342 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 20 01:34:30.602353 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 20 01:34:30.602373 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 20 01:34:30.602384 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 20 01:34:30.602396 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 20 01:34:30.602453 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 20 01:34:30.602694 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 01:34:30.602916 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 01:34:30.603217 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 01:34:30.603245 kernel: vgaarb: loaded
Jan 20 01:34:30.603258 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 01:34:30.603270 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 01:34:30.603284 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 01:34:30.603294 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 01:34:30.603308 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 01:34:30.603319 kernel: pnp: PnP ACPI init
Jan 20 01:34:30.605365 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 20 01:34:30.605386 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 01:34:30.605404 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 01:34:30.605468 kernel: NET: Registered PF_INET protocol family
Jan 20 01:34:30.605480 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 01:34:30.605491 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 01:34:30.605502 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 01:34:30.605513 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 01:34:30.605547 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 01:34:30.605561 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 01:34:30.605575 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:34:30.605587 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:34:30.605598 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 01:34:30.605609 kernel: NET: Registered PF_XDP protocol family
Jan 20 01:34:30.605813 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 20 01:34:30.606085 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 20 01:34:30.606266 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 01:34:30.608541 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 01:34:30.608887 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 01:34:30.609148 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 20 01:34:30.609319 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 20 01:34:30.609567 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 20 01:34:30.609587 kernel: PCI: CLS 0 bytes, default 64
Jan 20 01:34:30.609602 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 01:34:30.609614 kernel: Initialise system trusted keyrings
Jan 20 01:34:30.609628 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 01:34:30.609639 kernel: Key type asymmetric registered
Jan 20 01:34:30.609659 kernel: Asymmetric key parser 'x509' registered
Jan 20 01:34:30.609671 kernel: hrtimer: interrupt took 3168394 ns
Jan 20 01:34:30.609685 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 01:34:30.609698 kernel: io scheduler mq-deadline registered
Jan 20 01:34:30.609712 kernel: io scheduler kyber registered
Jan 20 01:34:30.609724 kernel: io scheduler bfq registered
Jan 20 01:34:30.609737 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 01:34:30.609750 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 01:34:30.609764 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 01:34:30.609782 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 01:34:30.609797 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 01:34:30.609808 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 01:34:30.609819 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 01:34:30.609831 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 01:34:30.609843 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 01:34:30.610224 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 01:34:30.612653 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 01:34:30.613041 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T01:34:24 UTC (1768872864)
Jan 20 01:34:30.613230 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 20 01:34:30.613254 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 01:34:30.613267 kernel: efifb: probing for efifb
Jan 20 01:34:30.613279 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 20 01:34:30.613294 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 20 01:34:30.613306 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 01:34:30.613318 kernel: efifb: scrolling: redraw
Jan 20 01:34:30.613329 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 20 01:34:30.613341 kernel: Console: switching to colour frame buffer device 160x50
Jan 20 01:34:30.613353 kernel: fb0: EFI VGA frame buffer device
Jan 20 01:34:30.613364 kernel: pstore: Using crash dump compression: deflate
Jan 20 01:34:30.613376 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 20 01:34:30.613387 kernel: NET: Registered PF_INET6 protocol family
Jan 20 01:34:30.613402 kernel: Segment Routing with IPv6
Jan 20 01:34:30.613470 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 01:34:30.613484 kernel: NET: Registered PF_PACKET protocol family
Jan 20 01:34:30.613496 kernel: Key type dns_resolver registered
Jan 20 01:34:30.613508 kernel: IPI shorthand broadcast: enabled
Jan 20 01:34:30.613520 kernel: sched_clock: Marking stable (36655029027, 5686844348)->(46086355940, -3744482565)
Jan 20 01:34:30.613531 kernel: registered taskstats version 1
Jan 20 01:34:30.613543 kernel: Loading compiled-in X.509 certificates
Jan 20 01:34:30.613555 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 5eaf2083485884e476a8ac33c4b07b82eff139e9'
Jan 20 01:34:30.613567 kernel: Demotion targets for Node 0: null
Jan 20 01:34:30.613583 kernel: Key type .fscrypt registered
Jan 20 01:34:30.613596 kernel: Key type fscrypt-provisioning registered
Jan 20 01:34:30.613607 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 01:34:30.613618 kernel: ima: Allocated hash algorithm: sha1
Jan 20 01:34:30.613629 kernel: ima: No architecture policies found
Jan 20 01:34:30.613642 kernel: clk: Disabling unused clocks
Jan 20 01:34:30.613654 kernel: Warning: unable to open an initial console.
Jan 20 01:34:30.613666 kernel: Freeing unused kernel image (initmem) memory: 46204K
Jan 20 01:34:30.613682 kernel: Write protecting the kernel read-only data: 40960k
Jan 20 01:34:30.613694 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 20 01:34:30.613707 kernel: Run /init as init process
Jan 20 01:34:30.613718 kernel: with arguments:
Jan 20 01:34:30.613731 kernel: /init
Jan 20 01:34:30.613743 kernel: with environment:
Jan 20 01:34:30.613754 kernel: HOME=/
Jan 20 01:34:30.613766 kernel: TERM=linux
Jan 20 01:34:30.613779 systemd[1]: Successfully made /usr/ read-only.
Jan 20 01:34:30.613799 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 01:34:30.613812 systemd[1]: Detected virtualization kvm.
Jan 20 01:34:30.613824 systemd[1]: Detected architecture x86-64.
Jan 20 01:34:30.613837 systemd[1]: Running in initrd.
Jan 20 01:34:30.613849 systemd[1]: No hostname configured, using default hostname.
Jan 20 01:34:30.613861 systemd[1]: Hostname set to .
Jan 20 01:34:30.613874 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 01:34:30.613889 systemd[1]: Queued start job for default target initrd.target.
Jan 20 01:34:30.613902 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 01:34:30.613914 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 01:34:30.614003 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 01:34:30.614020 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 01:34:30.614033 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 01:34:30.614047 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 01:34:30.614066 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 01:34:30.614079 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 01:34:30.614092 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 01:34:30.614104 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 01:34:30.614117 systemd[1]: Reached target paths.target - Path Units.
Jan 20 01:34:30.614129 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 01:34:30.614142 systemd[1]: Reached target swap.target - Swaps.
Jan 20 01:34:30.614154 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 01:34:30.614167 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 01:34:30.614183 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 01:34:30.614195 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 01:34:30.614208 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 20 01:34:30.614220 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 01:34:30.614233 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 01:34:30.614245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 01:34:30.614258 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 01:34:30.614270 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 01:34:30.614286 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 01:34:30.614299 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 01:34:30.614312 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 20 01:34:30.614324 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 01:34:30.614337 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 01:34:30.614349 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 01:34:30.614362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:34:30.614374 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 01:34:30.616217 systemd-journald[203]: Collecting audit messages is disabled.
Jan 20 01:34:30.616264 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 01:34:30.616278 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 01:34:30.616291 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 01:34:30.616306 systemd-journald[203]: Journal started
Jan 20 01:34:30.616333 systemd-journald[203]: Runtime Journal (/run/log/journal/3611e5afd1b6444db4b276b211ff61de) is 6M, max 48.1M, 42.1M free.
Jan 20 01:34:30.627994 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 01:34:30.636786 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 01:34:30.710853 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:34:30.728139 systemd-modules-load[204]: Inserted module 'overlay'
Jan 20 01:34:30.793173 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 01:34:30.802271 systemd-tmpfiles[216]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 20 01:34:30.836080 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 01:34:30.901545 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 01:34:30.926178 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 01:34:31.042987 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 01:34:31.128782 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 01:34:31.185227 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 01:34:31.423797 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 01:34:31.459635 kernel: Bridge firewalling registered
Jan 20 01:34:31.461108 systemd-modules-load[204]: Inserted module 'br_netfilter'
Jan 20 01:34:31.476868 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 01:34:31.496875 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 01:34:31.548226 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 01:34:31.659265 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 01:34:31.691667 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 01:34:31.908164 systemd-resolved[287]: Positive Trust Anchors:
Jan 20 01:34:31.911497 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 01:34:31.911544 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 01:34:32.125198 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jan 20 01:34:32.134304 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 01:34:32.148218 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 01:34:32.250014 kernel: SCSI subsystem initialized
Jan 20 01:34:32.363843 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 01:34:32.466470 kernel: iscsi: registered transport (tcp)
Jan 20 01:34:32.524777 kernel: iscsi: registered transport (qla4xxx)
Jan 20 01:34:32.524868 kernel: QLogic iSCSI HBA Driver
Jan 20 01:34:32.639655 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 01:34:32.782212 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 01:34:32.813780 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 01:34:33.439166 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 01:34:33.474148 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 01:34:33.802187 kernel: raid6: avx2x4 gen() 12010 MB/s
Jan 20 01:34:33.825076 kernel: raid6: avx2x2 gen() 6213 MB/s
Jan 20 01:34:33.860783 kernel: raid6: avx2x1 gen() 6045 MB/s
Jan 20 01:34:33.860889 kernel: raid6: using algorithm avx2x4 gen() 12010 MB/s
Jan 20 01:34:33.908711 kernel: raid6: .... xor() 839 MB/s, rmw enabled
Jan 20 01:34:33.909907 kernel: raid6: using avx2x2 recovery algorithm
Jan 20 01:34:34.067150 kernel: xor: automatically using best checksumming function avx
Jan 20 01:34:35.326233 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 01:34:35.445905 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 01:34:35.477048 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 01:34:35.701354 systemd-udevd[453]: Using default interface naming scheme 'v255'.
Jan 20 01:34:35.750524 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 01:34:35.799010 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 01:34:35.971651 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
Jan 20 01:34:36.257037 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 01:34:36.328419 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 01:34:36.867722 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 01:34:36.954830 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 01:34:37.720734 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 01:34:37.726299 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:34:37.753390 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:34:37.835360 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:34:37.875723 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 20 01:34:37.939828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 01:34:38.018210 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 20 01:34:38.018769 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 20 01:34:37.940193 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:34:38.033670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:34:38.169616 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 01:34:38.169656 kernel: GPT:9289727 != 19775487
Jan 20 01:34:38.169708 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 01:34:38.169724 kernel: GPT:9289727 != 19775487
Jan 20 01:34:38.169738 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 01:34:38.169753 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 01:34:38.221021 kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 01:34:38.391992 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:34:38.525009 kernel: libata version 3.00 loaded.
Jan 20 01:34:39.049095 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 20 01:34:39.276041 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 01:34:39.418206 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 20 01:34:39.474120 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 20 01:34:39.666771 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 20 01:34:39.736275 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 20 01:34:39.772049 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 20 01:34:39.876267 kernel: AES CTR mode by8 optimization enabled
Jan 20 01:34:39.876845 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 01:34:39.877990 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 01:34:39.986860 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 20 01:34:40.008272 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 20 01:34:40.017704 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 01:34:40.107640 disk-uuid[548]: Primary Header is updated.
Jan 20 01:34:40.107640 disk-uuid[548]: Secondary Entries is updated.
Jan 20 01:34:40.107640 disk-uuid[548]: Secondary Header is updated.
Jan 20 01:34:40.317883 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 01:34:40.411045 kernel: scsi host0: ahci
Jan 20 01:34:40.522747 kernel: scsi host1: ahci
Jan 20 01:34:40.523414 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 01:34:40.543884 kernel: scsi host2: ahci
Jan 20 01:34:40.567021 kernel: scsi host3: ahci
Jan 20 01:34:40.603899 kernel: scsi host4: ahci
Jan 20 01:34:40.617328 kernel: scsi host5: ahci
Jan 20 01:34:40.617706 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Jan 20 01:34:40.678083 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Jan 20 01:34:40.678151 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Jan 20 01:34:40.769654 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Jan 20 01:34:40.769747 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Jan 20 01:34:40.771660 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Jan 20 01:34:41.137653 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 20 01:34:41.137734 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 20 01:34:41.166092 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 20 01:34:41.193859 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 20 01:34:41.196718 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 20 01:34:41.233174 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 20 01:34:41.274253 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 01:34:41.274705 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 01:34:41.274726 kernel: ata3.00: applying bridge limits
Jan 20 01:34:41.321141 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 01:34:41.321313 kernel: ata3.00: configured for UDMA/100
Jan 20 01:34:41.366744 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 20 01:34:41.449629 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 01:34:41.477351 disk-uuid[567]: The operation has completed successfully.
Jan 20 01:34:41.758713 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 01:34:41.759223 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 01:34:41.857691 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 20 01:34:43.102215 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 20 01:34:43.102847 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 20 01:34:43.133469 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 20 01:34:43.151624 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 01:34:43.239864 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 01:34:43.270339 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 01:34:43.319228 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 01:34:43.332111 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 01:34:43.362111 sh[639]: Success
Jan 20 01:34:43.442065 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 01:34:43.575333 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 01:34:43.597406 kernel: device-mapper: uevent: version 1.0.3
Jan 20 01:34:43.609041 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 20 01:34:43.768708 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 20 01:34:44.118746 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 01:34:44.340869 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 20 01:34:44.478818 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 20 01:34:45.003551 kernel: BTRFS: device fsid 1cad4abe-82cb-4052-9906-9dfb1f3e3340 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (659)
Jan 20 01:34:45.031015 kernel: BTRFS info (device dm-0): first mount of filesystem 1cad4abe-82cb-4052-9906-9dfb1f3e3340
Jan 20 01:34:45.031460 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 20 01:34:45.639492 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 01:34:45.648727 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 20 01:34:47.224307 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 20 01:34:47.368747 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 01:34:47.369413 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 20 01:34:47.440062 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 20 01:34:47.450086 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 20 01:34:47.887923 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (694)
Jan 20 01:34:47.917071 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 01:34:47.930515 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 01:34:47.973178 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 01:34:47.973278 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 01:34:48.025901 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 01:34:48.173805 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 20 01:34:48.254042 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 20 01:34:52.655921 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 01:34:53.336410 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 01:34:53.951894 ignition[763]: Ignition 2.22.0 Jan 20 01:34:53.952416 ignition[763]: Stage: fetch-offline Jan 20 01:34:53.952623 ignition[763]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:34:53.952647 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:34:54.005413 systemd-networkd[834]: lo: Link UP Jan 20 01:34:53.952857 ignition[763]: parsed url from cmdline: "" Jan 20 01:34:54.005422 systemd-networkd[834]: lo: Gained carrier Jan 20 01:34:53.952864 ignition[763]: no config URL provided Jan 20 01:34:54.026302 systemd-networkd[834]: Enumeration completed Jan 20 01:34:53.952874 ignition[763]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:34:54.029999 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:34:53.952890 ignition[763]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:34:54.042358 systemd-networkd[834]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:34:53.956143 ignition[763]: op(1): [started] loading QEMU firmware config module Jan 20 01:34:54.042366 systemd-networkd[834]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:34:53.956153 ignition[763]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 01:34:54.047263 systemd-networkd[834]: eth0: Link UP Jan 20 01:34:54.531814 ignition[763]: op(1): [finished] loading QEMU firmware config module Jan 20 01:34:54.068261 systemd-networkd[834]: eth0: Gained carrier Jan 20 01:34:54.068284 systemd-networkd[834]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:34:54.094625 systemd[1]: Reached target network.target - Network. Jan 20 01:34:54.450436 systemd-networkd[834]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 01:34:55.770072 systemd-networkd[834]: eth0: Gained IPv6LL Jan 20 01:34:56.666393 ignition[763]: parsing config with SHA512: 308361745002df40faef5a4e5a38cba304edb0b97dba1efd0a001b0d51089e0d97526eb9f54894c9b4fe9862efbb12d33cc07f7e977fd61335b3e6e7feaa2856 Jan 20 01:34:56.977123 unknown[763]: fetched base config from "system" Jan 20 01:34:56.978493 unknown[763]: fetched user config from "qemu" Jan 20 01:34:56.994742 ignition[763]: fetch-offline: fetch-offline passed Jan 20 01:34:57.010006 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:34:56.995182 ignition[763]: Ignition finished successfully Jan 20 01:34:57.018769 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 01:34:57.027430 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 01:34:57.649546 ignition[842]: Ignition 2.22.0 Jan 20 01:34:57.658236 ignition[842]: Stage: kargs Jan 20 01:34:57.676401 ignition[842]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:34:57.676430 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:34:57.697286 ignition[842]: kargs: kargs passed Jan 20 01:34:57.793397 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 01:34:57.697454 ignition[842]: Ignition finished successfully Jan 20 01:34:57.937498 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
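[Annotation] On the qemu platform, fetch-offline pulls its config from QEMU's firmware config device: op(1) loads the qemu_fw_cfg module, after which the blob appears in sysfs and Ignition hashes it (the SHA512 line above). A minimal way to inspect the same blob from inside the guest; the fw_cfg key "opt/com.coreos/config" matches Ignition's documented QEMU usage, but treat the exact sysfs path as an assumption (and note it needs root):

    import hashlib
    from pathlib import Path

    # Exposed by the qemu_fw_cfg module once loaded; key name assumed per
    # Ignition's QEMU platform convention.
    blob_path = Path("/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw")

    blob = blob_path.read_bytes()
    print(f"{len(blob)} config bytes")
    print("sha512:", hashlib.sha512(blob).hexdigest())  # compare with the log line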
Jan 20 01:34:58.562219 ignition[850]: Ignition 2.22.0 Jan 20 01:34:58.565295 ignition[850]: Stage: disks Jan 20 01:34:58.569495 ignition[850]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:34:58.569520 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:34:58.573571 ignition[850]: disks: disks passed Jan 20 01:34:58.573730 ignition[850]: Ignition finished successfully Jan 20 01:34:58.666415 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 01:34:58.738392 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 01:34:58.812187 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 01:34:58.885910 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:34:58.899176 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:34:58.899297 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:34:58.915043 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 01:34:59.231806 systemd-fsck[861]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 20 01:34:59.247150 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 01:34:59.306675 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 01:35:02.169370 kernel: EXT4-fs (vda9): mounted filesystem d87587c2-84ee-4a64-a55e-c6773c94f548 r/w with ordered data mode. Quota mode: none. Jan 20 01:35:02.177268 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 01:35:02.220778 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 01:35:02.309328 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:35:02.410285 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 01:35:02.438439 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 01:35:02.438520 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 01:35:02.438558 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:35:02.520121 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (870) Jan 20 01:35:02.673585 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 01:35:02.675534 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:35:02.726327 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 01:35:02.775287 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 01:35:02.839102 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:35:02.839142 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:35:02.893459 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 01:35:03.493608 initrd-setup-root[894]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 01:35:03.581404 initrd-setup-root[901]: cut: /sysroot/etc/group: No such file or directory Jan 20 01:35:03.665381 initrd-setup-root[908]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 01:35:03.747122 initrd-setup-root[915]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 01:35:05.433431 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
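[Annotation] The fsck summary above encodes utilization directly: 15 of 553,520 inodes and 52,789 of 553,472 blocks in use on ROOT. Parsed out as a quick worked example:

    import re

    line = "ROOT: clean, 15/553520 files, 52789/553472 blocks"  # from the log above
    m = re.match(r"(\S+): clean, (\d+)/(\d+) files, (\d+)/(\d+) blocks", line)
    label = m.group(1)
    files, files_total, blocks, blocks_total = map(int, m.groups()[1:])

    print(f"{label}: {100 * files / files_total:.3f}% inodes, "
          f"{100 * blocks / blocks_total:.1f}% blocks in use")
    # ROOT: 0.003% inodes, 9.5% blocks in use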
Jan 20 01:35:05.615578 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 01:35:05.655176 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 01:35:05.885398 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 01:35:05.958775 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 01:35:06.239885 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 01:35:06.876404 ignition[985]: INFO : Ignition 2.22.0 Jan 20 01:35:06.876404 ignition[985]: INFO : Stage: mount Jan 20 01:35:06.976197 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:35:06.976197 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:35:06.976197 ignition[985]: INFO : mount: mount passed Jan 20 01:35:06.976197 ignition[985]: INFO : Ignition finished successfully Jan 20 01:35:07.064250 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 01:35:07.149578 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 01:35:07.360224 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:35:07.669747 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (997) Jan 20 01:35:07.724856 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 01:35:07.725030 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:35:07.842481 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:35:07.842573 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:35:07.876537 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 01:35:08.327052 ignition[1014]: INFO : Ignition 2.22.0 Jan 20 01:35:08.358035 ignition[1014]: INFO : Stage: files Jan 20 01:35:08.358035 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:35:08.358035 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:35:08.439556 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping Jan 20 01:35:08.439556 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 01:35:08.439556 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 01:35:08.573485 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 01:35:08.573485 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 01:35:08.674921 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 01:35:08.574895 unknown[1014]: wrote ssh authorized keys file for user: core Jan 20 01:35:08.824076 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 01:35:08.824076 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 20 01:35:09.103828 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 01:35:12.158797 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 01:35:12.158797 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/opt/bin/cilium.tar.gz" Jan 20 01:35:12.329870 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 20 01:35:13.252797 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 20 01:35:18.971416 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 20 01:35:19.051115 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 20 01:35:19.116117 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 01:35:19.301606 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:35:19.535709 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:35:19.535709 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:35:19.815688 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:35:19.815688 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:35:19.815688 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:35:19.815688 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:35:19.815688 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:35:19.815688 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 01:35:20.579432 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 01:35:20.579432 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 01:35:20.579432 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 20 01:35:20.579432 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 20 01:35:25.242443 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1203265263 wd_nsec: 1203265035 Jan 20 01:35:39.068295 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2740998441 wd_nsec: 2740997212 Jan 20 01:35:47.235370 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 01:35:47.340274 ignition[1014]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 20 
01:35:47.340274 ignition[1014]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:35:47.340274 ignition[1014]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:35:47.340274 ignition[1014]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 20 01:35:47.340274 ignition[1014]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 20 01:35:47.340274 ignition[1014]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 01:35:47.340274 ignition[1014]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 01:35:47.340274 ignition[1014]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 20 01:35:47.340274 ignition[1014]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 01:35:48.751560 ignition[1014]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 01:35:49.460860 ignition[1014]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 01:35:49.555821 ignition[1014]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 20 01:35:49.555821 ignition[1014]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 20 01:35:49.555821 ignition[1014]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 01:35:49.727471 ignition[1014]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:35:49.727471 ignition[1014]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:35:49.727471 ignition[1014]: INFO : files: files passed Jan 20 01:35:49.727471 ignition[1014]: INFO : Ignition finished successfully Jan 20 01:35:49.752852 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 01:35:49.825511 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 01:35:49.933033 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 01:35:50.635438 initrd-setup-root-after-ignition[1043]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 01:35:51.013487 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 01:35:51.014047 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 01:35:51.127923 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:35:51.240336 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:35:51.240644 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:35:51.240644 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:35:51.465623 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
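[Annotation] The preset ops at the end of the files stage are symlink management: "setting preset to enabled" creates an enablement link for prepare-helm.service, and the disabled preset removes coreos-metadata.service's links. A sketch of those mechanics against a scratch directory (paths and the multi-user.target.wants install target are illustrative stand-ins; the real tool is systemctl preset run against /sysroot):

    from pathlib import Path

    root = Path("/tmp/sysroot-demo")                     # stand-in for /sysroot
    wants = root / "etc/systemd/system/multi-user.target.wants"
    wants.mkdir(parents=True, exist_ok=True)

    # "setting preset to enabled": create the enablement symlink
    enabled = wants / "prepare-helm.service"
    if not enabled.is_symlink():
        enabled.symlink_to("/etc/systemd/system/prepare-helm.service")

    # "removing enablement symlink(s)": the disabled preset path
    disabled = wants / "coreos-metadata.service"
    if disabled.is_symlink():
        disabled.unlink()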
Jan 20 01:35:51.544146 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 01:35:52.026676 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 01:35:52.027045 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 01:35:52.093478 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 01:35:52.110886 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 01:35:52.135775 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 01:35:52.150041 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 01:35:52.827271 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:35:52.879346 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 01:35:53.123172 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:35:53.288621 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:35:53.477781 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 01:35:53.525910 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 01:35:53.526626 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:35:53.622138 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 01:35:53.822285 systemd[1]: Stopped target basic.target - Basic System. Jan 20 01:35:53.862489 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 01:35:53.949866 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:35:54.025079 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 01:35:54.167175 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 01:35:54.174468 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 01:35:54.206786 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:35:54.337333 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 01:35:54.511519 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 01:35:54.650197 systemd[1]: Stopped target swap.target - Swaps. Jan 20 01:35:54.806286 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 01:35:54.835898 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:35:55.080244 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:35:55.148407 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:35:55.286591 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 01:35:55.301569 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:35:55.332379 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 01:35:55.350512 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 01:35:55.451892 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 01:35:55.452339 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:35:55.514105 systemd[1]: Stopped target paths.target - Path Units. 
Jan 20 01:35:55.659436 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 01:35:55.695776 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:35:55.738731 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 01:35:55.748575 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 01:35:55.927524 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 01:35:55.976587 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:35:56.005625 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 01:35:56.005768 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:35:56.008212 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 01:35:56.008403 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:35:56.008685 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 01:35:56.009071 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 01:35:56.080771 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 01:35:56.474421 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 01:35:56.516255 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:35:56.741570 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 01:35:56.792739 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 01:35:56.793540 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:35:57.141451 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 01:35:57.161582 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:35:57.347533 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 01:35:57.347905 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 01:35:57.674807 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 01:35:57.789584 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 01:35:57.789888 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 01:35:57.957541 ignition[1070]: INFO : Ignition 2.22.0 Jan 20 01:35:57.957541 ignition[1070]: INFO : Stage: umount Jan 20 01:35:58.007368 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:35:58.007368 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:35:58.007368 ignition[1070]: INFO : umount: umount passed Jan 20 01:35:58.007368 ignition[1070]: INFO : Ignition finished successfully Jan 20 01:35:58.014803 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 01:35:58.015439 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 01:35:58.106434 systemd[1]: Stopped target network.target - Network. Jan 20 01:35:58.106572 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 01:35:58.106720 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 01:35:58.106869 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 01:35:58.111413 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 01:35:58.114516 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 01:35:58.114618 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jan 20 01:35:58.114784 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 01:35:58.119426 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 01:35:58.121750 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 01:35:58.121903 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 01:35:58.128610 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 01:35:58.132819 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 01:35:58.298635 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 01:35:58.299444 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 01:35:58.687251 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 20 01:35:58.691672 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 01:35:58.691881 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 01:35:58.851658 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 20 01:35:58.946850 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 01:35:59.022581 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 01:35:59.022718 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:35:59.315463 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 01:35:59.347383 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 01:35:59.347498 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:35:59.393259 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 01:35:59.393438 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:35:59.442217 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 01:35:59.442335 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 01:35:59.659321 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 01:35:59.659581 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:35:59.883502 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:35:59.980837 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 20 01:36:00.015121 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 20 01:36:00.245282 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 01:36:00.279526 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:36:00.371831 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 01:36:00.372227 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 01:36:00.478883 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 01:36:00.479306 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:36:00.527546 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 01:36:00.527665 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:36:00.561536 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jan 20 01:36:00.561650 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 01:36:00.604513 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 01:36:00.604640 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:36:00.701548 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 01:36:00.860709 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 01:36:00.875533 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:36:01.436618 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 01:36:01.441331 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:36:01.693745 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:36:01.694416 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:36:01.859777 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 20 01:36:01.861794 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 20 01:36:01.866352 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 20 01:36:01.872428 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 01:36:01.872746 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 01:36:02.031869 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 01:36:02.038812 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 01:36:02.225272 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 01:36:02.239661 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 01:36:02.416687 systemd[1]: Switching root. Jan 20 01:36:02.632694 systemd-journald[203]: Journal stopped Jan 20 01:36:18.462298 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Jan 20 01:36:18.462489 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 01:36:18.462564 kernel: SELinux: policy capability open_perms=1 Jan 20 01:36:18.462590 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 01:36:18.462614 kernel: SELinux: policy capability always_check_network=0 Jan 20 01:36:18.462629 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 01:36:18.462645 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 01:36:18.462660 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 01:36:18.462680 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 01:36:18.462697 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 01:36:18.462713 kernel: audit: type=1403 audit(1768872963.898:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 01:36:18.462780 systemd[1]: Successfully loaded SELinux policy in 709.860ms. Jan 20 01:36:18.462814 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 38.910ms. 
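[Annotation] The two timings reported after switch-root are worth tracking across releases: policy load here takes ~0.71 s, nearly twenty times the /dev relabel. A small extractor, again assuming boot.log is a local copy of this capture:

    import re

    log = open("boot.log").read()          # assumed capture of this console log

    policy = re.search(r"loaded SELinux policy in ([\d.]+)ms", log)
    relabel = re.search(r"Relabeled .*? in ([\d.]+)ms", log)

    print(f"policy load: {float(policy.group(1)):8.1f} ms")
    print(f"relabel    : {float(relabel.group(1)):8.1f} ms")
    # Here: 709.9 ms vs 38.9 ms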
Jan 20 01:36:18.462838 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 01:36:18.462859 systemd[1]: Detected virtualization kvm. Jan 20 01:36:18.462876 systemd[1]: Detected architecture x86-64. Jan 20 01:36:18.462893 systemd[1]: Detected first boot. Jan 20 01:36:18.462908 systemd[1]: Initializing machine ID from VM UUID. Jan 20 01:36:18.463002 zram_generator::config[1115]: No configuration found. Jan 20 01:36:18.463066 kernel: Guest personality initialized and is inactive Jan 20 01:36:18.463083 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 01:36:18.463097 kernel: Initialized host personality Jan 20 01:36:18.465251 kernel: NET: Registered PF_VSOCK protocol family Jan 20 01:36:18.465272 systemd[1]: Populated /etc with preset unit settings. Jan 20 01:36:18.465296 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 20 01:36:18.465312 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 01:36:18.465327 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 01:36:18.465343 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 01:36:18.465416 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 01:36:18.465437 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 01:36:18.465453 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 01:36:18.465475 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 01:36:18.465491 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 01:36:18.465507 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 01:36:18.465524 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 01:36:18.465540 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 01:36:18.465556 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:36:18.465621 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:36:18.465638 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 01:36:18.465654 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 01:36:18.465670 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 01:36:18.465686 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:36:18.465702 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 01:36:18.465718 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:36:18.465783 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:36:18.465859 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 01:36:18.465877 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
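[Annotation] The +/- string in the systemd banner above lists compile-time features; parsing it answers quickly whether this build carries, say, FIDO2 or AppArmor support (both compiled out here). Worked directly over the string from the log:

    feature_banner = (
        "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS "
        "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
        "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
        "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
        "-XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE"
    )

    enabled = {tok[1:] for tok in feature_banner.split() if tok[0] == "+"}
    disabled = {tok[1:] for tok in feature_banner.split() if tok[0] == "-"}

    print("FIDO2 enabled?", "FIDO2" in enabled)        # False
    print("compiled out:", ", ".join(sorted(disabled)))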
Jan 20 01:36:18.465893 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 01:36:18.465909 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 01:36:18.466033 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:36:18.466055 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:36:18.466072 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:36:18.466088 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:36:18.468537 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 01:36:18.468612 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 01:36:18.468636 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 01:36:18.468654 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:36:18.468671 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:36:18.468687 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:36:18.468702 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 01:36:18.468718 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 01:36:18.468734 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 01:36:18.468750 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 01:36:18.468820 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:36:18.468848 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 01:36:18.468864 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 01:36:18.469009 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 01:36:18.469028 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 01:36:18.469044 systemd[1]: Reached target machines.target - Containers. Jan 20 01:36:18.469060 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 01:36:18.469075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:36:18.471385 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:36:18.471406 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 01:36:18.471422 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:36:18.471439 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:36:18.471455 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:36:18.471470 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 01:36:18.471485 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:36:18.471501 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 01:36:18.472555 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 20 01:36:18.472576 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 01:36:18.472592 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 01:36:18.472608 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 01:36:18.472625 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 01:36:18.472642 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:36:18.472660 kernel: loop: module loaded Jan 20 01:36:18.472678 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:36:18.472755 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 01:36:18.472819 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 01:36:18.472908 systemd-journald[1200]: Collecting audit messages is disabled. Jan 20 01:36:18.473037 systemd-journald[1200]: Journal started Jan 20 01:36:18.473069 systemd-journald[1200]: Runtime Journal (/run/log/journal/3611e5afd1b6444db4b276b211ff61de) is 6M, max 48.1M, 42.1M free. Jan 20 01:36:13.157850 systemd[1]: Queued start job for default target multi-user.target. Jan 20 01:36:13.232817 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 01:36:13.242408 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 01:36:13.247651 systemd[1]: systemd-journald.service: Consumed 3.370s CPU time. Jan 20 01:36:19.366283 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 01:36:19.453243 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:36:19.615738 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 01:36:19.615867 systemd[1]: Stopped verity-setup.service. Jan 20 01:36:19.697078 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:36:19.778054 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:36:19.798513 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 01:36:19.831035 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 01:36:19.857033 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 01:36:19.900090 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 01:36:19.942390 kernel: ACPI: bus type drm_connector registered Jan 20 01:36:19.943301 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 01:36:20.017835 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 01:36:20.052320 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 01:36:20.099100 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:36:20.141111 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 01:36:20.141593 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 01:36:20.176315 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:36:20.177110 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 20 01:36:20.216864 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:36:20.217449 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:36:20.254676 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:36:20.292602 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:36:20.343033 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 01:36:20.392673 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 01:36:20.585568 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 01:36:20.658727 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 01:36:20.714506 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 01:36:20.714669 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:36:20.748877 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 01:36:20.828088 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 01:36:20.856044 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:36:20.866466 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 01:36:20.935573 kernel: fuse: init (API version 7.41) Jan 20 01:36:20.960170 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 01:36:21.005876 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 01:36:21.028642 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 01:36:21.098591 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:36:21.246337 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 01:36:21.351640 systemd-journald[1200]: Time spent on flushing to /var/log/journal/3611e5afd1b6444db4b276b211ff61de is 1.012482s for 1061 entries. Jan 20 01:36:21.351640 systemd-journald[1200]: System Journal (/var/log/journal/3611e5afd1b6444db4b276b211ff61de) is 8M, max 195.6M, 187.6M free. Jan 20 01:36:22.666711 systemd-journald[1200]: Received client request to flush runtime journal. Jan 20 01:36:22.666785 kernel: loop0: detected capacity change from 0 to 110984 Jan 20 01:36:21.327757 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 01:36:21.417337 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:36:21.434737 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:36:21.492029 systemd[1]: modprobe@drm.service: Consumed 1.065s CPU time, 3.2M memory peak. Jan 20 01:36:21.499518 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 01:36:21.500270 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 01:36:21.604736 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:36:21.605453 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:36:21.664063 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
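[Annotation] The journald flush report above is enough for a quick per-entry cost estimate of moving the runtime journal to persistent storage:

    # From the journald line above: 1.012482 s to flush 1061 entries.
    total_s, entries = 1.012482, 1061
    print(f"{1e3 * total_s / entries:.3f} ms per entry")   # ~0.954 ms/entry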
Jan 20 01:36:22.140899 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 01:36:22.462457 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 01:36:22.541351 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:36:22.706919 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 01:36:23.123111 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 01:36:23.341726 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 01:36:23.428423 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 01:36:23.520460 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:36:23.873086 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 01:36:23.971903 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 01:36:24.898234 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 01:36:24.905255 kernel: loop1: detected capacity change from 0 to 128560 Jan 20 01:36:24.909667 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 01:36:24.966439 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 01:36:25.042514 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 01:36:25.648219 kernel: loop2: detected capacity change from 0 to 229808 Jan 20 01:36:26.076540 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Jan 20 01:36:26.076616 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Jan 20 01:36:26.197695 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:36:26.772432 kernel: loop3: detected capacity change from 0 to 110984 Jan 20 01:36:27.233473 kernel: loop4: detected capacity change from 0 to 128560 Jan 20 01:36:27.524748 kernel: loop5: detected capacity change from 0 to 229808 Jan 20 01:36:28.129539 (sd-merge)[1260]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 20 01:36:28.164122 (sd-merge)[1260]: Merged extensions into '/usr'. Jan 20 01:36:28.371459 systemd[1]: Reload requested from client PID 1231 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 01:36:28.371490 systemd[1]: Reloading... Jan 20 01:36:30.192006 zram_generator::config[1283]: No configuration found. Jan 20 01:36:35.251605 systemd[1]: Reloading finished in 6855 ms. Jan 20 01:36:35.744081 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 01:36:35.849027 systemd[1]: Starting ensure-sysext.service... Jan 20 01:36:36.143712 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:36:36.510347 ldconfig[1225]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 01:36:36.545101 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 01:36:36.604313 systemd[1]: Reload requested from client PID 1322 ('systemctl') (unit ensure-sysext.service)... Jan 20 01:36:36.604385 systemd[1]: Reloading... Jan 20 01:36:36.847627 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Jan 20 01:36:36.847687 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 01:36:36.853581 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 01:36:36.854439 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 01:36:36.855846 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 01:36:36.867511 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Jan 20 01:36:36.867616 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Jan 20 01:36:36.973882 systemd-tmpfiles[1323]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:36:36.973911 systemd-tmpfiles[1323]: Skipping /boot Jan 20 01:36:37.361676 systemd-tmpfiles[1323]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:36:37.362208 systemd-tmpfiles[1323]: Skipping /boot Jan 20 01:36:37.548430 zram_generator::config[1351]: No configuration found. Jan 20 01:36:39.923568 systemd[1]: Reloading finished in 3315 ms. Jan 20 01:36:39.991661 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 01:36:40.220910 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:36:40.321923 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 01:36:40.376106 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 01:36:40.510861 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 01:36:40.552715 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:36:40.627055 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:36:40.701313 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 01:36:40.799581 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:36:40.799880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:36:40.822789 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:36:40.925515 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:36:41.060118 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:36:41.093812 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:36:41.094113 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 01:36:41.115167 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 01:36:41.167585 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:36:41.196470 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 01:36:41.224028 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
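[Annotation] sd-merge found three sysext images (containerd-flatcar, docker-flatcar, kubernetes) and merged them into /usr, which is why systemd then reloads its unit set. The merge is an overlay: upper extension trees shadow the base /usr. A toy lookup showing that precedence (the layer paths below are hypothetical, not systemd-sysext's actual mount layout):

    from pathlib import Path

    # Illustrative layer order: extensions first (upper), base /usr last.
    layers = [
        Path("/run/extensions/kubernetes/usr"),       # hypothetical unpack dirs
        Path("/run/extensions/docker-flatcar/usr"),
        Path("/run/extensions/containerd-flatcar/usr"),
        Path("/usr"),                                 # read-only base image
    ]

    def resolve(relpath):
        # First layer that provides the path wins, as in overlayfs.
        for layer in layers:
            candidate = layer / relpath
            if candidate.exists():
                return candidate
        return None

    print(resolve("bin/kubelet"))   # served by the kubernetes extension once merged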
Jan 20 01:36:41.248121 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:36:41.266570 augenrules[1419]: No rules Jan 20 01:36:41.276668 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:36:41.292779 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:36:41.402804 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 01:36:41.403599 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 01:36:41.440573 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:36:41.440924 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:36:41.565404 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:36:41.570181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:36:41.605548 systemd-udevd[1400]: Using default interface naming scheme 'v255'. Jan 20 01:36:41.668171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:36:41.750822 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:36:41.819797 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:36:41.844150 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:36:41.844486 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 01:36:41.862486 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 01:36:41.904397 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:36:41.936495 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 01:36:41.992908 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 01:36:42.034082 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 01:36:42.082231 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:36:42.087807 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:36:42.138393 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:36:42.335623 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:36:42.370719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:36:42.438807 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:36:42.451243 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:36:42.512091 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 01:36:43.316868 systemd[1]: Finished ensure-sysext.service. Jan 20 01:36:43.433082 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:36:43.461433 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
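[Annotation] The systemd-tmpfiles "Duplicate line" warnings above are benign: when two tmpfiles.d lines claim the same path, the first definition wins and later ones are ignored. A first-definition-wins dedup captures the behavior (the entries below are illustrative, not the real nfs-utils.conf contents):

    lines = [
        "d /var/lib/nfs/sm 0700 statd statd -",   # illustrative entries
        "d /var/lib/nfs/sm 0755 root root -",     # duplicate path -> ignored
        "d /root 0700 root root -",
    ]

    seen, effective = set(), []
    for line in lines:
        path = line.split()[1]
        if path in seen:
            print(f'Duplicate line for path "{path}", ignoring.')
            continue
        seen.add(path)
        effective.append(line)

    print(len(effective), "effective entries")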
Jan 20 01:36:43.504436 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:36:43.565752 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:36:43.610148 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:36:43.693081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:36:43.764733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:36:43.802016 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:36:43.806011 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 01:36:43.833647 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:36:43.890890 augenrules[1472]: /sbin/augenrules: No change Jan 20 01:36:43.893652 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 01:36:43.930695 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 01:36:43.930757 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:36:43.934109 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:36:43.939607 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:36:43.958385 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:36:43.958771 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:36:43.976920 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:36:44.013403 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:36:44.085400 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 01:36:44.089870 augenrules[1495]: No rules Jan 20 01:36:44.090485 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:36:44.090866 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:36:44.135259 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 01:36:44.135799 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 01:36:44.169253 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:36:44.175813 systemd-resolved[1393]: Positive Trust Anchors: Jan 20 01:36:44.175886 systemd-resolved[1393]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 01:36:44.176017 systemd-resolved[1393]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 01:36:44.216735 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 01:36:44.218378 systemd-resolved[1393]: Defaulting to hostname 'linux'. Jan 20 01:36:44.231917 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 01:36:44.246777 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:36:45.194738 systemd-networkd[1486]: lo: Link UP Jan 20 01:36:45.208159 systemd-networkd[1486]: lo: Gained carrier Jan 20 01:36:45.213724 systemd-networkd[1486]: Enumeration completed Jan 20 01:36:45.214096 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:36:45.222127 systemd[1]: Reached target network.target - Network. Jan 20 01:36:45.252195 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:36:45.252216 systemd-networkd[1486]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:36:45.254251 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 01:36:45.308224 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:36:45.308622 systemd-networkd[1486]: eth0: Link UP Jan 20 01:36:45.308738 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 01:36:45.309134 systemd-networkd[1486]: eth0: Gained carrier Jan 20 01:36:45.309852 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:36:45.418195 systemd-networkd[1486]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 01:36:45.421710 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 01:36:45.482176 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 01:36:46.014057 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 01:36:46.065245 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 01:36:46.200518 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 01:36:47.478473 systemd-timesyncd[1488]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 01:36:47.492662 systemd-timesyncd[1488]: Initial clock synchronization to Tue 2026-01-20 01:36:47.477779 UTC. Jan 20 01:36:47.503100 systemd-resolved[1393]: Clock change detected. Flushing caches. 
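The positive trust anchor systemd-resolved logs above is the root zone's DNSSEC DS record. Splitting it into its fields makes the numbers legible: key tag 20326 is the root KSK introduced in 2017, algorithm 8 is RSA/SHA-256, and digest type 2 is SHA-256:

    anchor = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rtype, key_tag, alg, digest_type, digest = anchor.split()
    print(f"owner={owner} key_tag={key_tag} alg={alg} digest_type={digest_type}")
    print(f"digest bits: {len(digest) * 4}")   # 256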
Jan 20 01:36:47.509276 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:36:47.534210 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 01:36:47.573467 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 20 01:36:47.588356 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 01:36:47.628470 kernel: ACPI: button: Power Button [PWRF] Jan 20 01:36:47.628624 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 01:36:47.629622 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 01:36:47.696323 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 01:36:47.800245 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 01:36:47.800488 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:36:47.865894 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 01:36:47.913708 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 01:36:47.952223 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 01:36:47.998335 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:36:48.096809 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 01:36:48.282236 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 01:36:48.357267 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 01:36:48.397929 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 01:36:48.427065 systemd-networkd[1486]: eth0: Gained IPv6LL Jan 20 01:36:48.435353 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 01:36:48.722482 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 01:36:48.795991 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 01:36:48.849477 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 01:36:48.953875 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 01:36:49.322253 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 01:36:49.377008 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:36:49.412836 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:36:49.422932 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 20 01:36:49.423595 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 01:36:49.427106 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 01:36:49.498133 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 01:36:49.498188 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 01:36:49.516303 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 01:36:49.572573 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 01:36:49.705107 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
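Several of the sockets reached above (docker.socket, sshd.socket, systemd-hostnamed.socket) rely on socket activation: systemd binds the listening socket itself and hands it to the service as file descriptor 3 onward, with the count in $LISTEN_FDS. A minimal sketch of the receiving side of that contract:

    import os, socket

    SD_LISTEN_FDS_START = 3   # first inherited fd, per the sd_listen_fds(3) convention

    def activated_sockets() -> list[socket.socket]:
        if int(os.environ.get("LISTEN_PID", "0")) != os.getpid():
            return []   # the fds were destined for another process
        n = int(os.environ.get("LISTEN_FDS", "0"))
        return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(n)]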
Jan 20 01:36:49.764712 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 01:36:49.837981 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 01:36:49.873692 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 01:36:49.904068 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 01:36:49.916985 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 01:36:49.995181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:36:50.205863 jq[1545]: false Jan 20 01:36:50.259954 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 01:36:50.385173 extend-filesystems[1547]: Found /dev/vda6 Jan 20 01:36:50.477299 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Refreshing passwd entry cache Jan 20 01:36:50.477299 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Failure getting users, quitting Jan 20 01:36:50.477299 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 01:36:50.477299 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Refreshing group entry cache Jan 20 01:36:50.358987 oslogin_cache_refresh[1548]: Refreshing passwd entry cache Jan 20 01:36:50.442294 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 01:36:50.489171 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Failure getting groups, quitting Jan 20 01:36:50.489171 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 01:36:50.407838 oslogin_cache_refresh[1548]: Failure getting users, quitting Jan 20 01:36:50.564137 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 01:36:50.407876 oslogin_cache_refresh[1548]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 01:36:50.407988 oslogin_cache_refresh[1548]: Refreshing group entry cache Jan 20 01:36:50.488857 oslogin_cache_refresh[1548]: Failure getting groups, quitting Jan 20 01:36:50.488881 oslogin_cache_refresh[1548]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 01:36:50.612118 extend-filesystems[1547]: Found /dev/vda9 Jan 20 01:36:50.675666 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 01:36:51.022475 extend-filesystems[1547]: Checking size of /dev/vda9 Jan 20 01:36:51.098107 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 01:36:51.125892 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 01:36:51.168254 extend-filesystems[1547]: Resized partition /dev/vda9 Jan 20 01:36:51.167355 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 01:36:51.169986 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 01:36:51.185082 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 20 01:36:51.292073 extend-filesystems[1571]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 01:36:51.473936 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 01:36:51.306091 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 01:36:51.646531 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 01:36:51.687254 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 01:36:51.695628 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 01:36:51.696460 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 01:36:51.710225 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 01:36:52.243578 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 01:36:52.268739 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 01:36:52.308655 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 01:36:52.385296 jq[1575]: true Jan 20 01:36:52.531734 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 01:36:52.532571 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 01:36:53.218607 update_engine[1569]: I20260120 01:36:53.205294 1569 main.cc:92] Flatcar Update Engine starting Jan 20 01:36:53.065049 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 01:36:53.071074 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 01:36:53.233978 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 01:36:53.224518 (ntainerd)[1591]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 01:36:53.518873 extend-filesystems[1571]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 01:36:53.518873 extend-filesystems[1571]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 01:36:53.518873 extend-filesystems[1571]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 20 01:36:53.521289 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 01:36:53.774737 jq[1590]: true Jan 20 01:36:53.775080 extend-filesystems[1547]: Resized filesystem in /dev/vda9 Jan 20 01:36:53.530617 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 01:36:53.988472 tar[1589]: linux-amd64/LICENSE Jan 20 01:36:53.988472 tar[1589]: linux-amd64/helm Jan 20 01:36:54.228720 dbus-daemon[1542]: [system] SELinux support is enabled Jan 20 01:36:54.240870 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 01:36:54.300473 systemd-logind[1565]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 01:36:54.300519 systemd-logind[1565]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 01:36:54.303231 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:36:54.307140 systemd-logind[1565]: New seat seat0. Jan 20 01:36:54.361857 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 01:36:54.417575 systemd[1]: Started systemd-logind.service - User Login Management. 
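In human units, the resize2fs run above grew the root ext4 filesystem on /dev/vda9 from 553472 to 1864699 blocks at 4 KiB per block:

    old_blocks, new_blocks, block_size = 553472, 1864699, 4096
    gib = 1024 ** 3
    print(f"before: {old_blocks * block_size / gib:.2f} GiB")   # ~2.11 GiB
    print(f"after:  {new_blocks * block_size / gib:.2f} GiB")   # ~7.11 GiB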
Jan 20 01:36:54.594518 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 01:36:54.594664 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 01:36:54.636238 update_engine[1569]: I20260120 01:36:54.635568 1569 update_check_scheduler.cc:74] Next update check in 7m21s Jan 20 01:36:54.639853 dbus-daemon[1542]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 01:36:54.652353 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 01:36:54.652544 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 01:36:54.695476 systemd[1]: Started update-engine.service - Update Engine. Jan 20 01:36:54.786743 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 01:36:56.235220 bash[1628]: Updated "/home/core/.ssh/authorized_keys" Jan 20 01:36:56.228641 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 01:36:56.254662 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 01:36:57.886853 sshd_keygen[1586]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 01:36:57.887982 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 01:36:58.214763 locksmithd[1620]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 01:36:58.431456 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 01:36:58.792186 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 01:36:58.837616 systemd[1]: Started sshd@0-10.0.0.51:22-10.0.0.1:51686.service - OpenSSH per-connection server daemon (10.0.0.1:51686). Jan 20 01:36:59.372466 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:37:00.373223 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 01:37:00.373765 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 01:37:00.425710 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 01:37:02.098884 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 01:37:02.291775 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 01:37:02.351887 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 01:37:02.388088 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 01:37:02.989614 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 51686 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:37:03.089453 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:03.228008 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 01:37:03.305277 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
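The SHA256:... value in the sshd "Accepted publickey" line above is the standard OpenSSH fingerprint: the unpadded base64 of the SHA-256 digest of the raw public-key blob, the same thing ssh-keygen -lf prints. A small sketch that reproduces it from an authorized_keys-style line:

    import base64, hashlib

    def fingerprint(pubkey_line: str) -> str:
        blob = base64.b64decode(pubkey_line.split()[1])   # the base64 key blob
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")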
Jan 20 01:37:06.079773 containerd[1591]: time="2026-01-20T01:37:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 01:37:06.079773 containerd[1591]: time="2026-01-20T01:37:06.067273755Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 20 01:37:06.224155 systemd-logind[1565]: New session 1 of user core. Jan 20 01:37:06.701995 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 01:37:06.801180 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 01:37:07.063081 containerd[1591]: time="2026-01-20T01:37:07.062937748Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t=11.611212ms Jan 20 01:37:07.063676 containerd[1591]: time="2026-01-20T01:37:07.063645710Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 01:37:07.063846 containerd[1591]: time="2026-01-20T01:37:07.063822390Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 01:37:07.084463 containerd[1591]: time="2026-01-20T01:37:07.080788106Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 01:37:07.094709 containerd[1591]: time="2026-01-20T01:37:07.089821868Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 01:37:07.094709 containerd[1591]: time="2026-01-20T01:37:07.089971096Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 01:37:07.094709 containerd[1591]: time="2026-01-20T01:37:07.091704552Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 01:37:07.094709 containerd[1591]: time="2026-01-20T01:37:07.091728126Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 01:37:07.138002 containerd[1591]: time="2026-01-20T01:37:07.103253929Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 01:37:07.138002 containerd[1591]: time="2026-01-20T01:37:07.117266264Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 01:37:07.138002 containerd[1591]: time="2026-01-20T01:37:07.117546126Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 01:37:07.138002 containerd[1591]: time="2026-01-20T01:37:07.117567316Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 01:37:07.138002 containerd[1591]: time="2026-01-20T01:37:07.117955641Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 01:37:07.138002 containerd[1591]: time="2026-01-20T01:37:07.134471127Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 20 01:37:07.138002 containerd[1591]: time="2026-01-20T01:37:07.134771147Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 01:37:07.138002 containerd[1591]: time="2026-01-20T01:37:07.134806413Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 01:37:07.138002 containerd[1591]: time="2026-01-20T01:37:07.134959739Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 01:37:07.154104 (systemd)[1668]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 01:37:07.192467 systemd-logind[1565]: New session c1 of user core. Jan 20 01:37:07.193510 containerd[1591]: time="2026-01-20T01:37:07.193312006Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 01:37:07.220963 containerd[1591]: time="2026-01-20T01:37:07.197106067Z" level=info msg="metadata content store policy set" policy=shared Jan 20 01:37:07.372769 containerd[1591]: time="2026-01-20T01:37:07.361642099Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 01:37:07.372769 containerd[1591]: time="2026-01-20T01:37:07.361958539Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 01:37:07.372769 containerd[1591]: time="2026-01-20T01:37:07.361990669Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 01:37:07.372769 containerd[1591]: time="2026-01-20T01:37:07.362014504Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 01:37:07.372769 containerd[1591]: time="2026-01-20T01:37:07.362045712Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 01:37:07.372769 containerd[1591]: time="2026-01-20T01:37:07.362136551Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 01:37:07.372769 containerd[1591]: time="2026-01-20T01:37:07.362167409Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 01:37:07.372769 containerd[1591]: time="2026-01-20T01:37:07.362187467Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 01:37:07.372769 containerd[1591]: time="2026-01-20T01:37:07.362205761Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 01:37:07.372769 containerd[1591]: time="2026-01-20T01:37:07.362220418Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 01:37:07.372769 containerd[1591]: time="2026-01-20T01:37:07.362233292Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 01:37:07.372769 containerd[1591]: time="2026-01-20T01:37:07.362254342Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 01:37:07.372769 containerd[1591]: time="2026-01-20T01:37:07.362719290Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 
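The skip-plugin decisions above all follow one pattern: each snapshotter probes its backing store and bows out when a precondition fails (btrfs needs its directory on a btrfs filesystem, zfs needs its dataset directory to exist, devmapper needs explicit configuration). A rough sketch of the btrfs-style probe, using a longest-prefix match against /proc/mounts:

    def fs_type(path: str, mounts: str = "/proc/mounts") -> str:
        best, fstype = "", "unknown"
        with open(mounts) as fh:
            for line in fh:
                _dev, mnt, typ, *_ = line.split()
                if path.startswith(mnt) and len(mnt) > len(best):   # approximate prefix match
                    best, fstype = mnt, typ
        return fstype

    # "ext4" here, hence "must be a btrfs filesystem ... skip plugin" above
    print(fs_type("/var/lib/containerd"))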
Jan 20 01:37:07.372769 containerd[1591]: time="2026-01-20T01:37:07.362759405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 01:37:07.866174 containerd[1591]: time="2026-01-20T01:37:07.362784922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 01:37:07.866174 containerd[1591]: time="2026-01-20T01:37:07.362802144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 01:37:07.866174 containerd[1591]: time="2026-01-20T01:37:07.362820729Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 01:37:07.866174 containerd[1591]: time="2026-01-20T01:37:07.362840566Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 01:37:07.866174 containerd[1591]: time="2026-01-20T01:37:07.370685208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 01:37:07.866174 containerd[1591]: time="2026-01-20T01:37:07.370802377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 01:37:07.866174 containerd[1591]: time="2026-01-20T01:37:07.370927029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 01:37:07.866174 containerd[1591]: time="2026-01-20T01:37:07.370954961Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 01:37:07.866174 containerd[1591]: time="2026-01-20T01:37:07.370977734Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 01:37:07.866174 containerd[1591]: time="2026-01-20T01:37:07.377609232Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 01:37:07.866174 containerd[1591]: time="2026-01-20T01:37:07.378981453Z" level=info msg="Start snapshots syncer" Jan 20 01:37:07.866174 containerd[1591]: time="2026-01-20T01:37:07.383085995Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 01:37:07.893594 containerd[1591]: time="2026-01-20T01:37:07.891057764Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 01:37:07.893594 containerd[1591]: time="2026-01-20T01:37:07.891457069Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 01:37:07.894651 containerd[1591]: time="2026-01-20T01:37:07.891835546Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 01:37:07.894651 containerd[1591]: time="2026-01-20T01:37:07.892464971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 01:37:07.894651 containerd[1591]: time="2026-01-20T01:37:07.892669432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 01:37:07.894651 containerd[1591]: time="2026-01-20T01:37:07.892753620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 01:37:07.894651 containerd[1591]: time="2026-01-20T01:37:07.892773507Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 01:37:07.943027 containerd[1591]: time="2026-01-20T01:37:07.911831529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 01:37:07.943027 containerd[1591]: time="2026-01-20T01:37:07.911978283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 01:37:07.943027 containerd[1591]: time="2026-01-20T01:37:07.912001466Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 01:37:07.943027 containerd[1591]: time="2026-01-20T01:37:07.912048053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 01:37:07.943027 containerd[1591]: 
time="2026-01-20T01:37:07.912066367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 01:37:07.943027 containerd[1591]: time="2026-01-20T01:37:07.912084021Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 01:37:07.943027 containerd[1591]: time="2026-01-20T01:37:07.912553457Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 01:37:07.943027 containerd[1591]: time="2026-01-20T01:37:07.912582540Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 01:37:07.943027 containerd[1591]: time="2026-01-20T01:37:07.912599102Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 01:37:07.943027 containerd[1591]: time="2026-01-20T01:37:07.912612737Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 01:37:07.943027 containerd[1591]: time="2026-01-20T01:37:07.912624790Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 01:37:07.943027 containerd[1591]: time="2026-01-20T01:37:07.912638585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 01:37:07.943027 containerd[1591]: time="2026-01-20T01:37:07.912670435Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 01:37:07.943027 containerd[1591]: time="2026-01-20T01:37:07.912698357Z" level=info msg="runtime interface created" Jan 20 01:37:07.944083 containerd[1591]: time="2026-01-20T01:37:07.912706302Z" level=info msg="created NRI interface" Jan 20 01:37:07.944083 containerd[1591]: time="2026-01-20T01:37:07.912718394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 01:37:07.944083 containerd[1591]: time="2026-01-20T01:37:07.912812720Z" level=info msg="Connect containerd service" Jan 20 01:37:07.944083 containerd[1591]: time="2026-01-20T01:37:07.912999529Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 01:37:07.944083 containerd[1591]: time="2026-01-20T01:37:07.917308652Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 01:37:12.270853 systemd[1668]: Queued start job for default target default.target. Jan 20 01:37:12.488207 systemd[1668]: Created slice app.slice - User Application Slice. Jan 20 01:37:12.493033 systemd[1668]: Reached target paths.target - Paths. Jan 20 01:37:12.493296 systemd[1668]: Reached target timers.target - Timers. Jan 20 01:37:12.570741 systemd[1668]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 01:37:12.925866 tar[1589]: linux-amd64/README.md Jan 20 01:37:13.186116 systemd[1668]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 01:37:13.195154 systemd[1668]: Reached target sockets.target - Sockets. Jan 20 01:37:13.197723 systemd[1668]: Reached target basic.target - Basic System. Jan 20 01:37:13.197865 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jan 20 01:37:13.209007 systemd[1668]: Reached target default.target - Main User Target. Jan 20 01:37:13.211643 systemd[1668]: Startup finished in 5.865s. Jan 20 01:37:13.438343 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 01:37:13.552166 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 01:37:14.205832 systemd[1]: Started sshd@1-10.0.0.51:22-10.0.0.1:49284.service - OpenSSH per-connection server daemon (10.0.0.1:49284). Jan 20 01:37:14.294191 containerd[1591]: time="2026-01-20T01:37:14.294090614Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 01:37:14.320964 containerd[1591]: time="2026-01-20T01:37:14.297629224Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 01:37:14.320964 containerd[1591]: time="2026-01-20T01:37:14.297716798Z" level=info msg="Start subscribing containerd event" Jan 20 01:37:14.320964 containerd[1591]: time="2026-01-20T01:37:14.297784855Z" level=info msg="Start recovering state" Jan 20 01:37:14.320964 containerd[1591]: time="2026-01-20T01:37:14.298192446Z" level=info msg="Start event monitor" Jan 20 01:37:14.320964 containerd[1591]: time="2026-01-20T01:37:14.298217022Z" level=info msg="Start cni network conf syncer for default" Jan 20 01:37:14.320964 containerd[1591]: time="2026-01-20T01:37:14.298288796Z" level=info msg="Start streaming server" Jan 20 01:37:14.320964 containerd[1591]: time="2026-01-20T01:37:14.298305026Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 01:37:14.320964 containerd[1591]: time="2026-01-20T01:37:14.298315976Z" level=info msg="runtime interface starting up..." Jan 20 01:37:14.320964 containerd[1591]: time="2026-01-20T01:37:14.298327067Z" level=info msg="starting plugins..." Jan 20 01:37:14.320964 containerd[1591]: time="2026-01-20T01:37:14.298531409Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 01:37:14.310859 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 01:37:14.335194 containerd[1591]: time="2026-01-20T01:37:14.308713544Z" level=info msg="containerd successfully booted in 8.246552s" Jan 20 01:37:15.777332 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 49284 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:37:15.899190 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:16.069530 systemd-logind[1565]: New session 2 of user core. Jan 20 01:37:16.218098 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 01:37:17.013821 sshd[1704]: Connection closed by 10.0.0.1 port 49284 Jan 20 01:37:17.035984 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:17.178933 systemd[1]: sshd@1-10.0.0.51:22-10.0.0.1:49284.service: Deactivated successfully. Jan 20 01:37:17.190877 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 01:37:17.195512 systemd-logind[1565]: Session 2 logged out. Waiting for processes to exit. Jan 20 01:37:17.223156 systemd[1]: Started sshd@2-10.0.0.51:22-10.0.0.1:33058.service - OpenSSH per-connection server daemon (10.0.0.1:33058). Jan 20 01:37:17.239618 systemd-logind[1565]: Removed session 2. 
Jan 20 01:37:18.017003 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 33058 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:37:18.025323 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:18.080865 systemd-logind[1565]: New session 3 of user core. Jan 20 01:37:18.268162 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 01:37:18.558833 kernel: kvm_amd: TSC scaling supported Jan 20 01:37:18.561850 kernel: kvm_amd: Nested Virtualization enabled Jan 20 01:37:18.562000 kernel: kvm_amd: Nested Paging enabled Jan 20 01:37:18.567587 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 01:37:18.599607 kernel: kvm_amd: PMU virtualization is disabled Jan 20 01:37:18.839026 sshd[1713]: Connection closed by 10.0.0.1 port 33058 Jan 20 01:37:18.860772 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:18.902046 systemd-logind[1565]: Session 3 logged out. Waiting for processes to exit. Jan 20 01:37:18.911325 systemd[1]: sshd@2-10.0.0.51:22-10.0.0.1:33058.service: Deactivated successfully. Jan 20 01:37:18.929669 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 01:37:18.971197 systemd-logind[1565]: Removed session 3. Jan 20 01:37:26.321844 kernel: EDAC MC: Ver: 3.0.0 Jan 20 01:37:27.839738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:37:27.840946 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 01:37:27.851069 systemd[1]: Startup finished in 37.682s (kernel) + 1min 37.490s (initrd) + 1min 23.377s (userspace) = 3min 38.551s. Jan 20 01:37:27.941998 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:37:28.962058 systemd[1]: Started sshd@3-10.0.0.51:22-10.0.0.1:60096.service - OpenSSH per-connection server daemon (10.0.0.1:60096). Jan 20 01:37:29.278890 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 60096 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:37:29.284063 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:29.312519 systemd-logind[1565]: New session 4 of user core. Jan 20 01:37:29.355910 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 01:37:29.587623 sshd[1734]: Connection closed by 10.0.0.1 port 60096 Jan 20 01:37:29.587661 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:29.627547 systemd[1]: sshd@3-10.0.0.51:22-10.0.0.1:60096.service: Deactivated successfully. Jan 20 01:37:29.636018 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 01:37:29.707263 systemd-logind[1565]: Session 4 logged out. Waiting for processes to exit. Jan 20 01:37:29.755562 systemd[1]: Started sshd@4-10.0.0.51:22-10.0.0.1:60102.service - OpenSSH per-connection server daemon (10.0.0.1:60102). Jan 20 01:37:29.770528 systemd-logind[1565]: Removed session 4. Jan 20 01:37:30.672807 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 60102 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:37:30.687809 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:30.769526 systemd-logind[1565]: New session 5 of user core. Jan 20 01:37:30.945033 systemd[1]: Started session-5.scope - Session 5 of User core. 
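The boot-time summary above adds up: 37.682 s kernel + 1 min 37.490 s initrd + 1 min 23.377 s userspace is 218.549 s, matching the reported 3 min 38.551 s up to rounding of the per-phase figures:

    kernel, initrd, userspace = 37.682, 97.490, 83.377
    total = kernel + initrd + userspace
    print(f"{int(total // 60)}min {total % 60:.3f}s")   # 3min 38.549s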
Jan 20 01:37:31.071512 sshd[1743]: Connection closed by 10.0.0.1 port 60102 Jan 20 01:37:31.073825 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:31.136754 systemd[1]: sshd@4-10.0.0.51:22-10.0.0.1:60102.service: Deactivated successfully. Jan 20 01:37:31.156989 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 01:37:31.170161 systemd-logind[1565]: Session 5 logged out. Waiting for processes to exit. Jan 20 01:37:31.182902 systemd[1]: Started sshd@5-10.0.0.51:22-10.0.0.1:60106.service - OpenSSH per-connection server daemon (10.0.0.1:60106). Jan 20 01:37:31.196901 systemd-logind[1565]: Removed session 5. Jan 20 01:37:31.819622 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 60106 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:37:31.830930 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:31.888317 systemd-logind[1565]: New session 6 of user core. Jan 20 01:37:31.927916 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 01:37:32.393493 sshd[1752]: Connection closed by 10.0.0.1 port 60106 Jan 20 01:37:32.483618 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:32.570094 systemd[1]: sshd@5-10.0.0.51:22-10.0.0.1:60106.service: Deactivated successfully. Jan 20 01:37:32.653735 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 01:37:32.701243 systemd-logind[1565]: Session 6 logged out. Waiting for processes to exit. Jan 20 01:37:32.779858 systemd[1]: Started sshd@6-10.0.0.51:22-10.0.0.1:60110.service - OpenSSH per-connection server daemon (10.0.0.1:60110). Jan 20 01:37:33.119233 systemd-logind[1565]: Removed session 6. Jan 20 01:37:33.623997 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 60110 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:37:33.636335 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:33.740543 systemd-logind[1565]: New session 7 of user core. Jan 20 01:37:33.767592 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 01:37:34.471576 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 01:37:34.472223 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:37:34.710533 sudo[1764]: pam_unix(sudo:session): session closed for user root Jan 20 01:37:34.772615 sshd[1763]: Connection closed by 10.0.0.1 port 60110 Jan 20 01:37:34.742729 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:35.289207 systemd[1]: sshd@6-10.0.0.51:22-10.0.0.1:60110.service: Deactivated successfully. Jan 20 01:37:35.355144 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 01:37:35.910181 systemd-logind[1565]: Session 7 logged out. Waiting for processes to exit. Jan 20 01:37:36.400501 systemd[1]: Started sshd@7-10.0.0.51:22-10.0.0.1:37778.service - OpenSSH per-connection server daemon (10.0.0.1:37778). Jan 20 01:37:36.828982 systemd-logind[1565]: Removed session 7. Jan 20 01:37:40.370126 update_engine[1569]: I20260120 01:37:40.318624 1569 update_attempter.cc:509] Updating boot flags... 
Jan 20 01:37:48.907803 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 37778 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:37:49.828869 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:49.889130 kubelet[1724]: E0120 01:37:49.884090 1724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:37:49.938134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:37:49.944158 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:37:49.967525 systemd[1]: kubelet.service: Consumed 9.378s CPU time, 272M memory peak. Jan 20 01:37:50.026148 systemd-logind[1565]: New session 8 of user core. Jan 20 01:37:50.069752 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 01:37:50.668042 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 01:37:50.676643 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:37:51.001144 sudo[1790]: pam_unix(sudo:session): session closed for user root Jan 20 01:37:51.158616 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 01:37:51.159146 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:37:51.293822 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 01:37:52.575824 augenrules[1814]: No rules Jan 20 01:37:52.615031 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 01:37:52.615855 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 01:37:52.699067 sudo[1789]: pam_unix(sudo:session): session closed for user root Jan 20 01:37:52.735882 sshd[1788]: Connection closed by 10.0.0.1 port 37778 Jan 20 01:37:52.734725 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:52.810935 systemd[1]: sshd@7-10.0.0.51:22-10.0.0.1:37778.service: Deactivated successfully. Jan 20 01:37:52.824232 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 01:37:52.851879 systemd-logind[1565]: Session 8 logged out. Waiting for processes to exit. Jan 20 01:37:52.911037 systemd[1]: Started sshd@8-10.0.0.51:22-10.0.0.1:54202.service - OpenSSH per-connection server daemon (10.0.0.1:54202). Jan 20 01:37:52.923342 systemd-logind[1565]: Removed session 8. Jan 20 01:37:53.616576 sshd[1823]: Accepted publickey for core from 10.0.0.1 port 54202 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:37:53.621837 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:53.721551 systemd-logind[1565]: New session 9 of user core. Jan 20 01:37:53.753505 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 01:37:54.020929 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 01:37:54.021837 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:38:00.057526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
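The kubelet failure recorded here (and repeated below) is a missing /var/lib/kubelet/config.yaml, a file that kubeadm normally writes during init/join; until it exists the unit keeps failing and systemd keeps restarting it. Illustrative only, with generic example values rather than anything recovered from this host, this is the kind of KubeletConfiguration that lives there:

    kubelet_config = "\n".join([
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",
        "staticPodPath: /etc/kubernetes/manifests",
        "clusterDNS:",
        "  - 10.96.0.10",
        "clusterDomain: cluster.local",
    ])
    print(kubelet_config)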
Jan 20 01:38:00.094699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:38:08.383002 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 01:38:08.425324 (dockerd)[1851]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 01:38:09.166730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:38:09.225792 (kubelet)[1861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:38:16.128748 kubelet[1861]: E0120 01:38:16.122530 1861 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:38:16.230951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:38:16.231453 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:38:16.234814 systemd[1]: kubelet.service: Consumed 4.909s CPU time, 108.4M memory peak. Jan 20 01:38:19.134881 dockerd[1851]: time="2026-01-20T01:38:19.134132169Z" level=info msg="Starting up" Jan 20 01:38:19.168280 dockerd[1851]: time="2026-01-20T01:38:19.166936722Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 01:38:19.458996 dockerd[1851]: time="2026-01-20T01:38:19.457190090Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 01:38:20.281024 dockerd[1851]: time="2026-01-20T01:38:20.275781268Z" level=info msg="Loading containers: start." Jan 20 01:38:20.586957 kernel: Initializing XFRM netlink socket Jan 20 01:38:26.313624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 01:38:26.332675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:38:31.107843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:38:31.172718 (kubelet)[2034]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:38:31.683335 systemd-networkd[1486]: docker0: Link UP Jan 20 01:38:31.789470 dockerd[1851]: time="2026-01-20T01:38:31.785745401Z" level=info msg="Loading containers: done." Jan 20 01:38:32.037060 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1882568737-merged.mount: Deactivated successfully. 
Jan 20 01:38:32.046281 kubelet[2034]: E0120 01:38:32.044809 2034 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:38:32.080466 dockerd[1851]: time="2026-01-20T01:38:32.080295425Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 01:38:32.080793 dockerd[1851]: time="2026-01-20T01:38:32.080756926Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 01:38:32.081211 dockerd[1851]: time="2026-01-20T01:38:32.081181911Z" level=info msg="Initializing buildkit" Jan 20 01:38:32.090294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:38:32.093778 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:38:32.098786 systemd[1]: kubelet.service: Consumed 1.284s CPU time, 108.5M memory peak. Jan 20 01:38:32.338089 dockerd[1851]: time="2026-01-20T01:38:32.336524108Z" level=info msg="Completed buildkit initialization" Jan 20 01:38:32.702174 dockerd[1851]: time="2026-01-20T01:38:32.644296108Z" level=info msg="Daemon has completed initialization" Jan 20 01:38:32.702174 dockerd[1851]: time="2026-01-20T01:38:32.651178682Z" level=info msg="API listen on /run/docker.sock" Jan 20 01:38:32.770241 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 01:38:42.399648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 01:38:42.422974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:38:45.591695 containerd[1591]: time="2026-01-20T01:38:45.590541568Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 20 01:38:48.275777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:38:48.339602 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:38:50.979044 kubelet[2106]: E0120 01:38:50.978596 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:38:50.993094 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:38:50.993558 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:38:51.027591 systemd[1]: kubelet.service: Consumed 1.814s CPU time, 111.1M memory peak. Jan 20 01:38:52.444768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4068582004.mount: Deactivated successfully. Jan 20 01:39:01.047736 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 01:39:01.084083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:39:02.212918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
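The dockerd "API listen on /run/docker.sock" line means the Engine API is ordinary HTTP spoken over an AF_UNIX socket. A stdlib-only probe of the version endpoint (needs permission on the socket, typically root or the docker group):

    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path: str):
            super().__init__("localhost")   # Host header only; no TCP is used
            self.socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    print(conn.getresponse().read().decode())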
Jan 20 01:39:02.254907 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:39:05.189624 kubelet[2180]: E0120 01:39:05.186909 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:39:05.209501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:39:05.399193 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:39:05.832507 systemd[1]: kubelet.service: Consumed 868ms CPU time, 110.5M memory peak. Jan 20 01:39:15.725274 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 20 01:39:15.793091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:39:18.268244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:39:18.431780 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:39:23.428482 kubelet[2197]: E0120 01:39:23.407245 2197 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:39:23.446341 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:39:23.471071 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:39:23.472026 systemd[1]: kubelet.service: Consumed 1.741s CPU time, 108.8M memory peak. 
Jan 20 01:39:27.817459 containerd[1591]: time="2026-01-20T01:39:27.806500872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:39:27.817459 containerd[1591]: time="2026-01-20T01:39:27.816509827Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 20 01:39:27.817459 containerd[1591]: time="2026-01-20T01:39:27.822175112Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:39:28.207521 containerd[1591]: time="2026-01-20T01:39:28.196455924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:39:28.285128 containerd[1591]: time="2026-01-20T01:39:28.273200227Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 42.677316199s" Jan 20 01:39:28.285128 containerd[1591]: time="2026-01-20T01:39:28.277797833Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 20 01:39:28.437738 containerd[1591]: time="2026-01-20T01:39:28.436306148Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 20 01:39:33.722807 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 20 01:39:33.756671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:39:39.685960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:39:39.848113 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:39:42.921988 kubelet[2220]: E0120 01:39:42.921586 2220 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:39:42.946326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:39:42.950775 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:39:42.957778 systemd[1]: kubelet.service: Consumed 1.768s CPU time, 110.7M memory peak. Jan 20 01:39:53.110020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 20 01:39:53.172608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
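Worth noting how slow that first pull was: about 30.1 MB for kube-apiserver:v1.33.7 over roughly 42.7 s:

    bytes_read, seconds = 30114712, 42.677316199   # from the two log lines above
    print(f"{bytes_read / seconds / 1e6:.2f} MB/s")   # ~0.71 MB/s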
Jan 20 01:39:56.662250 containerd[1591]: time="2026-01-20T01:39:56.659185557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:39:56.687998 containerd[1591]: time="2026-01-20T01:39:56.683600738Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 20 01:39:56.694059 containerd[1591]: time="2026-01-20T01:39:56.693873678Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:39:56.882861 containerd[1591]: time="2026-01-20T01:39:56.882798689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:39:56.899111 containerd[1591]: time="2026-01-20T01:39:56.883868611Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 28.447105958s" Jan 20 01:39:56.899111 containerd[1591]: time="2026-01-20T01:39:56.884964321Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 20 01:39:56.899111 containerd[1591]: time="2026-01-20T01:39:56.897830749Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 20 01:39:57.595863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:39:57.738241 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:40:00.592609 kubelet[2236]: E0120 01:40:00.584334 2236 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:40:00.636730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:40:00.637056 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:40:00.651273 systemd[1]: kubelet.service: Consumed 2.131s CPU time, 109.1M memory peak. Jan 20 01:40:11.135545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 20 01:40:11.200306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:40:16.701079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:40:17.114659 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:40:19.059878 containerd[1591]: time="2026-01-20T01:40:19.055519259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:40:19.083788 containerd[1591]: time="2026-01-20T01:40:19.083518631Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 20 01:40:19.100753 containerd[1591]: time="2026-01-20T01:40:19.090315762Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:40:19.123992 containerd[1591]: time="2026-01-20T01:40:19.122226031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:40:19.132808 containerd[1591]: time="2026-01-20T01:40:19.129815423Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 22.231935383s" Jan 20 01:40:19.132808 containerd[1591]: time="2026-01-20T01:40:19.129867880Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 20 01:40:19.154273 containerd[1591]: time="2026-01-20T01:40:19.150081616Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 20 01:40:19.157870 kubelet[2259]: E0120 01:40:19.156909 2259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:40:19.176130 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:40:19.176565 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:40:19.177309 systemd[1]: kubelet.service: Consumed 1.247s CPU time, 110.8M memory peak. Jan 20 01:40:29.353836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 20 01:40:29.409961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:40:35.489281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:40:35.634124 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:40:37.994306 kubelet[2279]: E0120 01:40:37.989885 2279 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:40:38.062139 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:40:38.073869 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:40:38.083212 systemd[1]: kubelet.service: Consumed 1.275s CPU time, 110.1M memory peak. Jan 20 01:40:38.436616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177569821.mount: Deactivated successfully. Jan 20 01:40:49.176300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 20 01:40:49.363997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:40:56.269980 containerd[1591]: time="2026-01-20T01:40:56.266994150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:40:56.297507 containerd[1591]: time="2026-01-20T01:40:56.295507015Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 20 01:40:56.314575 containerd[1591]: time="2026-01-20T01:40:56.309331151Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:40:56.317890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
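The steady cadence of "Scheduled restart job, restart counter is at N" entries is systemd's restart policy at work, not kubelet backing off: the unit restarts unconditionally after a fixed delay. A kubelet.service along the lines of the sketch below reproduces this pattern; this mirrors the upstream kubeadm packaging, and the exact unit shipped on Flatcar may differ.

    [Service]
    ExecStart=/usr/bin/kubelet
    # restart even after a failed exit, indefinitely
    Restart=always
    # wait 10s between the failed exit and the next start attempt
    RestartSec=10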
Jan 20 01:40:56.363923 containerd[1591]: time="2026-01-20T01:40:56.356045611Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 37.205807384s" Jan 20 01:40:56.363923 containerd[1591]: time="2026-01-20T01:40:56.356240283Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 20 01:40:56.363923 containerd[1591]: time="2026-01-20T01:40:56.360971658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:40:56.384605 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:40:56.435962 containerd[1591]: time="2026-01-20T01:40:56.434839864Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 20 01:40:58.450107 kubelet[2300]: E0120 01:40:58.438355 2300 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:40:58.478262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:40:58.483775 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:40:58.489263 systemd[1]: kubelet.service: Consumed 1.430s CPU time, 110.6M memory peak. Jan 20 01:40:59.708586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount449365494.mount: Deactivated successfully. Jan 20 01:41:08.585970 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 20 01:41:08.685178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:41:10.928806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:41:10.991010 (kubelet)[2370]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:41:12.928590 kubelet[2370]: E0120 01:41:12.927990 2370 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:41:13.037273 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:41:13.045038 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:41:13.052680 systemd[1]: kubelet.service: Consumed 1.273s CPU time, 112.5M memory peak. 
Jan 20 01:41:14.827933 containerd[1591]: time="2026-01-20T01:41:14.824545938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:14.847142 containerd[1591]: time="2026-01-20T01:41:14.838036569Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 20 01:41:14.850974 containerd[1591]: time="2026-01-20T01:41:14.850004823Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:14.862839 containerd[1591]: time="2026-01-20T01:41:14.862557925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:14.865537 containerd[1591]: time="2026-01-20T01:41:14.863975004Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 18.429079536s" Jan 20 01:41:14.865537 containerd[1591]: time="2026-01-20T01:41:14.864017422Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 20 01:41:14.874906 containerd[1591]: time="2026-01-20T01:41:14.874149989Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 01:41:16.465878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3283105218.mount: Deactivated successfully. 
Jan 20 01:41:16.607525 containerd[1591]: time="2026-01-20T01:41:16.603883253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:41:16.613353 containerd[1591]: time="2026-01-20T01:41:16.613280665Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 20 01:41:16.619195 containerd[1591]: time="2026-01-20T01:41:16.617762878Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:41:16.646134 containerd[1591]: time="2026-01-20T01:41:16.645970582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:41:16.663899 containerd[1591]: time="2026-01-20T01:41:16.662644427Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.788433535s" Jan 20 01:41:16.663899 containerd[1591]: time="2026-01-20T01:41:16.662753689Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 01:41:16.677112 containerd[1591]: time="2026-01-20T01:41:16.676277473Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 20 01:41:23.070871 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 20 01:41:23.075685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:41:34.170642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:41:34.596881 (kubelet)[2395]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:41:39.127834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount237646499.mount: Deactivated successfully. Jan 20 01:41:40.699692 kubelet[2395]: E0120 01:41:40.698879 2395 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:41:40.727021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:41:40.727307 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:41:40.746014 systemd[1]: kubelet.service: Consumed 1.986s CPU time, 108.7M memory peak. Jan 20 01:41:51.189485 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 20 01:41:51.276828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:42:04.690269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
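Note that the containerd image pulls are interleaved with, and unaffected by, the kubelet crash loop: they are presumably driven by kubeadm's preflight image pull, which talks to containerd over the CRI socket directly rather than through kubelet. The cache can be inspected the same way (socket path assumed to be the default):

    # list images already present in containerd's CRI image store
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images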
Jan 20 01:42:06.474089 (kubelet)[2436]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:42:07.969924 kubelet[2436]: E0120 01:42:07.968145 2436 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:42:07.988309 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:42:07.991113 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:42:08.009616 systemd[1]: kubelet.service: Consumed 1.319s CPU time, 110.7M memory peak. Jan 20 01:42:18.172280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 20 01:42:18.211614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:42:24.845599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:42:24.902062 (kubelet)[2480]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:42:27.688481 kubelet[2480]: E0120 01:42:27.687832 2480 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:42:27.720067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:42:27.730680 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:42:27.731646 systemd[1]: kubelet.service: Consumed 1.349s CPU time, 110.1M memory peak. Jan 20 01:42:40.358730 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. Jan 20 01:42:41.205066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 01:42:48.566840 containerd[1591]: time="2026-01-20T01:42:48.563825083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:48.591926 containerd[1591]: time="2026-01-20T01:42:48.591786768Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 20 01:42:48.601307 containerd[1591]: time="2026-01-20T01:42:48.601236650Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:48.648947 containerd[1591]: time="2026-01-20T01:42:48.636542255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:48.663177 containerd[1591]: time="2026-01-20T01:42:48.663061390Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1m31.986719536s" Jan 20 01:42:48.676322 containerd[1591]: time="2026-01-20T01:42:48.663662557Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 20 01:42:53.211927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:42:53.290019 (kubelet)[2507]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:42:57.563483 kubelet[2507]: E0120 01:42:57.530095 2507 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:42:57.568883 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:42:57.569204 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:42:57.574650 systemd[1]: kubelet.service: Consumed 2.116s CPU time, 110.9M memory peak. Jan 20 01:43:07.818823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. Jan 20 01:43:07.866046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:43:09.978849 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:43:10.110531 (kubelet)[2542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:43:13.021258 kubelet[2542]: E0120 01:43:13.020548 2542 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:43:13.035311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:43:13.035798 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:43:13.039786 systemd[1]: kubelet.service: Consumed 1.167s CPU time, 110.6M memory peak. Jan 20 01:43:21.578887 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:43:21.579193 systemd[1]: kubelet.service: Consumed 1.167s CPU time, 110.6M memory peak. Jan 20 01:43:21.595205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:43:21.804763 systemd[1]: Reload requested from client PID 2559 ('systemctl') (unit session-9.scope)... Jan 20 01:43:21.804859 systemd[1]: Reloading... Jan 20 01:43:22.726817 zram_generator::config[2602]: No configuration found. Jan 20 01:43:25.496098 systemd[1]: Reloading finished in 3685 ms. Jan 20 01:43:26.196228 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 01:43:26.207804 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 01:43:26.216476 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:43:26.216961 systemd[1]: kubelet.service: Consumed 621ms CPU time, 98.7M memory peak. Jan 20 01:43:26.274518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:43:28.231813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:43:28.368108 (kubelet)[2650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:43:29.055223 kubelet[2650]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:43:29.055223 kubelet[2650]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:43:29.055223 kubelet[2650]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
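After the systemd reload, only KUBELET_EXTRA_ARGS is still reported unset: KUBELET_KUBEADM_ARGS has appeared, meaning kubeadm has now written /var/lib/kubelet/kubeadm-flags.env, and the deprecated --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir flags warned about above come from that file. The upstream kubeadm drop-in (10-kubeadm.conf) wires this together roughly as follows; Flatcar's unit may differ in paths.

    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # written by kubeadm at init/join time; now present, so KUBELET_KUBEADM_ARGS resolves
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # optional operator overrides; absent here, hence the empty-string warning
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS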
Jan 20 01:43:29.055223 kubelet[2650]: I0120 01:43:29.050136 2650 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:43:31.175035 kubelet[2650]: I0120 01:43:31.174081 2650 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 01:43:31.175035 kubelet[2650]: I0120 01:43:31.174475 2650 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:43:31.175035 kubelet[2650]: I0120 01:43:31.177799 2650 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 01:43:31.385209 kubelet[2650]: I0120 01:43:31.378242 2650 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:43:31.390488 kubelet[2650]: E0120 01:43:31.380958 2650 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:43:31.706121 kubelet[2650]: I0120 01:43:31.702632 2650 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 01:43:32.113144 kubelet[2650]: I0120 01:43:32.106193 2650 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 01:43:32.126725 kubelet[2650]: I0120 01:43:32.116645 2650 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:43:32.126725 kubelet[2650]: I0120 01:43:32.116954 2650 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:43:32.126725 kubelet[2650]: I0120 01:43:32.122987 2650 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:43:32.126725 kubelet[2650]: I0120 01:43:32.123007 2650 
container_manager_linux.go:303] "Creating device plugin manager" Jan 20 01:43:32.133705 kubelet[2650]: I0120 01:43:32.131597 2650 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:43:32.266891 kubelet[2650]: I0120 01:43:32.266654 2650 kubelet.go:480] "Attempting to sync node with API server" Jan 20 01:43:32.266891 kubelet[2650]: I0120 01:43:32.266739 2650 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:43:32.266891 kubelet[2650]: I0120 01:43:32.266887 2650 kubelet.go:386] "Adding apiserver pod source" Jan 20 01:43:32.272705 kubelet[2650]: I0120 01:43:32.266963 2650 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:43:32.308987 kubelet[2650]: E0120 01:43:32.308495 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:43:32.315769 kubelet[2650]: E0120 01:43:32.312511 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:43:32.370145 kubelet[2650]: I0120 01:43:32.369622 2650 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 01:43:32.386185 kubelet[2650]: I0120 01:43:32.382896 2650 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 01:43:32.392597 kubelet[2650]: W0120 01:43:32.388317 2650 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 20 01:43:32.482478 kubelet[2650]: I0120 01:43:32.482073 2650 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 01:43:32.482478 kubelet[2650]: I0120 01:43:32.482251 2650 server.go:1289] "Started kubelet" Jan 20 01:43:32.494900 kubelet[2650]: I0120 01:43:32.486128 2650 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:43:32.494900 kubelet[2650]: I0120 01:43:32.489764 2650 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:43:32.494900 kubelet[2650]: I0120 01:43:32.490321 2650 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:43:32.516060 kubelet[2650]: I0120 01:43:32.508318 2650 server.go:317] "Adding debug handlers to kubelet server" Jan 20 01:43:32.537549 kubelet[2650]: I0120 01:43:32.530215 2650 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:43:32.537549 kubelet[2650]: I0120 01:43:32.533342 2650 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:43:32.555052 kubelet[2650]: I0120 01:43:32.551666 2650 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 01:43:32.555052 kubelet[2650]: E0120 01:43:32.552973 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:43:32.555052 kubelet[2650]: I0120 01:43:32.553514 2650 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 01:43:32.555052 kubelet[2650]: I0120 01:43:32.553824 2650 reconciler.go:26] "Reconciler: start to sync state" Jan 20 01:43:32.560144 kubelet[2650]: E0120 01:43:32.550462 2650 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4cff2ba1bcec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:43:32.482153708 +0000 UTC m=+4.065418018,LastTimestamp:2026-01-20 01:43:32.482153708 +0000 UTC m=+4.065418018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:43:33.968797 kubelet[2650]: E0120 01:43:33.915613 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:43:34.067153 kubelet[2650]: E0120 01:43:33.921505 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:43:34.138706 kubelet[2650]: E0120 01:43:32.563713 2650 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="200ms" Jan 20 01:43:34.138706 kubelet[2650]: E0120 01:43:34.136066 2650 
reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:43:34.381037 kubelet[2650]: E0120 01:43:34.377681 2650 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:43:34.381037 kubelet[2650]: E0120 01:43:34.377994 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:43:34.381037 kubelet[2650]: E0120 01:43:34.378098 2650 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="400ms" Jan 20 01:43:34.381562 kubelet[2650]: E0120 01:43:34.381532 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:43:34.455982 kubelet[2650]: I0120 01:43:34.431666 2650 factory.go:223] Registration of the containerd container factory successfully Jan 20 01:43:34.455982 kubelet[2650]: I0120 01:43:34.431703 2650 factory.go:223] Registration of the systemd container factory successfully Jan 20 01:43:34.455982 kubelet[2650]: I0120 01:43:34.431940 2650 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:43:34.456856 kubelet[2650]: E0120 01:43:34.456712 2650 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:43:34.558531 kubelet[2650]: E0120 01:43:34.558444 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:43:35.065621 kubelet[2650]: E0120 01:43:34.990547 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:43:35.264056 kubelet[2650]: E0120 01:43:35.263908 2650 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="800ms" Jan 20 01:43:35.484841 kubelet[2650]: E0120 01:43:35.336680 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:43:35.690297 kubelet[2650]: E0120 01:43:35.673894 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:43:35.820505 kubelet[2650]: E0120 01:43:35.817883 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:43:35.917795 kubelet[2650]: E0120 01:43:35.886518 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:43:36.004889 kubelet[2650]: E0120 01:43:36.004541 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:43:36.117346 kubelet[2650]: E0120 01:43:36.111064 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:43:36.216619 kubelet[2650]: E0120 01:43:36.134097 2650 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="1.6s" Jan 20 01:43:36.744019 kubelet[2650]: E0120 01:43:36.727213 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:43:36.744019 kubelet[2650]: E0120 01:43:36.743918 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:43:36.784240 kubelet[2650]: E0120 01:43:36.776525 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:43:36.816561 kubelet[2650]: I0120 01:43:36.816518 2650 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:43:36.831125 kubelet[2650]: I0120 01:43:36.831076 2650 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 
01:43:36.831672 kubelet[2650]: I0120 01:43:36.831649 2650 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:43:36.871483 kubelet[2650]: E0120 01:43:36.868625 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:43:36.901723 kubelet[2650]: I0120 01:43:36.900536 2650 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 01:43:36.927116 kubelet[2650]: I0120 01:43:36.926463 2650 policy_none.go:49] "None policy: Start" Jan 20 01:43:36.927116 kubelet[2650]: I0120 01:43:36.926518 2650 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 01:43:36.927116 kubelet[2650]: I0120 01:43:36.926548 2650 state_mem.go:35] "Initializing new in-memory state store" Jan 20 01:43:36.965688 kubelet[2650]: I0120 01:43:36.928526 2650 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 01:43:36.965688 kubelet[2650]: I0120 01:43:36.928554 2650 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 01:43:36.965688 kubelet[2650]: I0120 01:43:36.928635 2650 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 01:43:36.965688 kubelet[2650]: I0120 01:43:36.928648 2650 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 01:43:36.965688 kubelet[2650]: E0120 01:43:36.930980 2650 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:43:36.965688 kubelet[2650]: E0120 01:43:36.932045 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:43:36.985012 kubelet[2650]: E0120 01:43:36.984109 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:43:37.042224 kubelet[2650]: E0120 01:43:37.040039 2650 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:43:37.203657 kubelet[2650]: E0120 01:43:37.199632 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:43:37.205576 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 01:43:37.244783 kubelet[2650]: E0120 01:43:37.244193 2650 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:43:37.308727 kubelet[2650]: E0120 01:43:37.302737 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:43:37.329190 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 01:43:37.370045 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
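All the "dial tcp 10.0.0.51:6443: connect: connection refused" errors in this stretch are the second bootstrap chicken-and-egg: kubelet cannot register the node, watch objects, or request a serving certificate until the API server answers, but on a control-plane node the API server is itself a static pod that this same kubelet has yet to start. Progress can be probed from outside with a check like the one below (assuming /healthz is reachable anonymously, which depends on cluster configuration):

    # fails with connection refused until the static kube-apiserver pod is up
    curl -k https://10.0.0.51:6443/healthz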
Jan 20 01:43:37.412656 kubelet[2650]: E0120 01:43:37.412599 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:43:37.436336 kubelet[2650]: E0120 01:43:37.436180 2650 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 01:43:37.452837 kubelet[2650]: I0120 01:43:37.436884 2650 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:43:37.452837 kubelet[2650]: I0120 01:43:37.436947 2650 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:43:37.460970 kubelet[2650]: I0120 01:43:37.455060 2650 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:43:37.515246 kubelet[2650]: E0120 01:43:37.515022 2650 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 01:43:37.518098 kubelet[2650]: E0120 01:43:37.517892 2650 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:43:37.589097 kubelet[2650]: E0120 01:43:37.588231 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:43:37.620544 kubelet[2650]: I0120 01:43:37.610109 2650 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:43:37.620544 kubelet[2650]: E0120 01:43:37.610923 2650 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jan 20 01:43:37.704480 kubelet[2650]: I0120 01:43:37.704260 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4df06a43dd179d6da1100174d6963615-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4df06a43dd179d6da1100174d6963615\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:43:37.705118 kubelet[2650]: I0120 01:43:37.705063 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4df06a43dd179d6da1100174d6963615-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4df06a43dd179d6da1100174d6963615\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:43:37.718611 kubelet[2650]: I0120 01:43:37.717667 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4df06a43dd179d6da1100174d6963615-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4df06a43dd179d6da1100174d6963615\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:43:37.822581 systemd[1]: Created slice kubepods-burstable-pod4df06a43dd179d6da1100174d6963615.slice - libcontainer container kubepods-burstable-pod4df06a43dd179d6da1100174d6963615.slice. 
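The VerifyControllerAttachedVolume entries above show the desired-state populator picking up the hostPath volumes (ca-certs, k8s-certs, usr-share-ca-certificates) declared in the kube-apiserver static pod manifest that kubeadm writes to /etc/kubernetes/manifests. Abridged, such a manifest has roughly the shape sketched below; this is illustrative, not a dump of the file on this host.

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.33.7
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate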
Jan 20 01:43:37.827179 kubelet[2650]: I0120 01:43:37.827145 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:43:37.834826 kubelet[2650]: I0120 01:43:37.827289 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:43:37.835171 kubelet[2650]: I0120 01:43:37.835139 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:43:37.835327 kubelet[2650]: I0120 01:43:37.835304 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:43:37.835550 kubelet[2650]: I0120 01:43:37.835530 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:43:37.857812 kubelet[2650]: I0120 01:43:37.855045 2650 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:43:37.866548 kubelet[2650]: E0120 01:43:37.866109 2650 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jan 20 01:43:37.899621 kubelet[2650]: E0120 01:43:37.897175 2650 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="3.2s" Jan 20 01:43:37.910968 kubelet[2650]: E0120 01:43:37.910487 2650 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:43:37.918494 kubelet[2650]: E0120 01:43:37.917460 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:43:37.926234 kubelet[2650]: E0120 01:43:37.926086 2650 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4cff2ba1bcec default 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:43:32.482153708 +0000 UTC m=+4.065418018,LastTimestamp:2026-01-20 01:43:32.482153708 +0000 UTC m=+4.065418018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:43:37.996984 kubelet[2650]: I0120 01:43:37.994906 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 20 01:43:38.255961 containerd[1591]: time="2026-01-20T01:43:38.230191694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4df06a43dd179d6da1100174d6963615,Namespace:kube-system,Attempt:0,}" Jan 20 01:43:38.283155 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 20 01:43:38.321303 kubelet[2650]: I0120 01:43:38.321122 2650 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:43:38.338629 kubelet[2650]: E0120 01:43:38.338299 2650 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jan 20 01:43:38.355888 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Jan 20 01:43:38.378336 kubelet[2650]: E0120 01:43:38.377853 2650 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:43:38.378336 kubelet[2650]: E0120 01:43:38.378279 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:43:38.387946 kubelet[2650]: E0120 01:43:38.387789 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:43:38.405988 containerd[1591]: time="2026-01-20T01:43:38.403702789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 20 01:43:38.411080 kubelet[2650]: E0120 01:43:38.407320 2650 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:43:38.417925 kubelet[2650]: E0120 01:43:38.412337 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:43:38.419272 containerd[1591]: time="2026-01-20T01:43:38.419226039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 20 01:43:38.510963 kubelet[2650]: E0120 01:43:38.506604 2650 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:43:39.704142 kubelet[2650]: I0120 01:43:39.696540 2650 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:43:39.704142 kubelet[2650]: E0120 01:43:39.699986 2650 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jan 20 01:43:40.314870 kubelet[2650]: E0120 01:43:40.314747 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:43:40.987567 containerd[1591]: time="2026-01-20T01:43:40.985172216Z" level=info msg="connecting to shim 1353c6422c3b0a42c483a9242215f54dde2637b141c94eea4d7e75ce3f460497" address="unix:///run/containerd/s/1cdd77818b9e98712b454b6f2f6c7676ec3f38638da2ec85ce35a7abb35c0461" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:43:41.001980 containerd[1591]: time="2026-01-20T01:43:40.999794035Z" level=info msg="connecting to shim ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0" 
address="unix:///run/containerd/s/d1f127681a9c4311b456c6aab9e8ce8d82f6bff97094d53185fe0bdf6b34c086" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:43:41.060861 containerd[1591]: time="2026-01-20T01:43:41.047057288Z" level=info msg="connecting to shim 35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd" address="unix:///run/containerd/s/bd2a1fdad2c63e6b97ea527fdb88e51d630cdf855c2be6bd3e0513bd6d003b8e" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:43:41.106207 kubelet[2650]: E0120 01:43:41.102609 2650 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="6.4s" Jan 20 01:43:41.168176 kubelet[2650]: E0120 01:43:41.138513 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:43:41.588141 kubelet[2650]: I0120 01:43:41.586698 2650 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:43:41.590316 kubelet[2650]: E0120 01:43:41.590168 2650 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jan 20 01:43:42.725806 systemd[1]: Started cri-containerd-1353c6422c3b0a42c483a9242215f54dde2637b141c94eea4d7e75ce3f460497.scope - libcontainer container 1353c6422c3b0a42c483a9242215f54dde2637b141c94eea4d7e75ce3f460497. Jan 20 01:43:42.771005 systemd[1]: Started cri-containerd-35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd.scope - libcontainer container 35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd. Jan 20 01:43:42.778947 systemd[1]: Started cri-containerd-ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0.scope - libcontainer container ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0. 
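Meanwhile the "Failed to ensure lease exists, will retry" interval has been doubling from its 200ms starting point and is about to hit its ceiling, a plain exponential backoff with a cap, as the sequence of intervals logged so far shows:

    200ms -> 400ms -> 800ms -> 1.6s -> 3.2s -> 6.4s -> 7s (capped)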
Jan 20 01:43:43.142254 kubelet[2650]: E0120 01:43:43.134471 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:43:43.754212 kubelet[2650]: E0120 01:43:43.731140 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:43:44.992590 kubelet[2650]: I0120 01:43:44.990973 2650 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:43:45.001014 kubelet[2650]: E0120 01:43:45.000886 2650 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jan 20 01:43:45.730933 containerd[1591]: time="2026-01-20T01:43:45.721916155Z" level=error msg="get state for 1353c6422c3b0a42c483a9242215f54dde2637b141c94eea4d7e75ce3f460497" error="context deadline exceeded" Jan 20 01:43:45.730933 containerd[1591]: time="2026-01-20T01:43:45.722225469Z" level=warning msg="unknown status" status=0 Jan 20 01:43:45.866661 kubelet[2650]: E0120 01:43:45.864162 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:43:46.411116 containerd[1591]: time="2026-01-20T01:43:46.389241746Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:43:47.836315 kubelet[2650]: E0120 01:43:47.835973 2650 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:43:47.880915 kubelet[2650]: E0120 01:43:47.839704 2650 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:43:47.913061 kubelet[2650]: E0120 01:43:47.906023 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:43:47.933327 kubelet[2650]: E0120 01:43:47.917842 2650 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="7s" Jan 20 01:43:47.956128 kubelet[2650]: E0120 01:43:47.936973 2650 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4cff2ba1bcec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:43:32.482153708 +0000 UTC m=+4.065418018,LastTimestamp:2026-01-20 01:43:32.482153708 +0000 UTC m=+4.065418018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:43:48.078182 containerd[1591]: time="2026-01-20T01:43:48.074811054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4df06a43dd179d6da1100174d6963615,Namespace:kube-system,Attempt:0,} returns sandbox id \"1353c6422c3b0a42c483a9242215f54dde2637b141c94eea4d7e75ce3f460497\"" Jan 20 01:43:48.137894 kubelet[2650]: E0120 01:43:48.133804 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:43:48.217467 containerd[1591]: time="2026-01-20T01:43:48.213985264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd\"" Jan 20 01:43:49.563872 kubelet[2650]: E0120 01:43:49.558907 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:43:49.639461 containerd[1591]: time="2026-01-20T01:43:49.633621627Z" level=info msg="CreateContainer within sandbox \"1353c6422c3b0a42c483a9242215f54dde2637b141c94eea4d7e75ce3f460497\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 01:43:49.669241 containerd[1591]: time="2026-01-20T01:43:49.664887557Z" level=info msg="CreateContainer within sandbox \"35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 01:43:49.687115 containerd[1591]: time="2026-01-20T01:43:49.686249828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0\"" Jan 20 01:43:49.697883 kubelet[2650]: E0120 01:43:49.697838 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:43:49.792654 containerd[1591]: time="2026-01-20T01:43:49.788725864Z" level=info msg="CreateContainer within sandbox \"ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 01:43:49.850570 containerd[1591]: time="2026-01-20T01:43:49.848204656Z" level=info msg="Container d9b1c34afdf1f404bd0422af676d7a992b79da31633a2ba86daa90cadbd11b42: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:43:50.863726 containerd[1591]: time="2026-01-20T01:43:50.834527768Z" level=info msg="Container 
538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:43:50.922102 containerd[1591]: time="2026-01-20T01:43:50.921311657Z" level=info msg="CreateContainer within sandbox \"1353c6422c3b0a42c483a9242215f54dde2637b141c94eea4d7e75ce3f460497\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d9b1c34afdf1f404bd0422af676d7a992b79da31633a2ba86daa90cadbd11b42\"" Jan 20 01:43:50.928682 containerd[1591]: time="2026-01-20T01:43:50.928107818Z" level=info msg="StartContainer for \"d9b1c34afdf1f404bd0422af676d7a992b79da31633a2ba86daa90cadbd11b42\"" Jan 20 01:43:50.936931 containerd[1591]: time="2026-01-20T01:43:50.935046877Z" level=info msg="connecting to shim d9b1c34afdf1f404bd0422af676d7a992b79da31633a2ba86daa90cadbd11b42" address="unix:///run/containerd/s/1cdd77818b9e98712b454b6f2f6c7676ec3f38638da2ec85ce35a7abb35c0461" protocol=ttrpc version=3 Jan 20 01:43:50.985978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1032625970.mount: Deactivated successfully. Jan 20 01:43:51.090971 containerd[1591]: time="2026-01-20T01:43:51.090864742Z" level=info msg="Container 13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:43:51.100610 containerd[1591]: time="2026-01-20T01:43:51.099261116Z" level=info msg="CreateContainer within sandbox \"35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f\"" Jan 20 01:43:51.125875 containerd[1591]: time="2026-01-20T01:43:51.122283728Z" level=info msg="StartContainer for \"538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f\"" Jan 20 01:43:51.144817 containerd[1591]: time="2026-01-20T01:43:51.144298748Z" level=info msg="connecting to shim 538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f" address="unix:///run/containerd/s/bd2a1fdad2c63e6b97ea527fdb88e51d630cdf855c2be6bd3e0513bd6d003b8e" protocol=ttrpc version=3 Jan 20 01:43:51.253238 containerd[1591]: time="2026-01-20T01:43:51.253168109Z" level=info msg="CreateContainer within sandbox \"ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa\"" Jan 20 01:43:51.258991 containerd[1591]: time="2026-01-20T01:43:51.256507501Z" level=info msg="StartContainer for \"13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa\"" Jan 20 01:43:51.258991 containerd[1591]: time="2026-01-20T01:43:51.258161944Z" level=info msg="connecting to shim 13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa" address="unix:///run/containerd/s/d1f127681a9c4311b456c6aab9e8ce8d82f6bff97094d53185fe0bdf6b34c086" protocol=ttrpc version=3 Jan 20 01:43:51.431301 systemd[1]: Started cri-containerd-d9b1c34afdf1f404bd0422af676d7a992b79da31633a2ba86daa90cadbd11b42.scope - libcontainer container d9b1c34afdf1f404bd0422af676d7a992b79da31633a2ba86daa90cadbd11b42. 
Jan 20 01:43:51.493816 kubelet[2650]: I0120 01:43:51.493255 2650 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:43:51.515602 kubelet[2650]: E0120 01:43:51.514113 2650 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jan 20 01:43:51.568140 kubelet[2650]: E0120 01:43:51.556643 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:43:51.588315 systemd[1]: Started cri-containerd-538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f.scope - libcontainer container 538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f. Jan 20 01:43:51.695009 systemd[1]: Started cri-containerd-13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa.scope - libcontainer container 13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa. Jan 20 01:43:54.079651 kubelet[2650]: E0120 01:43:54.073442 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:43:54.079651 kubelet[2650]: E0120 01:43:54.073621 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:43:54.115871 containerd[1591]: time="2026-01-20T01:43:54.093211215Z" level=error msg="get state for d9b1c34afdf1f404bd0422af676d7a992b79da31633a2ba86daa90cadbd11b42" error="context deadline exceeded" Jan 20 01:43:54.131735 containerd[1591]: time="2026-01-20T01:43:54.121985970Z" level=warning msg="unknown status" status=0 Jan 20 01:43:54.168209 containerd[1591]: time="2026-01-20T01:43:54.168134738Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:43:54.968928 kubelet[2650]: E0120 01:43:54.958723 2650 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="7s" Jan 20 01:43:55.658527 containerd[1591]: time="2026-01-20T01:43:55.657927158Z" level=info msg="StartContainer for \"d9b1c34afdf1f404bd0422af676d7a992b79da31633a2ba86daa90cadbd11b42\" returns successfully" Jan 20 01:43:55.710854 containerd[1591]: time="2026-01-20T01:43:55.710732166Z" level=info msg="StartContainer for \"538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f\" returns successfully" Jan 20 01:43:57.692129 containerd[1591]: time="2026-01-20T01:43:57.691602790Z" level=error msg="get state for 13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa" error="context deadline exceeded" Jan 20 01:43:57.727965 containerd[1591]: time="2026-01-20T01:43:57.709492568Z" level=warning msg="unknown status" status=0 Jan 20 
01:43:57.807307 containerd[1591]: time="2026-01-20T01:43:57.798031961Z" level=error msg="ttrpc: received message on inactive stream" stream=13 Jan 20 01:43:57.820856 kubelet[2650]: E0120 01:43:57.820805 2650 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:43:57.841036 kubelet[2650]: E0120 01:43:57.840767 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:43:57.864977 kubelet[2650]: E0120 01:43:57.863257 2650 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:43:57.994023 containerd[1591]: time="2026-01-20T01:43:57.923990651Z" level=info msg="StartContainer for \"13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa\" returns successfully" Jan 20 01:43:57.994298 kubelet[2650]: E0120 01:43:57.966809 2650 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4cff2ba1bcec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:43:32.482153708 +0000 UTC m=+4.065418018,LastTimestamp:2026-01-20 01:43:32.482153708 +0000 UTC m=+4.065418018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:43:58.394710 kubelet[2650]: E0120 01:43:58.394661 2650 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:43:58.435675 kubelet[2650]: E0120 01:43:58.424126 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:43:58.591031 kubelet[2650]: I0120 01:43:58.588316 2650 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:43:58.645230 kubelet[2650]: E0120 01:43:58.644941 2650 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jan 20 01:44:01.189880 kubelet[2650]: E0120 01:44:01.185906 2650 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:44:01.217809 kubelet[2650]: E0120 01:44:01.217768 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:44:01.231976 kubelet[2650]: E0120 01:44:01.231929 2650 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:44:01.266995 kubelet[2650]: E0120 01:44:01.261207 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:44:01.266995 kubelet[2650]: E0120 01:44:01.263093 2650 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:44:01.266995 kubelet[2650]: E0120 01:44:01.263241 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:44:01.974707 kubelet[2650]: E0120 01:44:01.972571 2650 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="7s" Jan 20 01:44:02.192864 kubelet[2650]: E0120 01:44:02.181257 2650 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:44:02.202113 kubelet[2650]: E0120 01:44:02.201973 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:44:02.203831 kubelet[2650]: E0120 01:44:02.203585 2650 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:44:02.203831 kubelet[2650]: E0120 01:44:02.203763 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:44:03.548864 kubelet[2650]: E0120 01:44:03.537152 2650 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:44:03.590827 kubelet[2650]: E0120 01:44:03.571873 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:44:03.789876 kubelet[2650]: E0120 01:44:03.789702 2650 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:44:03.806202 kubelet[2650]: E0120 01:44:03.799937 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:44:05.710249 kubelet[2650]: I0120 01:44:05.706863 2650 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:44:05.726978 kubelet[2650]: E0120 01:44:05.718908 2650 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:44:05.726978 kubelet[2650]: E0120 01:44:05.724787 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:44:07.875503 kubelet[2650]: E0120 01:44:07.866942 2650 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:44:10.210877 kubelet[2650]: E0120 01:44:10.198299 2650 kubelet.go:3305] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:44:10.221595 kubelet[2650]: E0120 01:44:10.220688 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:44:14.173139 kubelet[2650]: E0120 01:44:14.169134 2650 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:44:14.239053 kubelet[2650]: E0120 01:44:14.184181 2650 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:44:14.942614 kubelet[2650]: E0120 01:44:14.928940 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:44:15.719025 kubelet[2650]: E0120 01:44:15.718138 2650 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:44:16.330152 update_engine[1569]: I20260120 01:44:16.323127 1569 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 20 01:44:16.330152 update_engine[1569]: I20260120 01:44:16.323282 1569 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 20 01:44:16.330152 update_engine[1569]: I20260120 01:44:16.323956 1569 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 20 01:44:16.627015 update_engine[1569]: I20260120 01:44:16.609963 1569 omaha_request_params.cc:62] Current group set to stable Jan 20 01:44:16.667996 update_engine[1569]: I20260120 01:44:16.627667 1569 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 20 01:44:16.667996 update_engine[1569]: I20260120 01:44:16.627727 1569 update_attempter.cc:643] Scheduling an action processor start. 
Jan 20 01:44:16.667996 update_engine[1569]: I20260120 01:44:16.627764 1569 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 01:44:16.667996 update_engine[1569]: I20260120 01:44:16.627974 1569 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 20 01:44:16.667996 update_engine[1569]: I20260120 01:44:16.628204 1569 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 01:44:16.667996 update_engine[1569]: I20260120 01:44:16.628223 1569 omaha_request_action.cc:272] Request: Jan 20 01:44:16.667996 update_engine[1569]: Jan 20 01:44:16.667996 update_engine[1569]: I20260120 01:44:16.628284 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:44:16.757525 update_engine[1569]: I20260120 01:44:16.749315 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:44:16.778863 update_engine[1569]: I20260120 01:44:16.778733 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 01:44:16.812525 locksmithd[1620]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 20 01:44:16.814174 update_engine[1569]: E20260120 01:44:16.810843 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:44:16.814174 update_engine[1569]: I20260120 01:44:16.811115 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 20 01:44:17.281857 kubelet[2650]: E0120 01:44:17.268123 2650 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:44:17.281857 kubelet[2650]: E0120 01:44:17.269106 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:44:17.873295 kubelet[2650]: E0120 01:44:17.873151 2650 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:44:18.152701 kubelet[2650]: E0120 01:44:18.139811 2650 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188c4cff2ba1bcec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:43:32.482153708 +0000 UTC m=+4.065418018,LastTimestamp:2026-01-20 01:43:32.482153708 +0000 UTC m=+4.065418018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:44:18.766499 kubelet[2650]: E0120 01:44:18.746275 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout"
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:44:18.996469 kubelet[2650]: E0120 01:44:18.988249 2650 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Jan 20 01:44:23.222059 kubelet[2650]: I0120 01:44:23.216842 2650 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:44:23.897057 kubelet[2650]: E0120 01:44:23.885304 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:44:24.206154 kubelet[2650]: E0120 01:44:24.188613 2650 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:44:27.321800 update_engine[1569]: I20260120 01:44:27.312198 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:44:27.338270 update_engine[1569]: I20260120 01:44:27.325475 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:44:27.338270 update_engine[1569]: I20260120 01:44:27.333691 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 01:44:27.364829 update_engine[1569]: E20260120 01:44:27.364743 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:44:27.365130 update_engine[1569]: I20260120 01:44:27.365099 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 20 01:44:27.887991 kubelet[2650]: E0120 01:44:27.887510 2650 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:44:33.277698 kubelet[2650]: E0120 01:44:33.275954 2650 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:44:37.559998 update_engine[1569]: I20260120 01:44:37.528597 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:44:37.559998 update_engine[1569]: I20260120 01:44:37.532087 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:44:37.735683 update_engine[1569]: I20260120 01:44:37.730887 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 01:44:37.740968 update_engine[1569]: E20260120 01:44:37.740741 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:44:37.741096 update_engine[1569]: I20260120 01:44:37.741039 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 20 01:44:38.468201 kubelet[2650]: E0120 01:44:38.466937 2650 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:44:38.716960 kubelet[2650]: E0120 01:44:38.716920 2650 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:44:38.718948 kubelet[2650]: E0120 01:44:38.718848 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:44:38.938185 kubelet[2650]: E0120 01:44:38.926849 2650 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 01:44:39.294763 kubelet[2650]: E0120 01:44:39.223488 2650 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4cff2ba1bcec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:43:32.482153708 +0000 UTC m=+4.065418018,LastTimestamp:2026-01-20 01:43:32.482153708 +0000 UTC m=+4.065418018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:44:41.487956 kubelet[2650]: I0120 01:44:41.485598 2650 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:44:41.783767 kubelet[2650]: I0120 01:44:41.780502 2650 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 01:44:41.783767 kubelet[2650]: E0120 01:44:41.780556 2650 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 01:44:43.654951 kubelet[2650]: E0120 01:44:43.640633 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:43.818654 kubelet[2650]: E0120 01:44:43.818595 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:43.922590 kubelet[2650]: E0120 01:44:43.922432 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:44.037141 kubelet[2650]: E0120 01:44:44.027831 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:44.291021 kubelet[2650]: E0120 01:44:44.230777 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:44.366439 kubelet[2650]: E0120 01:44:44.363004 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:44.562884 kubelet[2650]: E0120 01:44:44.522188 2650 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:44.712154 kubelet[2650]: E0120 01:44:44.705623 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:44.807593 kubelet[2650]: E0120 01:44:44.807519 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:44.916661 kubelet[2650]: E0120 01:44:44.909155 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:45.026010 kubelet[2650]: E0120 01:44:45.022194 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:45.138536 kubelet[2650]: E0120 01:44:45.126218 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:45.267075 kubelet[2650]: E0120 01:44:45.239304 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:45.339939 kubelet[2650]: E0120 01:44:45.339856 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:45.458308 kubelet[2650]: E0120 01:44:45.443226 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:45.559876 kubelet[2650]: E0120 01:44:45.559546 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:45.693699 kubelet[2650]: E0120 01:44:45.682793 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:45.785889 kubelet[2650]: E0120 01:44:45.785838 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:45.927837 kubelet[2650]: E0120 01:44:45.912583 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:46.034830 kubelet[2650]: E0120 01:44:46.029756 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:46.152923 kubelet[2650]: E0120 01:44:46.152843 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:46.265710 kubelet[2650]: E0120 01:44:46.253794 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:46.357311 kubelet[2650]: E0120 01:44:46.354620 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:46.457121 kubelet[2650]: E0120 01:44:46.456661 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:46.558998 kubelet[2650]: E0120 01:44:46.558492 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:46.663554 kubelet[2650]: E0120 01:44:46.662984 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:46.764672 kubelet[2650]: E0120 01:44:46.764603 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 
01:44:46.874800 kubelet[2650]: E0120 01:44:46.868816 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:47.020135 kubelet[2650]: E0120 01:44:47.015022 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:47.125760 kubelet[2650]: E0120 01:44:47.116967 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:47.229473 kubelet[2650]: E0120 01:44:47.229206 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:47.339642 kubelet[2650]: E0120 01:44:47.339499 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:47.445783 kubelet[2650]: E0120 01:44:47.439919 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:47.574589 kubelet[2650]: E0120 01:44:47.567716 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:47.696206 kubelet[2650]: E0120 01:44:47.676909 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:47.786124 kubelet[2650]: E0120 01:44:47.785847 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:47.887160 kubelet[2650]: E0120 01:44:47.886582 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:47.989222 kubelet[2650]: E0120 01:44:47.989066 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:48.092305 kubelet[2650]: E0120 01:44:48.092172 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:48.194713 kubelet[2650]: E0120 01:44:48.193570 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:48.322668 kubelet[2650]: E0120 01:44:48.320605 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:48.361505 update_engine[1569]: I20260120 01:44:48.332191 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:44:48.361505 update_engine[1569]: I20260120 01:44:48.333185 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:44:48.412292 update_engine[1569]: I20260120 01:44:48.397804 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 01:44:48.429238 kubelet[2650]: E0120 01:44:48.423055 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:48.453902 update_engine[1569]: E20260120 01:44:48.436127 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:44:48.453902 update_engine[1569]: I20260120 01:44:48.436241 1569 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 01:44:48.453902 update_engine[1569]: I20260120 01:44:48.436319 1569 omaha_request_action.cc:617] Omaha request response: Jan 20 01:44:48.453902 update_engine[1569]: E20260120 01:44:48.436947 1569 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 20 01:44:48.453902 update_engine[1569]: I20260120 01:44:48.437682 1569 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 20 01:44:48.453902 update_engine[1569]: I20260120 01:44:48.437704 1569 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 01:44:48.453902 update_engine[1569]: I20260120 01:44:48.437719 1569 update_attempter.cc:306] Processing Done. Jan 20 01:44:48.453902 update_engine[1569]: E20260120 01:44:48.437808 1569 update_attempter.cc:619] Update failed. Jan 20 01:44:48.453902 update_engine[1569]: I20260120 01:44:48.437824 1569 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 20 01:44:48.453902 update_engine[1569]: I20260120 01:44:48.437834 1569 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 20 01:44:48.453902 update_engine[1569]: I20260120 01:44:48.437845 1569 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 20 01:44:48.453902 update_engine[1569]: I20260120 01:44:48.438062 1569 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 01:44:48.453902 update_engine[1569]: I20260120 01:44:48.438146 1569 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 01:44:48.453902 update_engine[1569]: I20260120 01:44:48.438159 1569 omaha_request_action.cc:272] Request: Jan 20 01:44:48.453902 update_engine[1569]: Jan 20 01:44:48.454774 locksmithd[1620]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 20 01:44:48.455448 update_engine[1569]: Jan 20 01:44:48.455448 update_engine[1569]: I20260120 01:44:48.438174 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:44:48.455448 update_engine[1569]: I20260120 01:44:48.438215 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:44:48.455448 update_engine[1569]: I20260120 01:44:48.438834 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 20 01:44:48.477138 kubelet[2650]: E0120 01:44:48.476925 2650 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:44:48.507477 update_engine[1569]: E20260120 01:44:48.507078 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:44:48.507477 update_engine[1569]: I20260120 01:44:48.507233 1569 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 01:44:48.507477 update_engine[1569]: I20260120 01:44:48.507304 1569 omaha_request_action.cc:617] Omaha request response: Jan 20 01:44:48.507477 update_engine[1569]: I20260120 01:44:48.507328 1569 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 01:44:48.507477 update_engine[1569]: I20260120 01:44:48.507337 1569 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 01:44:48.508020 update_engine[1569]: I20260120 01:44:48.507347 1569 update_attempter.cc:306] Processing Done. Jan 20 01:44:48.508020 update_engine[1569]: I20260120 01:44:48.507884 1569 update_attempter.cc:310] Error event sent. Jan 20 01:44:48.508020 update_engine[1569]: I20260120 01:44:48.507907 1569 update_check_scheduler.cc:74] Next update check in 43m12s Jan 20 01:44:48.514749 locksmithd[1620]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 20 01:44:48.529887 kubelet[2650]: E0120 01:44:48.529843 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:48.635343 kubelet[2650]: E0120 01:44:48.634109 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:48.747086 kubelet[2650]: E0120 01:44:48.736147 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:49.057214 kubelet[2650]: E0120 01:44:48.965153 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:49.128698 kubelet[2650]: E0120 01:44:49.128450 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:49.232499 kubelet[2650]: E0120 01:44:49.230805 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:49.349025 kubelet[2650]: E0120 01:44:49.331794 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:49.439490 kubelet[2650]: E0120 01:44:49.436153 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:49.616764 kubelet[2650]: E0120 01:44:49.590738 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:49.738890 kubelet[2650]: E0120 01:44:49.723031 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:49.838304 kubelet[2650]: E0120 01:44:49.830805 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:49.937710 kubelet[2650]: E0120 01:44:49.932476 2650 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"localhost\" not found" Jan 20 01:44:50.145232 kubelet[2650]: E0120 01:44:50.110193 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:50.247009 kubelet[2650]: E0120 01:44:50.237116 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:50.410967 kubelet[2650]: E0120 01:44:50.399439 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:50.506035 kubelet[2650]: E0120 01:44:50.501560 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:50.709355 kubelet[2650]: E0120 01:44:50.703911 2650 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:44:50.796989 kubelet[2650]: I0120 01:44:50.774739 2650 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 01:44:50.991851 kubelet[2650]: I0120 01:44:50.984170 2650 apiserver.go:52] "Watching apiserver" Jan 20 01:44:51.058557 kubelet[2650]: I0120 01:44:51.056814 2650 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 01:44:51.674169 kubelet[2650]: I0120 01:44:51.672925 2650 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 01:44:51.730834 kubelet[2650]: E0120 01:44:51.730776 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:44:51.947046 kubelet[2650]: E0120 01:44:51.945589 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:44:51.950846 kubelet[2650]: I0120 01:44:51.949663 2650 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 01:44:52.095176 kubelet[2650]: E0120 01:44:52.093806 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:44:59.690859 kubelet[2650]: I0120 01:44:59.689660 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=8.689566478 podStartE2EDuration="8.689566478s" podCreationTimestamp="2026-01-20 01:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:44:59.689089393 +0000 UTC m=+91.272353704" watchObservedRunningTime="2026-01-20 01:44:59.689566478 +0000 UTC m=+91.272830789" Jan 20 01:45:00.606222 kubelet[2650]: I0120 01:45:00.605946 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=9.60580627 podStartE2EDuration="9.60580627s" podCreationTimestamp="2026-01-20 01:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:45:00.605007637 +0000 UTC m=+92.188271958" watchObservedRunningTime="2026-01-20 01:45:00.60580627 +0000 UTC m=+92.189070662" Jan 20 01:45:07.791997 systemd[1]: 
cri-containerd-538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f.scope: Deactivated successfully. Jan 20 01:45:07.840759 systemd[1]: cri-containerd-538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f.scope: Consumed 4.762s CPU time, 26.1M memory peak. Jan 20 01:45:07.904085 containerd[1591]: time="2026-01-20T01:45:07.903937710Z" level=info msg="received container exit event container_id:\"538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f\" id:\"538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f\" pid:2871 exit_status:1 exited_at:{seconds:1768873507 nanos:883892278}" Jan 20 01:45:08.403202 kubelet[2650]: E0120 01:45:08.400918 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:45:08.846964 kubelet[2650]: I0120 01:45:08.839060 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=17.838910182 podStartE2EDuration="17.838910182s" podCreationTimestamp="2026-01-20 01:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:45:01.928811395 +0000 UTC m=+93.512075727" watchObservedRunningTime="2026-01-20 01:45:08.838910182 +0000 UTC m=+100.422174503" Jan 20 01:45:08.971068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f-rootfs.mount: Deactivated successfully. Jan 20 01:45:09.530306 kubelet[2650]: I0120 01:45:09.528629 2650 scope.go:117] "RemoveContainer" containerID="538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f" Jan 20 01:45:09.530306 kubelet[2650]: E0120 01:45:09.528750 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:45:09.898664 containerd[1591]: time="2026-01-20T01:45:09.894531997Z" level=info msg="CreateContainer within sandbox \"35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 20 01:45:10.222800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1323300819.mount: Deactivated successfully. 
Jan 20 01:45:10.268510 containerd[1591]: time="2026-01-20T01:45:10.267731825Z" level=info msg="Container b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:45:10.584474 containerd[1591]: time="2026-01-20T01:45:10.528140677Z" level=info msg="CreateContainer within sandbox \"35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8\"" Jan 20 01:45:10.585073 containerd[1591]: time="2026-01-20T01:45:10.585022198Z" level=info msg="StartContainer for \"b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8\"" Jan 20 01:45:10.622906 containerd[1591]: time="2026-01-20T01:45:10.622798852Z" level=info msg="connecting to shim b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8" address="unix:///run/containerd/s/bd2a1fdad2c63e6b97ea527fdb88e51d630cdf855c2be6bd3e0513bd6d003b8e" protocol=ttrpc version=3 Jan 20 01:45:11.283016 systemd[1]: Started cri-containerd-b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8.scope - libcontainer container b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8. Jan 20 01:45:11.332011 systemd[1]: Reload requested from client PID 2965 ('systemctl') (unit session-9.scope)... Jan 20 01:45:11.332032 systemd[1]: Reloading... Jan 20 01:45:12.921490 zram_generator::config[3014]: No configuration found. Jan 20 01:45:13.408332 containerd[1591]: time="2026-01-20T01:45:13.407993995Z" level=error msg="get state for b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8" error="context deadline exceeded" Jan 20 01:45:13.408332 containerd[1591]: time="2026-01-20T01:45:13.408057894Z" level=warning msg="unknown status" status=0 Jan 20 01:45:13.976416 kubelet[2650]: E0120 01:45:13.974122 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:45:15.101713 kubelet[2650]: E0120 01:45:15.101664 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:45:15.522978 containerd[1591]: time="2026-01-20T01:45:15.511307089Z" level=error msg="get state for b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8" error="context deadline exceeded" Jan 20 01:45:15.522978 containerd[1591]: time="2026-01-20T01:45:15.511353747Z" level=warning msg="unknown status" status=0 Jan 20 01:45:18.677576 containerd[1591]: time="2026-01-20T01:45:18.645941103Z" level=error msg="get state for b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8" error="context deadline exceeded" Jan 20 01:45:18.880523 containerd[1591]: time="2026-01-20T01:45:18.878484420Z" level=warning msg="unknown status" status=0 Jan 20 01:45:19.211681 systemd[1]: Reloading finished in 7869 ms. Jan 20 01:45:20.031877 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:45:20.099303 containerd[1591]: time="2026-01-20T01:45:20.096893265Z" level=error msg="get state for b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8" error="context canceled" Jan 20 01:45:20.099303 containerd[1591]: time="2026-01-20T01:45:20.097037854Z" level=warning msg="unknown status" status=0 Jan 20 01:45:20.130200 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 20 01:45:20.132513 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:45:20.150580 systemd[1]: kubelet.service: Consumed 15.735s CPU time, 137.2M memory peak. Jan 20 01:45:20.188856 containerd[1591]: time="2026-01-20T01:45:20.185106206Z" level=error msg="ttrpc: received message on inactive stream" stream=1 Jan 20 01:45:20.188856 containerd[1591]: time="2026-01-20T01:45:20.188780038Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:45:20.188856 containerd[1591]: time="2026-01-20T01:45:20.188808511Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Jan 20 01:45:20.188856 containerd[1591]: time="2026-01-20T01:45:20.188819852Z" level=error msg="ttrpc: received message on inactive stream" stream=7 Jan 20 01:45:20.188856 containerd[1591]: time="2026-01-20T01:45:20.188832295Z" level=error msg="ttrpc: received message on inactive stream" stream=9 Jan 20 01:45:20.191294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:45:26.981636 containerd[1591]: time="2026-01-20T01:45:26.979187651Z" level=error msg="failed to drain init process b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8 io" error="context deadline exceeded" runtime=io.containerd.runc.v2 Jan 20 01:45:26.981636 containerd[1591]: time="2026-01-20T01:45:26.981191754Z" level=warning msg="error copying stdout" runtime=io.containerd.runc.v2 Jan 20 01:45:26.981636 containerd[1591]: time="2026-01-20T01:45:26.981299124Z" level=warning msg="error copying stderr" runtime=io.containerd.runc.v2 Jan 20 01:45:27.007116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8-rootfs.mount: Deactivated successfully. Jan 20 01:45:27.054008 containerd[1591]: time="2026-01-20T01:45:27.053930276Z" level=error msg="StartContainer for \"b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8\" failed" error="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: context canceled" Jan 20 01:45:33.175986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:45:33.273178 (kubelet)[3059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:45:37.389937 kubelet[3059]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:45:37.389937 kubelet[3059]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:45:37.389937 kubelet[3059]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 01:45:37.415510 kubelet[3059]: I0120 01:45:37.388246 3059 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:45:37.560412 kubelet[3059]: I0120 01:45:37.559975 3059 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 01:45:37.560412 kubelet[3059]: I0120 01:45:37.560075 3059 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:45:37.578685 kubelet[3059]: I0120 01:45:37.575506 3059 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 01:45:37.597485 kubelet[3059]: I0120 01:45:37.596355 3059 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 01:45:37.660113 kubelet[3059]: I0120 01:45:37.649129 3059 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:45:37.755512 kubelet[3059]: I0120 01:45:37.754008 3059 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 01:45:38.008591 kubelet[3059]: I0120 01:45:38.001767 3059 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 01:45:38.008591 kubelet[3059]: I0120 01:45:38.002599 3059 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:45:38.024837 kubelet[3059]: I0120 01:45:38.002793 3059 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:45:38.024837 kubelet[3059]: I0120 01:45:38.017169 3059 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:45:38.024837 kubelet[3059]: I0120 01:45:38.017255 3059 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 01:45:38.037720 kubelet[3059]: I0120 01:45:38.037662 3059 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:45:38.090107 kubelet[3059]: I0120 
01:45:38.089581 3059 kubelet.go:480] "Attempting to sync node with API server" Jan 20 01:45:38.103560 kubelet[3059]: I0120 01:45:38.103513 3059 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:45:38.103995 kubelet[3059]: I0120 01:45:38.103968 3059 kubelet.go:386] "Adding apiserver pod source" Jan 20 01:45:38.104259 kubelet[3059]: I0120 01:45:38.104175 3059 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:45:38.465663 kubelet[3059]: I0120 01:45:38.443104 3059 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 01:45:38.539501 kubelet[3059]: I0120 01:45:38.530995 3059 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 01:45:38.609585 kubelet[3059]: I0120 01:45:38.590999 3059 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 01:45:38.609585 kubelet[3059]: I0120 01:45:38.591458 3059 server.go:1289] "Started kubelet" Jan 20 01:45:38.609991 kubelet[3059]: I0120 01:45:38.609934 3059 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:45:38.611787 kubelet[3059]: I0120 01:45:38.611761 3059 server.go:317] "Adding debug handlers to kubelet server" Jan 20 01:45:38.632472 kubelet[3059]: I0120 01:45:38.631306 3059 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:45:38.632472 kubelet[3059]: I0120 01:45:38.632097 3059 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:45:38.646727 kubelet[3059]: I0120 01:45:38.646682 3059 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:45:38.672328 kubelet[3059]: I0120 01:45:38.668927 3059 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:45:38.678067 kubelet[3059]: I0120 01:45:38.674295 3059 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 01:45:38.680897 kubelet[3059]: I0120 01:45:38.680528 3059 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 01:45:38.680897 kubelet[3059]: I0120 01:45:38.680857 3059 reconciler.go:26] "Reconciler: start to sync state" Jan 20 01:45:38.685894 sudo[3077]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 20 01:45:38.694740 kubelet[3059]: I0120 01:45:38.694697 3059 scope.go:117] "RemoveContainer" containerID="538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f" Jan 20 01:45:38.695172 kubelet[3059]: I0120 01:45:38.695146 3059 factory.go:223] Registration of the systemd container factory successfully Jan 20 01:45:38.719108 sudo[3077]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 20 01:45:38.780155 containerd[1591]: time="2026-01-20T01:45:38.737021970Z" level=info msg="RemoveContainer for \"538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f\"" Jan 20 01:45:38.808820 kubelet[3059]: I0120 01:45:38.752123 3059 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:45:38.833543 kubelet[3059]: I0120 01:45:38.809661 3059 factory.go:223] Registration of the containerd container factory 
successfully Jan 20 01:45:38.833543 kubelet[3059]: E0120 01:45:38.809693 3059 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:45:38.833771 containerd[1591]: time="2026-01-20T01:45:38.828730647Z" level=info msg="RemoveContainer for \"538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f\" returns successfully" Jan 20 01:45:39.150816 kubelet[3059]: I0120 01:45:39.135034 3059 apiserver.go:52] "Watching apiserver" Jan 20 01:45:39.296976 kubelet[3059]: I0120 01:45:39.296891 3059 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 01:45:39.333106 kubelet[3059]: I0120 01:45:39.333061 3059 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 01:45:39.336982 kubelet[3059]: I0120 01:45:39.336958 3059 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 01:45:39.337148 kubelet[3059]: I0120 01:45:39.337129 3059 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 01:45:39.338161 kubelet[3059]: I0120 01:45:39.338144 3059 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 01:45:39.338536 kubelet[3059]: E0120 01:45:39.338493 3059 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:45:39.441992 kubelet[3059]: E0120 01:45:39.441864 3059 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:45:39.647280 kubelet[3059]: E0120 01:45:39.643876 3059 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:45:40.049806 kubelet[3059]: E0120 01:45:40.049623 3059 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:45:40.876905 kubelet[3059]: E0120 01:45:40.874813 3059 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:45:42.487880 kubelet[3059]: E0120 01:45:42.487517 3059 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:45:43.586013 kubelet[3059]: E0120 01:45:43.584810 3059 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice/cri-containerd-b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8.scope: task b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8 not found Jan 20 01:45:45.728296 kubelet[3059]: E0120 01:45:45.727985 3059 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:45:47.304964 sudo[3077]: pam_unix(sudo:session): session closed for user root Jan 20 01:45:47.826631 kubelet[3059]: E0120 01:45:47.826579 3059 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice/cri-containerd-b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8.scope: task b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8 not found Jan 20 01:45:51.777992 kubelet[3059]: E0120 01:45:50.874492 3059 kubelet.go:2460] "Skipping pod synchronization" err="container runtime 
status check may not have completed yet" Jan 20 01:45:53.538485 kubelet[3059]: W0120 01:45:53.494911 3059 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice/cri-containerd-b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8.scope WatchSource:0}: task b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8 not found Jan 20 01:45:53.614095 kubelet[3059]: I0120 01:45:53.614038 3059 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:45:53.619588 kubelet[3059]: I0120 01:45:53.614640 3059 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:45:53.619588 kubelet[3059]: I0120 01:45:53.614689 3059 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:45:53.620110 kubelet[3059]: I0120 01:45:53.620080 3059 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 01:45:53.620298 kubelet[3059]: I0120 01:45:53.620257 3059 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 01:45:53.620494 kubelet[3059]: I0120 01:45:53.620479 3059 policy_none.go:49] "None policy: Start" Jan 20 01:45:53.620619 kubelet[3059]: I0120 01:45:53.620600 3059 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 01:45:53.620724 kubelet[3059]: I0120 01:45:53.620706 3059 state_mem.go:35] "Initializing new in-memory state store" Jan 20 01:45:53.621016 kubelet[3059]: I0120 01:45:53.620998 3059 state_mem.go:75] "Updated machine memory state" Jan 20 01:45:53.727859 kubelet[3059]: E0120 01:45:53.727814 3059 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 01:45:53.735084 kubelet[3059]: I0120 01:45:53.735050 3059 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:45:53.753476 kubelet[3059]: I0120 01:45:53.744919 3059 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:45:53.753476 kubelet[3059]: I0120 01:45:53.745851 3059 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:45:53.821556 kubelet[3059]: E0120 01:45:53.811112 3059 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:45:54.176631 kubelet[3059]: I0120 01:45:54.174760 3059 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:45:54.383296 kubelet[3059]: I0120 01:45:54.380992 3059 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 01:45:54.383296 kubelet[3059]: I0120 01:45:54.381222 3059 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 01:45:56.785445 kubelet[3059]: I0120 01:45:56.783708 3059 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 01:45:56.810898 kubelet[3059]: I0120 01:45:56.793254 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4df06a43dd179d6da1100174d6963615-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4df06a43dd179d6da1100174d6963615\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:45:56.810898 kubelet[3059]: I0120 01:45:56.793454 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4df06a43dd179d6da1100174d6963615-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4df06a43dd179d6da1100174d6963615\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:45:56.810898 kubelet[3059]: I0120 01:45:56.793494 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4df06a43dd179d6da1100174d6963615-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4df06a43dd179d6da1100174d6963615\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:45:56.890505 kubelet[3059]: I0120 01:45:56.885604 3059 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 01:45:56.896957 kubelet[3059]: I0120 01:45:56.896910 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:45:56.904597 kubelet[3059]: I0120 01:45:56.899879 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:45:56.904597 kubelet[3059]: I0120 01:45:56.903670 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:45:56.904597 kubelet[3059]: I0120 01:45:56.903967 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 20 01:45:56.917567 kubelet[3059]: I0120 01:45:56.904141 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:45:56.917567 kubelet[3059]: I0120 01:45:56.914100 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 20 01:45:57.066606 kubelet[3059]: E0120 01:45:57.065963 3059 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 01:45:57.066832 kubelet[3059]: E0120 01:45:57.066804 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:45:57.094494 kubelet[3059]: E0120 01:45:57.094093 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:45:57.111573 kubelet[3059]: I0120 01:45:57.107537 3059 scope.go:117] "RemoveContainer" containerID="b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8" Jan 20 01:45:57.111573 kubelet[3059]: E0120 01:45:57.107722 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:45:57.136745 containerd[1591]: time="2026-01-20T01:45:57.128692115Z" level=info msg="CreateContainer within sandbox \"35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Jan 20 01:45:57.329536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1535489360.mount: Deactivated successfully. Jan 20 01:45:57.523541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1480191287.mount: Deactivated successfully. 
Jan 20 01:45:57.537584 containerd[1591]: time="2026-01-20T01:45:57.537532474Z" level=info msg="Container 7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:45:57.555353 kubelet[3059]: E0120 01:45:57.541813 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:45:57.559937 kubelet[3059]: E0120 01:45:57.557467 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:45:57.961521 containerd[1591]: time="2026-01-20T01:45:57.957141486Z" level=info msg="CreateContainer within sandbox \"35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094\"" Jan 20 01:45:57.987644 containerd[1591]: time="2026-01-20T01:45:57.974488806Z" level=info msg="StartContainer for \"7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094\"" Jan 20 01:45:57.996707 containerd[1591]: time="2026-01-20T01:45:57.996660688Z" level=info msg="connecting to shim 7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094" address="unix:///run/containerd/s/bd2a1fdad2c63e6b97ea527fdb88e51d630cdf855c2be6bd3e0513bd6d003b8e" protocol=ttrpc version=3 Jan 20 01:45:58.673825 systemd[1]: Started cri-containerd-7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094.scope - libcontainer container 7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094. Jan 20 01:45:58.720922 kubelet[3059]: E0120 01:45:58.720810 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:45:58.736684 kubelet[3059]: E0120 01:45:58.735279 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:00.076630 containerd[1591]: time="2026-01-20T01:46:00.061569284Z" level=info msg="StartContainer for \"7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094\" returns successfully" Jan 20 01:46:00.859995 kubelet[3059]: E0120 01:46:00.859899 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:01.875297 kubelet[3059]: E0120 01:46:01.874632 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:07.244693 kubelet[3059]: E0120 01:46:07.235640 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:07.718496 kubelet[3059]: E0120 01:46:07.715983 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:08.065066 kubelet[3059]: E0120 01:46:08.064117 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:08.663463 kubelet[3059]: E0120 01:46:08.662465 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:08.686330 kubelet[3059]: E0120 01:46:08.672111 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:10.411066 sudo[1827]: pam_unix(sudo:session): session closed for user root Jan 20 01:46:10.435074 sshd[1826]: Connection closed by 10.0.0.1 port 54202 Jan 20 01:46:10.467651 sshd-session[1823]: pam_unix(sshd:session): session closed for user core Jan 20 01:46:10.507668 systemd[1]: sshd@8-10.0.0.51:22-10.0.0.1:54202.service: Deactivated successfully. Jan 20 01:46:10.557691 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 01:46:10.562671 systemd[1]: session-9.scope: Consumed 26.704s CPU time, 264.8M memory peak. Jan 20 01:46:10.600913 systemd-logind[1565]: Session 9 logged out. Waiting for processes to exit. Jan 20 01:46:10.624981 systemd-logind[1565]: Removed session 9. Jan 20 01:46:17.240765 kubelet[3059]: E0120 01:46:17.238749 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:28.830446 kubelet[3059]: E0120 01:46:28.818445 3059 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.487s" Jan 20 01:46:35.379936 kernel: sched: DL replenish lagged too much Jan 20 01:46:39.161918 systemd[1]: cri-containerd-13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa.scope: Deactivated successfully. Jan 20 01:46:39.181500 systemd[1]: cri-containerd-13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa.scope: Consumed 7.555s CPU time, 20.1M memory peak. Jan 20 01:46:39.301242 kubelet[3059]: E0120 01:46:39.301193 3059 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.483s" Jan 20 01:46:39.316846 containerd[1591]: time="2026-01-20T01:46:39.286671000Z" level=info msg="received container exit event container_id:\"13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa\" id:\"13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa\" pid:2878 exit_status:1 exited_at:{seconds:1768873599 nanos:277514782}" Jan 20 01:46:39.957026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa-rootfs.mount: Deactivated successfully. 
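
The "Housekeeping took longer than expected" errors above (3.487s and then 10.483s against an expected 1s, bracketing the kernel's "sched: DL replenish lagged too much" stall) are kubelet timing each housekeeping pass of its sync loop and warning when a pass overruns the interval. A minimal sketch of that measurement, with the sleep standing in for one slow pass:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const expected = time.Second
        start := time.Now()
        time.Sleep(1200 * time.Millisecond) // stand-in for one housekeeping pass
        if actual := time.Since(start); actual > expected {
            fmt.Printf("Housekeeping took longer than expected expected=%s actual=%s\n",
                expected, actual.Round(time.Millisecond))
        }
    }
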
Jan 20 01:46:40.105129 kubelet[3059]: E0120 01:46:40.093794 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:42.990435 kubelet[3059]: I0120 01:46:42.370713 3059 scope.go:117] "RemoveContainer" containerID="13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa" Jan 20 01:46:42.990435 kubelet[3059]: E0120 01:46:42.699537 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:52.106432 containerd[1591]: time="2026-01-20T01:46:52.105886124Z" level=info msg="CreateContainer within sandbox \"ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 20 01:46:52.517731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount441519983.mount: Deactivated successfully. Jan 20 01:46:52.573694 containerd[1591]: time="2026-01-20T01:46:52.571562518Z" level=info msg="Container 3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:46:52.710686 containerd[1591]: time="2026-01-20T01:46:52.710344743Z" level=info msg="CreateContainer within sandbox \"ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019\"" Jan 20 01:46:52.722878 containerd[1591]: time="2026-01-20T01:46:52.720321173Z" level=info msg="StartContainer for \"3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019\"" Jan 20 01:46:52.774206 containerd[1591]: time="2026-01-20T01:46:52.770508957Z" level=info msg="connecting to shim 3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019" address="unix:///run/containerd/s/d1f127681a9c4311b456c6aab9e8ce8d82f6bff97094d53185fe0bdf6b34c086" protocol=ttrpc version=3 Jan 20 01:46:53.484204 systemd[1]: Started cri-containerd-3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019.scope - libcontainer container 3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019. Jan 20 01:46:56.269125 kubelet[3059]: E0120 01:46:56.268910 3059 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice/cri-containerd-b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8.scope: task b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8 not found Jan 20 01:47:10.277316 containerd[1591]: time="2026-01-20T01:47:09.998746694Z" level=error msg="ttrpc: received message on inactive stream" stream=33 Jan 20 01:47:16.693145 containerd[1591]: time="2026-01-20T01:47:10.283448258Z" level=error msg="get state for ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0" error="context deadline exceeded" Jan 20 01:47:16.693145 containerd[1591]: time="2026-01-20T01:47:10.283831851Z" level=warning msg="unknown status" status=0 Jan 20 01:47:16.817515 systemd[1]: cri-containerd-7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094.scope: Deactivated successfully. Jan 20 01:47:16.830951 systemd[1]: cri-containerd-7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094.scope: Consumed 5.990s CPU time, 24.3M memory peak. 
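
The "connecting to shim ... address=unix:///run/containerd/s/..." lines show that each shim is reached over a plain unix-domain socket, with ttrpc spoken on top of the connection. A stdlib-only sketch of just the dial step, using the socket path from the log with the unix:// scheme prefix dropped (the real client immediately layers ttrpc over this; dialing alone proves only that the shim is listening):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "/run/containerd/s/d1f127681a9c4311b456c6aab9e8ce8d82f6bff97094d53185fe0bdf6b34c086"
        conn, err := net.DialTimeout("unix", addr, 2*time.Second)
        if err != nil {
            fmt.Println("connecting to shim failed:", err)
            return
        }
        defer conn.Close()
        fmt.Println("connected to shim at", addr)
    }
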
Jan 20 01:47:16.964718 kubelet[3059]: E0120 01:47:16.964525 3059 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.283s" Jan 20 01:47:17.701617 containerd[1591]: time="2026-01-20T01:47:17.700940168Z" level=info msg="received container exit event container_id:\"7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094\" id:\"7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094\" pid:3126 exit_status:1 exited_at:{seconds:1768873637 nanos:22893796}" Jan 20 01:47:18.039203 containerd[1591]: time="2026-01-20T01:47:18.039043411Z" level=info msg="StartContainer for \"3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019\" returns successfully" Jan 20 01:47:18.414238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094-rootfs.mount: Deactivated successfully. Jan 20 01:47:19.045776 kubelet[3059]: E0120 01:47:19.040198 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:19.206167 kubelet[3059]: I0120 01:47:19.206035 3059 scope.go:117] "RemoveContainer" containerID="b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8" Jan 20 01:47:19.407608 containerd[1591]: time="2026-01-20T01:47:19.388961562Z" level=info msg="RemoveContainer for \"b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8\"" Jan 20 01:47:19.421954 kubelet[3059]: I0120 01:47:19.393816 3059 scope.go:117] "RemoveContainer" containerID="7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094" Jan 20 01:47:19.421954 kubelet[3059]: E0120 01:47:19.394064 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:19.421954 kubelet[3059]: E0120 01:47:19.394501 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(66e26b992bcd7ea6fb75e339cf7a3f7d)\"" pod="kube-system/kube-controller-manager-localhost" podUID="66e26b992bcd7ea6fb75e339cf7a3f7d" Jan 20 01:47:19.486294 containerd[1591]: time="2026-01-20T01:47:19.484929538Z" level=info msg="RemoveContainer for \"b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8\" returns successfully" Jan 20 01:47:20.487547 kubelet[3059]: E0120 01:47:20.475931 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:21.438078 kubelet[3059]: E0120 01:47:21.434771 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:27.105937 kubelet[3059]: E0120 01:47:27.097267 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:27.157659 kubelet[3059]: I0120 01:47:27.156887 3059 scope.go:117] "RemoveContainer" containerID="7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094" Jan 20 01:47:27.167463 kubelet[3059]: E0120 01:47:27.164705 3059 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:27.218275 containerd[1591]: time="2026-01-20T01:47:27.216613584Z" level=info msg="CreateContainer within sandbox \"35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}" Jan 20 01:47:27.444281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983095131.mount: Deactivated successfully. Jan 20 01:47:27.461061 containerd[1591]: time="2026-01-20T01:47:27.460992156Z" level=info msg="Container 8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:47:27.526516 containerd[1591]: time="2026-01-20T01:47:27.524765785Z" level=info msg="CreateContainer within sandbox \"35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867\"" Jan 20 01:47:27.535823 containerd[1591]: time="2026-01-20T01:47:27.532916905Z" level=info msg="StartContainer for \"8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867\"" Jan 20 01:47:27.552253 containerd[1591]: time="2026-01-20T01:47:27.548721260Z" level=info msg="connecting to shim 8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867" address="unix:///run/containerd/s/bd2a1fdad2c63e6b97ea527fdb88e51d630cdf855c2be6bd3e0513bd6d003b8e" protocol=ttrpc version=3 Jan 20 01:47:27.855775 systemd[1]: Started cri-containerd-8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867.scope - libcontainer container 8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867. 
Jan 20 01:47:28.531473 containerd[1591]: time="2026-01-20T01:47:28.522704113Z" level=info msg="StartContainer for \"8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867\" returns successfully" Jan 20 01:47:29.466506 kubelet[3059]: E0120 01:47:29.466297 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:39.836460 kubelet[3059]: E0120 01:47:39.819797 3059 kubelet_node_status.go:460] "Node not becoming ready in time after startup" Jan 20 01:47:39.927644 kubelet[3059]: E0120 01:47:39.927550 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:39.928608 kubelet[3059]: E0120 01:47:39.928508 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:39.930757 kubelet[3059]: E0120 01:47:39.930709 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:47:40.039211 kubelet[3059]: E0120 01:47:40.037793 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:40.993019 kubelet[3059]: E0120 01:47:40.984841 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:44.965595 kubelet[3059]: E0120 01:47:44.965473 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:48:04.697181 systemd[1]: cri-containerd-3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019.scope: Deactivated successfully. Jan 20 01:48:04.887502 systemd[1]: cri-containerd-3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019.scope: Consumed 3.694s CPU time, 18.3M memory peak. 
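
The recurring "Container runtime network not ready ... cni plugin not initialized" status above persists until a CNI network config appears, which on this node happens only after the cilium pods further down are deployed. A sketch of the readiness check, assuming the conventional /etc/cni/net.d config directory used by containerd's defaults:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/cni/net.d" // assumed conventional location
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Println("cannot read CNI confdir:", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("found CNI config:", e.Name())
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI config yet; runtime network stays NotReady")
        }
    }
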
Jan 20 01:48:04.980809 containerd[1591]: time="2026-01-20T01:48:04.980700412Z" level=info msg="received container exit event container_id:\"3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019\" id:\"3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019\" pid:3207 exit_status:1 exited_at:{seconds:1768873684 nanos:965082972}" Jan 20 01:48:05.014139 kubelet[3059]: E0120 01:48:05.012856 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:48:05.079825 kubelet[3059]: E0120 01:48:05.079476 3059 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="19.714s" Jan 20 01:48:05.102314 kubelet[3059]: E0120 01:48:05.102112 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:06.265320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019-rootfs.mount: Deactivated successfully. Jan 20 01:48:06.714179 kubelet[3059]: E0120 01:48:06.712213 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:07.333224 kubelet[3059]: I0120 01:48:07.329228 3059 scope.go:117] "RemoveContainer" containerID="13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa" Jan 20 01:48:07.356170 kubelet[3059]: I0120 01:48:07.336788 3059 scope.go:117] "RemoveContainer" containerID="3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019" Jan 20 01:48:07.356170 kubelet[3059]: E0120 01:48:07.337133 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:07.379325 kubelet[3059]: E0120 01:48:07.373606 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(6e6cfcfb327385445a9bb0d2bc2fd5d4)\"" pod="kube-system/kube-scheduler-localhost" podUID="6e6cfcfb327385445a9bb0d2bc2fd5d4" Jan 20 01:48:07.649716 containerd[1591]: time="2026-01-20T01:48:07.631543420Z" level=info msg="RemoveContainer for \"13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa\"" Jan 20 01:48:07.757019 containerd[1591]: time="2026-01-20T01:48:07.756246612Z" level=info msg="RemoveContainer for \"13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa\" returns successfully" Jan 20 01:48:10.026650 kubelet[3059]: E0120 01:48:10.026308 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:48:15.052329 kubelet[3059]: E0120 01:48:15.052167 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:48:17.132750 kubelet[3059]: I0120 01:48:17.117492 3059 scope.go:117] "RemoveContainer" containerID="3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019" Jan 20 01:48:17.132750 
kubelet[3059]: E0120 01:48:17.117614 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:17.229618 containerd[1591]: time="2026-01-20T01:48:17.229315632Z" level=info msg="CreateContainer within sandbox \"ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Jan 20 01:48:17.438154 containerd[1591]: time="2026-01-20T01:48:17.434027043Z" level=info msg="Container c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:48:17.486316 containerd[1591]: time="2026-01-20T01:48:17.486184078Z" level=info msg="CreateContainer within sandbox \"ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9\"" Jan 20 01:48:17.504037 containerd[1591]: time="2026-01-20T01:48:17.487932947Z" level=info msg="StartContainer for \"c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9\"" Jan 20 01:48:17.504037 containerd[1591]: time="2026-01-20T01:48:17.503586210Z" level=info msg="connecting to shim c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9" address="unix:///run/containerd/s/d1f127681a9c4311b456c6aab9e8ce8d82f6bff97094d53185fe0bdf6b34c086" protocol=ttrpc version=3 Jan 20 01:48:17.764180 systemd[1]: Started cri-containerd-c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9.scope - libcontainer container c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9. Jan 20 01:48:18.415780 containerd[1591]: time="2026-01-20T01:48:18.415693727Z" level=info msg="StartContainer for \"c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9\" returns successfully" Jan 20 01:48:19.075284 kubelet[3059]: E0120 01:48:19.074605 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:20.065709 kubelet[3059]: E0120 01:48:20.065539 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:48:20.086183 kubelet[3059]: E0120 01:48:20.085855 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:21.100192 kubelet[3059]: E0120 01:48:21.089509 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:25.098270 kubelet[3059]: E0120 01:48:25.097639 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:48:29.835445 kubelet[3059]: E0120 01:48:29.833995 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:30.108840 kubelet[3059]: E0120 01:48:30.108435 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:48:33.724058 kubelet[3059]: E0120 01:48:33.722215 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:34.889221 kubelet[3059]: E0120 01:48:34.886862 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:35.256350 kubelet[3059]: E0120 01:48:35.248273 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:48:35.463455 kubelet[3059]: I0120 01:48:35.463326 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1707fe08-91f0-4065-a008-ede32ebd2110-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-snnlg\" (UID: \"1707fe08-91f0-4065-a008-ede32ebd2110\") " pod="kube-system/cilium-operator-6c4d7847fc-snnlg" Jan 20 01:48:35.493534 kubelet[3059]: I0120 01:48:35.493000 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9blht\" (UniqueName: \"kubernetes.io/projected/1707fe08-91f0-4065-a008-ede32ebd2110-kube-api-access-9blht\") pod \"cilium-operator-6c4d7847fc-snnlg\" (UID: \"1707fe08-91f0-4065-a008-ede32ebd2110\") " pod="kube-system/cilium-operator-6c4d7847fc-snnlg" Jan 20 01:48:35.662190 kubelet[3059]: I0120 01:48:35.651296 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-hostproc\") pod \"cilium-fhzk2\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " pod="kube-system/cilium-fhzk2" Jan 20 01:48:35.662190 kubelet[3059]: I0120 01:48:35.651613 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-lib-modules\") pod \"cilium-fhzk2\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " pod="kube-system/cilium-fhzk2" Jan 20 01:48:35.662190 kubelet[3059]: I0120 01:48:35.651740 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bec2d1f6-0191-44c5-91d0-e947fbda26bc-clustermesh-secrets\") pod \"cilium-fhzk2\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " pod="kube-system/cilium-fhzk2" Jan 20 01:48:35.662190 kubelet[3059]: I0120 01:48:35.651772 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmwxz\" (UniqueName: \"kubernetes.io/projected/006851cc-2f3a-45f7-b095-cc8c9de3c8cd-kube-api-access-nmwxz\") pod \"kube-proxy-dkhnn\" (UID: \"006851cc-2f3a-45f7-b095-cc8c9de3c8cd\") " pod="kube-system/kube-proxy-dkhnn" Jan 20 01:48:35.662190 kubelet[3059]: I0120 01:48:35.651803 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-xtables-lock\") pod \"cilium-fhzk2\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " pod="kube-system/cilium-fhzk2" Jan 
20 01:48:35.662190 kubelet[3059]: I0120 01:48:35.651830 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-host-proc-sys-net\") pod \"cilium-fhzk2\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " pod="kube-system/cilium-fhzk2" Jan 20 01:48:35.662692 kubelet[3059]: I0120 01:48:35.651852 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-host-proc-sys-kernel\") pod \"cilium-fhzk2\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " pod="kube-system/cilium-fhzk2" Jan 20 01:48:35.670105 systemd[1]: Created slice kubepods-besteffort-pod1707fe08_91f0_4065_a008_ede32ebd2110.slice - libcontainer container kubepods-besteffort-pod1707fe08_91f0_4065_a008_ede32ebd2110.slice. Jan 20 01:48:35.701768 kubelet[3059]: I0120 01:48:35.679702 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/006851cc-2f3a-45f7-b095-cc8c9de3c8cd-xtables-lock\") pod \"kube-proxy-dkhnn\" (UID: \"006851cc-2f3a-45f7-b095-cc8c9de3c8cd\") " pod="kube-system/kube-proxy-dkhnn" Jan 20 01:48:35.701768 kubelet[3059]: I0120 01:48:35.679782 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/006851cc-2f3a-45f7-b095-cc8c9de3c8cd-lib-modules\") pod \"kube-proxy-dkhnn\" (UID: \"006851cc-2f3a-45f7-b095-cc8c9de3c8cd\") " pod="kube-system/kube-proxy-dkhnn" Jan 20 01:48:35.701768 kubelet[3059]: I0120 01:48:35.679807 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-cilium-run\") pod \"cilium-fhzk2\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " pod="kube-system/cilium-fhzk2" Jan 20 01:48:35.734012 kubelet[3059]: I0120 01:48:35.728436 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-cilium-cgroup\") pod \"cilium-fhzk2\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " pod="kube-system/cilium-fhzk2" Jan 20 01:48:35.734012 kubelet[3059]: I0120 01:48:35.728509 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bec2d1f6-0191-44c5-91d0-e947fbda26bc-hubble-tls\") pod \"cilium-fhzk2\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " pod="kube-system/cilium-fhzk2" Jan 20 01:48:35.734012 kubelet[3059]: I0120 01:48:35.728545 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-cni-path\") pod \"cilium-fhzk2\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " pod="kube-system/cilium-fhzk2" Jan 20 01:48:35.734012 kubelet[3059]: I0120 01:48:35.728572 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bec2d1f6-0191-44c5-91d0-e947fbda26bc-cilium-config-path\") pod \"cilium-fhzk2\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " 
pod="kube-system/cilium-fhzk2" Jan 20 01:48:35.734012 kubelet[3059]: I0120 01:48:35.728595 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-etc-cni-netd\") pod \"cilium-fhzk2\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " pod="kube-system/cilium-fhzk2" Jan 20 01:48:35.734012 kubelet[3059]: I0120 01:48:35.728619 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjgq2\" (UniqueName: \"kubernetes.io/projected/bec2d1f6-0191-44c5-91d0-e947fbda26bc-kube-api-access-cjgq2\") pod \"cilium-fhzk2\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " pod="kube-system/cilium-fhzk2" Jan 20 01:48:35.734620 kubelet[3059]: I0120 01:48:35.728650 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/006851cc-2f3a-45f7-b095-cc8c9de3c8cd-kube-proxy\") pod \"kube-proxy-dkhnn\" (UID: \"006851cc-2f3a-45f7-b095-cc8c9de3c8cd\") " pod="kube-system/kube-proxy-dkhnn" Jan 20 01:48:35.734620 kubelet[3059]: I0120 01:48:35.728687 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-bpf-maps\") pod \"cilium-fhzk2\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " pod="kube-system/cilium-fhzk2" Jan 20 01:48:35.954086 systemd[1]: Created slice kubepods-besteffort-pod006851cc_2f3a_45f7_b095_cc8c9de3c8cd.slice - libcontainer container kubepods-besteffort-pod006851cc_2f3a_45f7_b095_cc8c9de3c8cd.slice. Jan 20 01:48:36.093997 kubelet[3059]: I0120 01:48:36.088286 3059 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 01:48:36.105296 containerd[1591]: time="2026-01-20T01:48:36.089267535Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 01:48:36.106083 kubelet[3059]: I0120 01:48:36.098656 3059 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 01:48:36.169764 systemd[1]: Created slice kubepods-burstable-podbec2d1f6_0191_44c5_91d0_e947fbda26bc.slice - libcontainer container kubepods-burstable-podbec2d1f6_0191_44c5_91d0_e947fbda26bc.slice. 
Jan 20 01:48:36.457560 kubelet[3059]: E0120 01:48:36.428734 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:36.457765 containerd[1591]: time="2026-01-20T01:48:36.438591521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-snnlg,Uid:1707fe08-91f0-4065-a008-ede32ebd2110,Namespace:kube-system,Attempt:0,}" Jan 20 01:48:36.520651 kubelet[3059]: E0120 01:48:36.520473 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:36.530912 containerd[1591]: time="2026-01-20T01:48:36.529798440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fhzk2,Uid:bec2d1f6-0191-44c5-91d0-e947fbda26bc,Namespace:kube-system,Attempt:0,}" Jan 20 01:48:36.788626 kubelet[3059]: E0120 01:48:36.787102 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:36.788820 containerd[1591]: time="2026-01-20T01:48:36.788572678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dkhnn,Uid:006851cc-2f3a-45f7-b095-cc8c9de3c8cd,Namespace:kube-system,Attempt:0,}" Jan 20 01:48:37.231605 containerd[1591]: time="2026-01-20T01:48:37.212984393Z" level=info msg="connecting to shim 20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b" address="unix:///run/containerd/s/69e942a488c4e1743ee9d7bfa64c0f9e5fbc21403b02ac8c4d75389832099660" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:48:37.237509 containerd[1591]: time="2026-01-20T01:48:37.237344529Z" level=info msg="connecting to shim c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6" address="unix:///run/containerd/s/3c10e525120a26ac559e989735cca01e85bfc4788b9bf36e0f8ffa6c4442d577" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:48:39.016508 containerd[1591]: time="2026-01-20T01:48:39.015965256Z" level=info msg="connecting to shim 97a22c94ea24a95a0eb0c9ec8ab37a1049228a1a481f48fbbcc5d34c6ff20d39" address="unix:///run/containerd/s/a6bada1a8249c87b71b42243f3cf51470126a3abf53f45f131ff248a0a36b05c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:48:39.253750 systemd[1]: Started cri-containerd-c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6.scope - libcontainer container c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6. Jan 20 01:48:39.572074 systemd[1]: Started cri-containerd-20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b.scope - libcontainer container 20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b. Jan 20 01:48:40.076896 systemd[1]: Started cri-containerd-97a22c94ea24a95a0eb0c9ec8ab37a1049228a1a481f48fbbcc5d34c6ff20d39.scope - libcontainer container 97a22c94ea24a95a0eb0c9ec8ab37a1049228a1a481f48fbbcc5d34c6ff20d39. 
Jan 20 01:48:40.169483 containerd[1591]: time="2026-01-20T01:48:40.167290968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fhzk2,Uid:bec2d1f6-0191-44c5-91d0-e947fbda26bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\"" Jan 20 01:48:40.178653 kubelet[3059]: E0120 01:48:40.178583 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:40.273573 containerd[1591]: time="2026-01-20T01:48:40.269037980Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 20 01:48:40.274480 kubelet[3059]: E0120 01:48:40.274219 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:48:40.655961 containerd[1591]: time="2026-01-20T01:48:40.653589107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-snnlg,Uid:1707fe08-91f0-4065-a008-ede32ebd2110,Namespace:kube-system,Attempt:0,} returns sandbox id \"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\"" Jan 20 01:48:40.684883 kubelet[3059]: E0120 01:48:40.669287 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:40.989572 containerd[1591]: time="2026-01-20T01:48:40.979296487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dkhnn,Uid:006851cc-2f3a-45f7-b095-cc8c9de3c8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"97a22c94ea24a95a0eb0c9ec8ab37a1049228a1a481f48fbbcc5d34c6ff20d39\"" Jan 20 01:48:41.084349 kubelet[3059]: E0120 01:48:41.080690 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:41.453348 containerd[1591]: time="2026-01-20T01:48:41.446590466Z" level=info msg="CreateContainer within sandbox \"97a22c94ea24a95a0eb0c9ec8ab37a1049228a1a481f48fbbcc5d34c6ff20d39\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 01:48:41.664862 containerd[1591]: time="2026-01-20T01:48:41.663159026Z" level=info msg="Container c14ebd690017f2beb71e87ed33ec7f52613066479272544ccb71e2edfdd195f2: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:48:41.838852 containerd[1591]: time="2026-01-20T01:48:41.837335047Z" level=info msg="CreateContainer within sandbox \"97a22c94ea24a95a0eb0c9ec8ab37a1049228a1a481f48fbbcc5d34c6ff20d39\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c14ebd690017f2beb71e87ed33ec7f52613066479272544ccb71e2edfdd195f2\"" Jan 20 01:48:41.856847 containerd[1591]: time="2026-01-20T01:48:41.854524665Z" level=info msg="StartContainer for \"c14ebd690017f2beb71e87ed33ec7f52613066479272544ccb71e2edfdd195f2\"" Jan 20 01:48:41.873139 containerd[1591]: time="2026-01-20T01:48:41.871597676Z" level=info msg="connecting to shim c14ebd690017f2beb71e87ed33ec7f52613066479272544ccb71e2edfdd195f2" address="unix:///run/containerd/s/a6bada1a8249c87b71b42243f3cf51470126a3abf53f45f131ff248a0a36b05c" protocol=ttrpc version=3 Jan 20 01:48:43.169520 systemd[1]: Started 
cri-containerd-c14ebd690017f2beb71e87ed33ec7f52613066479272544ccb71e2edfdd195f2.scope - libcontainer container c14ebd690017f2beb71e87ed33ec7f52613066479272544ccb71e2edfdd195f2. Jan 20 01:48:45.335840 kubelet[3059]: E0120 01:48:45.335583 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:48:46.112906 containerd[1591]: time="2026-01-20T01:48:46.109943092Z" level=info msg="StartContainer for \"c14ebd690017f2beb71e87ed33ec7f52613066479272544ccb71e2edfdd195f2\" returns successfully" Jan 20 01:48:47.718343 kubelet[3059]: E0120 01:48:47.712062 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:48.083863 containerd[1591]: time="2026-01-20T01:48:48.077954610Z" level=warning msg="container event discarded" container=1353c6422c3b0a42c483a9242215f54dde2637b141c94eea4d7e75ce3f460497 type=CONTAINER_CREATED_EVENT Jan 20 01:48:48.083863 containerd[1591]: time="2026-01-20T01:48:48.078118904Z" level=warning msg="container event discarded" container=1353c6422c3b0a42c483a9242215f54dde2637b141c94eea4d7e75ce3f460497 type=CONTAINER_STARTED_EVENT Jan 20 01:48:48.330309 containerd[1591]: time="2026-01-20T01:48:48.233688965Z" level=warning msg="container event discarded" container=35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd type=CONTAINER_CREATED_EVENT Jan 20 01:48:48.330309 containerd[1591]: time="2026-01-20T01:48:48.234483891Z" level=warning msg="container event discarded" container=35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd type=CONTAINER_STARTED_EVENT Jan 20 01:48:48.767244 kubelet[3059]: E0120 01:48:48.734165 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:49.697352 containerd[1591]: time="2026-01-20T01:48:49.696976268Z" level=warning msg="container event discarded" container=ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0 type=CONTAINER_CREATED_EVENT Jan 20 01:48:49.697352 containerd[1591]: time="2026-01-20T01:48:49.697066365Z" level=warning msg="container event discarded" container=ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0 type=CONTAINER_STARTED_EVENT Jan 20 01:48:50.494340 kubelet[3059]: E0120 01:48:50.481492 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:48:50.935651 containerd[1591]: time="2026-01-20T01:48:50.926055449Z" level=warning msg="container event discarded" container=d9b1c34afdf1f404bd0422af676d7a992b79da31633a2ba86daa90cadbd11b42 type=CONTAINER_CREATED_EVENT Jan 20 01:48:51.135265 containerd[1591]: time="2026-01-20T01:48:51.090852990Z" level=warning msg="container event discarded" container=538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f type=CONTAINER_CREATED_EVENT Jan 20 01:48:51.123587 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... 
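The repeated "Container runtime network not ready ... cni plugin not initialized" entries are kubelet's NetworkReady condition: the runtime reports no usable CNI configuration until the Cilium agent (still being brought up above) writes one. A rough way to check the same precondition by hand is to look for a network config in the CNI configuration directory; a stdlib sketch, assuming the default /etc/cni/net.d path (installs can point elsewhere via kubelet/containerd flags):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Default CNI configuration directory; not read from this log.
	const cniConfDir = "/etc/cni/net.d"

	entries, err := os.ReadDir(cniConfDir)
	if err != nil {
		fmt.Println("cni plugin not initialized:", err)
		return
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // .json is the legacy form
			fmt.Println("found CNI config:", filepath.Join(cniConfDir, e.Name()))
			return
		}
	}
	fmt.Println("cni plugin not initialized: no config in", cniConfDir)
}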
Jan 20 01:48:51.303224 containerd[1591]: time="2026-01-20T01:48:51.272777794Z" level=warning msg="container event discarded" container=13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa type=CONTAINER_CREATED_EVENT Jan 20 01:48:52.296602 systemd-tmpfiles[3542]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 01:48:52.296659 systemd-tmpfiles[3542]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 01:48:52.356468 systemd-tmpfiles[3542]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 01:48:52.374316 systemd-tmpfiles[3542]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 01:48:52.396530 systemd-tmpfiles[3542]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 01:48:52.406792 systemd-tmpfiles[3542]: ACLs are not supported, ignoring. Jan 20 01:48:52.406937 systemd-tmpfiles[3542]: ACLs are not supported, ignoring. Jan 20 01:48:52.806212 systemd-tmpfiles[3542]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:48:52.806229 systemd-tmpfiles[3542]: Skipping /boot Jan 20 01:48:53.107605 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Jan 20 01:48:53.111637 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Jan 20 01:48:53.230567 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Jan 20 01:48:55.489712 kubelet[3059]: E0120 01:48:55.489653 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:48:55.586646 containerd[1591]: time="2026-01-20T01:48:55.586349317Z" level=warning msg="container event discarded" container=d9b1c34afdf1f404bd0422af676d7a992b79da31633a2ba86daa90cadbd11b42 type=CONTAINER_STARTED_EVENT Jan 20 01:48:57.709004 containerd[1591]: time="2026-01-20T01:48:57.696499419Z" level=warning msg="container event discarded" container=538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f type=CONTAINER_STARTED_EVENT Jan 20 01:48:57.899232 containerd[1591]: time="2026-01-20T01:48:57.887252188Z" level=warning msg="container event discarded" container=13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa type=CONTAINER_STARTED_EVENT Jan 20 01:49:01.490784 kubelet[3059]: E0120 01:49:01.490288 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:49:02.393131 kubelet[3059]: E0120 01:49:02.389294 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:06.519854 kubelet[3059]: E0120 01:49:06.500596 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:49:11.531562 kubelet[3059]: E0120 01:49:11.531500 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:49:16.568175 kubelet[3059]: E0120 
01:49:16.567035 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:49:21.604169 kubelet[3059]: E0120 01:49:21.603505 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:49:26.609124 kubelet[3059]: E0120 01:49:26.607819 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:49:28.417129 kubelet[3059]: E0120 01:49:28.411336 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:31.616930 kubelet[3059]: E0120 01:49:31.610259 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:49:36.616202 kubelet[3059]: E0120 01:49:36.612019 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:49:41.617148 kubelet[3059]: E0120 01:49:41.616705 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:49:46.638232 kubelet[3059]: E0120 01:49:46.637725 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:49:49.978012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2659441971.mount: Deactivated successfully. 
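The systemd-tmpfiles warnings a little earlier ("Duplicate line for path ..., ignoring") are harmless: when several tmpfiles.d fragments declare the same path, systemd-tmpfiles keeps the first declaration it reads and ignores the later ones. A simplified first-wins sketch over tmpfiles.d-style lines (real systemd-tmpfiles also handles /etc-over-/usr override ordering, specifiers and quoting, which this skips; the sample entries are illustrative, not copied from the node):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Two fragments declaring an overlapping path, as in the warnings above.
	lines := []string{
		"d /var/lib/nfs/sm 0700 rpcuser rpcuser -",
		"d /var/lib/nfs/sm 0700 rpcuser rpcuser -", // duplicate, ignored
		"d /var/log/journal 2755 root systemd-journal -",
	}

	seen := map[string]int{} // path -> line number of first declaration
	for i, l := range lines {
		fields := strings.Fields(l)
		if len(fields) < 2 {
			continue
		}
		path := fields[1]
		if first, dup := seen[path]; dup {
			fmt.Printf("line %d: Duplicate line for path %q (first seen at line %d), ignoring.\n",
				i+1, path, first)
			continue
		}
		seen[path] = i + 1
	}
}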
Jan 20 01:49:51.393964 kubelet[3059]: E0120 01:49:51.392297 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:51.670098 kubelet[3059]: E0120 01:49:51.669910 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:49:56.672539 kubelet[3059]: E0120 01:49:56.671085 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:50:01.683204 kubelet[3059]: E0120 01:50:01.680004 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:50:04.343318 kubelet[3059]: E0120 01:50:04.340211 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:06.696098 kubelet[3059]: E0120 01:50:06.696047 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:50:09.206994 containerd[1591]: time="2026-01-20T01:50:09.206891188Z" level=warning msg="container event discarded" container=538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f type=CONTAINER_STOPPED_EVENT Jan 20 01:50:10.501781 containerd[1591]: time="2026-01-20T01:50:10.500903827Z" level=warning msg="container event discarded" container=b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8 type=CONTAINER_CREATED_EVENT Jan 20 01:50:11.701768 kubelet[3059]: E0120 01:50:11.699862 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:50:12.352926 kubelet[3059]: E0120 01:50:12.352838 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:16.707922 kubelet[3059]: E0120 01:50:16.707683 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:50:21.753027 kubelet[3059]: E0120 01:50:21.752928 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:50:26.763725 kubelet[3059]: E0120 01:50:26.760779 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:50:28.267444 containerd[1591]: time="2026-01-20T01:50:28.265775743Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 20 01:50:28.267444 containerd[1591]: time="2026-01-20T01:50:28.266611933Z" level=info msg="ImageCreate event 
name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:28.272806 containerd[1591]: time="2026-01-20T01:50:28.272169066Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:28.280801 containerd[1591]: time="2026-01-20T01:50:28.280477588Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 1m48.01129014s" Jan 20 01:50:28.280801 containerd[1591]: time="2026-01-20T01:50:28.280589225Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 20 01:50:28.290270 containerd[1591]: time="2026-01-20T01:50:28.289759638Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 20 01:50:28.353621 containerd[1591]: time="2026-01-20T01:50:28.349834349Z" level=info msg="CreateContainer within sandbox \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 20 01:50:28.505966 containerd[1591]: time="2026-01-20T01:50:28.503157995Z" level=info msg="Container 4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:50:28.615598 containerd[1591]: time="2026-01-20T01:50:28.614959593Z" level=info msg="CreateContainer within sandbox \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332\"" Jan 20 01:50:28.631828 containerd[1591]: time="2026-01-20T01:50:28.627480970Z" level=info msg="StartContainer for \"4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332\"" Jan 20 01:50:28.657918 containerd[1591]: time="2026-01-20T01:50:28.656121205Z" level=info msg="connecting to shim 4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332" address="unix:///run/containerd/s/3c10e525120a26ac559e989735cca01e85bfc4788b9bf36e0f8ffa6c4442d577" protocol=ttrpc version=3 Jan 20 01:50:29.054727 systemd[1]: Started cri-containerd-4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332.scope - libcontainer container 4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332. Jan 20 01:50:29.403777 containerd[1591]: time="2026-01-20T01:50:29.402141878Z" level=info msg="StartContainer for \"4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332\" returns successfully" Jan 20 01:50:29.514979 systemd[1]: cri-containerd-4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332.scope: Deactivated successfully. 
Jan 20 01:50:29.543270 containerd[1591]: time="2026-01-20T01:50:29.540343116Z" level=info msg="received container exit event container_id:\"4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332\" id:\"4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332\" pid:3758 exited_at:{seconds:1768873829 nanos:536566312}" Jan 20 01:50:29.860918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332-rootfs.mount: Deactivated successfully. Jan 20 01:50:30.358116 kubelet[3059]: E0120 01:50:30.354894 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:30.406195 kubelet[3059]: E0120 01:50:30.405879 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:30.825918 kubelet[3059]: I0120 01:50:30.804201 3059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dkhnn" podStartSLOduration=117.804135145 podStartE2EDuration="1m57.804135145s" podCreationTimestamp="2026-01-20 01:48:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:48:48.007922673 +0000 UTC m=+192.077809901" watchObservedRunningTime="2026-01-20 01:50:30.804135145 +0000 UTC m=+294.874022363" Jan 20 01:50:30.847792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157176147.mount: Deactivated successfully. Jan 20 01:50:31.432722 kubelet[3059]: E0120 01:50:31.430917 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:31.562477 containerd[1591]: time="2026-01-20T01:50:31.562207277Z" level=info msg="CreateContainer within sandbox \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 20 01:50:31.782780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount401513161.mount: Deactivated successfully. Jan 20 01:50:31.800825 kubelet[3059]: E0120 01:50:31.792232 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:50:31.814982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3456651641.mount: Deactivated successfully. 
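The "received container exit event" entries carry exited_at as raw Unix seconds and nanoseconds. Converting the one above back to wall-clock time is a one-liner with the standard library and lines up with the surrounding journal timestamps (01:50:29 on Jan 20):

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the 4ef37d70... exit event above.
	t := time.Unix(1768873829, 536566312).UTC()
	fmt.Println(t.Format(time.RFC3339Nano))
	// 2026-01-20T01:50:29.536566312Z
}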
Jan 20 01:50:31.821808 containerd[1591]: time="2026-01-20T01:50:31.817625484Z" level=info msg="Container c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:50:31.902182 containerd[1591]: time="2026-01-20T01:50:31.896110774Z" level=info msg="CreateContainer within sandbox \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e\"" Jan 20 01:50:31.914737 containerd[1591]: time="2026-01-20T01:50:31.911982684Z" level=info msg="StartContainer for \"c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e\"" Jan 20 01:50:31.914737 containerd[1591]: time="2026-01-20T01:50:31.913454999Z" level=info msg="connecting to shim c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e" address="unix:///run/containerd/s/3c10e525120a26ac559e989735cca01e85bfc4788b9bf36e0f8ffa6c4442d577" protocol=ttrpc version=3 Jan 20 01:50:32.075277 systemd[1]: Started cri-containerd-c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e.scope - libcontainer container c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e. Jan 20 01:50:32.224190 containerd[1591]: time="2026-01-20T01:50:32.224118207Z" level=info msg="StartContainer for \"c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e\" returns successfully" Jan 20 01:50:32.310991 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 01:50:32.311447 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:50:32.313594 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:50:32.320785 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:50:32.330963 systemd[1]: cri-containerd-c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e.scope: Deactivated successfully. Jan 20 01:50:32.358180 containerd[1591]: time="2026-01-20T01:50:32.356867091Z" level=info msg="received container exit event container_id:\"c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e\" id:\"c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e\" pid:3815 exited_at:{seconds:1768873832 nanos:356155419}" Jan 20 01:50:32.451350 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:50:32.457214 kubelet[3059]: E0120 01:50:32.457174 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:32.734343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e-rootfs.mount: Deactivated successfully. 
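The apply-sysctl-overwrites step above adjusts kernel parameters for the Cilium datapath; systemd notices and re-runs systemd-sysctl.service - Apply Kernel Variables, which is the stop/start pair visible in the log. Kernel parameters are plain files under /proc/sys, so an override is just a write; a hedged sketch (rp_filter is a typical Cilium override taken from its documentation, not a value read from this log, and the write needs root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// net.ipv4.conf.all.rp_filter as a /proc/sys path; Cilium typically
// disables strict reverse-path filtering for its datapath.
const knob = "/proc/sys/net/ipv4/conf/all/rp_filter"

func main() {
	old, err := os.ReadFile(knob)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s = %s\n", knob, strings.TrimSpace(string(old)))

	// Equivalent to: sysctl -w net.ipv4.conf.all.rp_filter=0
	if err := os.WriteFile(knob, []byte("0\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "write:", err)
		os.Exit(1)
	}
	fmt.Println("rp_filter overridden to 0")
}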
Jan 20 01:50:33.524288 kubelet[3059]: E0120 01:50:33.520805 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:33.692060 containerd[1591]: time="2026-01-20T01:50:33.673113682Z" level=info msg="CreateContainer within sandbox \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 20 01:50:33.934924 containerd[1591]: time="2026-01-20T01:50:33.934843485Z" level=info msg="Container 7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:50:33.940958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount952912307.mount: Deactivated successfully. Jan 20 01:50:34.011534 containerd[1591]: time="2026-01-20T01:50:34.011249907Z" level=info msg="CreateContainer within sandbox \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b\"" Jan 20 01:50:34.026786 containerd[1591]: time="2026-01-20T01:50:34.024964265Z" level=info msg="StartContainer for \"7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b\"" Jan 20 01:50:34.047512 containerd[1591]: time="2026-01-20T01:50:34.046980819Z" level=info msg="connecting to shim 7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b" address="unix:///run/containerd/s/3c10e525120a26ac559e989735cca01e85bfc4788b9bf36e0f8ffa6c4442d577" protocol=ttrpc version=3 Jan 20 01:50:34.322186 systemd[1]: Started cri-containerd-7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b.scope - libcontainer container 7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b. Jan 20 01:50:34.994910 systemd[1]: cri-containerd-7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b.scope: Deactivated successfully. Jan 20 01:50:35.020146 containerd[1591]: time="2026-01-20T01:50:35.005639873Z" level=info msg="received container exit event container_id:\"7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b\" id:\"7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b\" pid:3862 exited_at:{seconds:1768873835 nanos:5029299}" Jan 20 01:50:35.046677 containerd[1591]: time="2026-01-20T01:50:35.040306842Z" level=info msg="StartContainer for \"7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b\" returns successfully" Jan 20 01:50:35.485044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b-rootfs.mount: Deactivated successfully. 
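mount-bpf-fs, the init step above, makes sure the BPF filesystem is mounted so the agent can pin eBPF maps and programs across restarts. Its effect is equivalent to "mount -t bpf bpffs /sys/fs/bpf"; a stdlib sketch under that assumption (/sys/fs/bpf is the conventional mountpoint rather than something taken from this log; Linux-only and requires root):

//go:build linux

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	const target = "/sys/fs/bpf" // conventional bpffs mountpoint

	if err := os.MkdirAll(target, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent to: mount -t bpf bpffs /sys/fs/bpf
	if err := syscall.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		// EBUSY typically means bpffs is already mounted there.
		fmt.Fprintln(os.Stderr, "mount:", err)
		os.Exit(1)
	}
	fmt.Println("bpffs mounted at", target)
}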
Jan 20 01:50:35.587241 kubelet[3059]: E0120 01:50:35.587034 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:36.647654 kubelet[3059]: E0120 01:50:36.646613 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:36.718616 containerd[1591]: time="2026-01-20T01:50:36.717224362Z" level=info msg="CreateContainer within sandbox \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 20 01:50:36.818839 kubelet[3059]: E0120 01:50:36.800099 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:50:36.968685 containerd[1591]: time="2026-01-20T01:50:36.964667164Z" level=info msg="Container ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:50:37.048202 containerd[1591]: time="2026-01-20T01:50:37.047868580Z" level=info msg="CreateContainer within sandbox \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f\"" Jan 20 01:50:37.057952 containerd[1591]: time="2026-01-20T01:50:37.054899549Z" level=info msg="StartContainer for \"ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f\"" Jan 20 01:50:37.122653 containerd[1591]: time="2026-01-20T01:50:37.120552171Z" level=info msg="connecting to shim ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f" address="unix:///run/containerd/s/3c10e525120a26ac559e989735cca01e85bfc4788b9bf36e0f8ffa6c4442d577" protocol=ttrpc version=3 Jan 20 01:50:37.489083 systemd[1]: Started cri-containerd-ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f.scope - libcontainer container ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f. Jan 20 01:50:38.222443 systemd[1]: cri-containerd-ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f.scope: Deactivated successfully. Jan 20 01:50:38.290964 containerd[1591]: time="2026-01-20T01:50:38.289898155Z" level=info msg="received container exit event container_id:\"ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f\" id:\"ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f\" pid:3899 exited_at:{seconds:1768873838 nanos:236886452}" Jan 20 01:50:38.320778 containerd[1591]: time="2026-01-20T01:50:38.307937772Z" level=info msg="StartContainer for \"ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f\" returns successfully" Jan 20 01:50:38.825016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f-rootfs.mount: Deactivated successfully. 
Jan 20 01:50:38.856067 containerd[1591]: time="2026-01-20T01:50:38.853625167Z" level=warning msg="container event discarded" container=538d9af0e0fe6061699ae55830fb6054a5f3e398bd17c486de5684d0fe96c93f type=CONTAINER_DELETED_EVENT Jan 20 01:50:38.986026 kubelet[3059]: E0120 01:50:38.979169 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:40.071238 kubelet[3059]: E0120 01:50:40.063823 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:40.151718 containerd[1591]: time="2026-01-20T01:50:40.151663932Z" level=info msg="CreateContainer within sandbox \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 20 01:50:40.272508 containerd[1591]: time="2026-01-20T01:50:40.262794322Z" level=info msg="Container bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:50:40.335763 containerd[1591]: time="2026-01-20T01:50:40.334677527Z" level=info msg="CreateContainer within sandbox \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4\"" Jan 20 01:50:40.357575 containerd[1591]: time="2026-01-20T01:50:40.356126345Z" level=info msg="StartContainer for \"bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4\"" Jan 20 01:50:40.392669 containerd[1591]: time="2026-01-20T01:50:40.378805188Z" level=info msg="connecting to shim bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4" address="unix:///run/containerd/s/3c10e525120a26ac559e989735cca01e85bfc4788b9bf36e0f8ffa6c4442d577" protocol=ttrpc version=3 Jan 20 01:50:40.578184 systemd[1]: Started cri-containerd-bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4.scope - libcontainer container bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4. 
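Each "connecting to shim" entry above names a per-sandbox ttrpc endpoint under /run/containerd/s/ (note every container in the cilium pod reuses the same socket). The full handshake is containerd's ttrpc protocol, but simply confirming the socket exists and accepts connections is a useful first check when a StartContainer hangs; a stdlib sketch (the socket path is copied from the entries above and will differ per sandbox):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Shim socket for the cilium sandbox, as logged above.
	const sock = "/run/containerd/s/3c10e525120a26ac559e989735cca01e85bfc4788b9bf36e0f8ffa6c4442d577"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, "shim not reachable:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("shim socket accepting connections:", sock)
}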
Jan 20 01:50:41.466657 containerd[1591]: time="2026-01-20T01:50:41.461099525Z" level=info msg="StartContainer for \"bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4\" returns successfully" Jan 20 01:50:41.856659 kubelet[3059]: E0120 01:50:41.838652 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:50:42.680787 containerd[1591]: time="2026-01-20T01:50:42.677170134Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:42.734763 containerd[1591]: time="2026-01-20T01:50:42.729355422Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 20 01:50:42.792683 containerd[1591]: time="2026-01-20T01:50:42.784010906Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:42.817108 containerd[1591]: time="2026-01-20T01:50:42.811146258Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 14.520897581s" Jan 20 01:50:42.817108 containerd[1591]: time="2026-01-20T01:50:42.811260731Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 20 01:50:42.921013 containerd[1591]: time="2026-01-20T01:50:42.920738399Z" level=info msg="CreateContainer within sandbox \"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 20 01:50:43.036972 containerd[1591]: time="2026-01-20T01:50:43.036916739Z" level=info msg="Container 964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:50:43.125870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3861849288.mount: Deactivated successfully. 
Jan 20 01:50:43.233978 containerd[1591]: time="2026-01-20T01:50:43.233348142Z" level=info msg="CreateContainer within sandbox \"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6\"" Jan 20 01:50:43.259196 containerd[1591]: time="2026-01-20T01:50:43.254327067Z" level=info msg="StartContainer for \"964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6\"" Jan 20 01:50:43.288850 containerd[1591]: time="2026-01-20T01:50:43.286239806Z" level=info msg="connecting to shim 964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6" address="unix:///run/containerd/s/69e942a488c4e1743ee9d7bfa64c0f9e5fbc21403b02ac8c4d75389832099660" protocol=ttrpc version=3 Jan 20 01:50:43.571988 systemd[1]: Started cri-containerd-964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6.scope - libcontainer container 964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6. Jan 20 01:50:43.676621 kubelet[3059]: E0120 01:50:43.670734 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:44.013954 kubelet[3059]: I0120 01:50:44.013648 3059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fhzk2" podStartSLOduration=22.898748581 podStartE2EDuration="2m11.004349682s" podCreationTimestamp="2026-01-20 01:48:33 +0000 UTC" firstStartedPulling="2026-01-20 01:48:40.181227824 +0000 UTC m=+184.251115032" lastFinishedPulling="2026-01-20 01:50:28.286828924 +0000 UTC m=+292.356716133" observedRunningTime="2026-01-20 01:50:43.98621044 +0000 UTC m=+308.056097648" watchObservedRunningTime="2026-01-20 01:50:44.004349682 +0000 UTC m=+308.074236900" Jan 20 01:50:44.422684 containerd[1591]: time="2026-01-20T01:50:44.420846795Z" level=info msg="StartContainer for \"964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6\" returns successfully" Jan 20 01:50:44.819549 kubelet[3059]: E0120 01:50:44.815266 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:44.830595 kubelet[3059]: E0120 01:50:44.827020 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:45.077039 kubelet[3059]: I0120 01:50:45.076297 3059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-snnlg" podStartSLOduration=9.935879999 podStartE2EDuration="2m12.07603765s" podCreationTimestamp="2026-01-20 01:48:33 +0000 UTC" firstStartedPulling="2026-01-20 01:48:40.681975132 +0000 UTC m=+184.751862339" lastFinishedPulling="2026-01-20 01:50:42.822132782 +0000 UTC m=+306.892019990" observedRunningTime="2026-01-20 01:50:45.059769955 +0000 UTC m=+309.129657163" watchObservedRunningTime="2026-01-20 01:50:45.07603765 +0000 UTC m=+309.145924877" Jan 20 01:50:45.902021 kubelet[3059]: E0120 01:50:45.889718 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:50.526585 kubelet[3059]: I0120 01:50:50.477152 3059 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb98f06e-4397-42ce-b77c-6bc98f1c54eb-config-volume\") pod \"coredns-674b8bbfcf-dldmv\" (UID: \"eb98f06e-4397-42ce-b77c-6bc98f1c54eb\") " pod="kube-system/coredns-674b8bbfcf-dldmv" Jan 20 01:50:50.526585 kubelet[3059]: I0120 01:50:50.477278 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmwgq\" (UniqueName: \"kubernetes.io/projected/eb98f06e-4397-42ce-b77c-6bc98f1c54eb-kube-api-access-rmwgq\") pod \"coredns-674b8bbfcf-dldmv\" (UID: \"eb98f06e-4397-42ce-b77c-6bc98f1c54eb\") " pod="kube-system/coredns-674b8bbfcf-dldmv" Jan 20 01:50:50.584171 kubelet[3059]: I0120 01:50:50.582913 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgjkq\" (UniqueName: \"kubernetes.io/projected/e4ba1920-4f27-451e-a7e3-4210a00f7ea6-kube-api-access-fgjkq\") pod \"coredns-674b8bbfcf-5l47g\" (UID: \"e4ba1920-4f27-451e-a7e3-4210a00f7ea6\") " pod="kube-system/coredns-674b8bbfcf-5l47g" Jan 20 01:50:50.584171 kubelet[3059]: I0120 01:50:50.584041 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4ba1920-4f27-451e-a7e3-4210a00f7ea6-config-volume\") pod \"coredns-674b8bbfcf-5l47g\" (UID: \"e4ba1920-4f27-451e-a7e3-4210a00f7ea6\") " pod="kube-system/coredns-674b8bbfcf-5l47g" Jan 20 01:50:50.698681 systemd[1]: Created slice kubepods-burstable-pode4ba1920_4f27_451e_a7e3_4210a00f7ea6.slice - libcontainer container kubepods-burstable-pode4ba1920_4f27_451e_a7e3_4210a00f7ea6.slice. Jan 20 01:50:50.932593 systemd[1]: Created slice kubepods-burstable-podeb98f06e_4397_42ce_b77c_6bc98f1c54eb.slice - libcontainer container kubepods-burstable-podeb98f06e_4397_42ce_b77c_6bc98f1c54eb.slice. 
Jan 20 01:50:51.339688 kubelet[3059]: E0120 01:50:51.319990 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:51.339951 containerd[1591]: time="2026-01-20T01:50:51.328740271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dldmv,Uid:eb98f06e-4397-42ce-b77c-6bc98f1c54eb,Namespace:kube-system,Attempt:0,}" Jan 20 01:50:52.607332 kubelet[3059]: E0120 01:50:52.604678 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:52.631294 containerd[1591]: time="2026-01-20T01:50:52.628582038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5l47g,Uid:e4ba1920-4f27-451e-a7e3-4210a00f7ea6,Namespace:kube-system,Attempt:0,}" Jan 20 01:50:53.421912 kubelet[3059]: E0120 01:50:53.407858 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:57.938692 containerd[1591]: time="2026-01-20T01:50:57.935762893Z" level=warning msg="container event discarded" container=7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094 type=CONTAINER_CREATED_EVENT Jan 20 01:50:59.978322 containerd[1591]: time="2026-01-20T01:50:59.978186966Z" level=warning msg="container event discarded" container=7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094 type=CONTAINER_STARTED_EVENT Jan 20 01:51:00.070991 systemd-networkd[1486]: cilium_host: Link UP Jan 20 01:51:00.074274 systemd-networkd[1486]: cilium_net: Link UP Jan 20 01:51:00.076019 systemd-networkd[1486]: cilium_host: Gained carrier Jan 20 01:51:00.081767 systemd-networkd[1486]: cilium_net: Gained carrier Jan 20 01:51:00.840095 systemd-networkd[1486]: cilium_host: Gained IPv6LL Jan 20 01:51:01.120940 systemd-networkd[1486]: cilium_net: Gained IPv6LL Jan 20 01:51:02.370763 systemd-networkd[1486]: cilium_vxlan: Link UP Jan 20 01:51:02.370781 systemd-networkd[1486]: cilium_vxlan: Gained carrier Jan 20 01:51:03.919257 systemd-networkd[1486]: cilium_vxlan: Gained IPv6LL Jan 20 01:51:05.325642 kernel: NET: Registered PF_ALG protocol family Jan 20 01:51:06.525931 kubelet[3059]: E0120 01:51:06.525775 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:13.502096 kubelet[3059]: E0120 01:51:13.492083 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:22.094568 systemd-networkd[1486]: lxc_health: Link UP Jan 20 01:51:23.513141 systemd-networkd[1486]: lxc_health: Gained carrier Jan 20 01:51:23.759214 kubelet[3059]: E0120 01:51:23.688212 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:25.018571 kubelet[3059]: E0120 01:51:25.018229 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:25.560640 systemd-networkd[1486]: lxc_health: Gained IPv6LL Jan 20 01:51:26.027231 
containerd[1591]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Jan 20 01:51:26.082297 systemd[1]: run-netns-cni\x2ddfea7ef6\x2d6b36\x2d191f\x2da51c\x2d526307f1c090.mount: Deactivated successfully. Jan 20 01:51:26.098660 containerd[1591]: time="2026-01-20T01:51:26.095283200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dldmv,Uid:eb98f06e-4397-42ce-b77c-6bc98f1c54eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a097329930a5caef3a224a26b8a47ac239592bbaa2d8d6caa6dd403cfbd554ca\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Jan 20 01:51:26.099281 kubelet[3059]: E0120 01:51:26.099175 3059 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 20 01:51:26.099281 kubelet[3059]: rpc error: code = Unknown desc = failed to setup network for sandbox "a097329930a5caef3a224a26b8a47ac239592bbaa2d8d6caa6dd403cfbd554ca": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Jan 20 01:51:26.099281 kubelet[3059]: Is the agent running? Jan 20 01:51:26.099281 kubelet[3059]: > Jan 20 01:51:26.105127 kubelet[3059]: E0120 01:51:26.099310 3059 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=< Jan 20 01:51:26.105127 kubelet[3059]: rpc error: code = Unknown desc = failed to setup network for sandbox "a097329930a5caef3a224a26b8a47ac239592bbaa2d8d6caa6dd403cfbd554ca": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Jan 20 01:51:26.105127 kubelet[3059]: Is the agent running? Jan 20 01:51:26.105127 kubelet[3059]: > pod="kube-system/coredns-674b8bbfcf-dldmv" Jan 20 01:51:26.105127 kubelet[3059]: E0120 01:51:26.099482 3059 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err=< Jan 20 01:51:26.105127 kubelet[3059]: rpc error: code = Unknown desc = failed to setup network for sandbox "a097329930a5caef3a224a26b8a47ac239592bbaa2d8d6caa6dd403cfbd554ca": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Jan 20 01:51:26.105127 kubelet[3059]: Is the agent running? 
Jan 20 01:51:26.105127 kubelet[3059]: > pod="kube-system/coredns-674b8bbfcf-dldmv" Jan 20 01:51:26.112212 kubelet[3059]: E0120 01:51:26.099634 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dldmv_kube-system(eb98f06e-4397-42ce-b77c-6bc98f1c54eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dldmv_kube-system(eb98f06e-4397-42ce-b77c-6bc98f1c54eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a097329930a5caef3a224a26b8a47ac239592bbaa2d8d6caa6dd403cfbd554ca\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="kube-system/coredns-674b8bbfcf-dldmv" podUID="eb98f06e-4397-42ce-b77c-6bc98f1c54eb" Jan 20 01:51:26.434949 containerd[1591]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Jan 20 01:51:26.453722 systemd[1]: run-netns-cni\x2d42728b15\x2d25f3\x2d7f16\x2d482b\x2de62bc57fa2e5.mount: Deactivated successfully. Jan 20 01:51:26.471585 containerd[1591]: time="2026-01-20T01:51:26.471491936Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5l47g,Uid:e4ba1920-4f27-451e-a7e3-4210a00f7ea6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"06b28effc007ab5caffed7623d9b6a788a893a2ae5635f8ba607e7189982d604\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Jan 20 01:51:26.478526 kubelet[3059]: E0120 01:51:26.474737 3059 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 20 01:51:26.478526 kubelet[3059]: rpc error: code = Unknown desc = failed to setup network for sandbox "06b28effc007ab5caffed7623d9b6a788a893a2ae5635f8ba607e7189982d604": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Jan 20 01:51:26.478526 kubelet[3059]: Is the agent running? Jan 20 01:51:26.478526 kubelet[3059]: > Jan 20 01:51:26.478526 kubelet[3059]: E0120 01:51:26.474823 3059 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=< Jan 20 01:51:26.478526 kubelet[3059]: rpc error: code = Unknown desc = failed to setup network for sandbox "06b28effc007ab5caffed7623d9b6a788a893a2ae5635f8ba607e7189982d604": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Jan 20 01:51:26.478526 kubelet[3059]: Is the agent running? 
Jan 20 01:51:26.478526 kubelet[3059]: > pod="kube-system/coredns-674b8bbfcf-5l47g" Jan 20 01:51:26.478526 kubelet[3059]: E0120 01:51:26.474850 3059 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err=< Jan 20 01:51:26.478526 kubelet[3059]: rpc error: code = Unknown desc = failed to setup network for sandbox "06b28effc007ab5caffed7623d9b6a788a893a2ae5635f8ba607e7189982d604": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Jan 20 01:51:26.478526 kubelet[3059]: Is the agent running? Jan 20 01:51:26.478526 kubelet[3059]: > pod="kube-system/coredns-674b8bbfcf-5l47g" Jan 20 01:51:26.479027 kubelet[3059]: E0120 01:51:26.474922 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5l47g_kube-system(e4ba1920-4f27-451e-a7e3-4210a00f7ea6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5l47g_kube-system(e4ba1920-4f27-451e-a7e3-4210a00f7ea6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06b28effc007ab5caffed7623d9b6a788a893a2ae5635f8ba607e7189982d604\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="kube-system/coredns-674b8bbfcf-5l47g" podUID="e4ba1920-4f27-451e-a7e3-4210a00f7ea6" Jan 20 01:51:37.373221 kubelet[3059]: E0120 01:51:37.373166 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:37.414885 containerd[1591]: time="2026-01-20T01:51:37.395782698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5l47g,Uid:e4ba1920-4f27-451e-a7e3-4210a00f7ea6,Namespace:kube-system,Attempt:0,}" Jan 20 01:51:37.997586 systemd-networkd[1486]: lxcdd3b3e2d8600: Link UP Jan 20 01:51:38.015507 kernel: eth0: renamed from tmpbf05d Jan 20 01:51:38.055318 systemd-networkd[1486]: lxcdd3b3e2d8600: Gained carrier Jan 20 01:51:39.389793 kubelet[3059]: E0120 01:51:39.386844 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:39.417250 containerd[1591]: time="2026-01-20T01:51:39.403102353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dldmv,Uid:eb98f06e-4397-42ce-b77c-6bc98f1c54eb,Namespace:kube-system,Attempt:0,}" Jan 20 01:51:39.637712 systemd-networkd[1486]: lxcdd3b3e2d8600: Gained IPv6LL Jan 20 01:51:39.985691 kernel: eth0: renamed from tmp5e528 Jan 20 01:51:40.009416 systemd-networkd[1486]: lxcb8a440a4febe: Link UP Jan 20 01:51:40.035958 systemd-networkd[1486]: lxcb8a440a4febe: Gained carrier Jan 20 01:51:40.113556 containerd[1591]: time="2026-01-20T01:51:40.101649226Z" level=warning msg="container event discarded" container=13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa type=CONTAINER_STOPPED_EVENT Jan 20 01:51:41.367465 kubelet[3059]: E0120 01:51:41.366992 3059 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:41.377941 systemd-networkd[1486]: lxcb8a440a4febe: Gained IPv6LL Jan 20 01:51:52.708976 containerd[1591]: time="2026-01-20T01:51:52.708875130Z" level=warning msg="container event discarded" container=3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019 type=CONTAINER_CREATED_EVENT Jan 20 01:51:53.363225 kubelet[3059]: E0120 01:51:53.341991 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:56.357885 kubelet[3059]: E0120 01:51:56.341212 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:10.359812 kubelet[3059]: E0120 01:52:10.359755 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:10.465980 containerd[1591]: time="2026-01-20T01:52:10.465354365Z" level=info msg="connecting to shim bf05d8603bd8f675e2ea88b31a70434a59e3f7b60e36a7e1c92efe5304de759d" address="unix:///run/containerd/s/d87a2b92e230c44bb4d59edf56f83d55ffcf63aae03f87a009db43e87a6b4523" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:52:10.478611 containerd[1591]: time="2026-01-20T01:52:10.477007182Z" level=info msg="connecting to shim 5e528607aa47b5aa6801bd5c6ac00d5e603bafd821f24a032f4a337016ab6279" address="unix:///run/containerd/s/4010227a20d7f2b17500ba94e08014fc60b00109f3a723827e278b2effea13a3" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:52:10.892849 systemd[1]: Started cri-containerd-bf05d8603bd8f675e2ea88b31a70434a59e3f7b60e36a7e1c92efe5304de759d.scope - libcontainer container bf05d8603bd8f675e2ea88b31a70434a59e3f7b60e36a7e1c92efe5304de759d. Jan 20 01:52:10.911750 systemd[1]: Started cri-containerd-5e528607aa47b5aa6801bd5c6ac00d5e603bafd821f24a032f4a337016ab6279.scope - libcontainer container 5e528607aa47b5aa6801bd5c6ac00d5e603bafd821f24a032f4a337016ab6279. 
Jan 20 01:52:11.222776 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:52:11.271848 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:52:11.622351 containerd[1591]: time="2026-01-20T01:52:11.621930282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dldmv,Uid:eb98f06e-4397-42ce-b77c-6bc98f1c54eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e528607aa47b5aa6801bd5c6ac00d5e603bafd821f24a032f4a337016ab6279\"" Jan 20 01:52:11.632518 containerd[1591]: time="2026-01-20T01:52:11.630133258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5l47g,Uid:e4ba1920-4f27-451e-a7e3-4210a00f7ea6,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf05d8603bd8f675e2ea88b31a70434a59e3f7b60e36a7e1c92efe5304de759d\"" Jan 20 01:52:11.638756 kubelet[3059]: E0120 01:52:11.638715 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:11.665555 kubelet[3059]: E0120 01:52:11.665510 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:11.728898 containerd[1591]: time="2026-01-20T01:52:11.728751699Z" level=info msg="CreateContainer within sandbox \"5e528607aa47b5aa6801bd5c6ac00d5e603bafd821f24a032f4a337016ab6279\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:52:11.769009 containerd[1591]: time="2026-01-20T01:52:11.767691996Z" level=info msg="CreateContainer within sandbox \"bf05d8603bd8f675e2ea88b31a70434a59e3f7b60e36a7e1c92efe5304de759d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:52:11.941141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount619288207.mount: Deactivated successfully. Jan 20 01:52:11.963901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2648088964.mount: Deactivated successfully. 
Jan 20 01:52:11.974045 containerd[1591]: time="2026-01-20T01:52:11.973169832Z" level=info msg="Container 72fb1a92ab0bb1f421062d946956d5e26be1dab14bc384afcc70f1e56f6e6ef4: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:52:11.992478 containerd[1591]: time="2026-01-20T01:52:11.990134573Z" level=info msg="Container cdd85e6874630e37dc9e5949c9038b1f7390aec6f0bc72474c7dd83b8dbaed16: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:52:12.048954 containerd[1591]: time="2026-01-20T01:52:12.048773542Z" level=info msg="CreateContainer within sandbox \"5e528607aa47b5aa6801bd5c6ac00d5e603bafd821f24a032f4a337016ab6279\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72fb1a92ab0bb1f421062d946956d5e26be1dab14bc384afcc70f1e56f6e6ef4\"" Jan 20 01:52:12.067433 containerd[1591]: time="2026-01-20T01:52:12.063220485Z" level=info msg="StartContainer for \"72fb1a92ab0bb1f421062d946956d5e26be1dab14bc384afcc70f1e56f6e6ef4\"" Jan 20 01:52:12.081206 containerd[1591]: time="2026-01-20T01:52:12.081148675Z" level=info msg="connecting to shim 72fb1a92ab0bb1f421062d946956d5e26be1dab14bc384afcc70f1e56f6e6ef4" address="unix:///run/containerd/s/4010227a20d7f2b17500ba94e08014fc60b00109f3a723827e278b2effea13a3" protocol=ttrpc version=3 Jan 20 01:52:12.233783 containerd[1591]: time="2026-01-20T01:52:12.219620233Z" level=info msg="CreateContainer within sandbox \"bf05d8603bd8f675e2ea88b31a70434a59e3f7b60e36a7e1c92efe5304de759d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cdd85e6874630e37dc9e5949c9038b1f7390aec6f0bc72474c7dd83b8dbaed16\"" Jan 20 01:52:12.260110 containerd[1591]: time="2026-01-20T01:52:12.260060262Z" level=info msg="StartContainer for \"cdd85e6874630e37dc9e5949c9038b1f7390aec6f0bc72474c7dd83b8dbaed16\"" Jan 20 01:52:12.282530 containerd[1591]: time="2026-01-20T01:52:12.276665264Z" level=info msg="connecting to shim cdd85e6874630e37dc9e5949c9038b1f7390aec6f0bc72474c7dd83b8dbaed16" address="unix:///run/containerd/s/d87a2b92e230c44bb4d59edf56f83d55ffcf63aae03f87a009db43e87a6b4523" protocol=ttrpc version=3 Jan 20 01:52:12.468131 systemd[1]: Started cri-containerd-72fb1a92ab0bb1f421062d946956d5e26be1dab14bc384afcc70f1e56f6e6ef4.scope - libcontainer container 72fb1a92ab0bb1f421062d946956d5e26be1dab14bc384afcc70f1e56f6e6ef4. Jan 20 01:52:12.562697 systemd[1]: Started cri-containerd-cdd85e6874630e37dc9e5949c9038b1f7390aec6f0bc72474c7dd83b8dbaed16.scope - libcontainer container cdd85e6874630e37dc9e5949c9038b1f7390aec6f0bc72474c7dd83b8dbaed16. 
Jan 20 01:52:13.563519 containerd[1591]: time="2026-01-20T01:52:13.560138366Z" level=info msg="StartContainer for \"cdd85e6874630e37dc9e5949c9038b1f7390aec6f0bc72474c7dd83b8dbaed16\" returns successfully" Jan 20 01:52:13.758606 containerd[1591]: time="2026-01-20T01:52:13.753954257Z" level=info msg="StartContainer for \"72fb1a92ab0bb1f421062d946956d5e26be1dab14bc384afcc70f1e56f6e6ef4\" returns successfully" Jan 20 01:52:14.221420 kubelet[3059]: E0120 01:52:14.219603 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:14.290111 kubelet[3059]: E0120 01:52:14.280078 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:14.406584 kubelet[3059]: I0120 01:52:14.405067 3059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dldmv" podStartSLOduration=221.405046374 podStartE2EDuration="3m41.405046374s" podCreationTimestamp="2026-01-20 01:48:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:52:14.368305728 +0000 UTC m=+398.438192986" watchObservedRunningTime="2026-01-20 01:52:14.405046374 +0000 UTC m=+398.474933583" Jan 20 01:52:14.536434 kubelet[3059]: I0120 01:52:14.535830 3059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5l47g" podStartSLOduration=221.534188657 podStartE2EDuration="3m41.534188657s" podCreationTimestamp="2026-01-20 01:48:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:52:14.53375952 +0000 UTC m=+398.603646728" watchObservedRunningTime="2026-01-20 01:52:14.534188657 +0000 UTC m=+398.604075866" Jan 20 01:52:15.313443 kubelet[3059]: E0120 01:52:15.310892 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:15.466504 kubelet[3059]: E0120 01:52:15.453477 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:18.067010 containerd[1591]: time="2026-01-20T01:52:18.064645507Z" level=warning msg="container event discarded" container=3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019 type=CONTAINER_STARTED_EVENT Jan 20 01:52:18.612211 containerd[1591]: time="2026-01-20T01:52:18.611264902Z" level=warning msg="container event discarded" container=7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094 type=CONTAINER_STOPPED_EVENT Jan 20 01:52:19.497425 containerd[1591]: time="2026-01-20T01:52:19.496946116Z" level=warning msg="container event discarded" container=b68d23fd715dca60ec859fe6faa9ab0a6e02817e93c76b078ca607a70c7418d8 type=CONTAINER_DELETED_EVENT Jan 20 01:52:20.688805 kubelet[3059]: E0120 01:52:20.688588 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:25.397536 kubelet[3059]: E0120 01:52:25.396901 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:25.718620 kubelet[3059]: E0120 01:52:25.696326 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:27.559763 containerd[1591]: time="2026-01-20T01:52:27.559268447Z" level=warning msg="container event discarded" container=8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867 type=CONTAINER_CREATED_EVENT Jan 20 01:52:28.530331 containerd[1591]: time="2026-01-20T01:52:28.529705762Z" level=warning msg="container event discarded" container=8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867 type=CONTAINER_STARTED_EVENT Jan 20 01:52:35.348797 kubelet[3059]: E0120 01:52:35.348749 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:07.509512 containerd[1591]: time="2026-01-20T01:53:07.450875492Z" level=warning msg="container event discarded" container=3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019 type=CONTAINER_STOPPED_EVENT Jan 20 01:53:07.578602 systemd[1]: cri-containerd-8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867.scope: Deactivated successfully. Jan 20 01:53:07.579478 systemd[1]: cri-containerd-8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867.scope: Consumed 16.186s CPU time, 51.3M memory peak, 256K read from disk. Jan 20 01:53:07.793848 containerd[1591]: time="2026-01-20T01:53:07.777696039Z" level=warning msg="container event discarded" container=13a4154d1fd23da25968a1e66216047a8907b3cec186e71884cb9f3f8404eafa type=CONTAINER_DELETED_EVENT Jan 20 01:53:07.809766 systemd[1]: cri-containerd-964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6.scope: Deactivated successfully. Jan 20 01:53:07.821759 systemd[1]: cri-containerd-964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6.scope: Consumed 1.904s CPU time, 26.6M memory peak, 4K written to disk. Jan 20 01:53:07.911668 systemd[1]: cri-containerd-c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9.scope: Deactivated successfully. Jan 20 01:53:07.917944 systemd[1]: cri-containerd-c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9.scope: Consumed 11.962s CPU time, 24.4M memory peak, 128K read from disk. 
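The "Consumed 16.186s CPU time, 51.3M memory peak" figures systemd prints when a scope is deactivated come from cgroup accounting. A sketch of reading the equivalent counters for a scope under cgroup v2, assuming the unified hierarchy at /sys/fs/cgroup and a kernel new enough to expose memory.peak (5.19+); the slice path is an assumption, since CRI scopes may sit elsewhere in the tree, and the scope name is taken from the log:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strconv"
        "strings"
    )

    // cgroupDir is an assumption; kubelet-managed scopes can live under
    // kubepods.slice rather than system.slice depending on configuration.
    const cgroupDir = "/sys/fs/cgroup/system.slice"

    // cpuSeconds reads usage_usec from cpu.stat and converts to seconds.
    func cpuSeconds(scope string) (float64, error) {
        b, err := os.ReadFile(filepath.Join(cgroupDir, scope, "cpu.stat"))
        if err != nil {
            return 0, err
        }
        for _, line := range strings.Split(string(b), "\n") {
            fields := strings.Fields(line)
            if len(fields) == 2 && fields[0] == "usage_usec" {
                usec, err := strconv.ParseFloat(fields[1], 64)
                return usec / 1e6, err
            }
        }
        return 0, fmt.Errorf("usage_usec not found in cpu.stat")
    }

    // memoryPeak reads the high-watermark in bytes (cgroup v2 memory.peak).
    func memoryPeak(scope string) (string, error) {
        b, err := os.ReadFile(filepath.Join(cgroupDir, scope, "memory.peak"))
        return strings.TrimSpace(string(b)), err
    }

    func main() {
        scope := "cri-containerd-8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867.scope"
        secs, err := cpuSeconds(scope)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        peak, _ := memoryPeak(scope)
        fmt.Printf("Consumed %.3fs CPU time, %s bytes memory peak\n", secs, peak)
    }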
Jan 20 01:53:08.024348 containerd[1591]: time="2026-01-20T01:53:08.007000541Z" level=info msg="received container exit event container_id:\"8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867\" id:\"8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867\" pid:3259 exit_status:1 exited_at:{seconds:1768873987 nanos:804077849}" Jan 20 01:53:08.125542 containerd[1591]: time="2026-01-20T01:53:08.095159275Z" level=info msg="received container exit event container_id:\"964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6\" id:\"964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6\" pid:4023 exit_status:1 exited_at:{seconds:1768873988 nanos:77553919}" Jan 20 01:53:08.190141 kubelet[3059]: E0120 01:53:08.190021 3059 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.707s" Jan 20 01:53:08.335829 containerd[1591]: time="2026-01-20T01:53:08.335680758Z" level=info msg="received container exit event container_id:\"c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9\" id:\"c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9\" pid:3311 exit_status:1 exited_at:{seconds:1768873988 nanos:273852944}" Jan 20 01:53:08.604102 kubelet[3059]: E0120 01:53:08.603995 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:08.625877 kubelet[3059]: E0120 01:53:08.625834 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:08.714689 kubelet[3059]: E0120 01:53:08.714647 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:09.326576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6-rootfs.mount: Deactivated successfully. Jan 20 01:53:09.528020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867-rootfs.mount: Deactivated successfully. Jan 20 01:53:09.711568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9-rootfs.mount: Deactivated successfully. 
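The exited_at stamps in these exit events are plain Unix epoch second/nanosecond pairs and can be cross-checked against the journal timestamps. A short verification in Go, with the values copied from the first exit event above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // exited_at:{seconds:1768873987 nanos:804077849} from the event above.
        t := time.Unix(1768873987, 804077849).UTC()
        fmt.Println(t.Format(time.RFC3339Nano))
        // Prints 2026-01-20T01:53:07.804077849Z, i.e. Jan 20 01:53:07.804,
        // consistent with the scope-deactivation lines just before it.
    }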
Jan 20 01:53:10.275803 kubelet[3059]: I0120 01:53:10.244771 3059 scope.go:117] "RemoveContainer" containerID="964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6" Jan 20 01:53:10.275803 kubelet[3059]: E0120 01:53:10.244938 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:10.298521 containerd[1591]: time="2026-01-20T01:53:10.296688944Z" level=info msg="CreateContainer within sandbox \"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Jan 20 01:53:10.316258 kubelet[3059]: I0120 01:53:10.314644 3059 scope.go:117] "RemoveContainer" containerID="3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019" Jan 20 01:53:10.316258 kubelet[3059]: I0120 01:53:10.315457 3059 scope.go:117] "RemoveContainer" containerID="c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9" Jan 20 01:53:10.316258 kubelet[3059]: E0120 01:53:10.315559 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:10.316258 kubelet[3059]: E0120 01:53:10.315758 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(6e6cfcfb327385445a9bb0d2bc2fd5d4)\"" pod="kube-system/kube-scheduler-localhost" podUID="6e6cfcfb327385445a9bb0d2bc2fd5d4" Jan 20 01:53:10.740568 kubelet[3059]: I0120 01:53:10.740165 3059 scope.go:117] "RemoveContainer" containerID="8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867" Jan 20 01:53:10.740568 kubelet[3059]: E0120 01:53:10.740294 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:10.740568 kubelet[3059]: E0120 01:53:10.740531 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(66e26b992bcd7ea6fb75e339cf7a3f7d)\"" pod="kube-system/kube-controller-manager-localhost" podUID="66e26b992bcd7ea6fb75e339cf7a3f7d" Jan 20 01:53:10.911594 containerd[1591]: time="2026-01-20T01:53:10.911535226Z" level=info msg="RemoveContainer for \"3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019\"" Jan 20 01:53:11.016604 containerd[1591]: time="2026-01-20T01:53:11.012761700Z" level=info msg="RemoveContainer for \"3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019\" returns successfully" Jan 20 01:53:11.019660 kubelet[3059]: I0120 01:53:11.018301 3059 scope.go:117] "RemoveContainer" containerID="7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094" Jan 20 01:53:11.029753 containerd[1591]: time="2026-01-20T01:53:11.029692284Z" level=info msg="RemoveContainer for \"7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094\"" Jan 20 01:53:11.103733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1504599662.mount: Deactivated successfully. Jan 20 01:53:11.170300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702928530.mount: Deactivated successfully. 
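The "back-off 20s restarting failed container" messages follow the kubelet's container restart backoff, which by its widely documented defaults starts at 10s and doubles per consecutive crash up to a 5-minute cap; treat the exact constants as assumptions, since they are kubelet internals. A sketch of the schedule:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed kubelet defaults: 10s initial back-off, doubling, 5m cap.
        const (
            initial = 10 * time.Second
            max     = 5 * time.Minute
        )
        d := initial
        for crash := 1; crash <= 7; crash++ {
            fmt.Printf("crash %d: back-off %s\n", crash, d)
            if d *= 2; d > max {
                d = max
            }
        }
        // crash 2 prints "back-off 20s", matching the pod_workers errors above
        // for kube-scheduler and kube-controller-manager.
    }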
Jan 20 01:53:11.184203 containerd[1591]: time="2026-01-20T01:53:11.180188495Z" level=info msg="Container 7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:53:11.216716 containerd[1591]: time="2026-01-20T01:53:11.214793834Z" level=info msg="RemoveContainer for \"7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094\" returns successfully" Jan 20 01:53:11.279571 containerd[1591]: time="2026-01-20T01:53:11.274671626Z" level=info msg="CreateContainer within sandbox \"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9\"" Jan 20 01:53:11.289303 containerd[1591]: time="2026-01-20T01:53:11.284577585Z" level=info msg="StartContainer for \"7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9\"" Jan 20 01:53:11.299473 containerd[1591]: time="2026-01-20T01:53:11.298554569Z" level=info msg="connecting to shim 7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9" address="unix:///run/containerd/s/69e942a488c4e1743ee9d7bfa64c0f9e5fbc21403b02ac8c4d75389832099660" protocol=ttrpc version=3 Jan 20 01:53:11.530828 systemd[1]: Started cri-containerd-7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9.scope - libcontainer container 7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9. Jan 20 01:53:12.122337 containerd[1591]: time="2026-01-20T01:53:12.116580567Z" level=info msg="StartContainer for \"7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9\" returns successfully" Jan 20 01:53:12.880969 kubelet[3059]: E0120 01:53:12.877931 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:17.100550 kubelet[3059]: I0120 01:53:17.096616 3059 scope.go:117] "RemoveContainer" containerID="c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9" Jan 20 01:53:17.100550 kubelet[3059]: E0120 01:53:17.096817 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:17.100550 kubelet[3059]: E0120 01:53:17.097317 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(6e6cfcfb327385445a9bb0d2bc2fd5d4)\"" pod="kube-system/kube-scheduler-localhost" podUID="6e6cfcfb327385445a9bb0d2bc2fd5d4" Jan 20 01:53:17.120926 kubelet[3059]: I0120 01:53:17.120628 3059 scope.go:117] "RemoveContainer" containerID="8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867" Jan 20 01:53:17.120926 kubelet[3059]: E0120 01:53:17.120734 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:17.120926 kubelet[3059]: E0120 01:53:17.120855 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(66e26b992bcd7ea6fb75e339cf7a3f7d)\"" 
pod="kube-system/kube-controller-manager-localhost" podUID="66e26b992bcd7ea6fb75e339cf7a3f7d" Jan 20 01:53:17.499151 containerd[1591]: time="2026-01-20T01:53:17.497581517Z" level=warning msg="container event discarded" container=c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9 type=CONTAINER_CREATED_EVENT Jan 20 01:53:18.427062 containerd[1591]: time="2026-01-20T01:53:18.424172989Z" level=warning msg="container event discarded" container=c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9 type=CONTAINER_STARTED_EVENT Jan 20 01:53:26.364848 kubelet[3059]: E0120 01:53:26.359792 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:28.354264 kubelet[3059]: I0120 01:53:28.352678 3059 scope.go:117] "RemoveContainer" containerID="c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9" Jan 20 01:53:28.354264 kubelet[3059]: E0120 01:53:28.352890 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:28.354264 kubelet[3059]: E0120 01:53:28.353827 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:28.410535 containerd[1591]: time="2026-01-20T01:53:28.397921536Z" level=info msg="CreateContainer within sandbox \"ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:3,}" Jan 20 01:53:28.719022 containerd[1591]: time="2026-01-20T01:53:28.712497500Z" level=info msg="Container d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:53:28.716703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1648600082.mount: Deactivated successfully. Jan 20 01:53:28.917620 containerd[1591]: time="2026-01-20T01:53:28.913769703Z" level=info msg="CreateContainer within sandbox \"ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:3,} returns container id \"d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0\"" Jan 20 01:53:28.917620 containerd[1591]: time="2026-01-20T01:53:28.915834049Z" level=info msg="StartContainer for \"d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0\"" Jan 20 01:53:28.964278 containerd[1591]: time="2026-01-20T01:53:28.961217493Z" level=info msg="connecting to shim d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0" address="unix:///run/containerd/s/d1f127681a9c4311b456c6aab9e8ce8d82f6bff97094d53185fe0bdf6b34c086" protocol=ttrpc version=3 Jan 20 01:53:29.362776 systemd[1]: Started cri-containerd-d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0.scope - libcontainer container d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0. 
Jan 20 01:53:29.972834 containerd[1591]: time="2026-01-20T01:53:29.968685982Z" level=info msg="StartContainer for \"d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0\" returns successfully" Jan 20 01:53:30.372788 kubelet[3059]: I0120 01:53:30.343052 3059 scope.go:117] "RemoveContainer" containerID="8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867" Jan 20 01:53:30.372788 kubelet[3059]: E0120 01:53:30.366716 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:30.505265 containerd[1591]: time="2026-01-20T01:53:30.499869416Z" level=info msg="CreateContainer within sandbox \"35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:4,}" Jan 20 01:53:30.610287 kubelet[3059]: E0120 01:53:30.609021 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:30.892267 containerd[1591]: time="2026-01-20T01:53:30.887475006Z" level=info msg="Container bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:53:30.984221 containerd[1591]: time="2026-01-20T01:53:30.980592046Z" level=info msg="CreateContainer within sandbox \"35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:4,} returns container id \"bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af\"" Jan 20 01:53:30.994156 containerd[1591]: time="2026-01-20T01:53:30.990171298Z" level=info msg="StartContainer for \"bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af\"" Jan 20 01:53:31.019311 containerd[1591]: time="2026-01-20T01:53:31.009037533Z" level=info msg="connecting to shim bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af" address="unix:///run/containerd/s/bd2a1fdad2c63e6b97ea527fdb88e51d630cdf855c2be6bd3e0513bd6d003b8e" protocol=ttrpc version=3 Jan 20 01:53:31.449681 systemd[1]: Started cri-containerd-bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af.scope - libcontainer container bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af. 
Jan 20 01:53:31.696242 kubelet[3059]: E0120 01:53:31.686919 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:32.117757 containerd[1591]: time="2026-01-20T01:53:32.112501013Z" level=info msg="StartContainer for \"bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af\" returns successfully" Jan 20 01:53:32.926444 kubelet[3059]: E0120 01:53:32.923740 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:37.111313 kubelet[3059]: E0120 01:53:37.099581 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:37.111313 kubelet[3059]: E0120 01:53:37.110731 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:40.179859 containerd[1591]: time="2026-01-20T01:53:40.176091820Z" level=warning msg="container event discarded" container=c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6 type=CONTAINER_CREATED_EVENT Jan 20 01:53:40.179859 containerd[1591]: time="2026-01-20T01:53:40.179712993Z" level=warning msg="container event discarded" container=c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6 type=CONTAINER_STARTED_EVENT Jan 20 01:53:41.320298 containerd[1591]: time="2026-01-20T01:53:41.115001405Z" level=warning msg="container event discarded" container=20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b type=CONTAINER_CREATED_EVENT Jan 20 01:53:41.320298 containerd[1591]: time="2026-01-20T01:53:41.320260680Z" level=warning msg="container event discarded" container=20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b type=CONTAINER_STARTED_EVENT Jan 20 01:53:41.686578 containerd[1591]: time="2026-01-20T01:53:41.675530589Z" level=warning msg="container event discarded" container=97a22c94ea24a95a0eb0c9ec8ab37a1049228a1a481f48fbbcc5d34c6ff20d39 type=CONTAINER_CREATED_EVENT Jan 20 01:53:41.686578 containerd[1591]: time="2026-01-20T01:53:41.675608265Z" level=warning msg="container event discarded" container=97a22c94ea24a95a0eb0c9ec8ab37a1049228a1a481f48fbbcc5d34c6ff20d39 type=CONTAINER_STARTED_EVENT Jan 20 01:53:41.826446 containerd[1591]: time="2026-01-20T01:53:41.825514049Z" level=warning msg="container event discarded" container=c14ebd690017f2beb71e87ed33ec7f52613066479272544ccb71e2edfdd195f2 type=CONTAINER_CREATED_EVENT Jan 20 01:53:46.104582 containerd[1591]: time="2026-01-20T01:53:46.103837379Z" level=warning msg="container event discarded" container=c14ebd690017f2beb71e87ed33ec7f52613066479272544ccb71e2edfdd195f2 type=CONTAINER_STARTED_EVENT Jan 20 01:53:47.230944 kubelet[3059]: E0120 01:53:47.230107 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:47.265172 kubelet[3059]: E0120 01:53:47.264984 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:47.601107 kubelet[3059]: E0120 01:53:47.597935 3059 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:48.347538 kubelet[3059]: E0120 01:53:48.343679 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:49.357744 kubelet[3059]: E0120 01:53:49.357695 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:54:04.943762 systemd[1]: Started sshd@9-10.0.0.51:22-10.0.0.1:45462.service - OpenSSH per-connection server daemon (10.0.0.1:45462). Jan 20 01:54:06.223327 sshd[4915]: Accepted publickey for core from 10.0.0.1 port 45462 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:54:06.262893 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:54:06.340805 systemd-logind[1565]: New session 10 of user core. Jan 20 01:54:06.397961 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 01:54:12.103436 sshd[4918]: Connection closed by 10.0.0.1 port 45462 Jan 20 01:54:12.109066 sshd-session[4915]: pam_unix(sshd:session): session closed for user core Jan 20 01:54:12.244990 systemd[1]: sshd@9-10.0.0.51:22-10.0.0.1:45462.service: Deactivated successfully. Jan 20 01:54:12.282781 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 01:54:12.290789 systemd-logind[1565]: Session 10 logged out. Waiting for processes to exit. Jan 20 01:54:12.330795 systemd-logind[1565]: Removed session 10. Jan 20 01:54:15.340529 kubelet[3059]: E0120 01:54:15.340005 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:54:17.204683 systemd[1]: Started sshd@10-10.0.0.51:22-10.0.0.1:58494.service - OpenSSH per-connection server daemon (10.0.0.1:58494). Jan 20 01:54:17.683074 sshd[4935]: Accepted publickey for core from 10.0.0.1 port 58494 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:54:17.721959 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:54:17.785524 systemd-logind[1565]: New session 11 of user core. Jan 20 01:54:17.806663 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 01:54:18.628293 sshd[4938]: Connection closed by 10.0.0.1 port 58494 Jan 20 01:54:18.631195 sshd-session[4935]: pam_unix(sshd:session): session closed for user core Jan 20 01:54:18.664076 systemd-logind[1565]: Session 11 logged out. Waiting for processes to exit. Jan 20 01:54:18.676553 systemd[1]: sshd@10-10.0.0.51:22-10.0.0.1:58494.service: Deactivated successfully. Jan 20 01:54:18.692907 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 01:54:18.716666 systemd-logind[1565]: Removed session 11. Jan 20 01:54:23.791324 systemd[1]: Started sshd@11-10.0.0.51:22-10.0.0.1:58498.service - OpenSSH per-connection server daemon (10.0.0.1:58498). Jan 20 01:54:24.426155 sshd[4954]: Accepted publickey for core from 10.0.0.1 port 58498 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:54:24.425712 sshd-session[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:54:24.480282 systemd-logind[1565]: New session 12 of user core. 
Jan 20 01:54:24.518163 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 01:54:25.402059 sshd[4957]: Connection closed by 10.0.0.1 port 58498 Jan 20 01:54:25.401712 sshd-session[4954]: pam_unix(sshd:session): session closed for user core Jan 20 01:54:25.433988 systemd[1]: sshd@11-10.0.0.51:22-10.0.0.1:58498.service: Deactivated successfully. Jan 20 01:54:25.458762 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 01:54:25.489061 systemd-logind[1565]: Session 12 logged out. Waiting for processes to exit. Jan 20 01:54:25.494724 systemd-logind[1565]: Removed session 12. Jan 20 01:54:27.373528 kubelet[3059]: E0120 01:54:27.371545 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:54:30.350650 kubelet[3059]: E0120 01:54:30.346847 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:54:30.541781 systemd[1]: Started sshd@12-10.0.0.51:22-10.0.0.1:40734.service - OpenSSH per-connection server daemon (10.0.0.1:40734). Jan 20 01:54:30.961587 sshd[4972]: Accepted publickey for core from 10.0.0.1 port 40734 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:54:30.971653 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:54:31.039568 systemd-logind[1565]: New session 13 of user core. Jan 20 01:54:31.067693 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 01:54:32.109470 sshd[4975]: Connection closed by 10.0.0.1 port 40734 Jan 20 01:54:32.109549 sshd-session[4972]: pam_unix(sshd:session): session closed for user core Jan 20 01:54:32.138080 systemd[1]: sshd@12-10.0.0.51:22-10.0.0.1:40734.service: Deactivated successfully. Jan 20 01:54:32.153620 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 01:54:32.186457 systemd-logind[1565]: Session 13 logged out. Waiting for processes to exit. Jan 20 01:54:32.201602 systemd-logind[1565]: Removed session 13. Jan 20 01:54:37.207485 systemd[1]: Started sshd@13-10.0.0.51:22-10.0.0.1:38198.service - OpenSSH per-connection server daemon (10.0.0.1:38198). Jan 20 01:54:37.750823 sshd[4989]: Accepted publickey for core from 10.0.0.1 port 38198 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:54:37.752559 sshd-session[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:54:37.823835 systemd-logind[1565]: New session 14 of user core. Jan 20 01:54:37.854880 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 01:54:38.787565 sshd[4992]: Connection closed by 10.0.0.1 port 38198 Jan 20 01:54:38.791877 sshd-session[4989]: pam_unix(sshd:session): session closed for user core Jan 20 01:54:38.856263 systemd[1]: sshd@13-10.0.0.51:22-10.0.0.1:38198.service: Deactivated successfully. Jan 20 01:54:38.925699 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 01:54:38.944258 systemd-logind[1565]: Session 14 logged out. Waiting for processes to exit. Jan 20 01:54:38.947072 systemd-logind[1565]: Removed session 14. Jan 20 01:54:43.836807 systemd[1]: Started sshd@14-10.0.0.51:22-10.0.0.1:38202.service - OpenSSH per-connection server daemon (10.0.0.1:38202). 
Jan 20 01:54:44.114501 sshd[5008]: Accepted publickey for core from 10.0.0.1 port 38202 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:54:44.110050 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:54:44.191310 systemd-logind[1565]: New session 15 of user core. Jan 20 01:54:44.216863 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 01:54:45.176497 sshd[5011]: Connection closed by 10.0.0.1 port 38202 Jan 20 01:54:45.197672 sshd-session[5008]: pam_unix(sshd:session): session closed for user core Jan 20 01:54:45.262682 systemd[1]: sshd@14-10.0.0.51:22-10.0.0.1:38202.service: Deactivated successfully. Jan 20 01:54:45.302125 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 01:54:45.364054 systemd-logind[1565]: Session 15 logged out. Waiting for processes to exit. Jan 20 01:54:45.411849 systemd-logind[1565]: Removed session 15. Jan 20 01:54:49.477851 kubelet[3059]: E0120 01:54:49.477621 3059 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.963s" Jan 20 01:54:49.540812 kubelet[3059]: E0120 01:54:49.540761 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:54:50.231076 systemd[1]: Started sshd@15-10.0.0.51:22-10.0.0.1:56954.service - OpenSSH per-connection server daemon (10.0.0.1:56954). Jan 20 01:54:50.676896 sshd[5025]: Accepted publickey for core from 10.0.0.1 port 56954 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:54:50.714816 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:54:50.818096 systemd-logind[1565]: New session 16 of user core. Jan 20 01:54:50.871713 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 01:54:52.179539 sshd[5028]: Connection closed by 10.0.0.1 port 56954 Jan 20 01:54:52.191840 sshd-session[5025]: pam_unix(sshd:session): session closed for user core Jan 20 01:54:52.240954 systemd[1]: sshd@15-10.0.0.51:22-10.0.0.1:56954.service: Deactivated successfully. Jan 20 01:54:52.268882 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 01:54:52.307022 systemd-logind[1565]: Session 16 logged out. Waiting for processes to exit. Jan 20 01:54:52.344104 systemd-logind[1565]: Removed session 16. Jan 20 01:54:57.309167 systemd[1]: Started sshd@16-10.0.0.51:22-10.0.0.1:56782.service - OpenSSH per-connection server daemon (10.0.0.1:56782). Jan 20 01:54:57.659096 sshd[5045]: Accepted publickey for core from 10.0.0.1 port 56782 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:54:57.681799 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:54:57.757148 systemd-logind[1565]: New session 17 of user core. Jan 20 01:54:57.803031 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 01:54:58.870566 sshd[5048]: Connection closed by 10.0.0.1 port 56782 Jan 20 01:54:58.880808 sshd-session[5045]: pam_unix(sshd:session): session closed for user core Jan 20 01:54:58.904965 systemd[1]: sshd@16-10.0.0.51:22-10.0.0.1:56782.service: Deactivated successfully. Jan 20 01:54:58.920097 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 01:54:58.940083 systemd-logind[1565]: Session 17 logged out. Waiting for processes to exit. 
Jan 20 01:54:58.965823 systemd-logind[1565]: Removed session 17. Jan 20 01:55:04.083956 systemd[1]: Started sshd@17-10.0.0.51:22-10.0.0.1:56784.service - OpenSSH per-connection server daemon (10.0.0.1:56784). Jan 20 01:55:04.932216 sshd[5062]: Accepted publickey for core from 10.0.0.1 port 56784 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:55:04.965233 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:55:05.054728 systemd-logind[1565]: New session 18 of user core. Jan 20 01:55:05.074093 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 01:55:05.368738 kubelet[3059]: E0120 01:55:05.362496 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:06.304106 sshd[5065]: Connection closed by 10.0.0.1 port 56784 Jan 20 01:55:06.301351 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Jan 20 01:55:06.352545 kubelet[3059]: E0120 01:55:06.352250 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:06.366847 systemd[1]: sshd@17-10.0.0.51:22-10.0.0.1:56784.service: Deactivated successfully. Jan 20 01:55:06.387163 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 01:55:06.401099 systemd-logind[1565]: Session 18 logged out. Waiting for processes to exit. Jan 20 01:55:06.411213 systemd-logind[1565]: Removed session 18. Jan 20 01:55:09.339843 kubelet[3059]: E0120 01:55:09.339566 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:11.404100 systemd[1]: Started sshd@18-10.0.0.51:22-10.0.0.1:54866.service - OpenSSH per-connection server daemon (10.0.0.1:54866). Jan 20 01:55:12.038773 sshd[5080]: Accepted publickey for core from 10.0.0.1 port 54866 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:55:12.063862 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:55:12.130934 systemd-logind[1565]: New session 19 of user core. Jan 20 01:55:12.205815 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 01:55:13.593802 sshd[5083]: Connection closed by 10.0.0.1 port 54866 Jan 20 01:55:13.670704 sshd-session[5080]: pam_unix(sshd:session): session closed for user core Jan 20 01:55:13.718290 systemd[1]: sshd@18-10.0.0.51:22-10.0.0.1:54866.service: Deactivated successfully. Jan 20 01:55:13.771168 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 01:55:13.790462 systemd-logind[1565]: Session 19 logged out. Waiting for processes to exit. Jan 20 01:55:13.829688 systemd-logind[1565]: Removed session 19. Jan 20 01:55:15.360731 kubelet[3059]: E0120 01:55:15.358215 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:18.663098 systemd[1]: Started sshd@19-10.0.0.51:22-10.0.0.1:55860.service - OpenSSH per-connection server daemon (10.0.0.1:55860). 
Jan 20 01:55:19.099103 sshd[5097]: Accepted publickey for core from 10.0.0.1 port 55860 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:55:19.107936 sshd-session[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:55:19.186598 systemd-logind[1565]: New session 20 of user core. Jan 20 01:55:19.223218 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 01:55:20.434624 sshd[5100]: Connection closed by 10.0.0.1 port 55860 Jan 20 01:55:20.435814 sshd-session[5097]: pam_unix(sshd:session): session closed for user core Jan 20 01:55:20.477706 systemd[1]: sshd@19-10.0.0.51:22-10.0.0.1:55860.service: Deactivated successfully. Jan 20 01:55:20.481955 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 01:55:20.508959 systemd-logind[1565]: Session 20 logged out. Waiting for processes to exit. Jan 20 01:55:20.526106 systemd-logind[1565]: Removed session 20. Jan 20 01:55:25.602293 systemd[1]: Started sshd@20-10.0.0.51:22-10.0.0.1:38942.service - OpenSSH per-connection server daemon (10.0.0.1:38942). Jan 20 01:55:26.277794 sshd[5116]: Accepted publickey for core from 10.0.0.1 port 38942 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:55:26.304472 sshd-session[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:55:26.342208 kubelet[3059]: E0120 01:55:26.340527 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:26.412920 systemd-logind[1565]: New session 21 of user core. Jan 20 01:55:26.477475 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 01:55:27.420925 sshd[5119]: Connection closed by 10.0.0.1 port 38942 Jan 20 01:55:27.420567 sshd-session[5116]: pam_unix(sshd:session): session closed for user core Jan 20 01:55:27.439318 systemd-logind[1565]: Session 21 logged out. Waiting for processes to exit. Jan 20 01:55:27.440114 systemd[1]: sshd@20-10.0.0.51:22-10.0.0.1:38942.service: Deactivated successfully. Jan 20 01:55:27.451201 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 01:55:27.467289 systemd-logind[1565]: Removed session 21. 
Jan 20 01:55:28.628908 containerd[1591]: time="2026-01-20T01:55:28.628779548Z" level=warning msg="container event discarded" container=4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332 type=CONTAINER_CREATED_EVENT Jan 20 01:55:29.411310 containerd[1591]: time="2026-01-20T01:55:29.411165998Z" level=warning msg="container event discarded" container=4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332 type=CONTAINER_STARTED_EVENT Jan 20 01:55:30.605858 containerd[1591]: time="2026-01-20T01:55:30.601601669Z" level=warning msg="container event discarded" container=4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332 type=CONTAINER_STOPPED_EVENT Jan 20 01:55:31.902184 containerd[1591]: time="2026-01-20T01:55:31.902083705Z" level=warning msg="container event discarded" container=c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e type=CONTAINER_CREATED_EVENT Jan 20 01:55:32.234282 containerd[1591]: time="2026-01-20T01:55:32.232089927Z" level=warning msg="container event discarded" container=c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e type=CONTAINER_STARTED_EVENT Jan 20 01:55:32.507316 systemd[1]: Started sshd@21-10.0.0.51:22-10.0.0.1:38954.service - OpenSSH per-connection server daemon (10.0.0.1:38954). Jan 20 01:55:32.745296 containerd[1591]: time="2026-01-20T01:55:32.739144320Z" level=warning msg="container event discarded" container=c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e type=CONTAINER_STOPPED_EVENT Jan 20 01:55:33.004041 sshd[5134]: Accepted publickey for core from 10.0.0.1 port 38954 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:55:33.020510 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:55:33.099020 systemd-logind[1565]: New session 22 of user core. Jan 20 01:55:33.139266 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 01:55:34.024211 containerd[1591]: time="2026-01-20T01:55:34.024101225Z" level=warning msg="container event discarded" container=7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b type=CONTAINER_CREATED_EVENT Jan 20 01:55:34.488050 sshd[5137]: Connection closed by 10.0.0.1 port 38954 Jan 20 01:55:34.489076 sshd-session[5134]: pam_unix(sshd:session): session closed for user core Jan 20 01:55:34.568001 systemd[1]: sshd@21-10.0.0.51:22-10.0.0.1:38954.service: Deactivated successfully. Jan 20 01:55:34.610334 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 01:55:34.653005 systemd-logind[1565]: Session 22 logged out. Waiting for processes to exit. Jan 20 01:55:34.668585 systemd-logind[1565]: Removed session 22. 
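The recurring "container event discarded" warnings indicate containerd dropped events that no subscriber drained in time, which is why they surface here only as after-the-fact notices. A consumer that does drain the stream looks roughly like this, using the containerd Go client; the socket path, the k8s.io namespace, and the topic filter syntax are the usual defaults, assumed here rather than taken from the log:

    package main

    import (
        "context"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed containers live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Subscribe streams event envelopes; the filter keeps container topics.
        ch, errs := client.Subscribe(ctx, `topic~="/containers/"`)
        for {
            select {
            case env := <-ch:
                if env == nil {
                    return // stream closed
                }
                log.Printf("%s %s %s", env.Timestamp, env.Namespace, env.Topic)
            case err := <-errs:
                log.Fatal(err)
            }
        }
    }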
Jan 20 01:55:34.997481 containerd[1591]: time="2026-01-20T01:55:34.994825527Z" level=warning msg="container event discarded" container=7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b type=CONTAINER_STARTED_EVENT Jan 20 01:55:35.652226 containerd[1591]: time="2026-01-20T01:55:35.652092144Z" level=warning msg="container event discarded" container=7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b type=CONTAINER_STOPPED_EVENT Jan 20 01:55:37.057686 containerd[1591]: time="2026-01-20T01:55:37.057562571Z" level=warning msg="container event discarded" container=ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f type=CONTAINER_CREATED_EVENT Jan 20 01:55:38.323581 containerd[1591]: time="2026-01-20T01:55:38.310734467Z" level=warning msg="container event discarded" container=ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f type=CONTAINER_STARTED_EVENT Jan 20 01:55:39.180869 containerd[1591]: time="2026-01-20T01:55:39.180705814Z" level=warning msg="container event discarded" container=ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f type=CONTAINER_STOPPED_EVENT Jan 20 01:55:39.586744 systemd[1]: Started sshd@22-10.0.0.51:22-10.0.0.1:42756.service - OpenSSH per-connection server daemon (10.0.0.1:42756). Jan 20 01:55:40.131037 sshd[5154]: Accepted publickey for core from 10.0.0.1 port 42756 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:55:40.139229 sshd-session[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:55:40.201195 systemd-logind[1565]: New session 23 of user core. Jan 20 01:55:40.262313 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 20 01:55:40.343984 containerd[1591]: time="2026-01-20T01:55:40.342102155Z" level=warning msg="container event discarded" container=bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4 type=CONTAINER_CREATED_EVENT Jan 20 01:55:41.381120 containerd[1591]: time="2026-01-20T01:55:41.380318414Z" level=warning msg="container event discarded" container=bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4 type=CONTAINER_STARTED_EVENT Jan 20 01:55:41.752290 sshd[5157]: Connection closed by 10.0.0.1 port 42756 Jan 20 01:55:41.771634 sshd-session[5154]: pam_unix(sshd:session): session closed for user core Jan 20 01:55:41.826547 systemd[1]: sshd@22-10.0.0.51:22-10.0.0.1:42756.service: Deactivated successfully. Jan 20 01:55:41.836625 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 01:55:41.874214 systemd-logind[1565]: Session 23 logged out. Waiting for processes to exit. Jan 20 01:55:41.896560 systemd-logind[1565]: Removed session 23. Jan 20 01:55:43.192838 containerd[1591]: time="2026-01-20T01:55:43.176620880Z" level=warning msg="container event discarded" container=964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6 type=CONTAINER_CREATED_EVENT Jan 20 01:55:44.384138 containerd[1591]: time="2026-01-20T01:55:44.383455650Z" level=warning msg="container event discarded" container=964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6 type=CONTAINER_STARTED_EVENT Jan 20 01:55:46.885235 systemd[1]: Started sshd@23-10.0.0.51:22-10.0.0.1:37048.service - OpenSSH per-connection server daemon (10.0.0.1:37048). 
Jan 20 01:55:47.304652 sshd[5172]: Accepted publickey for core from 10.0.0.1 port 37048 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:55:47.365697 sshd-session[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:55:47.478685 systemd-logind[1565]: New session 24 of user core. Jan 20 01:55:47.504225 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 01:55:48.526083 sshd[5175]: Connection closed by 10.0.0.1 port 37048 Jan 20 01:55:48.517856 sshd-session[5172]: pam_unix(sshd:session): session closed for user core Jan 20 01:55:48.616470 systemd[1]: sshd@23-10.0.0.51:22-10.0.0.1:37048.service: Deactivated successfully. Jan 20 01:55:48.691872 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 01:55:48.702548 systemd-logind[1565]: Session 24 logged out. Waiting for processes to exit. Jan 20 01:55:48.783011 systemd-logind[1565]: Removed session 24. Jan 20 01:55:51.373499 kubelet[3059]: E0120 01:55:51.343297 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:53.588701 systemd[1]: Started sshd@24-10.0.0.51:22-10.0.0.1:37062.service - OpenSSH per-connection server daemon (10.0.0.1:37062). Jan 20 01:55:54.053130 sshd[5191]: Accepted publickey for core from 10.0.0.1 port 37062 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:55:54.062664 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:55:54.102105 systemd-logind[1565]: New session 25 of user core. Jan 20 01:55:54.134533 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 01:55:55.806840 sshd[5194]: Connection closed by 10.0.0.1 port 37062 Jan 20 01:55:55.815913 sshd-session[5191]: pam_unix(sshd:session): session closed for user core Jan 20 01:55:55.919561 systemd[1]: sshd@24-10.0.0.51:22-10.0.0.1:37062.service: Deactivated successfully. Jan 20 01:55:55.972638 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 01:55:56.058245 systemd-logind[1565]: Session 25 logged out. Waiting for processes to exit. Jan 20 01:55:56.089486 systemd-logind[1565]: Removed session 25. Jan 20 01:55:56.367617 kubelet[3059]: E0120 01:55:56.365856 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:00.367192 kubelet[3059]: E0120 01:56:00.366636 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:00.901921 systemd[1]: Started sshd@25-10.0.0.51:22-10.0.0.1:43752.service - OpenSSH per-connection server daemon (10.0.0.1:43752). Jan 20 01:56:01.423710 sshd[5208]: Accepted publickey for core from 10.0.0.1 port 43752 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:56:01.451948 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:56:01.539722 systemd-logind[1565]: New session 26 of user core. Jan 20 01:56:01.594713 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 20 01:56:02.756614 sshd[5211]: Connection closed by 10.0.0.1 port 43752 Jan 20 01:56:02.757893 sshd-session[5208]: pam_unix(sshd:session): session closed for user core Jan 20 01:56:02.786850 systemd[1]: sshd@25-10.0.0.51:22-10.0.0.1:43752.service: Deactivated successfully. Jan 20 01:56:02.824987 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 01:56:02.879967 systemd-logind[1565]: Session 26 logged out. Waiting for processes to exit. Jan 20 01:56:02.892858 systemd-logind[1565]: Removed session 26. Jan 20 01:56:07.908941 systemd[1]: Started sshd@26-10.0.0.51:22-10.0.0.1:44230.service - OpenSSH per-connection server daemon (10.0.0.1:44230). Jan 20 01:56:08.464571 sshd[5226]: Accepted publickey for core from 10.0.0.1 port 44230 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:56:08.492230 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:56:08.576585 systemd-logind[1565]: New session 27 of user core. Jan 20 01:56:08.601804 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 01:56:09.730531 sshd[5229]: Connection closed by 10.0.0.1 port 44230 Jan 20 01:56:09.729992 sshd-session[5226]: pam_unix(sshd:session): session closed for user core Jan 20 01:56:09.783531 systemd[1]: sshd@26-10.0.0.51:22-10.0.0.1:44230.service: Deactivated successfully. Jan 20 01:56:09.832280 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 01:56:09.907507 systemd-logind[1565]: Session 27 logged out. Waiting for processes to exit. Jan 20 01:56:09.939918 systemd-logind[1565]: Removed session 27. Jan 20 01:56:14.831340 systemd[1]: Started sshd@27-10.0.0.51:22-10.0.0.1:44928.service - OpenSSH per-connection server daemon (10.0.0.1:44928). Jan 20 01:56:15.237606 sshd[5244]: Accepted publickey for core from 10.0.0.1 port 44928 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:56:15.278991 sshd-session[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:56:15.407458 systemd-logind[1565]: New session 28 of user core. Jan 20 01:56:15.436922 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 01:56:17.702499 sshd[5247]: Connection closed by 10.0.0.1 port 44928 Jan 20 01:56:17.705579 sshd-session[5244]: pam_unix(sshd:session): session closed for user core Jan 20 01:56:17.731112 systemd[1]: sshd@27-10.0.0.51:22-10.0.0.1:44928.service: Deactivated successfully. Jan 20 01:56:17.757786 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 01:56:17.764927 systemd-logind[1565]: Session 28 logged out. Waiting for processes to exit. Jan 20 01:56:17.776878 systemd-logind[1565]: Removed session 28. Jan 20 01:56:20.346458 kubelet[3059]: E0120 01:56:20.345722 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:22.830727 systemd[1]: Started sshd@28-10.0.0.51:22-10.0.0.1:44934.service - OpenSSH per-connection server daemon (10.0.0.1:44934). Jan 20 01:56:23.295060 sshd[5264]: Accepted publickey for core from 10.0.0.1 port 44934 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:56:23.299092 sshd-session[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:56:23.386619 systemd-logind[1565]: New session 29 of user core. Jan 20 01:56:23.405705 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 20 01:56:24.264818 sshd[5267]: Connection closed by 10.0.0.1 port 44934 Jan 20 01:56:24.270648 sshd-session[5264]: pam_unix(sshd:session): session closed for user core Jan 20 01:56:24.296295 systemd[1]: sshd@28-10.0.0.51:22-10.0.0.1:44934.service: Deactivated successfully. Jan 20 01:56:24.334727 systemd[1]: session-29.scope: Deactivated successfully. Jan 20 01:56:24.362117 systemd-logind[1565]: Session 29 logged out. Waiting for processes to exit. Jan 20 01:56:24.376607 systemd-logind[1565]: Removed session 29. Jan 20 01:56:26.343034 kubelet[3059]: E0120 01:56:26.341719 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:29.397740 systemd[1]: Started sshd@29-10.0.0.51:22-10.0.0.1:58896.service - OpenSSH per-connection server daemon (10.0.0.1:58896). Jan 20 01:56:29.866057 sshd[5282]: Accepted publickey for core from 10.0.0.1 port 58896 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:56:29.865628 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:56:29.920636 systemd-logind[1565]: New session 30 of user core. Jan 20 01:56:29.955644 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 20 01:56:30.877126 sshd[5285]: Connection closed by 10.0.0.1 port 58896 Jan 20 01:56:30.878856 sshd-session[5282]: pam_unix(sshd:session): session closed for user core Jan 20 01:56:30.935119 systemd[1]: sshd@29-10.0.0.51:22-10.0.0.1:58896.service: Deactivated successfully. Jan 20 01:56:30.953327 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 01:56:30.977638 systemd-logind[1565]: Session 30 logged out. Waiting for processes to exit. Jan 20 01:56:30.980807 systemd-logind[1565]: Removed session 30. Jan 20 01:56:36.008012 systemd[1]: Started sshd@30-10.0.0.51:22-10.0.0.1:37810.service - OpenSSH per-connection server daemon (10.0.0.1:37810). Jan 20 01:56:36.420621 kubelet[3059]: E0120 01:56:36.403605 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:36.691053 sshd[5301]: Accepted publickey for core from 10.0.0.1 port 37810 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:56:36.715792 sshd-session[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:56:36.794532 systemd-logind[1565]: New session 31 of user core. Jan 20 01:56:36.820942 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 20 01:56:37.663214 sshd[5304]: Connection closed by 10.0.0.1 port 37810 Jan 20 01:56:37.668604 sshd-session[5301]: pam_unix(sshd:session): session closed for user core Jan 20 01:56:37.718728 systemd[1]: sshd@30-10.0.0.51:22-10.0.0.1:37810.service: Deactivated successfully. Jan 20 01:56:37.766042 systemd[1]: session-31.scope: Deactivated successfully. Jan 20 01:56:37.801817 systemd-logind[1565]: Session 31 logged out. Waiting for processes to exit. Jan 20 01:56:37.829103 systemd-logind[1565]: Removed session 31. 
Jan 20 01:56:39.422117 kubelet[3059]: E0120 01:56:39.415489 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:41.360244 kubelet[3059]: E0120 01:56:41.360178 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:42.809489 systemd[1]: Started sshd@31-10.0.0.51:22-10.0.0.1:37812.service - OpenSSH per-connection server daemon (10.0.0.1:37812). Jan 20 01:56:43.280753 sshd[5321]: Accepted publickey for core from 10.0.0.1 port 37812 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:56:43.308978 sshd-session[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:56:43.361279 systemd-logind[1565]: New session 32 of user core. Jan 20 01:56:43.385268 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 20 01:56:44.432253 sshd[5324]: Connection closed by 10.0.0.1 port 37812 Jan 20 01:56:44.434908 sshd-session[5321]: pam_unix(sshd:session): session closed for user core Jan 20 01:56:44.497271 systemd[1]: sshd@31-10.0.0.51:22-10.0.0.1:37812.service: Deactivated successfully. Jan 20 01:56:44.529699 systemd[1]: session-32.scope: Deactivated successfully. Jan 20 01:56:44.557050 systemd-logind[1565]: Session 32 logged out. Waiting for processes to exit. Jan 20 01:56:44.593240 systemd-logind[1565]: Removed session 32. Jan 20 01:56:49.595573 systemd[1]: Started sshd@32-10.0.0.51:22-10.0.0.1:37390.service - OpenSSH per-connection server daemon (10.0.0.1:37390). Jan 20 01:56:54.603636 sshd[5339]: Accepted publickey for core from 10.0.0.1 port 37390 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:56:54.615077 sshd-session[5339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:56:54.680740 systemd-logind[1565]: New session 33 of user core. Jan 20 01:56:54.723518 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 20 01:56:55.767874 sshd[5344]: Connection closed by 10.0.0.1 port 37390 Jan 20 01:56:55.772034 sshd-session[5339]: pam_unix(sshd:session): session closed for user core Jan 20 01:56:55.806856 systemd[1]: sshd@32-10.0.0.51:22-10.0.0.1:37390.service: Deactivated successfully. Jan 20 01:56:55.844138 systemd[1]: session-33.scope: Deactivated successfully. Jan 20 01:56:55.877352 systemd-logind[1565]: Session 33 logged out. Waiting for processes to exit. Jan 20 01:56:55.887239 systemd-logind[1565]: Removed session 33. Jan 20 01:57:08.312232 systemd[1]: Started sshd@33-10.0.0.51:22-10.0.0.1:52356.service - OpenSSH per-connection server daemon (10.0.0.1:52356). Jan 20 01:57:08.373865 kubelet[3059]: E0120 01:57:08.373810 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:09.226683 sshd[5360]: Accepted publickey for core from 10.0.0.1 port 52356 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:57:09.276686 sshd-session[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:57:09.366728 systemd-logind[1565]: New session 34 of user core. Jan 20 01:57:09.427817 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 20 01:57:10.372250 kubelet[3059]: E0120 01:57:10.371008 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:11.340643 sshd[5364]: Connection closed by 10.0.0.1 port 52356 Jan 20 01:57:11.384602 sshd-session[5360]: pam_unix(sshd:session): session closed for user core Jan 20 01:57:11.425811 systemd[1]: sshd@33-10.0.0.51:22-10.0.0.1:52356.service: Deactivated successfully. Jan 20 01:57:11.454253 systemd[1]: session-34.scope: Deactivated successfully. Jan 20 01:57:11.490605 systemd-logind[1565]: Session 34 logged out. Waiting for processes to exit. Jan 20 01:57:11.511764 systemd-logind[1565]: Removed session 34. Jan 20 01:57:11.649263 containerd[1591]: time="2026-01-20T01:57:11.637828062Z" level=warning msg="container event discarded" container=5e528607aa47b5aa6801bd5c6ac00d5e603bafd821f24a032f4a337016ab6279 type=CONTAINER_CREATED_EVENT Jan 20 01:57:11.649263 containerd[1591]: time="2026-01-20T01:57:11.637995413Z" level=warning msg="container event discarded" container=5e528607aa47b5aa6801bd5c6ac00d5e603bafd821f24a032f4a337016ab6279 type=CONTAINER_STARTED_EVENT Jan 20 01:57:11.649263 containerd[1591]: time="2026-01-20T01:57:11.638011073Z" level=warning msg="container event discarded" container=bf05d8603bd8f675e2ea88b31a70434a59e3f7b60e36a7e1c92efe5304de759d type=CONTAINER_CREATED_EVENT Jan 20 01:57:11.649263 containerd[1591]: time="2026-01-20T01:57:11.638020701Z" level=warning msg="container event discarded" container=bf05d8603bd8f675e2ea88b31a70434a59e3f7b60e36a7e1c92efe5304de759d type=CONTAINER_STARTED_EVENT Jan 20 01:57:12.050911 containerd[1591]: time="2026-01-20T01:57:12.047107350Z" level=warning msg="container event discarded" container=72fb1a92ab0bb1f421062d946956d5e26be1dab14bc384afcc70f1e56f6e6ef4 type=CONTAINER_CREATED_EVENT Jan 20 01:57:12.200736 containerd[1591]: time="2026-01-20T01:57:12.200557676Z" level=warning msg="container event discarded" container=cdd85e6874630e37dc9e5949c9038b1f7390aec6f0bc72474c7dd83b8dbaed16 type=CONTAINER_CREATED_EVENT Jan 20 01:57:13.568895 containerd[1591]: time="2026-01-20T01:57:13.566933364Z" level=warning msg="container event discarded" container=cdd85e6874630e37dc9e5949c9038b1f7390aec6f0bc72474c7dd83b8dbaed16 type=CONTAINER_STARTED_EVENT Jan 20 01:57:13.756321 containerd[1591]: time="2026-01-20T01:57:13.750152835Z" level=warning msg="container event discarded" container=72fb1a92ab0bb1f421062d946956d5e26be1dab14bc384afcc70f1e56f6e6ef4 type=CONTAINER_STARTED_EVENT Jan 20 01:57:16.355430 kubelet[3059]: E0120 01:57:16.349681 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:16.453335 systemd[1]: Started sshd@34-10.0.0.51:22-10.0.0.1:60506.service - OpenSSH per-connection server daemon (10.0.0.1:60506). Jan 20 01:57:16.857330 sshd[5380]: Accepted publickey for core from 10.0.0.1 port 60506 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:57:16.882341 sshd-session[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:57:16.939564 systemd-logind[1565]: New session 35 of user core. Jan 20 01:57:17.008611 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jan 20 01:57:18.698809 sshd[5383]: Connection closed by 10.0.0.1 port 60506 Jan 20 01:57:18.708492 sshd-session[5380]: pam_unix(sshd:session): session closed for user core Jan 20 01:57:18.777860 systemd[1]: sshd@34-10.0.0.51:22-10.0.0.1:60506.service: Deactivated successfully. Jan 20 01:57:18.784895 systemd-logind[1565]: Session 35 logged out. Waiting for processes to exit. Jan 20 01:57:18.857074 systemd[1]: session-35.scope: Deactivated successfully. Jan 20 01:57:18.912211 systemd-logind[1565]: Removed session 35. Jan 20 01:57:23.850249 systemd[1]: Started sshd@35-10.0.0.51:22-10.0.0.1:60516.service - OpenSSH per-connection server daemon (10.0.0.1:60516). Jan 20 01:57:24.234292 sshd[5399]: Accepted publickey for core from 10.0.0.1 port 60516 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:57:24.261900 sshd-session[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:57:24.358100 systemd-logind[1565]: New session 36 of user core. Jan 20 01:57:24.420250 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 20 01:57:25.731892 sshd[5404]: Connection closed by 10.0.0.1 port 60516 Jan 20 01:57:25.727889 sshd-session[5399]: pam_unix(sshd:session): session closed for user core Jan 20 01:57:25.788829 systemd-logind[1565]: Session 36 logged out. Waiting for processes to exit. Jan 20 01:57:25.792177 systemd[1]: sshd@35-10.0.0.51:22-10.0.0.1:60516.service: Deactivated successfully. Jan 20 01:57:25.826338 systemd[1]: session-36.scope: Deactivated successfully. Jan 20 01:57:25.860060 systemd-logind[1565]: Removed session 36. Jan 20 01:57:30.851580 systemd[1]: Started sshd@36-10.0.0.51:22-10.0.0.1:40760.service - OpenSSH per-connection server daemon (10.0.0.1:40760). Jan 20 01:57:31.289063 sshd[5419]: Accepted publickey for core from 10.0.0.1 port 40760 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:57:31.352333 sshd-session[5419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:57:31.527911 systemd-logind[1565]: New session 37 of user core. Jan 20 01:57:31.560291 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 20 01:57:32.382747 sshd[5423]: Connection closed by 10.0.0.1 port 40760 Jan 20 01:57:32.386818 sshd-session[5419]: pam_unix(sshd:session): session closed for user core Jan 20 01:57:32.420555 systemd[1]: sshd@36-10.0.0.51:22-10.0.0.1:40760.service: Deactivated successfully. Jan 20 01:57:32.443240 systemd[1]: session-37.scope: Deactivated successfully. Jan 20 01:57:32.470834 systemd-logind[1565]: Session 37 logged out. Waiting for processes to exit. Jan 20 01:57:32.502045 systemd-logind[1565]: Removed session 37. Jan 20 01:57:44.054903 systemd[1]: Started sshd@37-10.0.0.51:22-10.0.0.1:32772.service - OpenSSH per-connection server daemon (10.0.0.1:32772). 
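
Almost everything else in this window is the same SSH lifecycle repeated once per connection: Accepted publickey, pam_unix session opened, New session N, Connection closed, session-N.scope deactivated, Removed session N. When reviewing a journal like this one, pairing the open and close lines per session id condenses it considerably. An illustrative reducer, assuming one journal entry per line as journalctl emits them; this is an editor sketch, not a tool referenced by the log:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	tsRe      = regexp.MustCompile(`^([A-Z][a-z]{2} \d{1,2} \d{2}:\d{2}:\d{2}\.\d{6})`)
	newRe     = regexp.MustCompile(`New session (\d+) of user`)
	removedRe = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	opened := map[string]time.Time{} // session id -> open time
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		m := tsRe.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		// Matches the "Jan 20 01:56:24.264818" prefix used in this journal.
		ts, err := time.Parse("Jan 2 15:04:05.000000", m[1])
		if err != nil {
			continue
		}
		if nm := newRe.FindStringSubmatch(line); nm != nil {
			opened[nm[1]] = ts
		} else if rm := removedRe.FindStringSubmatch(line); rm != nil {
			if start, ok := opened[rm[1]]; ok {
				fmt.Printf("session %s lasted %s\n", rm[1], ts.Sub(start))
				delete(opened, rm[1])
			}
		}
	}
}
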
Jan 20 01:57:44.466949 kubelet[3059]: E0120 01:57:44.458164 3059 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.117s" Jan 20 01:57:45.701336 kubelet[3059]: E0120 01:57:45.695196 3059 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.237s" Jan 20 01:57:45.875538 kubelet[3059]: E0120 01:57:45.872206 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:46.183239 sshd[5440]: Accepted publickey for core from 10.0.0.1 port 32772 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:57:46.201859 sshd-session[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:57:46.279954 systemd-logind[1565]: New session 38 of user core. Jan 20 01:57:46.309969 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 20 01:57:48.231053 sshd[5443]: Connection closed by 10.0.0.1 port 32772 Jan 20 01:57:48.238668 sshd-session[5440]: pam_unix(sshd:session): session closed for user core Jan 20 01:57:48.341594 systemd[1]: sshd@37-10.0.0.51:22-10.0.0.1:32772.service: Deactivated successfully. Jan 20 01:57:48.380261 systemd[1]: session-38.scope: Deactivated successfully. Jan 20 01:57:48.418450 systemd-logind[1565]: Session 38 logged out. Waiting for processes to exit. Jan 20 01:57:48.436497 systemd-logind[1565]: Removed session 38. Jan 20 01:57:53.374720 systemd[1]: Started sshd@38-10.0.0.51:22-10.0.0.1:34420.service - OpenSSH per-connection server daemon (10.0.0.1:34420). Jan 20 01:57:54.026054 sshd[5460]: Accepted publickey for core from 10.0.0.1 port 34420 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:57:54.088583 sshd-session[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:57:54.331227 systemd-logind[1565]: New session 39 of user core. Jan 20 01:57:54.361176 kubelet[3059]: E0120 01:57:54.361131 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:54.386668 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 20 01:57:56.591912 sshd[5463]: Connection closed by 10.0.0.1 port 34420 Jan 20 01:57:56.578289 sshd-session[5460]: pam_unix(sshd:session): session closed for user core Jan 20 01:57:56.671647 systemd[1]: sshd@38-10.0.0.51:22-10.0.0.1:34420.service: Deactivated successfully. Jan 20 01:57:56.735264 systemd[1]: session-39.scope: Deactivated successfully. Jan 20 01:57:56.744994 systemd-logind[1565]: Session 39 logged out. Waiting for processes to exit. Jan 20 01:57:56.787569 systemd-logind[1565]: Removed session 39. Jan 20 01:57:57.364456 kubelet[3059]: E0120 01:57:57.357505 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:01.666227 systemd[1]: Started sshd@39-10.0.0.51:22-10.0.0.1:51670.service - OpenSSH per-connection server daemon (10.0.0.1:51670). 
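
The kubelet.go:2627 entries flag housekeeping passes that overran their 1s budget, by a factor of eleven in the worst case above; that is typically a symptom of the node being starved for CPU or blocked on I/O rather than a bug in housekeeping itself. The shape is a plain watchdog around a periodic task; a generic sketch with illustrative names and thresholds, not kubelet's code:

package main

import (
	"log"
	"time"
)

// runWithWatchdog runs task once per interval and warns when a pass
// exceeds expected, mirroring kubelet's "Housekeeping took longer than
// expected" message.
func runWithWatchdog(rounds int, interval, expected time.Duration, task func()) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for i := 0; i < rounds; i++ {
		<-ticker.C
		start := time.Now()
		task()
		if actual := time.Since(start); actual > expected {
			log.Printf("housekeeping took longer than expected: expected=%s actual=%s", expected, actual)
		}
	}
}

func main() {
	// Simulate a pass that overruns its 1s budget.
	runWithWatchdog(3, 2*time.Second, time.Second, func() {
		time.Sleep(1500 * time.Millisecond)
	})
}
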
Jan 20 01:58:02.209008 sshd[5478]: Accepted publickey for core from 10.0.0.1 port 51670 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:58:02.212692 sshd-session[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:02.280719 systemd-logind[1565]: New session 40 of user core. Jan 20 01:58:02.344684 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 20 01:58:02.347104 kubelet[3059]: E0120 01:58:02.346056 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:04.477591 sshd[5481]: Connection closed by 10.0.0.1 port 51670 Jan 20 01:58:04.473653 sshd-session[5478]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:04.604510 systemd[1]: sshd@39-10.0.0.51:22-10.0.0.1:51670.service: Deactivated successfully. Jan 20 01:58:04.692531 systemd[1]: session-40.scope: Deactivated successfully. Jan 20 01:58:04.742922 systemd-logind[1565]: Session 40 logged out. Waiting for processes to exit. Jan 20 01:58:04.784152 systemd-logind[1565]: Removed session 40. Jan 20 01:58:09.526468 systemd[1]: Started sshd@40-10.0.0.51:22-10.0.0.1:54072.service - OpenSSH per-connection server daemon (10.0.0.1:54072). Jan 20 01:58:09.765637 containerd[1591]: time="2026-01-20T01:58:09.765519564Z" level=warning msg="container event discarded" container=964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6 type=CONTAINER_STOPPED_EVENT Jan 20 01:58:09.770806 containerd[1591]: time="2026-01-20T01:58:09.770207371Z" level=warning msg="container event discarded" container=8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867 type=CONTAINER_STOPPED_EVENT Jan 20 01:58:09.818559 containerd[1591]: time="2026-01-20T01:58:09.818468795Z" level=warning msg="container event discarded" container=c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9 type=CONTAINER_STOPPED_EVENT Jan 20 01:58:09.996950 sshd[5497]: Accepted publickey for core from 10.0.0.1 port 54072 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:58:10.012662 sshd-session[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:10.052510 systemd-logind[1565]: New session 41 of user core. Jan 20 01:58:10.072993 systemd[1]: Started session-41.scope - Session 41 of User core. 
Jan 20 01:58:11.027192 containerd[1591]: time="2026-01-20T01:58:11.027088169Z" level=warning msg="container event discarded" container=3ec0a1c70fd885d9c74867abd0021a09f9fc40cecdc56355834074528fea8019 type=CONTAINER_DELETED_EVENT Jan 20 01:58:11.321084 containerd[1591]: time="2026-01-20T01:58:11.261610793Z" level=warning msg="container event discarded" container=7a0ef73204b6bb39dcea474f6b75f53b009562489cb076b267102ab0d2a5a094 type=CONTAINER_DELETED_EVENT Jan 20 01:58:11.354412 kubelet[3059]: E0120 01:58:11.354259 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:11.508261 kubelet[3059]: E0120 01:58:11.381464 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:11.508447 containerd[1591]: time="2026-01-20T01:58:11.391098926Z" level=warning msg="container event discarded" container=7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9 type=CONTAINER_CREATED_EVENT Jan 20 01:58:11.828548 sshd[5500]: Connection closed by 10.0.0.1 port 54072 Jan 20 01:58:11.820721 sshd-session[5497]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:11.873328 systemd[1]: sshd@40-10.0.0.51:22-10.0.0.1:54072.service: Deactivated successfully. Jan 20 01:58:11.934213 systemd[1]: session-41.scope: Deactivated successfully. Jan 20 01:58:11.967029 systemd-logind[1565]: Session 41 logged out. Waiting for processes to exit. Jan 20 01:58:12.006645 systemd-logind[1565]: Removed session 41. Jan 20 01:58:12.106729 containerd[1591]: time="2026-01-20T01:58:12.106107927Z" level=warning msg="container event discarded" container=7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9 type=CONTAINER_STARTED_EVENT Jan 20 01:58:16.884300 systemd[1]: Started sshd@41-10.0.0.51:22-10.0.0.1:34824.service - OpenSSH per-connection server daemon (10.0.0.1:34824). Jan 20 01:58:17.470337 sshd[5514]: Accepted publickey for core from 10.0.0.1 port 34824 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:58:17.478600 sshd-session[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:17.859560 systemd-logind[1565]: New session 42 of user core. Jan 20 01:58:17.897694 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 20 01:58:18.605702 sshd[5517]: Connection closed by 10.0.0.1 port 34824 Jan 20 01:58:18.608737 sshd-session[5514]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:18.637841 systemd[1]: sshd@41-10.0.0.51:22-10.0.0.1:34824.service: Deactivated successfully. Jan 20 01:58:18.687280 systemd[1]: session-42.scope: Deactivated successfully. Jan 20 01:58:18.698134 systemd-logind[1565]: Session 42 logged out. Waiting for processes to exit. Jan 20 01:58:18.731728 systemd-logind[1565]: Removed session 42. Jan 20 01:58:23.370113 kubelet[3059]: E0120 01:58:23.365877 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:23.791187 systemd[1]: Started sshd@42-10.0.0.51:22-10.0.0.1:34826.service - OpenSSH per-connection server daemon (10.0.0.1:34826). 
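
The containerd warnings in this stretch record create/start/delete notifications being discarded because no subscriber drained them in time: the broker drops events rather than block the runtime. One plausible shape of that trade-off, sketched with a bounded channel and a non-blocking send; this is illustrative, not containerd's actual event queue:

package main

import "fmt"

type event struct {
	container string
	kind      string // e.g. CONTAINER_CREATED_EVENT
}

// publish never blocks the producer: if the subscriber's queue is full,
// the event is dropped and a warning is emitted, which is the behavior
// the "container event discarded" journal lines record.
func publish(queue chan event, ev event) {
	select {
	case queue <- ev:
	default:
		fmt.Printf("warning: container event discarded container=%s type=%s\n", ev.container, ev.kind)
	}
}

func main() {
	queue := make(chan event, 2) // deliberately tiny to force drops
	for i := 0; i < 5; i++ {
		publish(queue, event{container: fmt.Sprintf("c%d", i), kind: "CONTAINER_STARTED_EVENT"})
	}
	close(queue)
	for ev := range queue {
		fmt.Println("delivered:", ev.container, ev.kind)
	}
}
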
Jan 20 01:58:24.374763 sshd[5533]: Accepted publickey for core from 10.0.0.1 port 34826 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:58:24.393160 sshd-session[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:24.474320 systemd-logind[1565]: New session 43 of user core. Jan 20 01:58:24.493601 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 20 01:58:25.931764 sshd[5536]: Connection closed by 10.0.0.1 port 34826 Jan 20 01:58:25.931844 sshd-session[5533]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:25.978084 systemd[1]: sshd@42-10.0.0.51:22-10.0.0.1:34826.service: Deactivated successfully. Jan 20 01:58:25.986586 systemd[1]: session-43.scope: Deactivated successfully. Jan 20 01:58:26.007620 systemd-logind[1565]: Session 43 logged out. Waiting for processes to exit. Jan 20 01:58:26.023506 systemd-logind[1565]: Removed session 43. Jan 20 01:58:28.917113 containerd[1591]: time="2026-01-20T01:58:28.912431335Z" level=warning msg="container event discarded" container=d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0 type=CONTAINER_CREATED_EVENT Jan 20 01:58:29.965464 containerd[1591]: time="2026-01-20T01:58:29.960310342Z" level=warning msg="container event discarded" container=d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0 type=CONTAINER_STARTED_EVENT Jan 20 01:58:30.990767 containerd[1591]: time="2026-01-20T01:58:30.982237785Z" level=warning msg="container event discarded" container=bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af type=CONTAINER_CREATED_EVENT Jan 20 01:58:31.032245 systemd[1]: Started sshd@43-10.0.0.51:22-10.0.0.1:32878.service - OpenSSH per-connection server daemon (10.0.0.1:32878). Jan 20 01:58:31.720437 sshd[5553]: Accepted publickey for core from 10.0.0.1 port 32878 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:58:31.739607 sshd-session[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:31.820602 systemd-logind[1565]: New session 44 of user core. Jan 20 01:58:31.868318 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 20 01:58:32.115116 containerd[1591]: time="2026-01-20T01:58:32.111043952Z" level=warning msg="container event discarded" container=bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af type=CONTAINER_STARTED_EVENT Jan 20 01:58:33.067224 sshd[5556]: Connection closed by 10.0.0.1 port 32878 Jan 20 01:58:33.071871 sshd-session[5553]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:33.152349 systemd[1]: sshd@43-10.0.0.51:22-10.0.0.1:32878.service: Deactivated successfully. Jan 20 01:58:33.190999 systemd[1]: session-44.scope: Deactivated successfully. Jan 20 01:58:33.238845 systemd-logind[1565]: Session 44 logged out. Waiting for processes to exit. Jan 20 01:58:33.256308 systemd-logind[1565]: Removed session 44. Jan 20 01:58:35.376513 kubelet[3059]: E0120 01:58:35.368320 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:38.193282 systemd[1]: Started sshd@44-10.0.0.51:22-10.0.0.1:48846.service - OpenSSH per-connection server daemon (10.0.0.1:48846). 
Jan 20 01:58:38.908515 sshd[5570]: Accepted publickey for core from 10.0.0.1 port 48846 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:58:38.917439 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:38.974992 systemd-logind[1565]: New session 45 of user core. Jan 20 01:58:39.039424 systemd[1]: Started session-45.scope - Session 45 of User core. Jan 20 01:58:40.616675 sshd[5573]: Connection closed by 10.0.0.1 port 48846 Jan 20 01:58:40.628157 sshd-session[5570]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:40.686585 systemd[1]: sshd@44-10.0.0.51:22-10.0.0.1:48846.service: Deactivated successfully. Jan 20 01:58:40.732653 systemd[1]: session-45.scope: Deactivated successfully. Jan 20 01:58:40.764041 systemd-logind[1565]: Session 45 logged out. Waiting for processes to exit. Jan 20 01:58:40.819426 systemd[1]: Started sshd@45-10.0.0.51:22-10.0.0.1:48858.service - OpenSSH per-connection server daemon (10.0.0.1:48858). Jan 20 01:58:40.858201 systemd-logind[1565]: Removed session 45. Jan 20 01:58:41.511724 sshd[5590]: Accepted publickey for core from 10.0.0.1 port 48858 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:58:41.518117 sshd-session[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:41.565203 systemd-logind[1565]: New session 46 of user core. Jan 20 01:58:41.585285 systemd[1]: Started session-46.scope - Session 46 of User core. Jan 20 01:58:43.361211 sshd[5593]: Connection closed by 10.0.0.1 port 48858 Jan 20 01:58:43.367246 sshd-session[5590]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:43.471149 systemd[1]: sshd@45-10.0.0.51:22-10.0.0.1:48858.service: Deactivated successfully. Jan 20 01:58:43.501839 systemd[1]: session-46.scope: Deactivated successfully. Jan 20 01:58:43.541042 systemd-logind[1565]: Session 46 logged out. Waiting for processes to exit. Jan 20 01:58:43.587303 systemd[1]: Started sshd@46-10.0.0.51:22-10.0.0.1:48866.service - OpenSSH per-connection server daemon (10.0.0.1:48866). Jan 20 01:58:43.603504 systemd-logind[1565]: Removed session 46. Jan 20 01:58:44.225451 sshd[5611]: Accepted publickey for core from 10.0.0.1 port 48866 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:58:44.263756 sshd-session[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:44.365586 systemd-logind[1565]: New session 47 of user core. Jan 20 01:58:44.484264 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 20 01:58:45.711848 sshd[5615]: Connection closed by 10.0.0.1 port 48866 Jan 20 01:58:45.730669 sshd-session[5611]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:45.794806 systemd[1]: sshd@46-10.0.0.51:22-10.0.0.1:48866.service: Deactivated successfully. Jan 20 01:58:45.834798 systemd[1]: session-47.scope: Deactivated successfully. Jan 20 01:58:45.883842 systemd-logind[1565]: Session 47 logged out. Waiting for processes to exit. Jan 20 01:58:45.917634 systemd-logind[1565]: Removed session 47. Jan 20 01:58:50.812867 systemd[1]: Started sshd@47-10.0.0.51:22-10.0.0.1:48822.service - OpenSSH per-connection server daemon (10.0.0.1:48822). 
Jan 20 01:58:51.318601 sshd[5629]: Accepted publickey for core from 10.0.0.1 port 48822 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:58:51.346738 sshd-session[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:51.433293 systemd-logind[1565]: New session 48 of user core. Jan 20 01:58:51.500761 systemd[1]: Started session-48.scope - Session 48 of User core. Jan 20 01:58:52.623346 sshd[5633]: Connection closed by 10.0.0.1 port 48822 Jan 20 01:58:52.625791 sshd-session[5629]: pam_unix(sshd:session): session closed for user core Jan 20 01:58:52.684846 systemd[1]: sshd@47-10.0.0.51:22-10.0.0.1:48822.service: Deactivated successfully. Jan 20 01:58:52.725872 systemd[1]: session-48.scope: Deactivated successfully. Jan 20 01:58:52.770153 systemd-logind[1565]: Session 48 logged out. Waiting for processes to exit. Jan 20 01:58:52.800842 systemd-logind[1565]: Removed session 48. Jan 20 01:58:57.369080 kubelet[3059]: E0120 01:58:57.360969 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:57.837944 systemd[1]: Started sshd@48-10.0.0.51:22-10.0.0.1:54224.service - OpenSSH per-connection server daemon (10.0.0.1:54224). Jan 20 01:58:58.846978 sshd[5648]: Accepted publickey for core from 10.0.0.1 port 54224 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:58:58.980878 sshd-session[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:58:59.123791 systemd-logind[1565]: New session 49 of user core. Jan 20 01:58:59.195742 systemd[1]: Started session-49.scope - Session 49 of User core. Jan 20 01:59:01.083352 sshd[5651]: Connection closed by 10.0.0.1 port 54224 Jan 20 01:59:01.089552 sshd-session[5648]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:01.121249 systemd[1]: sshd@48-10.0.0.51:22-10.0.0.1:54224.service: Deactivated successfully. Jan 20 01:59:01.157261 systemd[1]: session-49.scope: Deactivated successfully. Jan 20 01:59:01.182100 systemd-logind[1565]: Session 49 logged out. Waiting for processes to exit. Jan 20 01:59:01.202750 systemd-logind[1565]: Removed session 49. Jan 20 01:59:06.186209 systemd[1]: Started sshd@49-10.0.0.51:22-10.0.0.1:45580.service - OpenSSH per-connection server daemon (10.0.0.1:45580). Jan 20 01:59:06.342846 kubelet[3059]: E0120 01:59:06.341616 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:06.661172 sshd[5665]: Accepted publickey for core from 10.0.0.1 port 45580 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:59:06.660913 sshd-session[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:06.703673 systemd-logind[1565]: New session 50 of user core. Jan 20 01:59:06.740272 systemd[1]: Started session-50.scope - Session 50 of User core. Jan 20 01:59:08.221937 sshd[5668]: Connection closed by 10.0.0.1 port 45580 Jan 20 01:59:08.222727 sshd-session[5665]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:08.268793 systemd[1]: sshd@49-10.0.0.51:22-10.0.0.1:45580.service: Deactivated successfully. 
Jan 20 01:59:08.344737 kubelet[3059]: E0120 01:59:08.341766 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:08.359321 systemd[1]: session-50.scope: Deactivated successfully. Jan 20 01:59:08.385563 systemd-logind[1565]: Session 50 logged out. Waiting for processes to exit. Jan 20 01:59:08.400941 systemd-logind[1565]: Removed session 50. Jan 20 01:59:13.332129 systemd[1]: Started sshd@50-10.0.0.51:22-10.0.0.1:45582.service - OpenSSH per-connection server daemon (10.0.0.1:45582). Jan 20 01:59:13.768866 sshd[5682]: Accepted publickey for core from 10.0.0.1 port 45582 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:59:13.769559 sshd-session[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:13.823195 systemd-logind[1565]: New session 51 of user core. Jan 20 01:59:13.872952 systemd[1]: Started session-51.scope - Session 51 of User core. Jan 20 01:59:14.805843 sshd[5685]: Connection closed by 10.0.0.1 port 45582 Jan 20 01:59:14.805533 sshd-session[5682]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:14.840339 systemd[1]: sshd@50-10.0.0.51:22-10.0.0.1:45582.service: Deactivated successfully. Jan 20 01:59:14.867642 systemd[1]: session-51.scope: Deactivated successfully. Jan 20 01:59:14.883011 systemd-logind[1565]: Session 51 logged out. Waiting for processes to exit. Jan 20 01:59:14.909976 systemd-logind[1565]: Removed session 51. Jan 20 01:59:19.901474 systemd[1]: Started sshd@51-10.0.0.51:22-10.0.0.1:49122.service - OpenSSH per-connection server daemon (10.0.0.1:49122). Jan 20 01:59:20.729430 sshd[5700]: Accepted publickey for core from 10.0.0.1 port 49122 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:59:20.767944 sshd-session[5700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:20.872904 systemd-logind[1565]: New session 52 of user core. Jan 20 01:59:20.921326 systemd[1]: Started session-52.scope - Session 52 of User core. Jan 20 01:59:21.844228 sshd[5703]: Connection closed by 10.0.0.1 port 49122 Jan 20 01:59:21.848042 sshd-session[5700]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:21.896527 systemd[1]: sshd@51-10.0.0.51:22-10.0.0.1:49122.service: Deactivated successfully. Jan 20 01:59:21.916446 systemd-logind[1565]: Session 52 logged out. Waiting for processes to exit. Jan 20 01:59:21.925837 systemd[1]: session-52.scope: Deactivated successfully. Jan 20 01:59:21.942845 systemd-logind[1565]: Removed session 52. Jan 20 01:59:23.976490 kubelet[3059]: E0120 01:59:23.967041 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:23.991884 kubelet[3059]: E0120 01:59:23.990002 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:25.383689 kubelet[3059]: E0120 01:59:25.378691 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:26.986747 systemd[1]: Started sshd@52-10.0.0.51:22-10.0.0.1:37384.service - OpenSSH per-connection server daemon (10.0.0.1:37384). 
Jan 20 01:59:27.476133 sshd[5719]: Accepted publickey for core from 10.0.0.1 port 37384 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:59:27.488917 sshd-session[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:27.535780 systemd-logind[1565]: New session 53 of user core. Jan 20 01:59:27.587902 systemd[1]: Started session-53.scope - Session 53 of User core. Jan 20 01:59:28.929253 sshd[5722]: Connection closed by 10.0.0.1 port 37384 Jan 20 01:59:28.931888 sshd-session[5719]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:28.975656 systemd[1]: sshd@52-10.0.0.51:22-10.0.0.1:37384.service: Deactivated successfully. Jan 20 01:59:28.996302 systemd[1]: session-53.scope: Deactivated successfully. Jan 20 01:59:29.031552 systemd-logind[1565]: Session 53 logged out. Waiting for processes to exit. Jan 20 01:59:29.049068 systemd-logind[1565]: Removed session 53. Jan 20 01:59:29.369453 kubelet[3059]: E0120 01:59:29.361621 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:34.077794 systemd[1]: Started sshd@53-10.0.0.51:22-10.0.0.1:37392.service - OpenSSH per-connection server daemon (10.0.0.1:37392). Jan 20 01:59:34.412617 sshd[5736]: Accepted publickey for core from 10.0.0.1 port 37392 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:59:34.434031 sshd-session[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:34.480426 systemd-logind[1565]: New session 54 of user core. Jan 20 01:59:34.506539 systemd[1]: Started session-54.scope - Session 54 of User core. Jan 20 01:59:36.001272 sshd[5739]: Connection closed by 10.0.0.1 port 37392 Jan 20 01:59:35.989776 sshd-session[5736]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:36.042450 systemd[1]: sshd@53-10.0.0.51:22-10.0.0.1:37392.service: Deactivated successfully. Jan 20 01:59:36.069022 systemd[1]: session-54.scope: Deactivated successfully. Jan 20 01:59:36.087300 systemd-logind[1565]: Session 54 logged out. Waiting for processes to exit. Jan 20 01:59:36.096445 systemd-logind[1565]: Removed session 54. Jan 20 01:59:41.211300 systemd[1]: Started sshd@54-10.0.0.51:22-10.0.0.1:33866.service - OpenSSH per-connection server daemon (10.0.0.1:33866). Jan 20 01:59:42.223225 sshd[5754]: Accepted publickey for core from 10.0.0.1 port 33866 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:59:42.240796 sshd-session[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:42.402435 systemd-logind[1565]: New session 55 of user core. Jan 20 01:59:42.486994 systemd[1]: Started session-55.scope - Session 55 of User core. Jan 20 01:59:43.434035 sshd[5757]: Connection closed by 10.0.0.1 port 33866 Jan 20 01:59:43.440645 sshd-session[5754]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:43.477325 systemd[1]: sshd@54-10.0.0.51:22-10.0.0.1:33866.service: Deactivated successfully. Jan 20 01:59:43.489703 systemd[1]: session-55.scope: Deactivated successfully. Jan 20 01:59:43.506356 systemd-logind[1565]: Session 55 logged out. Waiting for processes to exit. Jan 20 01:59:43.521630 systemd-logind[1565]: Removed session 55. 
Jan 20 01:59:44.366410 kubelet[3059]: E0120 01:59:44.362840 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:48.506904 systemd[1]: Started sshd@55-10.0.0.51:22-10.0.0.1:56442.service - OpenSSH per-connection server daemon (10.0.0.1:56442). Jan 20 01:59:48.823917 sshd[5770]: Accepted publickey for core from 10.0.0.1 port 56442 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:59:48.846631 sshd-session[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:48.933456 systemd-logind[1565]: New session 56 of user core. Jan 20 01:59:48.952577 systemd[1]: Started session-56.scope - Session 56 of User core. Jan 20 01:59:50.016703 sshd[5773]: Connection closed by 10.0.0.1 port 56442 Jan 20 01:59:50.020094 sshd-session[5770]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:50.044212 systemd[1]: sshd@55-10.0.0.51:22-10.0.0.1:56442.service: Deactivated successfully. Jan 20 01:59:50.050268 systemd[1]: session-56.scope: Deactivated successfully. Jan 20 01:59:50.061245 systemd-logind[1565]: Session 56 logged out. Waiting for processes to exit. Jan 20 01:59:50.071906 systemd-logind[1565]: Removed session 56. Jan 20 01:59:55.107050 systemd[1]: Started sshd@56-10.0.0.51:22-10.0.0.1:37066.service - OpenSSH per-connection server daemon (10.0.0.1:37066). Jan 20 01:59:55.561937 sshd[5789]: Accepted publickey for core from 10.0.0.1 port 37066 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 01:59:55.592972 sshd-session[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:59:55.680510 systemd-logind[1565]: New session 57 of user core. Jan 20 01:59:55.700786 systemd[1]: Started session-57.scope - Session 57 of User core. Jan 20 01:59:57.342581 sshd[5792]: Connection closed by 10.0.0.1 port 37066 Jan 20 01:59:57.392558 sshd-session[5789]: pam_unix(sshd:session): session closed for user core Jan 20 01:59:57.455032 systemd[1]: sshd@56-10.0.0.51:22-10.0.0.1:37066.service: Deactivated successfully. Jan 20 01:59:58.229706 systemd[1]: session-57.scope: Deactivated successfully. Jan 20 01:59:58.284009 systemd-logind[1565]: Session 57 logged out. Waiting for processes to exit. Jan 20 01:59:58.297938 systemd-logind[1565]: Removed session 57. Jan 20 02:00:02.417532 systemd[1]: Started sshd@57-10.0.0.51:22-10.0.0.1:37104.service - OpenSSH per-connection server daemon (10.0.0.1:37104). Jan 20 02:00:03.019302 sshd[5805]: Accepted publickey for core from 10.0.0.1 port 37104 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:00:03.029125 sshd-session[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:03.078838 systemd-logind[1565]: New session 58 of user core. Jan 20 02:00:03.116702 systemd[1]: Started session-58.scope - Session 58 of User core. Jan 20 02:00:04.329530 sshd[5808]: Connection closed by 10.0.0.1 port 37104 Jan 20 02:00:04.332750 sshd-session[5805]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:04.388594 systemd[1]: sshd@57-10.0.0.51:22-10.0.0.1:37104.service: Deactivated successfully. Jan 20 02:00:04.414943 systemd[1]: session-58.scope: Deactivated successfully. Jan 20 02:00:04.439828 systemd-logind[1565]: Session 58 logged out. Waiting for processes to exit. Jan 20 02:00:04.452064 systemd-logind[1565]: Removed session 58. 
Jan 20 02:00:09.393564 systemd[1]: Started sshd@58-10.0.0.51:22-10.0.0.1:38912.service - OpenSSH per-connection server daemon (10.0.0.1:38912). Jan 20 02:00:09.940877 sshd[5821]: Accepted publickey for core from 10.0.0.1 port 38912 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:00:09.963790 sshd-session[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:10.029531 systemd-logind[1565]: New session 59 of user core. Jan 20 02:00:10.080745 systemd[1]: Started session-59.scope - Session 59 of User core. Jan 20 02:00:11.625504 sshd[5824]: Connection closed by 10.0.0.1 port 38912 Jan 20 02:00:11.619174 sshd-session[5821]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:11.708066 systemd[1]: sshd@58-10.0.0.51:22-10.0.0.1:38912.service: Deactivated successfully. Jan 20 02:00:11.760991 systemd[1]: session-59.scope: Deactivated successfully. Jan 20 02:00:11.799487 systemd-logind[1565]: Session 59 logged out. Waiting for processes to exit. Jan 20 02:00:11.818554 systemd-logind[1565]: Removed session 59. Jan 20 02:00:16.716848 systemd[1]: Started sshd@59-10.0.0.51:22-10.0.0.1:57560.service - OpenSSH per-connection server daemon (10.0.0.1:57560). Jan 20 02:00:17.363636 sshd[5838]: Accepted publickey for core from 10.0.0.1 port 57560 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:00:17.371535 kubelet[3059]: E0120 02:00:17.370499 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:17.398619 sshd-session[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:17.502302 systemd-logind[1565]: New session 60 of user core. Jan 20 02:00:17.534875 systemd[1]: Started session-60.scope - Session 60 of User core. Jan 20 02:00:18.729597 sshd[5841]: Connection closed by 10.0.0.1 port 57560 Jan 20 02:00:18.746189 sshd-session[5838]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:18.797824 systemd[1]: sshd@59-10.0.0.51:22-10.0.0.1:57560.service: Deactivated successfully. Jan 20 02:00:18.820318 systemd[1]: session-60.scope: Deactivated successfully. Jan 20 02:00:18.892071 systemd-logind[1565]: Session 60 logged out. Waiting for processes to exit. Jan 20 02:00:18.937974 systemd-logind[1565]: Removed session 60. Jan 20 02:00:23.877945 systemd[1]: Started sshd@60-10.0.0.51:22-10.0.0.1:57596.service - OpenSSH per-connection server daemon (10.0.0.1:57596). Jan 20 02:00:24.585951 sshd[5856]: Accepted publickey for core from 10.0.0.1 port 57596 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:00:24.593030 sshd-session[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:24.678156 systemd-logind[1565]: New session 61 of user core. Jan 20 02:00:24.708979 systemd[1]: Started session-61.scope - Session 61 of User core. Jan 20 02:00:25.646841 sshd[5859]: Connection closed by 10.0.0.1 port 57596 Jan 20 02:00:25.676894 sshd-session[5856]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:25.709129 systemd[1]: sshd@60-10.0.0.51:22-10.0.0.1:57596.service: Deactivated successfully. Jan 20 02:00:25.905923 systemd[1]: session-61.scope: Deactivated successfully. Jan 20 02:00:25.923832 systemd-logind[1565]: Session 61 logged out. Waiting for processes to exit. Jan 20 02:00:25.934642 systemd-logind[1565]: Removed session 61. 
Jan 20 02:00:30.803802 systemd[1]: Started sshd@61-10.0.0.51:22-10.0.0.1:57206.service - OpenSSH per-connection server daemon (10.0.0.1:57206). Jan 20 02:00:31.383886 kubelet[3059]: E0120 02:00:31.383445 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:31.474817 sshd[5872]: Accepted publickey for core from 10.0.0.1 port 57206 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:00:31.488038 sshd-session[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:31.552914 systemd-logind[1565]: New session 62 of user core. Jan 20 02:00:31.611967 systemd[1]: Started session-62.scope - Session 62 of User core. Jan 20 02:00:32.358653 kubelet[3059]: E0120 02:00:32.353125 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:33.375576 sshd[5875]: Connection closed by 10.0.0.1 port 57206 Jan 20 02:00:33.379728 sshd-session[5872]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:33.407990 systemd[1]: sshd@61-10.0.0.51:22-10.0.0.1:57206.service: Deactivated successfully. Jan 20 02:00:33.430520 systemd[1]: session-62.scope: Deactivated successfully. Jan 20 02:00:33.455187 systemd-logind[1565]: Session 62 logged out. Waiting for processes to exit. Jan 20 02:00:33.467468 systemd-logind[1565]: Removed session 62. Jan 20 02:00:37.345033 kubelet[3059]: E0120 02:00:37.344771 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:38.777967 systemd[1]: Started sshd@62-10.0.0.51:22-10.0.0.1:59578.service - OpenSSH per-connection server daemon (10.0.0.1:59578). Jan 20 02:00:39.836751 sshd[5888]: Accepted publickey for core from 10.0.0.1 port 59578 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:00:39.856585 sshd-session[5888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:39.931651 systemd-logind[1565]: New session 63 of user core. Jan 20 02:00:40.060122 systemd[1]: Started session-63.scope - Session 63 of User core. Jan 20 02:00:40.373562 kubelet[3059]: E0120 02:00:40.369990 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:41.641485 sshd[5893]: Connection closed by 10.0.0.1 port 59578 Jan 20 02:00:41.649585 sshd-session[5888]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:41.728858 systemd[1]: sshd@62-10.0.0.51:22-10.0.0.1:59578.service: Deactivated successfully. Jan 20 02:00:41.782122 systemd[1]: session-63.scope: Deactivated successfully. Jan 20 02:00:41.822176 systemd-logind[1565]: Session 63 logged out. Waiting for processes to exit. Jan 20 02:00:41.876344 systemd-logind[1565]: Removed session 63. Jan 20 02:00:42.347154 kubelet[3059]: E0120 02:00:42.341138 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:46.752852 systemd[1]: Started sshd@63-10.0.0.51:22-10.0.0.1:35750.service - OpenSSH per-connection server daemon (10.0.0.1:35750). 
Jan 20 02:00:47.799259 sshd[5907]: Accepted publickey for core from 10.0.0.1 port 35750 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:00:47.841344 sshd-session[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:47.956687 systemd-logind[1565]: New session 64 of user core. Jan 20 02:00:48.004228 systemd[1]: Started session-64.scope - Session 64 of User core. Jan 20 02:00:48.961774 sshd[5910]: Connection closed by 10.0.0.1 port 35750 Jan 20 02:00:48.969653 sshd-session[5907]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:49.004702 systemd[1]: sshd@63-10.0.0.51:22-10.0.0.1:35750.service: Deactivated successfully. Jan 20 02:00:49.009849 systemd[1]: session-64.scope: Deactivated successfully. Jan 20 02:00:49.053860 systemd-logind[1565]: Session 64 logged out. Waiting for processes to exit. Jan 20 02:00:49.095324 systemd-logind[1565]: Removed session 64. Jan 20 02:00:49.367472 kubelet[3059]: E0120 02:00:49.364221 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:54.044027 systemd[1]: Started sshd@64-10.0.0.51:22-10.0.0.1:35784.service - OpenSSH per-connection server daemon (10.0.0.1:35784). Jan 20 02:00:55.199860 sshd[5925]: Accepted publickey for core from 10.0.0.1 port 35784 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:00:55.228013 sshd-session[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:55.416564 systemd-logind[1565]: New session 65 of user core. Jan 20 02:00:55.479221 systemd[1]: Started session-65.scope - Session 65 of User core. Jan 20 02:00:56.842006 sshd[5928]: Connection closed by 10.0.0.1 port 35784 Jan 20 02:00:56.869946 sshd-session[5925]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:56.915802 systemd[1]: sshd@64-10.0.0.51:22-10.0.0.1:35784.service: Deactivated successfully. Jan 20 02:00:56.950933 systemd[1]: session-65.scope: Deactivated successfully. Jan 20 02:00:56.971240 systemd-logind[1565]: Session 65 logged out. Waiting for processes to exit. Jan 20 02:00:56.973680 systemd-logind[1565]: Removed session 65. Jan 20 02:01:01.971788 systemd[1]: Started sshd@65-10.0.0.51:22-10.0.0.1:48072.service - OpenSSH per-connection server daemon (10.0.0.1:48072). Jan 20 02:01:02.581748 sshd[5942]: Accepted publickey for core from 10.0.0.1 port 48072 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:01:02.590221 sshd-session[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:02.761982 systemd-logind[1565]: New session 66 of user core. Jan 20 02:01:02.895102 systemd[1]: Started session-66.scope - Session 66 of User core. Jan 20 02:01:04.108493 sshd[5945]: Connection closed by 10.0.0.1 port 48072 Jan 20 02:01:04.111227 sshd-session[5942]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:04.182187 systemd[1]: sshd@65-10.0.0.51:22-10.0.0.1:48072.service: Deactivated successfully. Jan 20 02:01:04.226443 systemd[1]: session-66.scope: Deactivated successfully. Jan 20 02:01:04.277060 systemd-logind[1565]: Session 66 logged out. Waiting for processes to exit. Jan 20 02:01:04.307883 systemd-logind[1565]: Removed session 66. Jan 20 02:01:09.229634 systemd[1]: Started sshd@66-10.0.0.51:22-10.0.0.1:49484.service - OpenSSH per-connection server daemon (10.0.0.1:49484). 
Jan 20 02:01:10.041158 sshd[5959]: Accepted publickey for core from 10.0.0.1 port 49484 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:01:10.085736 sshd-session[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:10.186259 systemd-logind[1565]: New session 67 of user core. Jan 20 02:01:10.270181 systemd[1]: Started session-67.scope - Session 67 of User core. Jan 20 02:01:11.444845 sshd[5963]: Connection closed by 10.0.0.1 port 49484 Jan 20 02:01:11.452121 sshd-session[5959]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:11.508737 systemd[1]: sshd@66-10.0.0.51:22-10.0.0.1:49484.service: Deactivated successfully. Jan 20 02:01:11.560841 systemd[1]: session-67.scope: Deactivated successfully. Jan 20 02:01:11.571691 systemd-logind[1565]: Session 67 logged out. Waiting for processes to exit. Jan 20 02:01:11.596111 systemd-logind[1565]: Removed session 67. Jan 20 02:01:12.342439 kubelet[3059]: E0120 02:01:12.339999 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:01:16.526846 systemd[1]: Started sshd@67-10.0.0.51:22-10.0.0.1:35828.service - OpenSSH per-connection server daemon (10.0.0.1:35828). Jan 20 02:01:17.038824 sshd[5977]: Accepted publickey for core from 10.0.0.1 port 35828 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:01:17.073485 sshd-session[5977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:17.139695 systemd-logind[1565]: New session 68 of user core. Jan 20 02:01:17.210113 systemd[1]: Started session-68.scope - Session 68 of User core. Jan 20 02:01:18.478680 sshd[5980]: Connection closed by 10.0.0.1 port 35828 Jan 20 02:01:18.478040 sshd-session[5977]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:18.518003 systemd[1]: sshd@67-10.0.0.51:22-10.0.0.1:35828.service: Deactivated successfully. Jan 20 02:01:18.533018 systemd[1]: session-68.scope: Deactivated successfully. Jan 20 02:01:18.542035 systemd-logind[1565]: Session 68 logged out. Waiting for processes to exit. Jan 20 02:01:18.568986 systemd-logind[1565]: Removed session 68. Jan 20 02:01:23.619273 systemd[1]: Started sshd@68-10.0.0.51:22-10.0.0.1:35872.service - OpenSSH per-connection server daemon (10.0.0.1:35872). Jan 20 02:01:24.263671 sshd[5997]: Accepted publickey for core from 10.0.0.1 port 35872 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:01:24.315339 sshd-session[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:24.411354 systemd-logind[1565]: New session 69 of user core. Jan 20 02:01:24.466097 systemd[1]: Started session-69.scope - Session 69 of User core. Jan 20 02:01:25.396470 kubelet[3059]: E0120 02:01:25.393158 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:01:25.612819 sshd[6000]: Connection closed by 10.0.0.1 port 35872 Jan 20 02:01:25.609198 sshd-session[5997]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:25.704941 systemd[1]: sshd@68-10.0.0.51:22-10.0.0.1:35872.service: Deactivated successfully. Jan 20 02:01:25.728821 systemd[1]: session-69.scope: Deactivated successfully. Jan 20 02:01:25.771287 systemd-logind[1565]: Session 69 logged out. Waiting for processes to exit. 
Jan 20 02:01:25.823513 systemd-logind[1565]: Removed session 69. Jan 20 02:01:30.667778 systemd[1]: Started sshd@69-10.0.0.51:22-10.0.0.1:49714.service - OpenSSH per-connection server daemon (10.0.0.1:49714). Jan 20 02:01:30.972802 sshd[6014]: Accepted publickey for core from 10.0.0.1 port 49714 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:01:30.974737 sshd-session[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:31.038119 systemd-logind[1565]: New session 70 of user core. Jan 20 02:01:31.079148 systemd[1]: Started session-70.scope - Session 70 of User core. Jan 20 02:01:31.860704 sshd[6017]: Connection closed by 10.0.0.1 port 49714 Jan 20 02:01:31.861718 sshd-session[6014]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:31.895687 systemd[1]: sshd@69-10.0.0.51:22-10.0.0.1:49714.service: Deactivated successfully. Jan 20 02:01:31.943606 systemd[1]: session-70.scope: Deactivated successfully. Jan 20 02:01:31.962083 systemd-logind[1565]: Session 70 logged out. Waiting for processes to exit. Jan 20 02:01:31.976146 systemd-logind[1565]: Removed session 70. Jan 20 02:01:34.363527 kubelet[3059]: E0120 02:01:34.340798 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:01:36.993032 systemd[1]: Started sshd@70-10.0.0.51:22-10.0.0.1:55614.service - OpenSSH per-connection server daemon (10.0.0.1:55614). Jan 20 02:01:37.559868 sshd[6032]: Accepted publickey for core from 10.0.0.1 port 55614 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:01:37.572591 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:37.659942 systemd-logind[1565]: New session 71 of user core. Jan 20 02:01:37.690109 systemd[1]: Started session-71.scope - Session 71 of User core. Jan 20 02:01:38.371489 kubelet[3059]: E0120 02:01:38.369060 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:01:38.587429 sshd[6036]: Connection closed by 10.0.0.1 port 55614 Jan 20 02:01:38.588779 sshd-session[6032]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:38.601990 systemd-logind[1565]: Session 71 logged out. Waiting for processes to exit. Jan 20 02:01:38.615157 systemd[1]: sshd@70-10.0.0.51:22-10.0.0.1:55614.service: Deactivated successfully. Jan 20 02:01:38.639145 systemd[1]: session-71.scope: Deactivated successfully. Jan 20 02:01:38.693319 systemd-logind[1565]: Removed session 71. Jan 20 02:01:43.674982 systemd[1]: Started sshd@71-10.0.0.51:22-10.0.0.1:55672.service - OpenSSH per-connection server daemon (10.0.0.1:55672). Jan 20 02:01:44.116746 sshd[6051]: Accepted publickey for core from 10.0.0.1 port 55672 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:01:44.136809 sshd-session[6051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:44.193973 systemd-logind[1565]: New session 72 of user core. Jan 20 02:01:44.230525 systemd[1]: Started session-72.scope - Session 72 of User core. 
Jan 20 02:01:44.770470 sshd[6054]: Connection closed by 10.0.0.1 port 55672 Jan 20 02:01:44.771712 sshd-session[6051]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:44.791642 systemd[1]: sshd@71-10.0.0.51:22-10.0.0.1:55672.service: Deactivated successfully. Jan 20 02:01:44.793611 systemd-logind[1565]: Session 72 logged out. Waiting for processes to exit. Jan 20 02:01:44.797182 systemd[1]: session-72.scope: Deactivated successfully. Jan 20 02:01:44.807986 systemd-logind[1565]: Removed session 72. Jan 20 02:01:55.137820 systemd[1]: Started sshd@72-10.0.0.51:22-10.0.0.1:34152.service - OpenSSH per-connection server daemon (10.0.0.1:34152). Jan 20 02:01:55.903801 sshd[6068]: Accepted publickey for core from 10.0.0.1 port 34152 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:01:55.918171 sshd-session[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:56.002539 systemd-logind[1565]: New session 73 of user core. Jan 20 02:01:56.069309 systemd[1]: Started session-73.scope - Session 73 of User core. Jan 20 02:02:09.127099 kubelet[3059]: E0120 02:02:09.124937 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:09.127099 kubelet[3059]: E0120 02:02:09.126316 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:09.426604 kubelet[3059]: E0120 02:02:09.396048 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:09.673153 systemd[1]: cri-containerd-bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af.scope: Deactivated successfully. Jan 20 02:02:09.684754 systemd[1]: cri-containerd-bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af.scope: Consumed 23.310s CPU time, 60.6M memory peak, 3.8M read from disk. Jan 20 02:02:09.958349 systemd[1]: cri-containerd-d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0.scope: Deactivated successfully. Jan 20 02:02:09.970070 sshd[6073]: Connection closed by 10.0.0.1 port 34152 Jan 20 02:02:09.960820 systemd[1]: cri-containerd-d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0.scope: Consumed 14.187s CPU time, 26M memory peak, 816K read from disk. Jan 20 02:02:09.996294 sshd-session[6068]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:10.070295 containerd[1591]: time="2026-01-20T02:02:10.070238552Z" level=info msg="received container exit event container_id:\"d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0\" id:\"d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0\" pid:4852 exit_status:1 exited_at:{seconds:1768874530 nanos:67122517}" Jan 20 02:02:10.142535 systemd[1]: sshd@72-10.0.0.51:22-10.0.0.1:34152.service: Deactivated successfully. 
Jan 20 02:02:10.210112 containerd[1591]: time="2026-01-20T02:02:10.185745145Z" level=info msg="received container exit event container_id:\"bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af\" id:\"bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af\" pid:4885 exit_status:1 exited_at:{seconds:1768874529 nanos:758894301}" Jan 20 02:02:10.224308 kubelet[3059]: E0120 02:02:10.197226 3059 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.025s" Jan 20 02:02:10.224308 kubelet[3059]: E0120 02:02:10.205281 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:10.371730 systemd[1]: session-73.scope: Deactivated successfully. Jan 20 02:02:10.442141 kubelet[3059]: E0120 02:02:10.441998 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:10.473187 systemd-logind[1565]: Session 73 logged out. Waiting for processes to exit. Jan 20 02:02:10.489595 systemd-logind[1565]: Removed session 73. Jan 20 02:02:11.224007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af-rootfs.mount: Deactivated successfully. Jan 20 02:02:11.331948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0-rootfs.mount: Deactivated successfully. Jan 20 02:02:12.092138 kubelet[3059]: I0120 02:02:12.073517 3059 scope.go:117] "RemoveContainer" containerID="c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9" Jan 20 02:02:12.092138 kubelet[3059]: I0120 02:02:12.074010 3059 scope.go:117] "RemoveContainer" containerID="d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0" Jan 20 02:02:12.092138 kubelet[3059]: E0120 02:02:12.074100 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:12.092138 kubelet[3059]: E0120 02:02:12.074242 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(6e6cfcfb327385445a9bb0d2bc2fd5d4)\"" pod="kube-system/kube-scheduler-localhost" podUID="6e6cfcfb327385445a9bb0d2bc2fd5d4" Jan 20 02:02:12.530044 containerd[1591]: time="2026-01-20T02:02:12.504093577Z" level=info msg="RemoveContainer for \"c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9\"" Jan 20 02:02:12.636557 kubelet[3059]: I0120 02:02:12.630600 3059 scope.go:117] "RemoveContainer" containerID="bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af" Jan 20 02:02:12.636557 kubelet[3059]: E0120 02:02:12.630955 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:12.680215 kubelet[3059]: E0120 02:02:12.631262 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager 
pod=kube-controller-manager-localhost_kube-system(66e26b992bcd7ea6fb75e339cf7a3f7d)\"" pod="kube-system/kube-controller-manager-localhost" podUID="66e26b992bcd7ea6fb75e339cf7a3f7d" Jan 20 02:02:12.827147 containerd[1591]: time="2026-01-20T02:02:12.804807213Z" level=info msg="RemoveContainer for \"c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9\" returns successfully" Jan 20 02:02:12.855483 kubelet[3059]: I0120 02:02:12.815801 3059 scope.go:117] "RemoveContainer" containerID="8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867" Jan 20 02:02:12.869115 containerd[1591]: time="2026-01-20T02:02:12.866221065Z" level=info msg="RemoveContainer for \"8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867\"" Jan 20 02:02:13.130531 containerd[1591]: time="2026-01-20T02:02:13.125253849Z" level=info msg="RemoveContainer for \"8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867\" returns successfully" Jan 20 02:02:15.066326 systemd[1]: Started sshd@73-10.0.0.51:22-10.0.0.1:44624.service - OpenSSH per-connection server daemon (10.0.0.1:44624). Jan 20 02:02:15.473656 sshd[6113]: Accepted publickey for core from 10.0.0.1 port 44624 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:02:15.477955 sshd-session[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:15.504262 systemd-logind[1565]: New session 74 of user core. Jan 20 02:02:15.564967 systemd[1]: Started session-74.scope - Session 74 of User core. Jan 20 02:02:16.214667 sshd[6116]: Connection closed by 10.0.0.1 port 44624 Jan 20 02:02:16.215733 sshd-session[6113]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:16.238865 systemd[1]: sshd@73-10.0.0.51:22-10.0.0.1:44624.service: Deactivated successfully. Jan 20 02:02:16.265026 systemd[1]: session-74.scope: Deactivated successfully. Jan 20 02:02:16.272480 systemd-logind[1565]: Session 74 logged out. Waiting for processes to exit. Jan 20 02:02:16.303874 systemd-logind[1565]: Removed session 74. 
Jan 20 02:02:17.122955 kubelet[3059]: I0120 02:02:17.095338 3059 scope.go:117] "RemoveContainer" containerID="d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0" Jan 20 02:02:17.122955 kubelet[3059]: I0120 02:02:17.121782 3059 scope.go:117] "RemoveContainer" containerID="bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af" Jan 20 02:02:17.122955 kubelet[3059]: E0120 02:02:17.121961 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:17.122955 kubelet[3059]: E0120 02:02:17.122171 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(66e26b992bcd7ea6fb75e339cf7a3f7d)\"" pod="kube-system/kube-controller-manager-localhost" podUID="66e26b992bcd7ea6fb75e339cf7a3f7d" Jan 20 02:02:17.122955 kubelet[3059]: E0120 02:02:17.122723 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:17.122955 kubelet[3059]: E0120 02:02:17.122876 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(6e6cfcfb327385445a9bb0d2bc2fd5d4)\"" pod="kube-system/kube-scheduler-localhost" podUID="6e6cfcfb327385445a9bb0d2bc2fd5d4" Jan 20 02:02:21.258278 systemd[1]: Started sshd@74-10.0.0.51:22-10.0.0.1:44626.service - OpenSSH per-connection server daemon (10.0.0.1:44626). Jan 20 02:02:21.343578 kubelet[3059]: E0120 02:02:21.342751 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:21.598161 sshd[6129]: Accepted publickey for core from 10.0.0.1 port 44626 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:02:21.602298 sshd-session[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:21.684122 systemd-logind[1565]: New session 75 of user core. Jan 20 02:02:21.718006 systemd[1]: Started session-75.scope - Session 75 of User core. Jan 20 02:02:22.533917 sshd[6134]: Connection closed by 10.0.0.1 port 44626 Jan 20 02:02:22.533190 sshd-session[6129]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:22.741052 systemd[1]: sshd@74-10.0.0.51:22-10.0.0.1:44626.service: Deactivated successfully. Jan 20 02:02:22.782914 systemd[1]: session-75.scope: Deactivated successfully. Jan 20 02:02:22.807338 systemd-logind[1565]: Session 75 logged out. Waiting for processes to exit. Jan 20 02:02:22.817113 systemd-logind[1565]: Removed session 75. 
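The recurring dns.go:153 errors above reflect a hard cap: the glibc resolver only honours the first three nameserver entries in resolv.conf, so kubelet trims a pod's nameserver list to three and logs the line it actually applied (here 1.1.1.1 1.0.0.1 8.8.8.8). A minimal sketch of that trimming logic, assuming a plain resolv.conf parser — this is illustrative, not kubelet's actual implementation:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors glibc's MAXNS: only the first three
// nameserver entries in resolv.conf are honoured.
const maxNameservers = 3

// trimNameservers keeps at most maxNameservers entries and reports
// whether any were dropped — roughly the condition under which
// kubelet logs "Nameserver limits exceeded".
func trimNameservers(path string) (kept []string, dropped bool, err error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 || fields[0] != "nameserver" {
			continue
		}
		if len(kept) == maxNameservers {
			dropped = true
			continue
		}
		kept = append(kept, fields[1])
	}
	return kept, dropped, sc.Err()
}

func main() {
	kept, dropped, err := trimNameservers("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if dropped {
		fmt.Printf("nameserver limits exceeded; applied line: %s\n",
			strings.Join(kept, " "))
	}
}
```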
Jan 20 02:02:27.344152 kubelet[3059]: I0120 02:02:27.341546 3059 scope.go:117] "RemoveContainer" containerID="d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0" Jan 20 02:02:27.344152 kubelet[3059]: E0120 02:02:27.341696 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:27.344152 kubelet[3059]: E0120 02:02:27.341975 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(6e6cfcfb327385445a9bb0d2bc2fd5d4)\"" pod="kube-system/kube-scheduler-localhost" podUID="6e6cfcfb327385445a9bb0d2bc2fd5d4" Jan 20 02:02:27.657128 systemd[1]: Started sshd@75-10.0.0.51:22-10.0.0.1:43946.service - OpenSSH per-connection server daemon (10.0.0.1:43946). Jan 20 02:02:27.943524 sshd[6148]: Accepted publickey for core from 10.0.0.1 port 43946 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:02:27.956579 sshd-session[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:28.002849 systemd-logind[1565]: New session 76 of user core. Jan 20 02:02:28.024782 systemd[1]: Started session-76.scope - Session 76 of User core. Jan 20 02:02:28.733969 sshd[6151]: Connection closed by 10.0.0.1 port 43946 Jan 20 02:02:28.727719 sshd-session[6148]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:28.742087 systemd[1]: sshd@75-10.0.0.51:22-10.0.0.1:43946.service: Deactivated successfully. Jan 20 02:02:28.788855 systemd[1]: session-76.scope: Deactivated successfully. Jan 20 02:02:28.797175 systemd-logind[1565]: Session 76 logged out. Waiting for processes to exit. Jan 20 02:02:28.819603 systemd-logind[1565]: Removed session 76. Jan 20 02:02:31.357534 kubelet[3059]: I0120 02:02:31.356195 3059 scope.go:117] "RemoveContainer" containerID="bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af" Jan 20 02:02:31.357534 kubelet[3059]: E0120 02:02:31.356335 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:31.357534 kubelet[3059]: E0120 02:02:31.356575 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(66e26b992bcd7ea6fb75e339cf7a3f7d)\"" pod="kube-system/kube-controller-manager-localhost" podUID="66e26b992bcd7ea6fb75e339cf7a3f7d" Jan 20 02:02:33.784590 systemd[1]: Started sshd@76-10.0.0.51:22-10.0.0.1:43980.service - OpenSSH per-connection server daemon (10.0.0.1:43980). Jan 20 02:02:34.118559 sshd[6164]: Accepted publickey for core from 10.0.0.1 port 43980 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:02:34.122940 sshd-session[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:34.203291 systemd-logind[1565]: New session 77 of user core. Jan 20 02:02:34.235572 systemd[1]: Started session-77.scope - Session 77 of User core. 
Jan 20 02:02:35.102904 sshd[6167]: Connection closed by 10.0.0.1 port 43980 Jan 20 02:02:35.103339 sshd-session[6164]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:35.119204 systemd[1]: sshd@76-10.0.0.51:22-10.0.0.1:43980.service: Deactivated successfully. Jan 20 02:02:35.124587 systemd[1]: session-77.scope: Deactivated successfully. Jan 20 02:02:35.134096 systemd-logind[1565]: Session 77 logged out. Waiting for processes to exit. Jan 20 02:02:35.140126 systemd-logind[1565]: Removed session 77. Jan 20 02:02:38.342238 kubelet[3059]: I0120 02:02:38.340263 3059 scope.go:117] "RemoveContainer" containerID="d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0" Jan 20 02:02:38.342238 kubelet[3059]: E0120 02:02:38.340519 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:38.342238 kubelet[3059]: E0120 02:02:38.340682 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(6e6cfcfb327385445a9bb0d2bc2fd5d4)\"" pod="kube-system/kube-scheduler-localhost" podUID="6e6cfcfb327385445a9bb0d2bc2fd5d4" Jan 20 02:02:39.350032 kubelet[3059]: E0120 02:02:39.349735 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:40.197961 systemd[1]: Started sshd@77-10.0.0.51:22-10.0.0.1:54898.service - OpenSSH per-connection server daemon (10.0.0.1:54898). Jan 20 02:02:40.706539 sshd[6182]: Accepted publickey for core from 10.0.0.1 port 54898 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:02:40.714948 sshd-session[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:40.772553 systemd-logind[1565]: New session 78 of user core. Jan 20 02:02:40.793115 systemd[1]: Started session-78.scope - Session 78 of User core. Jan 20 02:02:41.794497 sshd[6185]: Connection closed by 10.0.0.1 port 54898 Jan 20 02:02:41.796096 sshd-session[6182]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:41.822742 systemd[1]: sshd@77-10.0.0.51:22-10.0.0.1:54898.service: Deactivated successfully. Jan 20 02:02:41.828220 systemd[1]: session-78.scope: Deactivated successfully. Jan 20 02:02:41.834189 systemd-logind[1565]: Session 78 logged out. Waiting for processes to exit. Jan 20 02:02:41.847305 systemd-logind[1565]: Removed session 78. 
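The "back-off 40s restarting failed container" entries are kubelet's crash-loop backoff partway up its curve: by default the restart delay starts at 10s and doubles per failed attempt up to a five-minute cap (10s, 20s, 40s, ...), resetting only after a container stays up long enough. A sketch of that schedule — the 10s base and 5m cap are kubelet defaults; the helper itself is hypothetical:

```go
package main

import (
	"fmt"
	"time"
)

// Kubelet defaults: the crash-loop delay starts at 10s and doubles
// per failed restart, capped at 5 minutes.
const (
	baseBackoff = 10 * time.Second
	maxBackoff  = 5 * time.Minute
)

// crashLoopDelay returns the wait before restart attempt n
// (n = 0 is the first retry after the initial failure).
func crashLoopDelay(n int) time.Duration {
	d := baseBackoff
	for i := 0; i < n; i++ {
		d *= 2
		if d >= maxBackoff {
			return maxBackoff
		}
	}
	return d
}

func main() {
	for n := 0; n < 7; n++ {
		fmt.Printf("attempt %d: wait %v\n", n, crashLoopDelay(n))
	}
	// Attempt 2 prints 40s, matching the "back-off 40s" entries above.
}
```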
Jan 20 02:02:43.358161 kubelet[3059]: I0120 02:02:43.351833 3059 scope.go:117] "RemoveContainer" containerID="bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af" Jan 20 02:02:43.358161 kubelet[3059]: E0120 02:02:43.351972 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:43.359105 kubelet[3059]: E0120 02:02:43.352157 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(66e26b992bcd7ea6fb75e339cf7a3f7d)\"" pod="kube-system/kube-controller-manager-localhost" podUID="66e26b992bcd7ea6fb75e339cf7a3f7d" Jan 20 02:02:46.866585 systemd[1]: Started sshd@78-10.0.0.51:22-10.0.0.1:51982.service - OpenSSH per-connection server daemon (10.0.0.1:51982). Jan 20 02:02:47.262979 sshd[6199]: Accepted publickey for core from 10.0.0.1 port 51982 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:02:47.271090 sshd-session[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:47.311876 systemd-logind[1565]: New session 79 of user core. Jan 20 02:02:47.332581 systemd[1]: Started session-79.scope - Session 79 of User core. Jan 20 02:02:48.021521 sshd[6202]: Connection closed by 10.0.0.1 port 51982 Jan 20 02:02:48.025960 sshd-session[6199]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:48.065012 systemd[1]: sshd@78-10.0.0.51:22-10.0.0.1:51982.service: Deactivated successfully. Jan 20 02:02:48.073944 systemd[1]: session-79.scope: Deactivated successfully. Jan 20 02:02:48.078843 systemd-logind[1565]: Session 79 logged out. Waiting for processes to exit. Jan 20 02:02:48.098883 systemd-logind[1565]: Removed session 79. Jan 20 02:02:49.353993 kubelet[3059]: I0120 02:02:49.352019 3059 scope.go:117] "RemoveContainer" containerID="d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0" Jan 20 02:02:49.353993 kubelet[3059]: E0120 02:02:49.352255 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:49.362220 kubelet[3059]: E0120 02:02:49.361669 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(6e6cfcfb327385445a9bb0d2bc2fd5d4)\"" pod="kube-system/kube-scheduler-localhost" podUID="6e6cfcfb327385445a9bb0d2bc2fd5d4" Jan 20 02:02:53.432595 kubelet[3059]: E0120 02:02:53.431979 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:53.513183 systemd[1]: Started sshd@79-10.0.0.51:22-10.0.0.1:52010.service - OpenSSH per-connection server daemon (10.0.0.1:52010). Jan 20 02:02:54.000742 sshd[6218]: Accepted publickey for core from 10.0.0.1 port 52010 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:02:54.011126 sshd-session[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:54.083606 systemd-logind[1565]: New session 80 of user core. 
Jan 20 02:02:54.118949 systemd[1]: Started session-80.scope - Session 80 of User core. Jan 20 02:02:54.968667 sshd[6221]: Connection closed by 10.0.0.1 port 52010 Jan 20 02:02:54.969760 sshd-session[6218]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:54.991513 systemd[1]: sshd@79-10.0.0.51:22-10.0.0.1:52010.service: Deactivated successfully. Jan 20 02:02:55.001262 systemd[1]: session-80.scope: Deactivated successfully. Jan 20 02:02:55.005514 systemd-logind[1565]: Session 80 logged out. Waiting for processes to exit. Jan 20 02:02:55.017762 systemd-logind[1565]: Removed session 80. Jan 20 02:02:58.341595 kubelet[3059]: I0120 02:02:58.341247 3059 scope.go:117] "RemoveContainer" containerID="bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af" Jan 20 02:02:58.352953 kubelet[3059]: E0120 02:02:58.346856 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:58.382493 containerd[1591]: time="2026-01-20T02:02:58.381221627Z" level=info msg="CreateContainer within sandbox \"35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:5,}" Jan 20 02:02:58.542754 containerd[1591]: time="2026-01-20T02:02:58.539163746Z" level=info msg="Container 5a657128289c6457cedf2d8ced4b03e62a591c38ce375445e5f0f779289951b6: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:02:58.589644 containerd[1591]: time="2026-01-20T02:02:58.589243743Z" level=info msg="CreateContainer within sandbox \"35466548dab3c27db896e24b7ee6a7d76f1bb837df9ddd020da68fd97ec5e0fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:5,} returns container id \"5a657128289c6457cedf2d8ced4b03e62a591c38ce375445e5f0f779289951b6\"" Jan 20 02:02:58.597859 containerd[1591]: time="2026-01-20T02:02:58.597571961Z" level=info msg="StartContainer for \"5a657128289c6457cedf2d8ced4b03e62a591c38ce375445e5f0f779289951b6\"" Jan 20 02:02:58.606244 containerd[1591]: time="2026-01-20T02:02:58.606194455Z" level=info msg="connecting to shim 5a657128289c6457cedf2d8ced4b03e62a591c38ce375445e5f0f779289951b6" address="unix:///run/containerd/s/bd2a1fdad2c63e6b97ea527fdb88e51d630cdf855c2be6bd3e0513bd6d003b8e" protocol=ttrpc version=3 Jan 20 02:02:58.754161 systemd[1]: Started cri-containerd-5a657128289c6457cedf2d8ced4b03e62a591c38ce375445e5f0f779289951b6.scope - libcontainer container 5a657128289c6457cedf2d8ced4b03e62a591c38ce375445e5f0f779289951b6. Jan 20 02:02:59.289986 containerd[1591]: time="2026-01-20T02:02:59.289819582Z" level=info msg="StartContainer for \"5a657128289c6457cedf2d8ced4b03e62a591c38ce375445e5f0f779289951b6\" returns successfully" Jan 20 02:02:59.542318 kubelet[3059]: E0120 02:02:59.542191 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:00.040042 systemd[1]: Started sshd@80-10.0.0.51:22-10.0.0.1:55578.service - OpenSSH per-connection server daemon (10.0.0.1:55578). 
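The CreateContainer / "connecting to shim ... protocol=ttrpc" / StartContainer sequence above is the CRI plugin walking containerd's ordinary container lifecycle: create the container record inside the pod sandbox, spawn a task through the per-container shim socket, then start it. Roughly the same flow through containerd's public Go client looks like this — the container ID is illustrative, and CRI-managed containers live in the "k8s.io" namespace:

```go
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Load an existing container record (illustrative ID).
	container, err := client.LoadContainer(ctx, "5a657128289c6457")
	if err != nil {
		log.Fatal(err)
	}

	// NewTask is what spawns the shim; containerd then drives it over
	// the per-container unix socket seen in the "connecting to shim"
	// log entry.
	task, err := container.NewTask(ctx, cio.NullIO)
	if err != nil {
		log.Fatal(err)
	}

	// Start corresponds to the "StartContainer ... returns
	// successfully" entries.
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```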
Jan 20 02:03:00.356730 kubelet[3059]: I0120 02:03:00.348201 3059 scope.go:117] "RemoveContainer" containerID="d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0" Jan 20 02:03:00.356730 kubelet[3059]: E0120 02:03:00.348318 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:00.495532 containerd[1591]: time="2026-01-20T02:03:00.439214427Z" level=info msg="CreateContainer within sandbox \"ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:4,}" Jan 20 02:03:00.657292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558513762.mount: Deactivated successfully. Jan 20 02:03:00.696604 containerd[1591]: time="2026-01-20T02:03:00.696072514Z" level=info msg="Container cda1d4e8bfc5b1e085a3f5a99b775742f5940f2e3873c8d8eab661300655a4c5: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:03:00.732865 containerd[1591]: time="2026-01-20T02:03:00.727779625Z" level=info msg="CreateContainer within sandbox \"ae8312eb11de7daac822cf849009657d7133e63f0ef44116529f60d2ca4752e0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:4,} returns container id \"cda1d4e8bfc5b1e085a3f5a99b775742f5940f2e3873c8d8eab661300655a4c5\"" Jan 20 02:03:00.748903 containerd[1591]: time="2026-01-20T02:03:00.743703756Z" level=info msg="StartContainer for \"cda1d4e8bfc5b1e085a3f5a99b775742f5940f2e3873c8d8eab661300655a4c5\"" Jan 20 02:03:00.754988 containerd[1591]: time="2026-01-20T02:03:00.754931434Z" level=info msg="connecting to shim cda1d4e8bfc5b1e085a3f5a99b775742f5940f2e3873c8d8eab661300655a4c5" address="unix:///run/containerd/s/d1f127681a9c4311b456c6aab9e8ce8d82f6bff97094d53185fe0bdf6b34c086" protocol=ttrpc version=3 Jan 20 02:03:00.824945 sshd[6265]: Accepted publickey for core from 10.0.0.1 port 55578 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:03:00.838937 sshd-session[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:00.897602 systemd-logind[1565]: New session 81 of user core. Jan 20 02:03:00.935056 systemd[1]: Started cri-containerd-cda1d4e8bfc5b1e085a3f5a99b775742f5940f2e3873c8d8eab661300655a4c5.scope - libcontainer container cda1d4e8bfc5b1e085a3f5a99b775742f5940f2e3873c8d8eab661300655a4c5. Jan 20 02:03:00.936868 systemd[1]: Started session-81.scope - Session 81 of User core. Jan 20 02:03:01.287671 containerd[1591]: time="2026-01-20T02:03:01.282810132Z" level=info msg="StartContainer for \"cda1d4e8bfc5b1e085a3f5a99b775742f5940f2e3873c8d8eab661300655a4c5\" returns successfully" Jan 20 02:03:01.707677 kubelet[3059]: E0120 02:03:01.706949 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:01.978531 sshd[6286]: Connection closed by 10.0.0.1 port 55578 Jan 20 02:03:01.996690 sshd-session[6265]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:02.039643 systemd[1]: sshd@80-10.0.0.51:22-10.0.0.1:55578.service: Deactivated successfully. Jan 20 02:03:02.072641 systemd[1]: session-81.scope: Deactivated successfully. Jan 20 02:03:02.105146 systemd-logind[1565]: Session 81 logged out. Waiting for processes to exit. Jan 20 02:03:02.152556 systemd-logind[1565]: Removed session 81. 
Jan 20 02:03:02.764008 kubelet[3059]: E0120 02:03:02.763676 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:07.050204 systemd[1]: Started sshd@81-10.0.0.51:22-10.0.0.1:34254.service - OpenSSH per-connection server daemon (10.0.0.1:34254). Jan 20 02:03:07.100239 kubelet[3059]: E0120 02:03:07.099992 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:07.114253 kubelet[3059]: E0120 02:03:07.113551 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:07.327181 sshd[6320]: Accepted publickey for core from 10.0.0.1 port 34254 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:03:07.364921 sshd-session[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:07.415716 systemd-logind[1565]: New session 82 of user core. Jan 20 02:03:07.432945 systemd[1]: Started session-82.scope - Session 82 of User core. Jan 20 02:03:07.824734 sshd[6323]: Connection closed by 10.0.0.1 port 34254 Jan 20 02:03:07.828525 sshd-session[6320]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:07.865291 systemd[1]: sshd@81-10.0.0.51:22-10.0.0.1:34254.service: Deactivated successfully. Jan 20 02:03:07.889534 systemd[1]: session-82.scope: Deactivated successfully. Jan 20 02:03:07.903595 systemd-logind[1565]: Session 82 logged out. Waiting for processes to exit. Jan 20 02:03:07.938209 systemd-logind[1565]: Removed session 82. Jan 20 02:03:07.972348 systemd[1]: Started sshd@82-10.0.0.51:22-10.0.0.1:34268.service - OpenSSH per-connection server daemon (10.0.0.1:34268). Jan 20 02:03:08.311881 sshd[6336]: Accepted publickey for core from 10.0.0.1 port 34268 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:03:08.313807 sshd-session[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:08.374682 systemd-logind[1565]: New session 83 of user core. Jan 20 02:03:08.401747 systemd[1]: Started session-83.scope - Session 83 of User core. Jan 20 02:03:10.326718 sshd[6339]: Connection closed by 10.0.0.1 port 34268 Jan 20 02:03:10.327851 sshd-session[6336]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:10.369576 systemd[1]: sshd@82-10.0.0.51:22-10.0.0.1:34268.service: Deactivated successfully. Jan 20 02:03:10.389656 systemd[1]: session-83.scope: Deactivated successfully. Jan 20 02:03:10.401227 systemd-logind[1565]: Session 83 logged out. Waiting for processes to exit. Jan 20 02:03:10.425753 systemd-logind[1565]: Removed session 83. Jan 20 02:03:10.444683 systemd[1]: Started sshd@83-10.0.0.51:22-10.0.0.1:34272.service - OpenSSH per-connection server daemon (10.0.0.1:34272). Jan 20 02:03:10.752620 sshd[6351]: Accepted publickey for core from 10.0.0.1 port 34272 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:03:10.752047 sshd-session[6351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:10.792294 systemd-logind[1565]: New session 84 of user core. Jan 20 02:03:10.821263 systemd[1]: Started session-84.scope - Session 84 of User core. 
Jan 20 02:03:14.997608 sshd[6355]: Connection closed by 10.0.0.1 port 34272 Jan 20 02:03:15.021635 sshd-session[6351]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:15.067609 systemd[1]: sshd@83-10.0.0.51:22-10.0.0.1:34272.service: Deactivated successfully. Jan 20 02:03:15.102911 systemd[1]: session-84.scope: Deactivated successfully. Jan 20 02:03:15.110577 systemd[1]: session-84.scope: Consumed 1.070s CPU time, 44.3M memory peak. Jan 20 02:03:15.117170 systemd-logind[1565]: Session 84 logged out. Waiting for processes to exit. Jan 20 02:03:15.161242 systemd[1]: Started sshd@84-10.0.0.51:22-10.0.0.1:60984.service - OpenSSH per-connection server daemon (10.0.0.1:60984). Jan 20 02:03:15.165810 systemd-logind[1565]: Removed session 84. Jan 20 02:03:16.060164 sshd[6372]: Accepted publickey for core from 10.0.0.1 port 60984 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:03:16.079751 sshd-session[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:16.164800 systemd-logind[1565]: New session 85 of user core. Jan 20 02:03:16.209784 systemd[1]: Started session-85.scope - Session 85 of User core. Jan 20 02:03:17.212598 kubelet[3059]: E0120 02:03:17.205054 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:17.249511 kubelet[3059]: E0120 02:03:17.247780 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:18.154298 kubelet[3059]: E0120 02:03:18.153342 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:18.348581 kubelet[3059]: E0120 02:03:18.343788 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:19.669670 sshd[6375]: Connection closed by 10.0.0.1 port 60984 Jan 20 02:03:19.678909 sshd-session[6372]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:19.774017 systemd[1]: sshd@84-10.0.0.51:22-10.0.0.1:60984.service: Deactivated successfully. Jan 20 02:03:19.792768 systemd[1]: session-85.scope: Deactivated successfully. Jan 20 02:03:19.807918 systemd-logind[1565]: Session 85 logged out. Waiting for processes to exit. Jan 20 02:03:19.839073 systemd[1]: Started sshd@85-10.0.0.51:22-10.0.0.1:60994.service - OpenSSH per-connection server daemon (10.0.0.1:60994). Jan 20 02:03:19.857939 systemd-logind[1565]: Removed session 85. Jan 20 02:03:20.246816 sshd[6387]: Accepted publickey for core from 10.0.0.1 port 60994 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:03:20.584998 sshd-session[6387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:20.642814 systemd-logind[1565]: New session 86 of user core. Jan 20 02:03:20.664891 systemd[1]: Started session-86.scope - Session 86 of User core. Jan 20 02:03:21.823681 sshd[6390]: Connection closed by 10.0.0.1 port 60994 Jan 20 02:03:21.825973 sshd-session[6387]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:21.868016 systemd[1]: sshd@85-10.0.0.51:22-10.0.0.1:60994.service: Deactivated successfully. 
Jan 20 02:03:21.909112 systemd[1]: session-86.scope: Deactivated successfully. Jan 20 02:03:21.933832 systemd-logind[1565]: Session 86 logged out. Waiting for processes to exit. Jan 20 02:03:22.007518 systemd-logind[1565]: Removed session 86. Jan 20 02:03:26.342307 kubelet[3059]: E0120 02:03:26.342204 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:26.923214 systemd[1]: Started sshd@86-10.0.0.51:22-10.0.0.1:57138.service - OpenSSH per-connection server daemon (10.0.0.1:57138). Jan 20 02:03:27.781905 sshd[6406]: Accepted publickey for core from 10.0.0.1 port 57138 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:03:27.783336 sshd-session[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:27.842213 systemd-logind[1565]: New session 87 of user core. Jan 20 02:03:27.912338 systemd[1]: Started session-87.scope - Session 87 of User core. Jan 20 02:03:29.254085 sshd[6411]: Connection closed by 10.0.0.1 port 57138 Jan 20 02:03:29.261137 sshd-session[6406]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:29.287723 systemd[1]: sshd@86-10.0.0.51:22-10.0.0.1:57138.service: Deactivated successfully. Jan 20 02:03:29.306679 systemd[1]: session-87.scope: Deactivated successfully. Jan 20 02:03:29.318903 systemd-logind[1565]: Session 87 logged out. Waiting for processes to exit. Jan 20 02:03:29.328122 systemd-logind[1565]: Removed session 87. Jan 20 02:03:33.365269 kubelet[3059]: E0120 02:03:33.346069 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:34.308908 systemd[1]: Started sshd@87-10.0.0.51:22-10.0.0.1:57146.service - OpenSSH per-connection server daemon (10.0.0.1:57146). Jan 20 02:03:35.033852 sshd[6424]: Accepted publickey for core from 10.0.0.1 port 57146 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:03:35.039074 sshd-session[6424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:35.095860 systemd-logind[1565]: New session 88 of user core. Jan 20 02:03:35.115745 systemd[1]: Started session-88.scope - Session 88 of User core. Jan 20 02:03:35.408032 kubelet[3059]: E0120 02:03:35.405018 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:36.161555 sshd[6427]: Connection closed by 10.0.0.1 port 57146 Jan 20 02:03:36.161870 sshd-session[6424]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:36.190268 systemd[1]: sshd@87-10.0.0.51:22-10.0.0.1:57146.service: Deactivated successfully. Jan 20 02:03:36.223582 systemd[1]: session-88.scope: Deactivated successfully. Jan 20 02:03:36.235028 systemd-logind[1565]: Session 88 logged out. Waiting for processes to exit. Jan 20 02:03:36.264919 systemd-logind[1565]: Removed session 88. Jan 20 02:03:41.319143 systemd[1]: Started sshd@88-10.0.0.51:22-10.0.0.1:57198.service - OpenSSH per-connection server daemon (10.0.0.1:57198). 
Jan 20 02:03:41.857641 sshd[6443]: Accepted publickey for core from 10.0.0.1 port 57198 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:03:41.866126 sshd-session[6443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:41.922885 systemd-logind[1565]: New session 89 of user core. Jan 20 02:03:41.937273 systemd[1]: Started session-89.scope - Session 89 of User core. Jan 20 02:03:43.287742 sshd[6446]: Connection closed by 10.0.0.1 port 57198 Jan 20 02:03:43.289258 sshd-session[6443]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:43.313286 systemd[1]: sshd@88-10.0.0.51:22-10.0.0.1:57198.service: Deactivated successfully. Jan 20 02:03:43.394803 systemd[1]: session-89.scope: Deactivated successfully. Jan 20 02:03:43.419099 systemd-logind[1565]: Session 89 logged out. Waiting for processes to exit. Jan 20 02:03:43.453761 systemd-logind[1565]: Removed session 89. Jan 20 02:03:45.357586 kubelet[3059]: E0120 02:03:45.344114 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:48.378084 systemd[1]: Started sshd@89-10.0.0.51:22-10.0.0.1:40254.service - OpenSSH per-connection server daemon (10.0.0.1:40254). Jan 20 02:03:48.875794 sshd[6459]: Accepted publickey for core from 10.0.0.1 port 40254 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:03:48.880821 sshd-session[6459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:48.928096 systemd-logind[1565]: New session 90 of user core. Jan 20 02:03:48.966568 systemd[1]: Started session-90.scope - Session 90 of User core. Jan 20 02:03:50.020269 sshd[6462]: Connection closed by 10.0.0.1 port 40254 Jan 20 02:03:50.022110 sshd-session[6459]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:50.063583 systemd[1]: sshd@89-10.0.0.51:22-10.0.0.1:40254.service: Deactivated successfully. Jan 20 02:03:50.074788 systemd[1]: session-90.scope: Deactivated successfully. Jan 20 02:03:50.110868 systemd-logind[1565]: Session 90 logged out. Waiting for processes to exit. Jan 20 02:03:50.134955 systemd-logind[1565]: Removed session 90. Jan 20 02:03:55.101180 systemd[1]: Started sshd@90-10.0.0.51:22-10.0.0.1:46496.service - OpenSSH per-connection server daemon (10.0.0.1:46496). Jan 20 02:03:55.585664 sshd[6477]: Accepted publickey for core from 10.0.0.1 port 46496 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:03:55.614573 sshd-session[6477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:55.688704 systemd-logind[1565]: New session 91 of user core. Jan 20 02:03:55.709255 systemd[1]: Started session-91.scope - Session 91 of User core. Jan 20 02:03:56.831242 sshd[6480]: Connection closed by 10.0.0.1 port 46496 Jan 20 02:03:56.835071 sshd-session[6477]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:56.871909 systemd[1]: sshd@90-10.0.0.51:22-10.0.0.1:46496.service: Deactivated successfully. Jan 20 02:03:56.887165 systemd[1]: session-91.scope: Deactivated successfully. Jan 20 02:03:56.935683 systemd-logind[1565]: Session 91 logged out. Waiting for processes to exit. Jan 20 02:03:56.953693 systemd-logind[1565]: Removed session 91. 
Jan 20 02:03:59.351820 kubelet[3059]: E0120 02:03:59.350193 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:01.939972 systemd[1]: Started sshd@91-10.0.0.51:22-10.0.0.1:46506.service - OpenSSH per-connection server daemon (10.0.0.1:46506). Jan 20 02:04:02.496632 sshd[6494]: Accepted publickey for core from 10.0.0.1 port 46506 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:04:02.515881 sshd-session[6494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:02.573630 systemd-logind[1565]: New session 92 of user core. Jan 20 02:04:02.893945 systemd[1]: Started session-92.scope - Session 92 of User core. Jan 20 02:04:03.840997 sshd[6497]: Connection closed by 10.0.0.1 port 46506 Jan 20 02:04:03.843348 sshd-session[6494]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:03.865328 systemd-logind[1565]: Session 92 logged out. Waiting for processes to exit. Jan 20 02:04:03.874938 systemd[1]: sshd@91-10.0.0.51:22-10.0.0.1:46506.service: Deactivated successfully. Jan 20 02:04:03.923909 systemd[1]: session-92.scope: Deactivated successfully. Jan 20 02:04:03.993681 systemd-logind[1565]: Removed session 92. Jan 20 02:04:08.946076 systemd[1]: Started sshd@92-10.0.0.51:22-10.0.0.1:52858.service - OpenSSH per-connection server daemon (10.0.0.1:52858). Jan 20 02:04:09.419580 sshd[6510]: Accepted publickey for core from 10.0.0.1 port 52858 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:04:09.457924 sshd-session[6510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:09.550737 systemd-logind[1565]: New session 93 of user core. Jan 20 02:04:09.595076 systemd[1]: Started session-93.scope - Session 93 of User core. Jan 20 02:04:11.024637 sshd[6513]: Connection closed by 10.0.0.1 port 52858 Jan 20 02:04:11.028077 sshd-session[6510]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:11.117849 systemd[1]: sshd@92-10.0.0.51:22-10.0.0.1:52858.service: Deactivated successfully. Jan 20 02:04:11.153274 systemd[1]: session-93.scope: Deactivated successfully. Jan 20 02:04:11.180959 systemd-logind[1565]: Session 93 logged out. Waiting for processes to exit. Jan 20 02:04:11.200836 systemd-logind[1565]: Removed session 93. Jan 20 02:04:16.076673 systemd[1]: Started sshd@93-10.0.0.51:22-10.0.0.1:34090.service - OpenSSH per-connection server daemon (10.0.0.1:34090). Jan 20 02:04:16.699051 sshd[6527]: Accepted publickey for core from 10.0.0.1 port 34090 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:04:16.699293 sshd-session[6527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:16.765326 systemd-logind[1565]: New session 94 of user core. Jan 20 02:04:16.792124 systemd[1]: Started session-94.scope - Session 94 of User core. Jan 20 02:04:17.912114 sshd[6530]: Connection closed by 10.0.0.1 port 34090 Jan 20 02:04:17.911852 sshd-session[6527]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:17.975108 systemd[1]: sshd@93-10.0.0.51:22-10.0.0.1:34090.service: Deactivated successfully. Jan 20 02:04:17.980898 systemd[1]: session-94.scope: Deactivated successfully. Jan 20 02:04:17.998579 systemd-logind[1565]: Session 94 logged out. Waiting for processes to exit. Jan 20 02:04:18.028962 systemd-logind[1565]: Removed session 94. 
Jan 20 02:04:23.026248 systemd[1]: Started sshd@94-10.0.0.51:22-10.0.0.1:34094.service - OpenSSH per-connection server daemon (10.0.0.1:34094). Jan 20 02:04:23.370167 kubelet[3059]: E0120 02:04:23.369630 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:23.436066 sshd[6546]: Accepted publickey for core from 10.0.0.1 port 34094 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:04:23.444341 sshd-session[6546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:23.510346 systemd-logind[1565]: New session 95 of user core. Jan 20 02:04:23.538849 systemd[1]: Started session-95.scope - Session 95 of User core. Jan 20 02:04:24.564594 sshd[6549]: Connection closed by 10.0.0.1 port 34094 Jan 20 02:04:24.563929 sshd-session[6546]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:24.619782 systemd[1]: sshd@94-10.0.0.51:22-10.0.0.1:34094.service: Deactivated successfully. Jan 20 02:04:24.659103 systemd[1]: session-95.scope: Deactivated successfully. Jan 20 02:04:24.677874 systemd-logind[1565]: Session 95 logged out. Waiting for processes to exit. Jan 20 02:04:24.696074 systemd-logind[1565]: Removed session 95. Jan 20 02:04:29.661083 systemd[1]: Started sshd@95-10.0.0.51:22-10.0.0.1:60964.service - OpenSSH per-connection server daemon (10.0.0.1:60964). Jan 20 02:04:30.051042 sshd[6564]: Accepted publickey for core from 10.0.0.1 port 60964 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:04:30.055068 sshd-session[6564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:30.125820 systemd-logind[1565]: New session 96 of user core. Jan 20 02:04:30.243208 systemd[1]: Started session-96.scope - Session 96 of User core. Jan 20 02:04:31.315305 sshd[6567]: Connection closed by 10.0.0.1 port 60964 Jan 20 02:04:31.329194 sshd-session[6564]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:31.367838 systemd[1]: sshd@95-10.0.0.51:22-10.0.0.1:60964.service: Deactivated successfully. Jan 20 02:04:31.379223 systemd[1]: session-96.scope: Deactivated successfully. Jan 20 02:04:31.388113 systemd-logind[1565]: Session 96 logged out. Waiting for processes to exit. Jan 20 02:04:31.390697 systemd-logind[1565]: Removed session 96. Jan 20 02:04:36.505971 systemd[1]: Started sshd@96-10.0.0.51:22-10.0.0.1:43270.service - OpenSSH per-connection server daemon (10.0.0.1:43270). Jan 20 02:04:37.086304 sshd[6581]: Accepted publickey for core from 10.0.0.1 port 43270 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:04:37.192206 sshd-session[6581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:37.306977 systemd-logind[1565]: New session 97 of user core. Jan 20 02:04:37.358301 systemd[1]: Started session-97.scope - Session 97 of User core. Jan 20 02:04:38.521560 sshd[6584]: Connection closed by 10.0.0.1 port 43270 Jan 20 02:04:38.518841 sshd-session[6581]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:38.567235 systemd[1]: sshd@96-10.0.0.51:22-10.0.0.1:43270.service: Deactivated successfully. Jan 20 02:04:38.587996 systemd[1]: session-97.scope: Deactivated successfully. Jan 20 02:04:38.608801 systemd-logind[1565]: Session 97 logged out. Waiting for processes to exit. Jan 20 02:04:38.620303 systemd-logind[1565]: Removed session 97. 
Jan 20 02:04:39.344735 kubelet[3059]: E0120 02:04:39.343549 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:40.340830 kubelet[3059]: E0120 02:04:40.339928 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:43.579890 systemd[1]: Started sshd@97-10.0.0.51:22-10.0.0.1:43284.service - OpenSSH per-connection server daemon (10.0.0.1:43284). Jan 20 02:04:43.862976 sshd[6600]: Accepted publickey for core from 10.0.0.1 port 43284 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:04:43.867020 sshd-session[6600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:43.892544 systemd-logind[1565]: New session 98 of user core. Jan 20 02:04:43.922230 systemd[1]: Started session-98.scope - Session 98 of User core. Jan 20 02:04:44.733507 sshd[6603]: Connection closed by 10.0.0.1 port 43284 Jan 20 02:04:44.734347 sshd-session[6600]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:44.790775 systemd[1]: sshd@97-10.0.0.51:22-10.0.0.1:43284.service: Deactivated successfully. Jan 20 02:04:44.805844 systemd[1]: session-98.scope: Deactivated successfully. Jan 20 02:04:44.823863 systemd-logind[1565]: Session 98 logged out. Waiting for processes to exit. Jan 20 02:04:44.888565 systemd-logind[1565]: Removed session 98. Jan 20 02:04:47.347880 kubelet[3059]: E0120 02:04:47.343617 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:49.841694 systemd[1]: Started sshd@98-10.0.0.51:22-10.0.0.1:51314.service - OpenSSH per-connection server daemon (10.0.0.1:51314). Jan 20 02:04:50.340873 sshd[6616]: Accepted publickey for core from 10.0.0.1 port 51314 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:04:50.362875 sshd-session[6616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:50.442762 systemd-logind[1565]: New session 99 of user core. Jan 20 02:04:50.479800 systemd[1]: Started session-99.scope - Session 99 of User core. Jan 20 02:04:51.524123 sshd[6619]: Connection closed by 10.0.0.1 port 51314 Jan 20 02:04:51.526227 sshd-session[6616]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:51.548176 systemd[1]: sshd@98-10.0.0.51:22-10.0.0.1:51314.service: Deactivated successfully. Jan 20 02:04:51.572331 systemd[1]: session-99.scope: Deactivated successfully. Jan 20 02:04:51.597290 systemd-logind[1565]: Session 99 logged out. Waiting for processes to exit. Jan 20 02:04:51.614496 systemd-logind[1565]: Removed session 99. Jan 20 02:04:56.363858 kubelet[3059]: E0120 02:04:56.361018 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:56.613234 systemd[1]: Started sshd@99-10.0.0.51:22-10.0.0.1:46000.service - OpenSSH per-connection server daemon (10.0.0.1:46000). 
Jan 20 02:04:57.072175 sshd[6634]: Accepted publickey for core from 10.0.0.1 port 46000 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:04:57.096010 sshd-session[6634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:57.201811 systemd-logind[1565]: New session 100 of user core. Jan 20 02:04:57.285706 systemd[1]: Started session-100.scope - Session 100 of User core. Jan 20 02:04:59.071648 sshd[6638]: Connection closed by 10.0.0.1 port 46000 Jan 20 02:04:59.086344 sshd-session[6634]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:59.158658 systemd[1]: sshd@99-10.0.0.51:22-10.0.0.1:46000.service: Deactivated successfully. Jan 20 02:04:59.217215 systemd[1]: session-100.scope: Deactivated successfully. Jan 20 02:04:59.239778 systemd-logind[1565]: Session 100 logged out. Waiting for processes to exit. Jan 20 02:04:59.268081 systemd-logind[1565]: Removed session 100. Jan 20 02:05:01.354495 kubelet[3059]: E0120 02:05:01.354322 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:05:04.216975 systemd[1]: Started sshd@100-10.0.0.51:22-10.0.0.1:46016.service - OpenSSH per-connection server daemon (10.0.0.1:46016). Jan 20 02:05:04.638905 sshd[6653]: Accepted publickey for core from 10.0.0.1 port 46016 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:05:04.655976 sshd-session[6653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:05:04.718051 systemd-logind[1565]: New session 101 of user core. Jan 20 02:05:04.754758 systemd[1]: Started session-101.scope - Session 101 of User core. Jan 20 02:05:05.556287 sshd[6656]: Connection closed by 10.0.0.1 port 46016 Jan 20 02:05:05.558189 sshd-session[6653]: pam_unix(sshd:session): session closed for user core Jan 20 02:05:05.593061 systemd-logind[1565]: Session 101 logged out. Waiting for processes to exit. Jan 20 02:05:05.626087 systemd[1]: sshd@100-10.0.0.51:22-10.0.0.1:46016.service: Deactivated successfully. Jan 20 02:05:05.644282 systemd[1]: session-101.scope: Deactivated successfully. Jan 20 02:05:05.673091 systemd-logind[1565]: Removed session 101. Jan 20 02:05:16.535009 systemd[1]: Started sshd@101-10.0.0.51:22-10.0.0.1:46128.service - OpenSSH per-connection server daemon (10.0.0.1:46128). Jan 20 02:05:18.351500 kubelet[3059]: E0120 02:05:18.351119 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:05:18.505677 kubelet[3059]: E0120 02:05:18.501534 3059 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.732s" Jan 20 02:05:18.575620 kubelet[3059]: E0120 02:05:18.575580 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:05:18.691919 sshd[6671]: Accepted publickey for core from 10.0.0.1 port 46128 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:05:18.702605 sshd-session[6671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:05:18.759738 systemd-logind[1565]: New session 102 of user core. Jan 20 02:05:18.774880 systemd[1]: Started session-102.scope - Session 102 of User core. 
Jan 20 02:05:19.900135 sshd[6676]: Connection closed by 10.0.0.1 port 46128 Jan 20 02:05:19.901733 sshd-session[6671]: pam_unix(sshd:session): session closed for user core Jan 20 02:05:19.953229 systemd[1]: sshd@101-10.0.0.51:22-10.0.0.1:46128.service: Deactivated successfully. Jan 20 02:05:19.985086 systemd[1]: session-102.scope: Deactivated successfully. Jan 20 02:05:20.002303 systemd-logind[1565]: Session 102 logged out. Waiting for processes to exit. Jan 20 02:05:20.027516 systemd-logind[1565]: Removed session 102. Jan 20 02:05:25.016696 systemd[1]: Started sshd@102-10.0.0.51:22-10.0.0.1:42896.service - OpenSSH per-connection server daemon (10.0.0.1:42896). Jan 20 02:05:25.490834 sshd[6691]: Accepted publickey for core from 10.0.0.1 port 42896 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:05:25.497241 sshd-session[6691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:05:25.536707 systemd-logind[1565]: New session 103 of user core. Jan 20 02:05:25.550835 systemd[1]: Started session-103.scope - Session 103 of User core. Jan 20 02:05:27.184254 sshd[6694]: Connection closed by 10.0.0.1 port 42896 Jan 20 02:05:27.195317 sshd-session[6691]: pam_unix(sshd:session): session closed for user core Jan 20 02:05:27.284742 systemd[1]: sshd@102-10.0.0.51:22-10.0.0.1:42896.service: Deactivated successfully. Jan 20 02:05:27.320002 systemd[1]: session-103.scope: Deactivated successfully. Jan 20 02:05:27.367720 systemd-logind[1565]: Session 103 logged out. Waiting for processes to exit. Jan 20 02:05:27.395044 systemd-logind[1565]: Removed session 103. Jan 20 02:05:32.289956 systemd[1]: Started sshd@103-10.0.0.51:22-10.0.0.1:42900.service - OpenSSH per-connection server daemon (10.0.0.1:42900). Jan 20 02:05:32.870534 sshd[6708]: Accepted publickey for core from 10.0.0.1 port 42900 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:05:32.912251 sshd-session[6708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:05:32.971550 systemd-logind[1565]: New session 104 of user core. Jan 20 02:05:32.989769 systemd[1]: Started session-104.scope - Session 104 of User core. Jan 20 02:05:34.097194 sshd[6711]: Connection closed by 10.0.0.1 port 42900 Jan 20 02:05:34.092947 sshd-session[6708]: pam_unix(sshd:session): session closed for user core Jan 20 02:05:34.118341 systemd[1]: sshd@103-10.0.0.51:22-10.0.0.1:42900.service: Deactivated successfully. Jan 20 02:05:34.147876 systemd[1]: session-104.scope: Deactivated successfully. Jan 20 02:05:34.160802 systemd-logind[1565]: Session 104 logged out. Waiting for processes to exit. Jan 20 02:05:34.191537 systemd-logind[1565]: Removed session 104. Jan 20 02:05:39.280008 systemd[1]: Started sshd@104-10.0.0.51:22-10.0.0.1:38606.service - OpenSSH per-connection server daemon (10.0.0.1:38606). Jan 20 02:05:40.035097 sshd[6725]: Accepted publickey for core from 10.0.0.1 port 38606 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:05:40.057678 sshd-session[6725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:05:40.132268 systemd-logind[1565]: New session 105 of user core. Jan 20 02:05:40.154316 systemd[1]: Started session-105.scope - Session 105 of User core. 
Jan 20 02:05:41.242703 sshd[6732]: Connection closed by 10.0.0.1 port 38606 Jan 20 02:05:41.242071 sshd-session[6725]: pam_unix(sshd:session): session closed for user core Jan 20 02:05:41.288332 systemd[1]: sshd@104-10.0.0.51:22-10.0.0.1:38606.service: Deactivated successfully. Jan 20 02:05:41.311040 systemd[1]: session-105.scope: Deactivated successfully. Jan 20 02:05:41.342621 systemd-logind[1565]: Session 105 logged out. Waiting for processes to exit. Jan 20 02:05:41.362668 systemd-logind[1565]: Removed session 105. Jan 20 02:05:42.345829 kubelet[3059]: E0120 02:05:42.345709 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:05:46.348169 systemd[1]: Started sshd@105-10.0.0.51:22-10.0.0.1:58690.service - OpenSSH per-connection server daemon (10.0.0.1:58690). Jan 20 02:05:46.842232 sshd[6745]: Accepted publickey for core from 10.0.0.1 port 58690 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:05:46.860807 sshd-session[6745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:05:46.944019 systemd-logind[1565]: New session 106 of user core. Jan 20 02:05:47.000716 systemd[1]: Started session-106.scope - Session 106 of User core. Jan 20 02:05:47.932783 sshd[6748]: Connection closed by 10.0.0.1 port 58690 Jan 20 02:05:47.927866 sshd-session[6745]: pam_unix(sshd:session): session closed for user core Jan 20 02:05:47.958083 systemd[1]: sshd@105-10.0.0.51:22-10.0.0.1:58690.service: Deactivated successfully. Jan 20 02:05:47.984041 systemd[1]: session-106.scope: Deactivated successfully. Jan 20 02:05:47.996199 systemd-logind[1565]: Session 106 logged out. Waiting for processes to exit. Jan 20 02:05:48.009172 systemd-logind[1565]: Removed session 106. Jan 20 02:05:52.965957 systemd[1]: Started sshd@106-10.0.0.51:22-10.0.0.1:58698.service - OpenSSH per-connection server daemon (10.0.0.1:58698). Jan 20 02:05:53.208034 sshd[6764]: Accepted publickey for core from 10.0.0.1 port 58698 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:05:53.226974 sshd-session[6764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:05:53.313283 systemd-logind[1565]: New session 107 of user core. Jan 20 02:05:53.374759 systemd[1]: Started session-107.scope - Session 107 of User core. Jan 20 02:05:54.335847 sshd[6767]: Connection closed by 10.0.0.1 port 58698 Jan 20 02:05:54.334246 sshd-session[6764]: pam_unix(sshd:session): session closed for user core Jan 20 02:05:54.364857 systemd[1]: sshd@106-10.0.0.51:22-10.0.0.1:58698.service: Deactivated successfully. Jan 20 02:05:54.387005 systemd[1]: session-107.scope: Deactivated successfully. Jan 20 02:05:54.408936 systemd-logind[1565]: Session 107 logged out. Waiting for processes to exit. Jan 20 02:05:54.429976 systemd-logind[1565]: Removed session 107. Jan 20 02:05:59.522549 systemd[1]: Started sshd@107-10.0.0.51:22-10.0.0.1:48208.service - OpenSSH per-connection server daemon (10.0.0.1:48208). Jan 20 02:06:00.664800 sshd[6781]: Accepted publickey for core from 10.0.0.1 port 48208 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:06:00.663566 sshd-session[6781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:06:01.119667 systemd-logind[1565]: New session 108 of user core. Jan 20 02:06:01.170062 systemd[1]: Started session-108.scope - Session 108 of User core. 
Jan 20 02:06:02.719731 sshd[6784]: Connection closed by 10.0.0.1 port 48208 Jan 20 02:06:02.715944 sshd-session[6781]: pam_unix(sshd:session): session closed for user core Jan 20 02:06:02.793174 systemd[1]: sshd@107-10.0.0.51:22-10.0.0.1:48208.service: Deactivated successfully. Jan 20 02:06:02.812545 systemd[1]: session-108.scope: Deactivated successfully. Jan 20 02:06:02.831651 systemd-logind[1565]: Session 108 logged out. Waiting for processes to exit. Jan 20 02:06:02.867610 systemd-logind[1565]: Removed session 108. Jan 20 02:06:04.342674 kubelet[3059]: E0120 02:06:04.342550 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:06:07.391814 kubelet[3059]: E0120 02:06:07.379848 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:06:07.780768 systemd[1]: Started sshd@108-10.0.0.51:22-10.0.0.1:43092.service - OpenSSH per-connection server daemon (10.0.0.1:43092). Jan 20 02:06:08.233004 sshd[6797]: Accepted publickey for core from 10.0.0.1 port 43092 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:06:08.254021 sshd-session[6797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:06:08.291846 systemd-logind[1565]: New session 109 of user core. Jan 20 02:06:08.339695 systemd[1]: Started session-109.scope - Session 109 of User core. Jan 20 02:06:09.170575 sshd[6800]: Connection closed by 10.0.0.1 port 43092 Jan 20 02:06:09.175774 sshd-session[6797]: pam_unix(sshd:session): session closed for user core Jan 20 02:06:09.222051 systemd[1]: sshd@108-10.0.0.51:22-10.0.0.1:43092.service: Deactivated successfully. Jan 20 02:06:09.256959 systemd[1]: session-109.scope: Deactivated successfully. Jan 20 02:06:09.295008 systemd-logind[1565]: Session 109 logged out. Waiting for processes to exit. Jan 20 02:06:09.314251 systemd-logind[1565]: Removed session 109. Jan 20 02:06:14.305592 systemd[1]: Started sshd@109-10.0.0.51:22-10.0.0.1:43098.service - OpenSSH per-connection server daemon (10.0.0.1:43098). Jan 20 02:06:14.691424 sshd[6814]: Accepted publickey for core from 10.0.0.1 port 43098 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:06:14.713322 sshd-session[6814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:06:14.767656 systemd-logind[1565]: New session 110 of user core. Jan 20 02:06:14.791776 systemd[1]: Started session-110.scope - Session 110 of User core. Jan 20 02:06:15.676519 sshd[6817]: Connection closed by 10.0.0.1 port 43098 Jan 20 02:06:15.673890 sshd-session[6814]: pam_unix(sshd:session): session closed for user core Jan 20 02:06:15.697245 systemd[1]: sshd@109-10.0.0.51:22-10.0.0.1:43098.service: Deactivated successfully. Jan 20 02:06:15.721896 systemd[1]: session-110.scope: Deactivated successfully. Jan 20 02:06:15.735294 systemd-logind[1565]: Session 110 logged out. Waiting for processes to exit. Jan 20 02:06:15.757037 systemd-logind[1565]: Removed session 110. 
Jan 20 02:06:17.343529 kubelet[3059]: E0120 02:06:17.342141 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:06:20.341938 kubelet[3059]: E0120 02:06:20.341559 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:06:20.765264 systemd[1]: Started sshd@110-10.0.0.51:22-10.0.0.1:38198.service - OpenSSH per-connection server daemon (10.0.0.1:38198). Jan 20 02:06:20.956184 sshd[6831]: Accepted publickey for core from 10.0.0.1 port 38198 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:06:20.953949 sshd-session[6831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:06:20.985141 systemd-logind[1565]: New session 111 of user core. Jan 20 02:06:21.029941 systemd[1]: Started session-111.scope - Session 111 of User core. Jan 20 02:06:21.708838 sshd[6834]: Connection closed by 10.0.0.1 port 38198 Jan 20 02:06:21.706700 sshd-session[6831]: pam_unix(sshd:session): session closed for user core Jan 20 02:06:21.785598 systemd[1]: sshd@110-10.0.0.51:22-10.0.0.1:38198.service: Deactivated successfully. Jan 20 02:06:21.818231 systemd[1]: session-111.scope: Deactivated successfully. Jan 20 02:06:21.841415 systemd-logind[1565]: Session 111 logged out. Waiting for processes to exit. Jan 20 02:06:21.898692 systemd[1]: Started sshd@111-10.0.0.51:22-10.0.0.1:38208.service - OpenSSH per-connection server daemon (10.0.0.1:38208). Jan 20 02:06:21.913811 systemd-logind[1565]: Removed session 111. Jan 20 02:06:22.318530 sshd[6850]: Accepted publickey for core from 10.0.0.1 port 38208 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:06:22.335694 sshd-session[6850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:06:22.384708 systemd-logind[1565]: New session 112 of user core. Jan 20 02:06:22.394746 systemd[1]: Started session-112.scope - Session 112 of User core. Jan 20 02:06:23.346863 kubelet[3059]: E0120 02:06:23.345219 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:06:25.358900 kubelet[3059]: E0120 02:06:25.356170 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:06:26.224318 containerd[1591]: time="2026-01-20T02:06:26.218831726Z" level=info msg="StopContainer for \"7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9\" with timeout 30 (s)" Jan 20 02:06:26.235218 containerd[1591]: time="2026-01-20T02:06:26.228291669Z" level=info msg="Stop container \"7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9\" with signal terminated" Jan 20 02:06:26.406112 kubelet[3059]: E0120 02:06:26.406057 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:06:26.529061 systemd[1]: cri-containerd-7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9.scope: Deactivated successfully. 
Jan 20 02:06:26.547512 containerd[1591]: time="2026-01-20T02:06:26.547307477Z" level=info msg="received container exit event container_id:\"7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9\" id:\"7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9\" pid:4818 exited_at:{seconds:1768874786 nanos:539209984}" Jan 20 02:06:26.551564 systemd[1]: cri-containerd-7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9.scope: Consumed 5.293s CPU time, 31.1M memory peak, 4K written to disk. Jan 20 02:06:26.878862 containerd[1591]: time="2026-01-20T02:06:26.878591039Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 02:06:26.931191 kubelet[3059]: E0120 02:06:26.931098 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:06:26.977113 containerd[1591]: time="2026-01-20T02:06:26.977054876Z" level=info msg="StopContainer for \"bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4\" with timeout 2 (s)" Jan 20 02:06:26.987015 containerd[1591]: time="2026-01-20T02:06:26.985850805Z" level=info msg="Stop container \"bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4\" with signal terminated" Jan 20 02:06:27.093074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9-rootfs.mount: Deactivated successfully. Jan 20 02:06:27.260064 systemd-networkd[1486]: lxc_health: Link DOWN Jan 20 02:06:27.260082 systemd-networkd[1486]: lxc_health: Lost carrier Jan 20 02:06:27.675265 systemd[1]: cri-containerd-bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4.scope: Deactivated successfully. Jan 20 02:06:27.675898 systemd[1]: cri-containerd-bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4.scope: Consumed 37.985s CPU time, 129.2M memory peak, 496K read from disk, 13.3M written to disk. 
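The "failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" line is containerd's CRI plugin reacting to an inotify event on the CNI config directory: deleting the last conf file (done here as part of tearing down the old cilium pod) leaves no network config at all, so the runtime flips to NetworkReady=false until a new config is written back. A minimal sketch of that watch-and-reload pattern using the real github.com/fsnotify/fsnotify API; reloadCNIConfig is a hypothetical stand-in for the plugin's loader:

    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()

        // containerd's CRI plugin watches the CNI conf dir the same way.
        if err := w.Add("/etc/cni/net.d"); err != nil {
            log.Fatal(err)
        }
        for ev := range w.Events {
            if ev.Op&(fsnotify.Create|fsnotify.Remove|fsnotify.Write) != 0 {
                // On REMOVE of the last conf file this fails with
                // "no network config found", as in the log above.
                if err := reloadCNIConfig("/etc/cni/net.d"); err != nil {
                    log.Printf("failed to reload cni configuration after receiving fs change event(%s): %v", ev, err)
                }
            }
        }
    }

    // reloadCNIConfig is a stand-in for the CRI plugin's config loader.
    func reloadCNIConfig(dir string) error { return nil }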
Jan 20 02:06:27.709002 containerd[1591]: time="2026-01-20T02:06:27.694174160Z" level=info msg="received container exit event container_id:\"bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4\" id:\"bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4\" pid:3939 exited_at:{seconds:1768874787 nanos:686590513}" Jan 20 02:06:27.709002 containerd[1591]: time="2026-01-20T02:06:27.695950142Z" level=info msg="StopContainer for \"7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9\" returns successfully" Jan 20 02:06:27.784820 sshd[6853]: Connection closed by 10.0.0.1 port 38208 Jan 20 02:06:27.784223 sshd-session[6850]: pam_unix(sshd:session): session closed for user core Jan 20 02:06:27.792509 kubelet[3059]: I0120 02:06:27.788115 3059 scope.go:117] "RemoveContainer" containerID="964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6" Jan 20 02:06:27.838454 containerd[1591]: time="2026-01-20T02:06:27.837967463Z" level=info msg="StopPodSandbox for \"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\"" Jan 20 02:06:27.848570 containerd[1591]: time="2026-01-20T02:06:27.848230122Z" level=info msg="Container to stop \"964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:06:27.848570 containerd[1591]: time="2026-01-20T02:06:27.848308387Z" level=info msg="Container to stop \"7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:06:27.895561 containerd[1591]: time="2026-01-20T02:06:27.894557460Z" level=info msg="RemoveContainer for \"964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6\"" Jan 20 02:06:27.903141 systemd[1]: sshd@111-10.0.0.51:22-10.0.0.1:38208.service: Deactivated successfully. Jan 20 02:06:27.944100 systemd[1]: session-112.scope: Deactivated successfully. Jan 20 02:06:27.970559 systemd-logind[1565]: Session 112 logged out. Waiting for processes to exit. Jan 20 02:06:28.011038 systemd[1]: Started sshd@112-10.0.0.51:22-10.0.0.1:40160.service - OpenSSH per-connection server daemon (10.0.0.1:40160). Jan 20 02:06:28.048722 systemd-logind[1565]: Removed session 112. Jan 20 02:06:28.056122 systemd[1]: cri-containerd-20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b.scope: Deactivated successfully. Jan 20 02:06:28.117436 containerd[1591]: time="2026-01-20T02:06:28.117252952Z" level=info msg="received sandbox exit event container_id:\"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\" id:\"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\" exit_status:137 exited_at:{seconds:1768874788 nanos:110000991}" monitor_name=podsandbox Jan 20 02:06:28.187688 containerd[1591]: time="2026-01-20T02:06:28.182458739Z" level=info msg="RemoveContainer for \"964b3ffea1bc5447a15745eefe111eff73acc5dd1eaf6843a021975662caefd6\" returns successfully" Jan 20 02:06:28.316977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4-rootfs.mount: Deactivated successfully. Jan 20 02:06:28.540515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b-rootfs.mount: Deactivated successfully. 
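The sandbox exit events above and below both report exit_status:137, the conventional 128+signal encoding for a process killed with SIGKILL (signal 9): pause containers never exit on their own, so StopPodSandbox kills them outright once the workload containers are gone. The arithmetic, for the record:

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // 128 + SIGKILL(9) = 137, the exit_status in the sandbox exit events.
        fmt.Println(128 + int(syscall.SIGKILL)) // prints 137
    }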
Jan 20 02:06:28.569895 containerd[1591]: time="2026-01-20T02:06:28.567718870Z" level=info msg="StopContainer for \"bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4\" returns successfully" Jan 20 02:06:28.569895 containerd[1591]: time="2026-01-20T02:06:28.569171340Z" level=info msg="StopPodSandbox for \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\"" Jan 20 02:06:28.569895 containerd[1591]: time="2026-01-20T02:06:28.569261137Z" level=info msg="Container to stop \"c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:06:28.569895 containerd[1591]: time="2026-01-20T02:06:28.569284931Z" level=info msg="Container to stop \"4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:06:28.569895 containerd[1591]: time="2026-01-20T02:06:28.569300821Z" level=info msg="Container to stop \"7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:06:28.569895 containerd[1591]: time="2026-01-20T02:06:28.569353489Z" level=info msg="Container to stop \"ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:06:28.569895 containerd[1591]: time="2026-01-20T02:06:28.569433337Z" level=info msg="Container to stop \"bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:06:28.595840 containerd[1591]: time="2026-01-20T02:06:28.595680049Z" level=info msg="shim disconnected" id=20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b namespace=k8s.io Jan 20 02:06:28.596663 containerd[1591]: time="2026-01-20T02:06:28.596626288Z" level=warning msg="cleaning up after shim disconnected" id=20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b namespace=k8s.io Jan 20 02:06:28.596840 containerd[1591]: time="2026-01-20T02:06:28.596789612Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 02:06:28.613483 sshd[6926]: Accepted publickey for core from 10.0.0.1 port 40160 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:06:28.632879 sshd-session[6926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:06:28.668087 containerd[1591]: time="2026-01-20T02:06:28.668030939Z" level=info msg="received sandbox exit event container_id:\"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" id:\"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" exit_status:137 exited_at:{seconds:1768874788 nanos:667750137}" monitor_name=podsandbox Jan 20 02:06:28.675268 systemd[1]: cri-containerd-c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6.scope: Deactivated successfully. Jan 20 02:06:28.677852 systemd-logind[1565]: New session 113 of user core. Jan 20 02:06:28.703867 systemd[1]: Started session-113.scope - Session 113 of User core. Jan 20 02:06:28.774101 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b-shm.mount: Deactivated successfully. 
Jan 20 02:06:28.779027 containerd[1591]: time="2026-01-20T02:06:28.778629464Z" level=info msg="received sandbox container exit event sandbox_id:\"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\" exit_status:137 exited_at:{seconds:1768874788 nanos:110000991}" monitor_name=criService Jan 20 02:06:28.788114 containerd[1591]: time="2026-01-20T02:06:28.787940632Z" level=info msg="TearDown network for sandbox \"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\" successfully" Jan 20 02:06:28.788672 containerd[1591]: time="2026-01-20T02:06:28.788637118Z" level=info msg="StopPodSandbox for \"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\" returns successfully" Jan 20 02:06:28.819901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6-rootfs.mount: Deactivated successfully. Jan 20 02:06:28.841448 kubelet[3059]: I0120 02:06:28.841204 3059 scope.go:117] "RemoveContainer" containerID="7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9" Jan 20 02:06:28.843805 containerd[1591]: time="2026-01-20T02:06:28.843297778Z" level=info msg="RemoveContainer for \"7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9\"" Jan 20 02:06:28.864001 containerd[1591]: time="2026-01-20T02:06:28.863701322Z" level=info msg="shim disconnected" id=c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6 namespace=k8s.io Jan 20 02:06:28.864001 containerd[1591]: time="2026-01-20T02:06:28.863741946Z" level=warning msg="cleaning up after shim disconnected" id=c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6 namespace=k8s.io Jan 20 02:06:28.864001 containerd[1591]: time="2026-01-20T02:06:28.863753758Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 02:06:28.889632 containerd[1591]: time="2026-01-20T02:06:28.886826939Z" level=info msg="RemoveContainer for \"7f10ac78a421c9b19724bda153ab975c73da46082d08435fe25e59beab123ee9\" returns successfully" Jan 20 02:06:28.919088 kubelet[3059]: I0120 02:06:28.917266 3059 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1707fe08-91f0-4065-a008-ede32ebd2110-cilium-config-path\") pod \"1707fe08-91f0-4065-a008-ede32ebd2110\" (UID: \"1707fe08-91f0-4065-a008-ede32ebd2110\") " Jan 20 02:06:28.919444 kubelet[3059]: I0120 02:06:28.919273 3059 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9blht\" (UniqueName: \"kubernetes.io/projected/1707fe08-91f0-4065-a008-ede32ebd2110-kube-api-access-9blht\") pod \"1707fe08-91f0-4065-a008-ede32ebd2110\" (UID: \"1707fe08-91f0-4065-a008-ede32ebd2110\") " Jan 20 02:06:28.932532 containerd[1591]: time="2026-01-20T02:06:28.932478828Z" level=info msg="received sandbox container exit event sandbox_id:\"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" exit_status:137 exited_at:{seconds:1768874788 nanos:667750137}" monitor_name=criService Jan 20 02:06:28.939089 kubelet[3059]: I0120 02:06:28.938829 3059 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1707fe08-91f0-4065-a008-ede32ebd2110-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1707fe08-91f0-4065-a008-ede32ebd2110" (UID: "1707fe08-91f0-4065-a008-ede32ebd2110"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 02:06:28.952861 containerd[1591]: time="2026-01-20T02:06:28.952757345Z" level=info msg="TearDown network for sandbox \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" successfully" Jan 20 02:06:28.952861 containerd[1591]: time="2026-01-20T02:06:28.952804994Z" level=info msg="StopPodSandbox for \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" returns successfully" Jan 20 02:06:28.958012 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6-shm.mount: Deactivated successfully. Jan 20 02:06:28.963490 systemd[1]: var-lib-kubelet-pods-1707fe08\x2d91f0\x2d4065\x2da008\x2dede32ebd2110-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9blht.mount: Deactivated successfully. Jan 20 02:06:28.975951 kubelet[3059]: I0120 02:06:28.974176 3059 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1707fe08-91f0-4065-a008-ede32ebd2110-kube-api-access-9blht" (OuterVolumeSpecName: "kube-api-access-9blht") pod "1707fe08-91f0-4065-a008-ede32ebd2110" (UID: "1707fe08-91f0-4065-a008-ede32ebd2110"). InnerVolumeSpecName "kube-api-access-9blht". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 02:06:29.026138 kubelet[3059]: I0120 02:06:29.023109 3059 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1707fe08-91f0-4065-a008-ede32ebd2110-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 20 02:06:29.026138 kubelet[3059]: I0120 02:06:29.023259 3059 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9blht\" (UniqueName: \"kubernetes.io/projected/1707fe08-91f0-4065-a008-ede32ebd2110-kube-api-access-9blht\") on node \"localhost\" DevicePath \"\"" Jan 20 02:06:29.225645 systemd[1]: Removed slice kubepods-besteffort-pod1707fe08_91f0_4065_a008_ede32ebd2110.slice - libcontainer container kubepods-besteffort-pod1707fe08_91f0_4065_a008_ede32ebd2110.slice. Jan 20 02:06:29.227032 systemd[1]: kubepods-besteffort-pod1707fe08_91f0_4065_a008_ede32ebd2110.slice: Consumed 7.370s CPU time, 31.4M memory peak, 8K written to disk. Jan 20 02:06:29.249754 kubelet[3059]: I0120 02:06:29.248464 3059 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bec2d1f6-0191-44c5-91d0-e947fbda26bc" (UID: "bec2d1f6-0191-44c5-91d0-e947fbda26bc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:06:29.249754 kubelet[3059]: I0120 02:06:29.248567 3059 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-cilium-run\") pod \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " Jan 20 02:06:29.249754 kubelet[3059]: I0120 02:06:29.248752 3059 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bec2d1f6-0191-44c5-91d0-e947fbda26bc" (UID: "bec2d1f6-0191-44c5-91d0-e947fbda26bc"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:06:29.249754 kubelet[3059]: I0120 02:06:29.248660 3059 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-xtables-lock\") pod \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " Jan 20 02:06:29.249754 kubelet[3059]: I0120 02:06:29.248851 3059 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-host-proc-sys-net\") pod \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " Jan 20 02:06:29.250586 kubelet[3059]: I0120 02:06:29.248933 3059 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bec2d1f6-0191-44c5-91d0-e947fbda26bc" (UID: "bec2d1f6-0191-44c5-91d0-e947fbda26bc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:06:29.255703 kubelet[3059]: I0120 02:06:29.250834 3059 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bec2d1f6-0191-44c5-91d0-e947fbda26bc" (UID: "bec2d1f6-0191-44c5-91d0-e947fbda26bc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:06:29.255703 kubelet[3059]: I0120 02:06:29.250963 3059 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-host-proc-sys-kernel\") pod \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " Jan 20 02:06:29.255703 kubelet[3059]: I0120 02:06:29.251132 3059 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bec2d1f6-0191-44c5-91d0-e947fbda26bc-hubble-tls\") pod \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " Jan 20 02:06:29.255703 kubelet[3059]: I0120 02:06:29.251889 3059 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjgq2\" (UniqueName: \"kubernetes.io/projected/bec2d1f6-0191-44c5-91d0-e947fbda26bc-kube-api-access-cjgq2\") pod \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " Jan 20 02:06:29.255703 kubelet[3059]: I0120 02:06:29.252464 3059 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-hostproc\") pod \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " Jan 20 02:06:29.255954 kubelet[3059]: I0120 02:06:29.252564 3059 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-hostproc" (OuterVolumeSpecName: "hostproc") pod "bec2d1f6-0191-44c5-91d0-e947fbda26bc" (UID: "bec2d1f6-0191-44c5-91d0-e947fbda26bc"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:06:29.255954 kubelet[3059]: I0120 02:06:29.252765 3059 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-bpf-maps\") pod \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " Jan 20 02:06:29.255954 kubelet[3059]: I0120 02:06:29.252856 3059 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bec2d1f6-0191-44c5-91d0-e947fbda26bc" (UID: "bec2d1f6-0191-44c5-91d0-e947fbda26bc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:06:29.255954 kubelet[3059]: I0120 02:06:29.253055 3059 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-lib-modules\") pod \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " Jan 20 02:06:29.255954 kubelet[3059]: I0120 02:06:29.253140 3059 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bec2d1f6-0191-44c5-91d0-e947fbda26bc" (UID: "bec2d1f6-0191-44c5-91d0-e947fbda26bc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:06:29.284442 kubelet[3059]: I0120 02:06:29.284005 3059 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bec2d1f6-0191-44c5-91d0-e947fbda26bc" (UID: "bec2d1f6-0191-44c5-91d0-e947fbda26bc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:06:29.310799 kubelet[3059]: I0120 02:06:29.306615 3059 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-cilium-cgroup\") pod \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " Jan 20 02:06:29.311131 kubelet[3059]: I0120 02:06:29.311097 3059 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-cni-path\") pod \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " Jan 20 02:06:29.320897 kubelet[3059]: I0120 02:06:29.311988 3059 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-cni-path" (OuterVolumeSpecName: "cni-path") pod "bec2d1f6-0191-44c5-91d0-e947fbda26bc" (UID: "bec2d1f6-0191-44c5-91d0-e947fbda26bc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:06:29.321803 kubelet[3059]: I0120 02:06:29.317619 3059 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bec2d1f6-0191-44c5-91d0-e947fbda26bc" (UID: "bec2d1f6-0191-44c5-91d0-e947fbda26bc"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:06:29.321803 kubelet[3059]: I0120 02:06:29.317557 3059 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-etc-cni-netd\") pod \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " Jan 20 02:06:29.321803 kubelet[3059]: I0120 02:06:29.321195 3059 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bec2d1f6-0191-44c5-91d0-e947fbda26bc-clustermesh-secrets\") pod \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " Jan 20 02:06:29.321803 kubelet[3059]: I0120 02:06:29.321238 3059 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bec2d1f6-0191-44c5-91d0-e947fbda26bc-cilium-config-path\") pod \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\" (UID: \"bec2d1f6-0191-44c5-91d0-e947fbda26bc\") " Jan 20 02:06:29.321803 kubelet[3059]: I0120 02:06:29.321308 3059 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 20 02:06:29.321803 kubelet[3059]: I0120 02:06:29.321430 3059 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 20 02:06:29.321803 kubelet[3059]: I0120 02:06:29.321449 3059 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 20 02:06:29.322180 kubelet[3059]: I0120 02:06:29.321466 3059 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 20 02:06:29.322180 kubelet[3059]: I0120 02:06:29.321477 3059 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 20 02:06:29.322180 kubelet[3059]: I0120 02:06:29.321492 3059 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 20 02:06:29.322180 kubelet[3059]: I0120 02:06:29.321506 3059 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 20 02:06:29.322180 kubelet[3059]: I0120 02:06:29.321520 3059 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 20 02:06:29.322180 kubelet[3059]: I0120 02:06:29.321534 3059 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 20 02:06:29.322180 
kubelet[3059]: I0120 02:06:29.321551 3059 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bec2d1f6-0191-44c5-91d0-e947fbda26bc-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 20 02:06:29.382009 systemd[1]: var-lib-kubelet-pods-bec2d1f6\x2d0191\x2d44c5\x2d91d0\x2de947fbda26bc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 20 02:06:29.387772 kubelet[3059]: I0120 02:06:29.387717 3059 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec2d1f6-0191-44c5-91d0-e947fbda26bc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bec2d1f6-0191-44c5-91d0-e947fbda26bc" (UID: "bec2d1f6-0191-44c5-91d0-e947fbda26bc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 02:06:29.445000 kubelet[3059]: I0120 02:06:29.437648 3059 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bec2d1f6-0191-44c5-91d0-e947fbda26bc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 20 02:06:29.446749 kubelet[3059]: I0120 02:06:29.446695 3059 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec2d1f6-0191-44c5-91d0-e947fbda26bc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bec2d1f6-0191-44c5-91d0-e947fbda26bc" (UID: "bec2d1f6-0191-44c5-91d0-e947fbda26bc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 02:06:29.459738 systemd[1]: var-lib-kubelet-pods-bec2d1f6\x2d0191\x2d44c5\x2d91d0\x2de947fbda26bc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcjgq2.mount: Deactivated successfully. Jan 20 02:06:29.493253 systemd[1]: var-lib-kubelet-pods-bec2d1f6\x2d0191\x2d44c5\x2d91d0\x2de947fbda26bc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 20 02:06:29.526842 kubelet[3059]: I0120 02:06:29.525901 3059 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec2d1f6-0191-44c5-91d0-e947fbda26bc-kube-api-access-cjgq2" (OuterVolumeSpecName: "kube-api-access-cjgq2") pod "bec2d1f6-0191-44c5-91d0-e947fbda26bc" (UID: "bec2d1f6-0191-44c5-91d0-e947fbda26bc"). InnerVolumeSpecName "kube-api-access-cjgq2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 02:06:29.542542 kubelet[3059]: I0120 02:06:29.540521 3059 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bec2d1f6-0191-44c5-91d0-e947fbda26bc-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 20 02:06:29.542542 kubelet[3059]: I0120 02:06:29.540572 3059 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cjgq2\" (UniqueName: \"kubernetes.io/projected/bec2d1f6-0191-44c5-91d0-e947fbda26bc-kube-api-access-cjgq2\") on node \"localhost\" DevicePath \"\"" Jan 20 02:06:29.562133 kubelet[3059]: I0120 02:06:29.562070 3059 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bec2d1f6-0191-44c5-91d0-e947fbda26bc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bec2d1f6-0191-44c5-91d0-e947fbda26bc" (UID: "bec2d1f6-0191-44c5-91d0-e947fbda26bc"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 02:06:29.642479 kubelet[3059]: I0120 02:06:29.640917 3059 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bec2d1f6-0191-44c5-91d0-e947fbda26bc-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 20 02:06:29.884429 kubelet[3059]: I0120 02:06:29.883657 3059 scope.go:117] "RemoveContainer" containerID="bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4" Jan 20 02:06:29.911112 containerd[1591]: time="2026-01-20T02:06:29.910754547Z" level=info msg="RemoveContainer for \"bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4\"" Jan 20 02:06:29.976568 systemd[1]: Removed slice kubepods-burstable-podbec2d1f6_0191_44c5_91d0_e947fbda26bc.slice - libcontainer container kubepods-burstable-podbec2d1f6_0191_44c5_91d0_e947fbda26bc.slice. Jan 20 02:06:29.978109 systemd[1]: kubepods-burstable-podbec2d1f6_0191_44c5_91d0_e947fbda26bc.slice: Consumed 38.433s CPU time, 129.6M memory peak, 516K read from disk, 13.3M written to disk. Jan 20 02:06:30.032051 containerd[1591]: time="2026-01-20T02:06:30.025143938Z" level=info msg="RemoveContainer for \"bea6a734f8a3146ffd7c4878243264d6cfbc4415ef55dead780ed9898f7531f4\" returns successfully" Jan 20 02:06:30.035536 kubelet[3059]: I0120 02:06:30.035278 3059 scope.go:117] "RemoveContainer" containerID="ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f" Jan 20 02:06:30.063131 containerd[1591]: time="2026-01-20T02:06:30.061041278Z" level=info msg="RemoveContainer for \"ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f\"" Jan 20 02:06:30.139006 containerd[1591]: time="2026-01-20T02:06:30.138845094Z" level=info msg="RemoveContainer for \"ee9acea8e5497f175b59c746eda8e2cf998d453a14f89a76ec3f3c67adea7d5f\" returns successfully" Jan 20 02:06:30.157026 kubelet[3059]: I0120 02:06:30.156868 3059 scope.go:117] "RemoveContainer" containerID="7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b" Jan 20 02:06:30.181657 containerd[1591]: time="2026-01-20T02:06:30.181606659Z" level=info msg="RemoveContainer for \"7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b\"" Jan 20 02:06:30.220426 containerd[1591]: time="2026-01-20T02:06:30.220163083Z" level=info msg="RemoveContainer for \"7fd365b47cc395240679f8716c2724b389cedea7040f91b55d8af48a8ba94f1b\" returns successfully" Jan 20 02:06:30.224684 kubelet[3059]: I0120 02:06:30.224528 3059 scope.go:117] "RemoveContainer" containerID="c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e" Jan 20 02:06:30.246159 containerd[1591]: time="2026-01-20T02:06:30.244911780Z" level=info msg="RemoveContainer for \"c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e\"" Jan 20 02:06:30.279158 containerd[1591]: time="2026-01-20T02:06:30.279019739Z" level=info msg="RemoveContainer for \"c82daf69ecafdee423eb7b97afebf4650c21c7fe342cb6b12e6bdb1c01430c5e\" returns successfully" Jan 20 02:06:30.282798 kubelet[3059]: I0120 02:06:30.282762 3059 scope.go:117] "RemoveContainer" containerID="4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332" Jan 20 02:06:30.305105 containerd[1591]: time="2026-01-20T02:06:30.304920468Z" level=info msg="RemoveContainer for \"4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332\"" Jan 20 02:06:30.370849 containerd[1591]: time="2026-01-20T02:06:30.370791091Z" level=info msg="RemoveContainer for \"4ef37d70ef86fa22b0ee1bf1535f12feeae73af5f95345e131a18438ac5de332\" returns 
successfully" Jan 20 02:06:30.468475 kubelet[3059]: I0120 02:06:30.468100 3059 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T02:06:30Z","lastTransitionTime":"2026-01-20T02:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 20 02:06:31.389248 kubelet[3059]: I0120 02:06:31.385236 3059 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1707fe08-91f0-4065-a008-ede32ebd2110" path="/var/lib/kubelet/pods/1707fe08-91f0-4065-a008-ede32ebd2110/volumes" Jan 20 02:06:31.404781 kubelet[3059]: I0120 02:06:31.395862 3059 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec2d1f6-0191-44c5-91d0-e947fbda26bc" path="/var/lib/kubelet/pods/bec2d1f6-0191-44c5-91d0-e947fbda26bc/volumes" Jan 20 02:06:31.452022 sshd[6974]: Connection closed by 10.0.0.1 port 40160 Jan 20 02:06:31.450775 sshd-session[6926]: pam_unix(sshd:session): session closed for user core Jan 20 02:06:31.553519 systemd[1]: sshd@112-10.0.0.51:22-10.0.0.1:40160.service: Deactivated successfully. Jan 20 02:06:31.587912 systemd[1]: session-113.scope: Deactivated successfully. Jan 20 02:06:31.618086 systemd-logind[1565]: Session 113 logged out. Waiting for processes to exit. Jan 20 02:06:31.678205 systemd[1]: Started sshd@113-10.0.0.51:22-10.0.0.1:40164.service - OpenSSH per-connection server daemon (10.0.0.1:40164). Jan 20 02:06:31.709028 systemd-logind[1565]: Removed session 113. Jan 20 02:06:32.013732 kubelet[3059]: E0120 02:06:32.004146 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:06:32.085136 systemd[1]: Created slice kubepods-burstable-pod28ac4243_c2b8_4ca0_b075_6d4024e91b0a.slice - libcontainer container kubepods-burstable-pod28ac4243_c2b8_4ca0_b075_6d4024e91b0a.slice. Jan 20 02:06:32.131496 sshd[7012]: Accepted publickey for core from 10.0.0.1 port 40164 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:06:32.126169 sshd-session[7012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:06:32.141907 systemd-logind[1565]: New session 114 of user core. 
Jan 20 02:06:32.148824 kubelet[3059]: I0120 02:06:32.148548 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/28ac4243-c2b8-4ca0-b075-6d4024e91b0a-hubble-tls\") pod \"cilium-47b82\" (UID: \"28ac4243-c2b8-4ca0-b075-6d4024e91b0a\") " pod="kube-system/cilium-47b82" Jan 20 02:06:32.153552 kubelet[3059]: I0120 02:06:32.152463 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/28ac4243-c2b8-4ca0-b075-6d4024e91b0a-bpf-maps\") pod \"cilium-47b82\" (UID: \"28ac4243-c2b8-4ca0-b075-6d4024e91b0a\") " pod="kube-system/cilium-47b82" Jan 20 02:06:32.153552 kubelet[3059]: I0120 02:06:32.152519 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/28ac4243-c2b8-4ca0-b075-6d4024e91b0a-hostproc\") pod \"cilium-47b82\" (UID: \"28ac4243-c2b8-4ca0-b075-6d4024e91b0a\") " pod="kube-system/cilium-47b82" Jan 20 02:06:32.153552 kubelet[3059]: I0120 02:06:32.152550 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28ac4243-c2b8-4ca0-b075-6d4024e91b0a-xtables-lock\") pod \"cilium-47b82\" (UID: \"28ac4243-c2b8-4ca0-b075-6d4024e91b0a\") " pod="kube-system/cilium-47b82" Jan 20 02:06:32.153552 kubelet[3059]: I0120 02:06:32.152579 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/28ac4243-c2b8-4ca0-b075-6d4024e91b0a-cilium-config-path\") pod \"cilium-47b82\" (UID: \"28ac4243-c2b8-4ca0-b075-6d4024e91b0a\") " pod="kube-system/cilium-47b82" Jan 20 02:06:32.153552 kubelet[3059]: I0120 02:06:32.152605 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/28ac4243-c2b8-4ca0-b075-6d4024e91b0a-cilium-cgroup\") pod \"cilium-47b82\" (UID: \"28ac4243-c2b8-4ca0-b075-6d4024e91b0a\") " pod="kube-system/cilium-47b82" Jan 20 02:06:32.153552 kubelet[3059]: I0120 02:06:32.152629 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/28ac4243-c2b8-4ca0-b075-6d4024e91b0a-clustermesh-secrets\") pod \"cilium-47b82\" (UID: \"28ac4243-c2b8-4ca0-b075-6d4024e91b0a\") " pod="kube-system/cilium-47b82" Jan 20 02:06:32.153888 kubelet[3059]: I0120 02:06:32.152661 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/28ac4243-c2b8-4ca0-b075-6d4024e91b0a-cni-path\") pod \"cilium-47b82\" (UID: \"28ac4243-c2b8-4ca0-b075-6d4024e91b0a\") " pod="kube-system/cilium-47b82" Jan 20 02:06:32.153888 kubelet[3059]: I0120 02:06:32.152687 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/28ac4243-c2b8-4ca0-b075-6d4024e91b0a-cilium-run\") pod \"cilium-47b82\" (UID: \"28ac4243-c2b8-4ca0-b075-6d4024e91b0a\") " pod="kube-system/cilium-47b82" Jan 20 02:06:32.153888 kubelet[3059]: I0120 02:06:32.152712 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/28ac4243-c2b8-4ca0-b075-6d4024e91b0a-etc-cni-netd\") pod \"cilium-47b82\" (UID: \"28ac4243-c2b8-4ca0-b075-6d4024e91b0a\") " pod="kube-system/cilium-47b82" Jan 20 02:06:32.153888 kubelet[3059]: I0120 02:06:32.152740 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcg2k\" (UniqueName: \"kubernetes.io/projected/28ac4243-c2b8-4ca0-b075-6d4024e91b0a-kube-api-access-rcg2k\") pod \"cilium-47b82\" (UID: \"28ac4243-c2b8-4ca0-b075-6d4024e91b0a\") " pod="kube-system/cilium-47b82" Jan 20 02:06:32.153888 kubelet[3059]: I0120 02:06:32.152769 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28ac4243-c2b8-4ca0-b075-6d4024e91b0a-lib-modules\") pod \"cilium-47b82\" (UID: \"28ac4243-c2b8-4ca0-b075-6d4024e91b0a\") " pod="kube-system/cilium-47b82" Jan 20 02:06:32.153888 kubelet[3059]: I0120 02:06:32.152792 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/28ac4243-c2b8-4ca0-b075-6d4024e91b0a-cilium-ipsec-secrets\") pod \"cilium-47b82\" (UID: \"28ac4243-c2b8-4ca0-b075-6d4024e91b0a\") " pod="kube-system/cilium-47b82" Jan 20 02:06:32.154131 kubelet[3059]: I0120 02:06:32.152812 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/28ac4243-c2b8-4ca0-b075-6d4024e91b0a-host-proc-sys-net\") pod \"cilium-47b82\" (UID: \"28ac4243-c2b8-4ca0-b075-6d4024e91b0a\") " pod="kube-system/cilium-47b82" Jan 20 02:06:32.154131 kubelet[3059]: I0120 02:06:32.152833 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/28ac4243-c2b8-4ca0-b075-6d4024e91b0a-host-proc-sys-kernel\") pod \"cilium-47b82\" (UID: \"28ac4243-c2b8-4ca0-b075-6d4024e91b0a\") " pod="kube-system/cilium-47b82" Jan 20 02:06:32.190579 systemd[1]: Started session-114.scope - Session 114 of User core. Jan 20 02:06:32.340913 sshd[7015]: Connection closed by 10.0.0.1 port 40164 Jan 20 02:06:32.343417 sshd-session[7012]: pam_unix(sshd:session): session closed for user core Jan 20 02:06:32.393985 systemd[1]: sshd@113-10.0.0.51:22-10.0.0.1:40164.service: Deactivated successfully. Jan 20 02:06:32.397778 systemd[1]: session-114.scope: Deactivated successfully. Jan 20 02:06:32.407910 systemd-logind[1565]: Session 114 logged out. Waiting for processes to exit. Jan 20 02:06:32.425097 systemd[1]: Started sshd@114-10.0.0.51:22-10.0.0.1:40180.service - OpenSSH per-connection server daemon (10.0.0.1:40180). Jan 20 02:06:32.428601 systemd-logind[1565]: Removed session 114. 
Jan 20 02:06:32.646731 sshd[7026]: Accepted publickey for core from 10.0.0.1 port 40180 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:06:32.676815 sshd-session[7026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:06:32.698784 kubelet[3059]: E0120 02:06:32.696636 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:06:32.720528 containerd[1591]: time="2026-01-20T02:06:32.705681747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-47b82,Uid:28ac4243-c2b8-4ca0-b075-6d4024e91b0a,Namespace:kube-system,Attempt:0,}" Jan 20 02:06:32.755051 systemd-logind[1565]: New session 115 of user core. Jan 20 02:06:32.795078 systemd[1]: Started session-115.scope - Session 115 of User core. Jan 20 02:06:32.938217 containerd[1591]: time="2026-01-20T02:06:32.937178635Z" level=info msg="connecting to shim 3ace3118d98cdda1d524104a297d28ed123bfe70120012ab3de294e5c9337cd3" address="unix:///run/containerd/s/b8e2f2ac07b5b2994dc7a6362f10f37a248c3ac2f06afda6b2d727cb955e4ae1" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:06:45.243652 kubelet[3059]: E0120 02:06:44.210043 3059 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.222s" Jan 20 02:06:48.422927 kubelet[3059]: E0120 02:06:48.413593 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:06:49.376759 containerd[1591]: time="2026-01-20T02:06:49.375825996Z" level=info msg="StopPodSandbox for \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\"" Jan 20 02:06:49.403651 systemd[1]: Started cri-containerd-3ace3118d98cdda1d524104a297d28ed123bfe70120012ab3de294e5c9337cd3.scope - libcontainer container 3ace3118d98cdda1d524104a297d28ed123bfe70120012ab3de294e5c9337cd3. 
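The var-lib-kubelet-pods-...\x2d...-volumes-....mount unit names seen throughout this log are systemd's path escaping at work: '/' separators become '-', and bytes outside [a-zA-Z0-9_.] (notably '-' itself and '~') are hex-escaped as \xXX, so every mountpoint maps to a unique .mount unit. A simplified sketch of the rule (the real systemd-escape additionally special-cases a leading '.', permits ':', and rejects empty paths):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath approximates `systemd-escape --path`: trim slashes,
    // hex-escape bytes outside [a-zA-Z0-9_.], turn '/' into '-'.
    func escapePath(p string) string {
        var b strings.Builder
        for _, c := range []byte(strings.Trim(p, "/")) {
            switch {
            case c == '/':
                b.WriteByte('-')
            case c == '_' || c == '.' ||
                (c >= '0' && c <= '9') ||
                (c >= 'a' && c <= 'z') ||
                (c >= 'A' && c <= 'Z'):
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(escapePath("/var/lib/kubelet/pods/bec2d1f6-0191-44c5-91d0-e947fbda26bc/volumes/kubernetes.io~projected/hubble-tls") + ".mount")
    }

Fed the hubble-tls volume path, this reproduces the exact unit name logged earlier: var-lib-kubelet-pods-bec2d1f6\x2d0191\x2d44c5\x2d91d0\x2de947fbda26bc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount.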
Jan 20 02:06:49.773200 kubelet[3059]: E0120 02:06:49.750535 3059 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.336s" Jan 20 02:06:49.792035 containerd[1591]: time="2026-01-20T02:06:49.411673702Z" level=info msg="TearDown network for sandbox \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" successfully" Jan 20 02:06:49.792035 containerd[1591]: time="2026-01-20T02:06:49.411709538Z" level=info msg="StopPodSandbox for \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" returns successfully" Jan 20 02:06:49.792035 containerd[1591]: time="2026-01-20T02:06:49.718318059Z" level=info msg="RemovePodSandbox for \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\"" Jan 20 02:06:49.792035 containerd[1591]: time="2026-01-20T02:06:49.718603719Z" level=info msg="Forcibly stopping sandbox \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\"" Jan 20 02:06:49.792035 containerd[1591]: time="2026-01-20T02:06:49.718985490Z" level=info msg="TearDown network for sandbox \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" successfully" Jan 20 02:06:49.792035 containerd[1591]: time="2026-01-20T02:06:49.762578116Z" level=info msg="Ensure that sandbox c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6 in task-service has been cleanup successfully" Jan 20 02:06:49.816512 containerd[1591]: time="2026-01-20T02:06:49.813618391Z" level=info msg="RemovePodSandbox \"c3be1c3b8055e4445b652117b2fd18ff5ee3b01d6fab4db9c02b0fa27e36d3d6\" returns successfully" Jan 20 02:06:49.831433 containerd[1591]: time="2026-01-20T02:06:49.828629332Z" level=info msg="StopPodSandbox for \"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\"" Jan 20 02:06:49.831433 containerd[1591]: time="2026-01-20T02:06:49.828950610Z" level=info msg="TearDown network for sandbox \"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\" successfully" Jan 20 02:06:49.831433 containerd[1591]: time="2026-01-20T02:06:49.828977951Z" level=info msg="StopPodSandbox for \"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\" returns successfully" Jan 20 02:06:49.868213 containerd[1591]: time="2026-01-20T02:06:49.864952947Z" level=info msg="RemovePodSandbox for \"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\"" Jan 20 02:06:49.868213 containerd[1591]: time="2026-01-20T02:06:49.865038766Z" level=info msg="Forcibly stopping sandbox \"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\"" Jan 20 02:06:49.868213 containerd[1591]: time="2026-01-20T02:06:49.865182703Z" level=info msg="TearDown network for sandbox \"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\" successfully" Jan 20 02:06:49.885013 containerd[1591]: time="2026-01-20T02:06:49.883014312Z" level=info msg="Ensure that sandbox 20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b in task-service has been cleanup successfully" Jan 20 02:06:49.940089 containerd[1591]: time="2026-01-20T02:06:49.934831147Z" level=info msg="RemovePodSandbox \"20e3555c103ef24ef918d20bdd86c0a2c6b38208c43f3950ba873bc970dfd86b\" returns successfully" Jan 20 02:06:50.102184 containerd[1591]: time="2026-01-20T02:06:50.100649625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-47b82,Uid:28ac4243-c2b8-4ca0-b075-6d4024e91b0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ace3118d98cdda1d524104a297d28ed123bfe70120012ab3de294e5c9337cd3\"" Jan 20 
02:06:50.105611 kubelet[3059]: E0120 02:06:50.105577 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:06:50.147885 containerd[1591]: time="2026-01-20T02:06:50.147824892Z" level=info msg="CreateContainer within sandbox \"3ace3118d98cdda1d524104a297d28ed123bfe70120012ab3de294e5c9337cd3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 20 02:06:50.413857 containerd[1591]: time="2026-01-20T02:06:50.405200824Z" level=info msg="Container 6b9e6e43063f80753039a2cfc81aa21ecf2c4cd00397a6eb66687d0eadfe2e55: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:06:50.607761 containerd[1591]: time="2026-01-20T02:06:50.606818702Z" level=info msg="CreateContainer within sandbox \"3ace3118d98cdda1d524104a297d28ed123bfe70120012ab3de294e5c9337cd3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6b9e6e43063f80753039a2cfc81aa21ecf2c4cd00397a6eb66687d0eadfe2e55\"" Jan 20 02:06:50.641479 containerd[1591]: time="2026-01-20T02:06:50.613336890Z" level=info msg="StartContainer for \"6b9e6e43063f80753039a2cfc81aa21ecf2c4cd00397a6eb66687d0eadfe2e55\"" Jan 20 02:06:50.641479 containerd[1591]: time="2026-01-20T02:06:50.624233265Z" level=info msg="connecting to shim 6b9e6e43063f80753039a2cfc81aa21ecf2c4cd00397a6eb66687d0eadfe2e55" address="unix:///run/containerd/s/b8e2f2ac07b5b2994dc7a6362f10f37a248c3ac2f06afda6b2d727cb955e4ae1" protocol=ttrpc version=3 Jan 20 02:06:50.638868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2151404553.mount: Deactivated successfully. Jan 20 02:06:50.878319 systemd[1]: Started cri-containerd-6b9e6e43063f80753039a2cfc81aa21ecf2c4cd00397a6eb66687d0eadfe2e55.scope - libcontainer container 6b9e6e43063f80753039a2cfc81aa21ecf2c4cd00397a6eb66687d0eadfe2e55. Jan 20 02:06:51.257860 containerd[1591]: time="2026-01-20T02:06:51.257181435Z" level=info msg="StartContainer for \"6b9e6e43063f80753039a2cfc81aa21ecf2c4cd00397a6eb66687d0eadfe2e55\" returns successfully" Jan 20 02:06:51.374696 systemd[1]: cri-containerd-6b9e6e43063f80753039a2cfc81aa21ecf2c4cd00397a6eb66687d0eadfe2e55.scope: Deactivated successfully. Jan 20 02:06:51.384551 containerd[1591]: time="2026-01-20T02:06:51.379830814Z" level=info msg="received container exit event container_id:\"6b9e6e43063f80753039a2cfc81aa21ecf2c4cd00397a6eb66687d0eadfe2e55\" id:\"6b9e6e43063f80753039a2cfc81aa21ecf2c4cd00397a6eb66687d0eadfe2e55\" pid:7100 exited_at:{seconds:1768874811 nanos:379029495}" Jan 20 02:06:51.587724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b9e6e43063f80753039a2cfc81aa21ecf2c4cd00397a6eb66687d0eadfe2e55-rootfs.mount: Deactivated successfully. Jan 20 02:06:52.282238 kubelet[3059]: E0120 02:06:52.280465 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:06:52.325468 containerd[1591]: time="2026-01-20T02:06:52.324712051Z" level=info msg="CreateContainer within sandbox \"3ace3118d98cdda1d524104a297d28ed123bfe70120012ab3de294e5c9337cd3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 20 02:06:52.500143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount107159687.mount: Deactivated successfully. 
Jan 20 02:06:52.510705 containerd[1591]: time="2026-01-20T02:06:52.509681225Z" level=info msg="Container 7352d80ef853bae7ff2624a6e0767f396e3a5faa07c023cf412da79c52358094: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:06:52.578664 containerd[1591]: time="2026-01-20T02:06:52.567161847Z" level=info msg="CreateContainer within sandbox \"3ace3118d98cdda1d524104a297d28ed123bfe70120012ab3de294e5c9337cd3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7352d80ef853bae7ff2624a6e0767f396e3a5faa07c023cf412da79c52358094\"" Jan 20 02:06:52.578664 containerd[1591]: time="2026-01-20T02:06:52.572596340Z" level=info msg="StartContainer for \"7352d80ef853bae7ff2624a6e0767f396e3a5faa07c023cf412da79c52358094\"" Jan 20 02:06:52.604443 containerd[1591]: time="2026-01-20T02:06:52.604058203Z" level=info msg="connecting to shim 7352d80ef853bae7ff2624a6e0767f396e3a5faa07c023cf412da79c52358094" address="unix:///run/containerd/s/b8e2f2ac07b5b2994dc7a6362f10f37a248c3ac2f06afda6b2d727cb955e4ae1" protocol=ttrpc version=3 Jan 20 02:06:52.823693 systemd[1]: Started cri-containerd-7352d80ef853bae7ff2624a6e0767f396e3a5faa07c023cf412da79c52358094.scope - libcontainer container 7352d80ef853bae7ff2624a6e0767f396e3a5faa07c023cf412da79c52358094. Jan 20 02:06:53.282618 containerd[1591]: time="2026-01-20T02:06:53.279853103Z" level=info msg="StartContainer for \"7352d80ef853bae7ff2624a6e0767f396e3a5faa07c023cf412da79c52358094\" returns successfully" Jan 20 02:06:53.317852 kubelet[3059]: E0120 02:06:53.315087 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:06:53.326066 systemd[1]: cri-containerd-7352d80ef853bae7ff2624a6e0767f396e3a5faa07c023cf412da79c52358094.scope: Deactivated successfully. Jan 20 02:06:53.350449 containerd[1591]: time="2026-01-20T02:06:53.348755310Z" level=info msg="received container exit event container_id:\"7352d80ef853bae7ff2624a6e0767f396e3a5faa07c023cf412da79c52358094\" id:\"7352d80ef853bae7ff2624a6e0767f396e3a5faa07c023cf412da79c52358094\" pid:7149 exited_at:{seconds:1768874813 nanos:345584165}" Jan 20 02:06:53.436333 kubelet[3059]: E0120 02:06:53.436085 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:06:53.596793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7352d80ef853bae7ff2624a6e0767f396e3a5faa07c023cf412da79c52358094-rootfs.mount: Deactivated successfully. Jan 20 02:06:54.807073 kubelet[3059]: E0120 02:06:54.803423 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:06:54.931840 containerd[1591]: time="2026-01-20T02:06:54.926005718Z" level=info msg="CreateContainer within sandbox \"3ace3118d98cdda1d524104a297d28ed123bfe70120012ab3de294e5c9337cd3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 20 02:06:55.293613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2755040788.mount: Deactivated successfully. 
Jan 20 02:06:55.413611 containerd[1591]: time="2026-01-20T02:06:55.412468212Z" level=info msg="Container e4e56aa99392805118b590e8df00905780b18dc45e4e2a3f29ddb559f845864f: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:06:55.503464 containerd[1591]: time="2026-01-20T02:06:55.503315737Z" level=info msg="CreateContainer within sandbox \"3ace3118d98cdda1d524104a297d28ed123bfe70120012ab3de294e5c9337cd3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e4e56aa99392805118b590e8df00905780b18dc45e4e2a3f29ddb559f845864f\""
Jan 20 02:06:55.507709 containerd[1591]: time="2026-01-20T02:06:55.507664915Z" level=info msg="StartContainer for \"e4e56aa99392805118b590e8df00905780b18dc45e4e2a3f29ddb559f845864f\""
Jan 20 02:06:55.515774 containerd[1591]: time="2026-01-20T02:06:55.515732024Z" level=info msg="connecting to shim e4e56aa99392805118b590e8df00905780b18dc45e4e2a3f29ddb559f845864f" address="unix:///run/containerd/s/b8e2f2ac07b5b2994dc7a6362f10f37a248c3ac2f06afda6b2d727cb955e4ae1" protocol=ttrpc version=3
Jan 20 02:06:55.671994 systemd[1]: Started cri-containerd-e4e56aa99392805118b590e8df00905780b18dc45e4e2a3f29ddb559f845864f.scope - libcontainer container e4e56aa99392805118b590e8df00905780b18dc45e4e2a3f29ddb559f845864f.
Jan 20 02:06:56.036100 containerd[1591]: time="2026-01-20T02:06:56.028442753Z" level=info msg="StartContainer for \"e4e56aa99392805118b590e8df00905780b18dc45e4e2a3f29ddb559f845864f\" returns successfully"
Jan 20 02:06:56.088543 systemd[1]: cri-containerd-e4e56aa99392805118b590e8df00905780b18dc45e4e2a3f29ddb559f845864f.scope: Deactivated successfully.
Jan 20 02:06:56.102527 containerd[1591]: time="2026-01-20T02:06:56.102471088Z" level=info msg="received container exit event container_id:\"e4e56aa99392805118b590e8df00905780b18dc45e4e2a3f29ddb559f845864f\" id:\"e4e56aa99392805118b590e8df00905780b18dc45e4e2a3f29ddb559f845864f\" pid:7192 exited_at:{seconds:1768874816 nanos:93735779}"
Jan 20 02:06:56.476183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4e56aa99392805118b590e8df00905780b18dc45e4e2a3f29ddb559f845864f-rootfs.mount: Deactivated successfully.
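mount-cgroup, apply-sysctl-overwrites, and mount-bpf-fs have now each walked the same arc: CreateContainer in the sandbox, StartContainer, a quick exit, then cleanup of the systemd scope and rootfs mount. This is the Cilium pod's init-container chain completing one step at a time. Outside the CRI path, the same create/start/wait lifecycle can be driven directly with containerd's Go client; a sketch under assumptions (the image ref and IDs here are hypothetical, not taken from the log):

package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Hypothetical image ref; the log does not record which image
	// backs these init containers.
	image, err := client.Pull(ctx, "docker.io/library/busybox:1.36", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "init-demo",
		containerd.WithNewSnapshot("init-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask connects to the shim; Start launches the process, and
	// the exit notification arrives on the channel from Wait, just as
	// the "received container exit event" lines show above.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}

	status := <-exitCh
	code, exitedAt, err := status.Result()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("exited with status %d at %s", code, exitedAt)
}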
Jan 20 02:06:56.916456 kubelet[3059]: E0120 02:06:56.914701 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:06:57.114678 containerd[1591]: time="2026-01-20T02:06:57.113632804Z" level=info msg="CreateContainer within sandbox \"3ace3118d98cdda1d524104a297d28ed123bfe70120012ab3de294e5c9337cd3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 20 02:06:57.497126 containerd[1591]: time="2026-01-20T02:06:57.496678246Z" level=info msg="Container 230f0af398bf665473006981717a360befc41df1f8613af646d10070a3469f5d: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:06:59.373966 containerd[1591]: time="2026-01-20T02:06:59.373901108Z" level=info msg="CreateContainer within sandbox \"3ace3118d98cdda1d524104a297d28ed123bfe70120012ab3de294e5c9337cd3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"230f0af398bf665473006981717a360befc41df1f8613af646d10070a3469f5d\""
Jan 20 02:06:59.533074 kubelet[3059]: E0120 02:06:59.510182 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:06:59.587759 containerd[1591]: time="2026-01-20T02:06:59.587569871Z" level=info msg="StartContainer for \"230f0af398bf665473006981717a360befc41df1f8613af646d10070a3469f5d\""
Jan 20 02:06:59.609579 containerd[1591]: time="2026-01-20T02:06:59.609522019Z" level=info msg="connecting to shim 230f0af398bf665473006981717a360befc41df1f8613af646d10070a3469f5d" address="unix:///run/containerd/s/b8e2f2ac07b5b2994dc7a6362f10f37a248c3ac2f06afda6b2d727cb955e4ae1" protocol=ttrpc version=3
Jan 20 02:07:00.106336 systemd[1]: Started cri-containerd-230f0af398bf665473006981717a360befc41df1f8613af646d10070a3469f5d.scope - libcontainer container 230f0af398bf665473006981717a360befc41df1f8613af646d10070a3469f5d.
Jan 20 02:07:03.276919 containerd[1591]: time="2026-01-20T02:07:03.276468755Z" level=info msg="StartContainer for \"230f0af398bf665473006981717a360befc41df1f8613af646d10070a3469f5d\" returns successfully"
Jan 20 02:07:03.318791 systemd[1]: cri-containerd-230f0af398bf665473006981717a360befc41df1f8613af646d10070a3469f5d.scope: Deactivated successfully.
Jan 20 02:07:04.900514 kubelet[3059]: E0120 02:07:04.900001 3059 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.445s"
Jan 20 02:07:04.928010 containerd[1591]: time="2026-01-20T02:07:04.927935679Z" level=info msg="received container exit event container_id:\"230f0af398bf665473006981717a360befc41df1f8613af646d10070a3469f5d\" id:\"230f0af398bf665473006981717a360befc41df1f8613af646d10070a3469f5d\" pid:7229 exited_at:{seconds:1768874823 nanos:333166703}"
Jan 20 02:07:04.967755 kubelet[3059]: E0120 02:07:04.961089 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:07:05.019691 kubelet[3059]: E0120 02:07:05.019105 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:07:06.824689 kubelet[3059]: E0120 02:07:06.823681 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:07:07.191756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-230f0af398bf665473006981717a360befc41df1f8613af646d10070a3469f5d-rootfs.mount: Deactivated successfully.
Jan 20 02:07:08.002435 kubelet[3059]: E0120 02:07:07.986726 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:07:08.044630 containerd[1591]: time="2026-01-20T02:07:08.036963093Z" level=info msg="CreateContainer within sandbox \"3ace3118d98cdda1d524104a297d28ed123bfe70120012ab3de294e5c9337cd3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 20 02:07:08.582086 containerd[1591]: time="2026-01-20T02:07:08.582027761Z" level=info msg="Container 96aba8ceff7489d149dc7eb48a37fa9b2f1d1bbe3d4457a4a6bdc62e446d275a: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:07:08.698179 containerd[1591]: time="2026-01-20T02:07:08.695955270Z" level=info msg="CreateContainer within sandbox \"3ace3118d98cdda1d524104a297d28ed123bfe70120012ab3de294e5c9337cd3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"96aba8ceff7489d149dc7eb48a37fa9b2f1d1bbe3d4457a4a6bdc62e446d275a\""
Jan 20 02:07:08.706652 containerd[1591]: time="2026-01-20T02:07:08.706341992Z" level=info msg="StartContainer for \"96aba8ceff7489d149dc7eb48a37fa9b2f1d1bbe3d4457a4a6bdc62e446d275a\""
Jan 20 02:07:08.732773 containerd[1591]: time="2026-01-20T02:07:08.727881843Z" level=info msg="connecting to shim 96aba8ceff7489d149dc7eb48a37fa9b2f1d1bbe3d4457a4a6bdc62e446d275a" address="unix:///run/containerd/s/b8e2f2ac07b5b2994dc7a6362f10f37a248c3ac2f06afda6b2d727cb955e4ae1" protocol=ttrpc version=3
Jan 20 02:07:09.020103 systemd[1]: Started cri-containerd-96aba8ceff7489d149dc7eb48a37fa9b2f1d1bbe3d4457a4a6bdc62e446d275a.scope - libcontainer container 96aba8ceff7489d149dc7eb48a37fa9b2f1d1bbe3d4457a4a6bdc62e446d275a.
Jan 20 02:07:09.343495 kubelet[3059]: E0120 02:07:09.342652 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:07:09.546158 containerd[1591]: time="2026-01-20T02:07:09.543880291Z" level=info msg="StartContainer for \"96aba8ceff7489d149dc7eb48a37fa9b2f1d1bbe3d4457a4a6bdc62e446d275a\" returns successfully"
Jan 20 02:07:09.969448 kubelet[3059]: E0120 02:07:09.968015 3059 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:07:11.320034 kubelet[3059]: E0120 02:07:11.319117 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:07:11.468496 containerd[1591]: time="2026-01-20T02:07:11.461291766Z" level=warning msg="container event discarded" container=d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0 type=CONTAINER_STOPPED_EVENT
Jan 20 02:07:11.498481 containerd[1591]: time="2026-01-20T02:07:11.498140354Z" level=warning msg="container event discarded" container=bfd6f3582cfe83f32cca85c2f905b390e3a926f73a8752e96e385531927e92af type=CONTAINER_STOPPED_EVENT
Jan 20 02:07:11.524471 kubelet[3059]: I0120 02:07:11.524068 3059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-47b82" podStartSLOduration=40.524042945 podStartE2EDuration="40.524042945s" podCreationTimestamp="2026-01-20 02:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:07:11.503808771 +0000 UTC m=+1295.573695989" watchObservedRunningTime="2026-01-20 02:07:11.524042945 +0000 UTC m=+1295.593930163"
Jan 20 02:07:12.701207 kubelet[3059]: E0120 02:07:12.699288 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:07:12.823638 containerd[1591]: time="2026-01-20T02:07:12.823557891Z" level=warning msg="container event discarded" container=c4ff29453778f3f09914cbaad5a336b1bf63e30cc5e489cb60a876440abecbb9 type=CONTAINER_DELETED_EVENT
Jan 20 02:07:13.141586 containerd[1591]: time="2026-01-20T02:07:13.141340767Z" level=warning msg="container event discarded" container=8b2c324350042e2395edf7c296c87890822526d5f5c5b26f021e79c96a80c867 type=CONTAINER_DELETED_EVENT
Jan 20 02:07:15.248001 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jan 20 02:07:18.340459 kubelet[3059]: E0120 02:07:18.340341 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:07:24.532454 kubelet[3059]: E0120 02:07:24.529003 3059 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56312->127.0.0.1:43945: write tcp 127.0.0.1:56312->127.0.0.1:43945: write: broken pipe
Jan 20 02:07:27.368864 kubelet[3059]: E0120 02:07:27.365699 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:07:32.718018 kubelet[3059]: E0120 02:07:32.708787 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:07:33.373671 kubelet[3059]: E0120 02:07:33.369293 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:07:37.554875 systemd-networkd[1486]: lxc_health: Link UP
Jan 20 02:07:37.555624 systemd-networkd[1486]: lxc_health: Gained carrier
Jan 20 02:07:39.881623 systemd-networkd[1486]: lxc_health: Gained IPv6LL
Jan 20 02:07:50.381079 systemd[1]: cri-containerd-cda1d4e8bfc5b1e085a3f5a99b775742f5940f2e3873c8d8eab661300655a4c5.scope: Deactivated successfully.
Jan 20 02:07:50.383074 systemd[1]: cri-containerd-cda1d4e8bfc5b1e085a3f5a99b775742f5940f2e3873c8d8eab661300655a4c5.scope: Consumed 8.641s CPU time, 24.4M memory peak, 1.3M read from disk.
Jan 20 02:07:50.611451 containerd[1591]: time="2026-01-20T02:07:50.582050868Z" level=info msg="received container exit event container_id:\"cda1d4e8bfc5b1e085a3f5a99b775742f5940f2e3873c8d8eab661300655a4c5\" id:\"cda1d4e8bfc5b1e085a3f5a99b775742f5940f2e3873c8d8eab661300655a4c5\" pid:6280 exit_status:1 exited_at:{seconds:1768874870 nanos:574761140}"
Jan 20 02:07:50.810866 kubelet[3059]: E0120 02:07:50.810831 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:07:50.818242 kubelet[3059]: E0120 02:07:50.818157 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:07:51.250139 kubelet[3059]: E0120 02:07:51.244521 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:07:51.313062 kubelet[3059]: E0120 02:07:51.310598 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:07:51.429521 sshd[7029]: Connection closed by 10.0.0.1 port 40180
Jan 20 02:07:51.507941 sshd-session[7026]: pam_unix(sshd:session): session closed for user core
Jan 20 02:07:51.556144 systemd[1]: sshd@114-10.0.0.51:22-10.0.0.1:40180.service: Deactivated successfully.
Jan 20 02:07:51.566315 systemd[1]: session-115.scope: Deactivated successfully.
Jan 20 02:07:51.595546 systemd[1]: session-115.scope: Consumed 2.645s CPU time, 28M memory peak.
Jan 20 02:07:51.630921 systemd-logind[1565]: Session 115 logged out. Waiting for processes to exit.
Jan 20 02:07:51.649662 systemd-logind[1565]: Removed session 115.
Jan 20 02:07:51.999947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cda1d4e8bfc5b1e085a3f5a99b775742f5940f2e3873c8d8eab661300655a4c5-rootfs.mount: Deactivated successfully.
Jan 20 02:07:52.439581 kubelet[3059]: I0120 02:07:52.434102 3059 scope.go:117] "RemoveContainer" containerID="cda1d4e8bfc5b1e085a3f5a99b775742f5940f2e3873c8d8eab661300655a4c5"
Jan 20 02:07:52.439581 kubelet[3059]: E0120 02:07:52.435945 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:07:52.439581 kubelet[3059]: I0120 02:07:52.438450 3059 scope.go:117] "RemoveContainer" containerID="d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0"
Jan 20 02:07:52.439581 kubelet[3059]: E0120 02:07:52.438748 3059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:07:52.439581 kubelet[3059]: E0120 02:07:52.438887 3059 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(6e6cfcfb327385445a9bb0d2bc2fd5d4)\"" pod="kube-system/kube-scheduler-localhost" podUID="6e6cfcfb327385445a9bb0d2bc2fd5d4"
Jan 20 02:07:52.805346 containerd[1591]: time="2026-01-20T02:07:52.718676548Z" level=info msg="RemoveContainer for \"d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0\""
Jan 20 02:07:52.971577 containerd[1591]: time="2026-01-20T02:07:52.971293658Z" level=info msg="RemoveContainer for \"d0692dda316f70319abc01c915f4b3fa081cd997bffecd305b1ea0a8a9e08eb0\" returns successfully"
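The closing pod_workers.go:1301 error shows kube-scheduler in CrashLoopBackOff with a 1m20s delay. Kubelet's container restart back-off starts at 10s and doubles per failure up to a 5m cap (these defaults are an assumption here, not values read from this host's config), so 1m20s corresponds to the fourth consecutive crash. A few lines of Go reproduce the series:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: 10s initial back-off, doubling per
	// crash, capped at 5 minutes.
	backoff := 10 * time.Second
	const maxBackoff = 5 * time.Minute
	for crash := 1; crash <= 6; crash++ {
		fmt.Printf("crash %d: back-off %s\n", crash, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
	// Output: 10s, 20s, 40s, 1m20s, 2m40s, 5m0s -- the logged
	// "back-off 1m20s" matches the fourth failure.
}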