Jan 23 19:23:10.750014 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026
Jan 23 19:23:10.750039 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 19:23:10.750047 kernel: BIOS-provided physical RAM map:
Jan 23 19:23:10.750056 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 19:23:10.750061 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 23 19:23:10.750067 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 23 19:23:10.750073 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 23 19:23:10.750079 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 23 19:23:10.750085 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 23 19:23:10.750090 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 23 19:23:10.750096 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 23 19:23:10.750102 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 23 19:23:10.750109 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 23 19:23:10.750118 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 23 19:23:10.750131 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 23 19:23:10.750143 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 23 19:23:10.750154 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 23 19:23:10.750166 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 23 19:23:10.750175 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 23 19:23:10.750183 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 23 19:23:10.750191 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 23 19:23:10.750199 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 23 19:23:10.750208 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 19:23:10.750216 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 19:23:10.750226 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 19:23:10.750237 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 19:23:10.750247 kernel: NX (Execute Disable) protection: active
Jan 23 19:23:10.750255 kernel: APIC: Static calls initialized
Jan 23 19:23:10.750267 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 23 19:23:10.750276 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 23 19:23:10.750284 kernel: extended physical RAM map:
Jan 23 19:23:10.750292 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 19:23:10.750302 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 23 19:23:10.750312 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 23 19:23:10.750321 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 23 19:23:10.750329 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 23 19:23:10.750337 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 23 19:23:10.750346 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 23 19:23:10.750354 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 23 19:23:10.750368 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 23 19:23:10.750382 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 23 19:23:10.750494 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 23 19:23:10.750503 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 23 19:23:10.750512 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 23 19:23:10.750528 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 23 19:23:10.750536 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 23 19:23:10.750545 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 23 19:23:10.750554 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 23 19:23:10.750563 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 23 19:23:10.750571 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 23 19:23:10.750582 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 23 19:23:10.750593 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 23 19:23:10.750603 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 23 19:23:10.750614 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 23 19:23:10.750623 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 19:23:10.750638 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 19:23:10.750648 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 19:23:10.750657 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 19:23:10.750666 kernel: efi: EFI v2.7 by EDK II
Jan 23 19:23:10.750675 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 23 19:23:10.750685 kernel: random: crng init done
Jan 23 19:23:10.750695 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 23 19:23:10.750704 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 23 19:23:10.750713 kernel: secureboot: Secure boot disabled
Jan 23 19:23:10.750723 kernel: SMBIOS 2.8 present.
Jan 23 19:23:10.750732 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 23 19:23:10.750744 kernel: DMI: Memory slots populated: 1/1
Jan 23 19:23:10.750754 kernel: Hypervisor detected: KVM
Jan 23 19:23:10.750764 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 23 19:23:10.750775 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 19:23:10.750955 kernel: kvm-clock: using sched offset of 14779475589 cycles
Jan 23 19:23:10.750963 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 19:23:10.750970 kernel: tsc: Detected 2445.426 MHz processor
Jan 23 19:23:10.750977 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 19:23:10.750983 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 19:23:10.750990 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 23 19:23:10.750996 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 19:23:10.751007 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 19:23:10.751014 kernel: Using GB pages for direct mapping
Jan 23 19:23:10.751020 kernel: ACPI: Early table checksum verification disabled
Jan 23 19:23:10.751026 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 23 19:23:10.751033 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 23 19:23:10.751040 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:23:10.751046 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:23:10.751053 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 23 19:23:10.751061 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:23:10.751068 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:23:10.751080 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:23:10.751092 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:23:10.751102 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 23 19:23:10.751111 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 23 19:23:10.751120 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 23 19:23:10.751129 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 23 19:23:10.751138 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 23 19:23:10.751151 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 23 19:23:10.751163 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 23 19:23:10.751175 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 23 19:23:10.751184 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 23 19:23:10.751193 kernel: No NUMA configuration found
Jan 23 19:23:10.751202 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 23 19:23:10.751211 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 23 19:23:10.751220 kernel: Zone ranges:
Jan 23 19:23:10.751232 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 19:23:10.751246 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 23 19:23:10.751254 kernel: Normal empty
Jan 23 19:23:10.751263 kernel: Device empty
Jan 23 19:23:10.751272 kernel: Movable zone start for each node
Jan 23 19:23:10.751281 kernel: Early memory node ranges
Jan 23 19:23:10.751291 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 19:23:10.751303 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 23 19:23:10.751313 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 23 19:23:10.751322 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 23 19:23:10.751334 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 23 19:23:10.751343 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 23 19:23:10.751352 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 23 19:23:10.751364 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 23 19:23:10.751375 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 23 19:23:10.751483 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 19:23:10.751504 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 19:23:10.751518 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 23 19:23:10.751529 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 19:23:10.751541 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 23 19:23:10.751552 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 23 19:23:10.751561 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 23 19:23:10.751574 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 23 19:23:10.751584 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 23 19:23:10.751594 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 19:23:10.751604 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 19:23:10.751615 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 19:23:10.751629 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 19:23:10.751641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 19:23:10.751652 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 19:23:10.751664 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 19:23:10.751676 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 19:23:10.751686 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 19:23:10.751693 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 19:23:10.751700 kernel: TSC deadline timer available
Jan 23 19:23:10.751706 kernel: CPU topo: Max. logical packages: 1
Jan 23 19:23:10.751716 kernel: CPU topo: Max. logical dies: 1
Jan 23 19:23:10.751723 kernel: CPU topo: Max. dies per package: 1
Jan 23 19:23:10.751730 kernel: CPU topo: Max. threads per core: 1
Jan 23 19:23:10.751736 kernel: CPU topo: Num. cores per package: 4
Jan 23 19:23:10.751743 kernel: CPU topo: Num. threads per package: 4
Jan 23 19:23:10.751750 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 23 19:23:10.751756 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 19:23:10.751763 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 19:23:10.751770 kernel: kvm-guest: setup PV sched yield
Jan 23 19:23:10.751939 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 23 19:23:10.751948 kernel: Booting paravirtualized kernel on KVM
Jan 23 19:23:10.751955 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 19:23:10.751962 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 23 19:23:10.751969 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 23 19:23:10.751975 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 23 19:23:10.751982 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 23 19:23:10.751989 kernel: kvm-guest: PV spinlocks enabled
Jan 23 19:23:10.751996 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 19:23:10.752007 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 19:23:10.752014 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 19:23:10.752021 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 19:23:10.752027 kernel: Fallback order for Node 0: 0
Jan 23 19:23:10.752034 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 23 19:23:10.752041 kernel: Policy zone: DMA32
Jan 23 19:23:10.752048 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 19:23:10.752054 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 23 19:23:10.752061 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 19:23:10.752070 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 19:23:10.752077 kernel: Dynamic Preempt: voluntary
Jan 23 19:23:10.752083 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 19:23:10.752091 kernel: rcu: RCU event tracing is enabled.
Jan 23 19:23:10.752098 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 23 19:23:10.752105 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 19:23:10.752117 kernel: Rude variant of Tasks RCU enabled.
Jan 23 19:23:10.752129 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 19:23:10.752141 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 19:23:10.752154 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 23 19:23:10.752164 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 19:23:10.752173 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 19:23:10.752183 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 19:23:10.752192 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 23 19:23:10.752204 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 19:23:10.752217 kernel: Console: colour dummy device 80x25
Jan 23 19:23:10.752226 kernel: printk: legacy console [ttyS0] enabled
Jan 23 19:23:10.752236 kernel: ACPI: Core revision 20240827
Jan 23 19:23:10.752249 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 19:23:10.752258 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 19:23:10.752268 kernel: x2apic enabled
Jan 23 19:23:10.752281 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 19:23:10.752290 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 19:23:10.752300 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 19:23:10.752309 kernel: kvm-guest: setup PV IPIs
Jan 23 19:23:10.752319 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 19:23:10.752328 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 23 19:23:10.752345 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 23 19:23:10.752356 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 19:23:10.752362 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 19:23:10.752369 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 19:23:10.752376 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 19:23:10.752472 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 19:23:10.752481 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 19:23:10.752488 kernel: Speculative Store Bypass: Vulnerable
Jan 23 19:23:10.752494 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 19:23:10.752505 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 19:23:10.752512 kernel: active return thunk: srso_alias_return_thunk
Jan 23 19:23:10.752518 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 19:23:10.752525 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 19:23:10.752532 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 19:23:10.752539 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 19:23:10.752546 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 19:23:10.752552 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 19:23:10.752561 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 19:23:10.752568 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 23 19:23:10.752575 kernel: Freeing SMP alternatives memory: 32K
Jan 23 19:23:10.752582 kernel: pid_max: default: 32768 minimum: 301
Jan 23 19:23:10.752588 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 19:23:10.752595 kernel: landlock: Up and running.
Jan 23 19:23:10.752602 kernel: SELinux: Initializing.
Jan 23 19:23:10.752609 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 19:23:10.752616 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 19:23:10.752625 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 23 19:23:10.752631 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 23 19:23:10.752638 kernel: signal: max sigframe size: 1776
Jan 23 19:23:10.752645 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 19:23:10.752652 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 19:23:10.752659 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 19:23:10.752665 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 19:23:10.752672 kernel: smp: Bringing up secondary CPUs ...
Jan 23 19:23:10.752679 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 19:23:10.752687 kernel: .... node #0, CPUs: #1 #2 #3
Jan 23 19:23:10.752694 kernel: smp: Brought up 1 node, 4 CPUs
Jan 23 19:23:10.752701 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 23 19:23:10.752708 kernel: Memory: 2414472K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145388K reserved, 0K cma-reserved)
Jan 23 19:23:10.752715 kernel: devtmpfs: initialized
Jan 23 19:23:10.752721 kernel: x86/mm: Memory block size: 128MB
Jan 23 19:23:10.752728 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 23 19:23:10.752735 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 23 19:23:10.752741 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 23 19:23:10.752753 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 23 19:23:10.753136 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 23 19:23:10.753150 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 23 19:23:10.753160 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 19:23:10.753170 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 23 19:23:10.753266 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 19:23:10.753281 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 19:23:10.753291 kernel: audit: initializing netlink subsys (disabled)
Jan 23 19:23:10.753305 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 19:23:10.753314 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 19:23:10.753324 kernel: audit: type=2000 audit(1769196177.432:1): state=initialized audit_enabled=0 res=1
Jan 23 19:23:10.753333 kernel: cpuidle: using governor menu
Jan 23 19:23:10.753347 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 19:23:10.753356 kernel: dca service started, version 1.12.1
Jan 23 19:23:10.753365 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 23 19:23:10.753375 kernel: PCI: Using configuration type 1 for base access
Jan 23 19:23:10.753484 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 19:23:10.753505 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 19:23:10.753515 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 19:23:10.753524 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 19:23:10.753534 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 19:23:10.753543 kernel: ACPI: Added _OSI(Module Device)
Jan 23 19:23:10.753552 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 19:23:10.753563 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 19:23:10.753576 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 19:23:10.753586 kernel: ACPI: Interpreter enabled
Jan 23 19:23:10.753598 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 19:23:10.753608 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 19:23:10.753617 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 19:23:10.753627 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 19:23:10.753640 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 19:23:10.753649 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 19:23:10.754048 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 19:23:10.754223 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 19:23:10.754501 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 19:23:10.754518 kernel: PCI host bridge to bus 0000:00
Jan 23 19:23:10.754671 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 19:23:10.754990 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 19:23:10.755120 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 19:23:10.755268 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 23 19:23:10.755679 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 23 19:23:10.756172 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 23 19:23:10.756532 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 19:23:10.757108 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 19:23:10.757525 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 19:23:10.758000 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 23 19:23:10.758148 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 23 19:23:10.758314 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 23 19:23:10.758623 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 19:23:10.758745 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 16601 usecs
Jan 23 19:23:10.759113 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 19:23:10.759237 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 23 19:23:10.759354 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 23 19:23:10.759568 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 23 19:23:10.759702 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 19:23:10.760040 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 23 19:23:10.760210 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 23 19:23:10.760378 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 23 19:23:10.760640 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 19:23:10.760770 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 23 19:23:10.761068 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 23 19:23:10.761314 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 23 19:23:10.761543 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 23 19:23:10.761692 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 19:23:10.762227 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 19:23:10.762506 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 16601 usecs
Jan 23 19:23:10.762686 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 19:23:10.763296 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 23 19:23:10.763724 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 23 19:23:10.764281 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 19:23:10.764559 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 23 19:23:10.764576 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 19:23:10.764587 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 19:23:10.764596 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 19:23:10.764606 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 19:23:10.764624 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 19:23:10.764634 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 19:23:10.764645 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 19:23:10.764656 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 19:23:10.764667 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 19:23:10.764677 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 19:23:10.764684 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 19:23:10.764690 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 19:23:10.764697 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 19:23:10.764713 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 19:23:10.764725 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 19:23:10.764735 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 19:23:10.764745 kernel: iommu: Default domain type: Translated
Jan 23 19:23:10.764754 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 19:23:10.764763 kernel: efivars: Registered efivars operations
Jan 23 19:23:10.764773 kernel: PCI: Using ACPI for IRQ routing
Jan 23 19:23:10.764972 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 19:23:10.764983 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 23 19:23:10.764997 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 23 19:23:10.765006 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 23 19:23:10.765015 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 23 19:23:10.765028 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 23 19:23:10.765038 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 23 19:23:10.765048 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 23 19:23:10.765057 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 23 19:23:10.765695 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 19:23:10.766057 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 19:23:10.766228 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 19:23:10.766244 kernel: vgaarb: loaded
Jan 23 19:23:10.766256 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 19:23:10.766268 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 19:23:10.766275 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 19:23:10.766282 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 19:23:10.766289 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 19:23:10.766295 kernel: pnp: PnP ACPI init
Jan 23 19:23:10.766559 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 23 19:23:10.766576 kernel: pnp: PnP ACPI: found 6 devices
Jan 23 19:23:10.766587 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 19:23:10.766597 kernel: NET: Registered PF_INET protocol family
Jan 23 19:23:10.766606 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 19:23:10.766620 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 19:23:10.766738 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 19:23:10.766753 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 19:23:10.766765 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 19:23:10.766777 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 19:23:10.766940 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 19:23:10.766950 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 19:23:10.766960 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 19:23:10.766970 kernel: NET: Registered PF_XDP protocol family
Jan 23 19:23:10.767139 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 23 19:23:10.767303 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 23 19:23:10.767553 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 19:23:10.767670 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 19:23:10.767778 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 19:23:10.768160 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 23 19:23:10.768269 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 23 19:23:10.768375 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 23 19:23:10.768488 kernel: PCI: CLS 0 bytes, default 64
Jan 23 19:23:10.768501 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 23 19:23:10.768512 kernel: Initialise system trusted keyrings
Jan 23 19:23:10.768530 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 19:23:10.768540 kernel: Key type asymmetric registered
Jan 23 19:23:10.768550 kernel: Asymmetric key parser 'x509' registered
Jan 23 19:23:10.768559 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 19:23:10.768569 kernel: io scheduler mq-deadline registered
Jan 23 19:23:10.768579 kernel: io scheduler kyber registered
Jan 23 19:23:10.768590 kernel: io scheduler bfq registered
Jan 23 19:23:10.768603 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 19:23:10.768615 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 19:23:10.768629 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 19:23:10.768639 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 23 19:23:10.768649 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 19:23:10.768659 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 19:23:10.768671 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 19:23:10.768684 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 19:23:10.768699 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 19:23:10.769109 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 23 19:23:10.769127 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 19:23:10.769283 kernel: rtc_cmos 00:04: registered as rtc0
Jan 23 19:23:10.769546 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T19:23:08 UTC (1769196188)
Jan 23 19:23:10.769704 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 23 19:23:10.769719 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 19:23:10.769734 kernel: efifb: probing for efifb
Jan 23 19:23:10.769747 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 23 19:23:10.769761 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 23 19:23:10.769772 kernel: efifb: scrolling: redraw
Jan 23 19:23:10.769960 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 19:23:10.769973 kernel: Console: switching to colour frame buffer device 160x50
Jan 23 19:23:10.769983 kernel: fb0: EFI VGA frame buffer device
Jan 23 19:23:10.769993 kernel: pstore: Using crash dump compression: deflate
Jan 23 19:23:10.770002 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 19:23:10.770018 kernel: NET: Registered PF_INET6 protocol family
Jan 23 19:23:10.770032 kernel: Segment Routing with IPv6
Jan 23 19:23:10.770045 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 19:23:10.770059 kernel: NET: Registered PF_PACKET protocol family
Jan 23 19:23:10.770072 kernel: Key type dns_resolver registered
Jan 23 19:23:10.770082 kernel: IPI shorthand broadcast: enabled
Jan 23 19:23:10.770092 kernel: sched_clock: Marking stable (8121234904, 2840102663)->(12145077719, -1183740152)
Jan 23 19:23:10.770102 kernel: registered taskstats version 1
Jan 23 19:23:10.770113 kernel: Loading compiled-in X.509 certificates
Jan 23 19:23:10.770126 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6'
Jan 23 19:23:10.770140 kernel: Demotion targets for Node 0: null
Jan 23 19:23:10.770150 kernel: Key type .fscrypt registered
Jan 23 19:23:10.770159 kernel: Key type fscrypt-provisioning registered
Jan 23 19:23:10.770169 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 19:23:10.770179 kernel: ima: Allocated hash algorithm: sha1
Jan 23 19:23:10.770191 kernel: ima: No architecture policies found
Jan 23 19:23:10.770202 kernel: clk: Disabling unused clocks
Jan 23 19:23:10.770212 kernel: Warning: unable to open an initial console.
Jan 23 19:23:10.770225 kernel: Freeing unused kernel image (initmem) memory: 46200K
Jan 23 19:23:10.770235 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 19:23:10.770245 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 19:23:10.770256 kernel: Run /init as init process
Jan 23 19:23:10.770268 kernel: with arguments:
Jan 23 19:23:10.770280 kernel: /init
Jan 23 19:23:10.770292 kernel: with environment:
Jan 23 19:23:10.770302 kernel: HOME=/
Jan 23 19:23:10.770313 kernel: TERM=linux
Jan 23 19:23:10.770326 systemd[1]: Successfully made /usr/ read-only.
Jan 23 19:23:10.770345 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 19:23:10.770360 systemd[1]: Detected virtualization kvm.
Jan 23 19:23:10.770370 systemd[1]: Detected architecture x86-64.
Jan 23 19:23:10.770381 systemd[1]: Running in initrd.
Jan 23 19:23:10.770495 systemd[1]: No hostname configured, using default hostname.
Jan 23 19:23:10.770508 systemd[1]: Hostname set to <localhost>.
Jan 23 19:23:10.770522 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 19:23:10.770533 systemd[1]: Queued start job for default target initrd.target.
Jan 23 19:23:10.770544 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 19:23:10.770555 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 19:23:10.770570 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 19:23:10.770582 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 19:23:10.770592 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 19:23:10.770603 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 19:23:10.770619 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 19:23:10.770632 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 19:23:10.770647 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 19:23:10.770658 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 19:23:10.770668 systemd[1]: Reached target paths.target - Path Units.
Jan 23 19:23:10.770679 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 19:23:10.770689 systemd[1]: Reached target swap.target - Swaps.
Jan 23 19:23:10.770700 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 19:23:10.770718 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 19:23:10.770728 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 19:23:10.770739 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 19:23:10.770749 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 19:23:10.770759 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 19:23:10.770772 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 19:23:10.770955 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 19:23:10.770967 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 19:23:10.770982 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 19:23:10.770993 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 19:23:10.771004 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 19:23:10.771015 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 19:23:10.771029 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 19:23:10.771039 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 19:23:10.771050 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 19:23:10.771060 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:23:10.771105 systemd-journald[203]: Collecting audit messages is disabled.
Jan 23 19:23:10.771134 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 19:23:10.771149 systemd-journald[203]: Journal started
Jan 23 19:23:10.771174 systemd-journald[203]: Runtime Journal (/run/log/journal/93ee8d8939054737ade346787d57c85b) is 6M, max 48.1M, 42.1M free.
Jan 23 19:23:10.759261 systemd-modules-load[205]: Inserted module 'overlay'
Jan 23 19:23:10.818287 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 19:23:10.829105 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 19:23:10.837980 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 19:23:10.875274 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 19:23:11.013131 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 19:23:11.016696 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 19:23:11.022255 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:23:11.074628 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 19:23:11.112256 kernel: Bridge firewalling registered
Jan 23 19:23:11.113066 systemd-modules-load[205]: Inserted module 'br_netfilter'
Jan 23 19:23:11.114563 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 19:23:11.125730 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 19:23:11.131594 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 19:23:11.155363 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 19:23:11.184309 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 19:23:11.199300 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 19:23:11.283238 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 19:23:11.312340 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 19:23:11.341051 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 19:23:11.376621 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 19:23:11.385157 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 19:23:11.496109 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 19:23:11.512604 systemd-resolved[245]: Positive Trust Anchors:
Jan 23 19:23:11.512617 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 19:23:11.512659 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 19:23:11.517312 systemd-resolved[245]: Defaulting to hostname 'linux'.
Jan 23 19:23:11.519572 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 19:23:11.672772 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 19:23:11.955068 kernel: SCSI subsystem initialized
Jan 23 19:23:11.988601 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 19:23:12.043969 kernel: iscsi: registered transport (tcp)
Jan 23 19:23:12.100955 kernel: iscsi: registered transport (qla4xxx)
Jan 23 19:23:12.101028 kernel: QLogic iSCSI HBA Driver
Jan 23 19:23:12.210373 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 19:23:12.293092 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 19:23:12.307148 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 19:23:12.550550 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 19:23:12.569636 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 19:23:12.736701 kernel: raid6: avx2x4 gen() 17939 MB/s
Jan 23 19:23:12.758621 kernel: raid6: avx2x2 gen() 18849 MB/s
Jan 23 19:23:12.788630 kernel: raid6: avx2x1 gen() 8537 MB/s
Jan 23 19:23:12.788705 kernel: raid6: using algorithm avx2x2 gen() 18849 MB/s
Jan 23 19:23:12.820152 kernel: raid6: .... xor() 15201 MB/s, rmw enabled
Jan 23 19:23:12.820241 kernel: raid6: using avx2x2 recovery algorithm
Jan 23 19:23:12.870601 kernel: xor: automatically using best checksumming function avx
Jan 23 19:23:13.698031 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 19:23:13.732766 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 19:23:13.749648 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 19:23:13.848334 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Jan 23 19:23:13.865248 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 19:23:13.879345 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 19:23:13.977029 dracut-pre-trigger[456]: rd.md=0: removing MD RAID activation
Jan 23 19:23:14.097774 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 19:23:14.124587 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 19:23:14.322707 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 19:23:14.336313 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 19:23:14.566200 kernel: libata version 3.00 loaded.
Jan 23 19:23:14.585231 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 19:23:14.669223 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 23 19:23:14.669644 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 23 19:23:14.670020 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 19:23:14.670049 kernel: GPT:9289727 != 19775487
Jan 23 19:23:14.670062 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 19:23:14.670077 kernel: GPT:9289727 != 19775487
Jan 23 19:23:14.670090 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 19:23:14.670103 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 19:23:14.585591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:23:14.653096 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:23:14.729709 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:23:14.761361 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 19:23:14.789395 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 19:23:14.832558 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 19:23:14.832588 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 19:23:14.789719 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:23:14.830415 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:23:14.908054 kernel: AES CTR mode by8 optimization enabled
Jan 23 19:23:14.916584 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 23 19:23:14.946008 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 19:23:14.946283 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 19:23:14.991091 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 19:23:14.991368 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 19:23:14.991673 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 19:23:15.009147 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 23 19:23:15.038341 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:23:15.051358 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 23 19:23:15.076747 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 23 19:23:15.137368 kernel: scsi host0: ahci
Jan 23 19:23:15.125763 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 19:23:15.150542 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 19:23:15.197046 kernel: scsi host1: ahci
Jan 23 19:23:15.206115 kernel: scsi host2: ahci
Jan 23 19:23:15.215266 kernel: scsi host3: ahci
Jan 23 19:23:15.225159 kernel: scsi host4: ahci
Jan 23 19:23:15.226777 disk-uuid[620]: Primary Header is updated.
Jan 23 19:23:15.226777 disk-uuid[620]: Secondary Entries is updated.
Jan 23 19:23:15.226777 disk-uuid[620]: Secondary Header is updated.
Jan 23 19:23:15.358080 kernel: scsi host5: ahci
Jan 23 19:23:15.358371 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Jan 23 19:23:15.358392 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 19:23:15.358408 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Jan 23 19:23:15.358425 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Jan 23 19:23:15.358546 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Jan 23 19:23:15.358572 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Jan 23 19:23:15.358588 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Jan 23 19:23:15.358605 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 19:23:15.667161 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 19:23:15.679381 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 19:23:15.692026 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 19:23:15.703346 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 19:23:15.722244 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 23 19:23:15.734577 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 19:23:15.752990 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 19:23:15.753252 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 23 19:23:15.753275 kernel: ata3.00: applying bridge limits
Jan 23 19:23:15.778204 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 19:23:15.778282 kernel: ata3.00: configured for UDMA/100
Jan 23 19:23:15.797925 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 23 19:23:15.911183 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 23 19:23:15.911736 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 19:23:15.934164 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 23 19:23:16.294056 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 19:23:16.304664 disk-uuid[626]: The operation has completed successfully.
Jan 23 19:23:16.467668 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 19:23:16.468268 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 19:23:16.507066 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 19:23:16.534980 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 19:23:16.551212 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 19:23:16.568285 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 19:23:16.582243 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 19:23:16.609108 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 19:23:16.691310 sh[647]: Success
Jan 23 19:23:16.715229 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 19:23:16.801206 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 19:23:16.801284 kernel: device-mapper: uevent: version 1.0.3
Jan 23 19:23:16.813022 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 19:23:16.880054 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 23 19:23:17.011381 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 19:23:17.046390 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 19:23:17.096279 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 19:23:17.153754 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (666)
Jan 23 19:23:17.170066 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841
Jan 23 19:23:17.170124 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 19:23:17.256014 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 19:23:17.256089 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 19:23:17.271211 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 19:23:17.279333 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 19:23:17.321722 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 19:23:17.324718 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 19:23:17.379063 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 19:23:17.494106 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (695)
Jan 23 19:23:17.512093 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:23:17.527053 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 19:23:17.592110 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 19:23:17.592188 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 19:23:17.640445 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:23:17.680364 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 19:23:17.690093 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 19:23:18.034277 ignition[750]: Ignition 2.22.0 Jan 23 19:23:18.034382 ignition[750]: Stage: fetch-offline Jan 23 19:23:18.034422 ignition[750]: no configs at "/usr/lib/ignition/base.d" Jan 23 19:23:18.034435 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 19:23:18.034635 ignition[750]: parsed url from cmdline: "" Jan 23 19:23:18.034641 ignition[750]: no config URL provided Jan 23 19:23:18.034648 ignition[750]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 19:23:18.034658 ignition[750]: no config at "/usr/lib/ignition/user.ign" Jan 23 19:23:18.034687 ignition[750]: op(1): [started] loading QEMU firmware config module Jan 23 19:23:18.034693 ignition[750]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 23 19:23:18.131093 ignition[750]: op(1): [finished] loading QEMU firmware config module Jan 23 19:23:18.221606 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 19:23:18.253681 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 19:23:18.381983 systemd-networkd[843]: lo: Link UP Jan 23 19:23:18.381991 systemd-networkd[843]: lo: Gained carrier Jan 23 19:23:18.385024 systemd-networkd[843]: Enumeration completed Jan 23 19:23:18.388380 systemd-networkd[843]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 19:23:18.388386 systemd-networkd[843]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 19:23:18.391586 systemd-networkd[843]: eth0: Link UP Jan 23 19:23:18.393653 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 19:23:18.394106 systemd-networkd[843]: eth0: Gained carrier Jan 23 19:23:18.394128 systemd-networkd[843]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 19:23:18.409453 systemd[1]: Reached target network.target - Network. Jan 23 19:23:18.553301 systemd-networkd[843]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 19:23:19.534730 ignition[750]: parsing config with SHA512: faabe8a2b427d8b5edd1160b502e44515bba7fbcbc96262d8790d763ef03409bb911f953e6305ddbd6c78c1d95c60127d6916e9d866f9ea2fe36b4e003521b94 Jan 23 19:23:19.575719 unknown[750]: fetched base config from "system" Jan 23 19:23:19.575736 unknown[750]: fetched user config from "qemu" Jan 23 19:23:19.582661 ignition[750]: fetch-offline: fetch-offline passed Jan 23 19:23:19.582732 ignition[750]: Ignition finished successfully Jan 23 19:23:19.616357 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 19:23:19.627190 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 23 19:23:19.629224 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 19:23:19.767981 ignition[848]: Ignition 2.22.0 Jan 23 19:23:19.768063 ignition[848]: Stage: kargs Jan 23 19:23:19.768241 ignition[848]: no configs at "/usr/lib/ignition/base.d" Jan 23 19:23:19.768255 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 19:23:19.778386 ignition[848]: kargs: kargs passed Jan 23 19:23:19.778454 ignition[848]: Ignition finished successfully Jan 23 19:23:19.824159 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 23 19:23:19.850254 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 19:23:19.850296 systemd-networkd[843]: eth0: Gained IPv6LL Jan 23 19:23:19.945094 ignition[857]: Ignition 2.22.0 Jan 23 19:23:19.945207 ignition[857]: Stage: disks Jan 23 19:23:19.945363 ignition[857]: no configs at "/usr/lib/ignition/base.d" Jan 23 19:23:19.945380 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 19:23:19.961276 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 19:23:19.947127 ignition[857]: disks: disks passed Jan 23 19:23:19.991575 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 19:23:19.947186 ignition[857]: Ignition finished successfully Jan 23 19:23:20.015272 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 19:23:20.035373 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 19:23:20.059206 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 19:23:20.091275 systemd[1]: Reached target basic.target - Basic System. Jan 23 19:23:20.117591 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 19:23:20.214760 systemd-fsck[867]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 23 19:23:20.232385 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 19:23:20.269601 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 19:23:20.987016 kernel: EXT4-fs (vda9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none. Jan 23 19:23:20.991400 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 19:23:21.003177 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 19:23:21.025397 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 19:23:21.072091 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 19:23:21.082029 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 19:23:21.137723 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (875) Jan 23 19:23:21.082104 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 19:23:21.082140 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 19:23:21.148093 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 19:23:21.170227 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 19:23:21.236076 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 19:23:21.247954 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 19:23:21.289777 kernel: BTRFS info (device vda6): turning on async discard Jan 23 19:23:21.290030 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 19:23:21.295654 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 19:23:21.323154 initrd-setup-root[899]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 19:23:21.341604 initrd-setup-root[906]: cut: /sysroot/etc/group: No such file or directory Jan 23 19:23:21.370137 initrd-setup-root[913]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 19:23:21.398590 initrd-setup-root[920]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 19:23:21.871184 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 19:23:21.909191 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 19:23:21.953235 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 19:23:21.977708 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 19:23:22.002720 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 19:23:22.097399 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 19:23:22.152674 ignition[987]: INFO : Ignition 2.22.0 Jan 23 19:23:22.152674 ignition[987]: INFO : Stage: mount Jan 23 19:23:22.152674 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 19:23:22.152674 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 19:23:22.228345 ignition[987]: INFO : mount: mount passed Jan 23 19:23:22.228345 ignition[987]: INFO : Ignition finished successfully Jan 23 19:23:22.161206 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 19:23:22.182134 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 19:23:22.312693 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 19:23:22.390075 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1001) Jan 23 19:23:22.414304 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 19:23:22.414477 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 19:23:22.462294 kernel: BTRFS info (device vda6): turning on async discard Jan 23 19:23:22.462377 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 19:23:22.467253 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 19:23:22.559047 ignition[1018]: INFO : Ignition 2.22.0 Jan 23 19:23:22.559047 ignition[1018]: INFO : Stage: files Jan 23 19:23:22.559047 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 19:23:22.559047 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 19:23:22.601092 ignition[1018]: DEBUG : files: compiled without relabeling support, skipping Jan 23 19:23:22.601092 ignition[1018]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 19:23:22.601092 ignition[1018]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 19:23:22.601092 ignition[1018]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 19:23:22.601092 ignition[1018]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 19:23:22.601092 ignition[1018]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 19:23:22.592044 unknown[1018]: wrote ssh authorized keys file for user: core Jan 23 19:23:22.705659 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 19:23:22.705659 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 19:23:22.779414 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 19:23:22.926239 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 19:23:22.926239 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 19:23:22.926239 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 23 19:23:23.111374 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 23 19:23:23.288053 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 19:23:23.308362 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 23 19:23:23.308362 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 19:23:23.308362 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 19:23:23.308362 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 19:23:23.308362 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 19:23:23.308362 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 19:23:23.308362 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 19:23:23.308362 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 19:23:23.308362 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 19:23:23.308362 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 19:23:23.308362 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 19:23:23.308362 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 19:23:23.308362 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 19:23:23.308362 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 23 19:23:23.719230 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 23 19:23:24.074327 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 19:23:24.074327 ignition[1018]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 23 19:23:24.121755 ignition[1018]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 19:23:24.121755 ignition[1018]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 19:23:24.121755 ignition[1018]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 23 19:23:24.121755 ignition[1018]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 23 19:23:24.121755 ignition[1018]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 19:23:24.121755 ignition[1018]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 19:23:24.121755 ignition[1018]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 23 19:23:24.121755 ignition[1018]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 23 19:23:24.313622 ignition[1018]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 19:23:24.334103 ignition[1018]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 19:23:24.334103 ignition[1018]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 23 19:23:24.334103 ignition[1018]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 23 19:23:24.334103 ignition[1018]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 19:23:24.334103 ignition[1018]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 19:23:24.334103 ignition[1018]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 19:23:24.334103 ignition[1018]: INFO : files: files passed Jan 23 19:23:24.334103 ignition[1018]: INFO : Ignition finished successfully Jan 23 19:23:24.462089 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 19:23:24.492756 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 19:23:24.504245 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 19:23:24.576693 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 19:23:24.577172 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 19:23:24.615243 initrd-setup-root-after-ignition[1046]: grep: /sysroot/oem/oem-release: No such file or directory Jan 23 19:23:24.644480 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 19:23:24.644480 initrd-setup-root-after-ignition[1049]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 19:23:24.701289 initrd-setup-root-after-ignition[1053]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 19:23:24.657358 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 19:23:24.667683 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 19:23:24.768409 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 19:23:24.981071 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 19:23:24.994641 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 19:23:25.013367 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 19:23:25.021752 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 19:23:25.053746 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 19:23:25.056411 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 19:23:25.203500 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 19:23:25.229506 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 19:23:25.305314 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 19:23:25.312185 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 19:23:25.349374 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 19:23:25.374358 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 19:23:25.374673 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 19:23:25.408735 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 19:23:25.429095 systemd[1]: Stopped target basic.target - Basic System. Jan 23 19:23:25.450294 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 19:23:25.472030 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 19:23:25.495067 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 19:23:25.498176 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 19:23:25.529029 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 19:23:25.558183 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 19:23:25.574513 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 19:23:25.598399 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 19:23:25.621987 systemd[1]: Stopped target swap.target - Swaps. Jan 23 19:23:25.670495 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 19:23:25.670976 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 19:23:25.710651 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 19:23:25.713482 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 19:23:25.754454 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 19:23:25.756071 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 19:23:25.764030 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 19:23:25.764207 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 19:23:25.827166 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 19:23:25.827481 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 19:23:25.843347 systemd[1]: Stopped target paths.target - Path Units. Jan 23 19:23:25.858367 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 19:23:25.864490 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 19:23:25.887034 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 19:23:25.915457 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 19:23:25.929195 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 19:23:25.929319 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 19:23:25.958352 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 19:23:25.958481 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 19:23:25.979746 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 19:23:25.980278 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 19:23:26.006975 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 19:23:26.007248 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 19:23:26.074148 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 19:23:26.086140 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 19:23:26.119180 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 19:23:26.119471 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 19:23:26.161656 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 19:23:26.162168 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 19:23:26.305230 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 19:23:26.325153 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 19:23:26.391964 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 23 19:23:26.445028 ignition[1073]: INFO : Ignition 2.22.0 Jan 23 19:23:26.445028 ignition[1073]: INFO : Stage: umount Jan 23 19:23:26.445028 ignition[1073]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 19:23:26.445028 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 19:23:26.458707 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 19:23:26.640430 ignition[1073]: INFO : umount: umount passed Jan 23 19:23:26.640430 ignition[1073]: INFO : Ignition finished successfully Jan 23 19:23:26.459127 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 19:23:26.489320 systemd[1]: Stopped target network.target - Network. Jan 23 19:23:26.495683 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 19:23:26.496133 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 19:23:26.496256 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 19:23:26.496333 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 19:23:26.496422 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 19:23:26.496760 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 19:23:26.497117 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 19:23:26.497179 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 19:23:26.497629 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 19:23:26.498023 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 19:23:26.586466 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 19:23:26.586738 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 19:23:26.645687 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 19:23:26.647036 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 19:23:26.647263 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 19:23:26.669408 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 19:23:26.669687 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 19:23:26.758767 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 19:23:26.773775 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 19:23:26.797328 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 19:23:26.797450 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 19:23:26.809299 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 19:23:26.811438 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 19:23:26.854209 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 19:23:26.864676 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 19:23:26.864762 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 19:23:26.879212 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 19:23:26.879296 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:23:26.986528 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 19:23:26.986734 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 23 19:23:27.017381 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 19:23:27.017487 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 19:23:27.092463 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 19:23:27.118665 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 19:23:27.118775 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 19:23:27.191658 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 19:23:27.192111 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 19:23:27.220212 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 19:23:27.220317 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 19:23:27.260483 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 19:23:27.260656 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 19:23:27.293125 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 19:23:27.293225 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 19:23:27.335342 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 19:23:27.335453 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 19:23:27.351460 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 19:23:27.351673 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 19:23:27.377084 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 19:23:27.398195 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 19:23:27.398299 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 19:23:27.555263 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 19:23:27.555371 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 19:23:27.663003 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 19:23:27.663103 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 19:23:27.813006 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 19:23:27.813103 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 19:23:27.819720 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 19:23:27.820017 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 19:23:27.882090 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 19:23:27.882184 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 23 19:23:27.882243 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 19:23:27.882301 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 19:23:27.883473 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 19:23:27.884093 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jan 23 19:23:27.895525 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 19:23:27.896446 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 19:23:27.907354 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 19:23:27.960100 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 19:23:28.126315 systemd[1]: Switching root. Jan 23 19:23:28.195016 systemd-journald[203]: Journal stopped Jan 23 19:23:32.998941 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Jan 23 19:23:32.999018 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 19:23:32.999043 kernel: SELinux: policy capability open_perms=1 Jan 23 19:23:32.999057 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 19:23:32.999071 kernel: SELinux: policy capability always_check_network=0 Jan 23 19:23:32.999085 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 19:23:32.999189 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 19:23:32.999205 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 19:23:32.999225 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 19:23:32.999239 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 19:23:32.999253 kernel: audit: type=1403 audit(1769196208.671:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 19:23:32.999272 systemd[1]: Successfully loaded SELinux policy in 197.894ms. Jan 23 19:23:32.999299 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 25.157ms. Jan 23 19:23:32.999319 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 19:23:32.999337 systemd[1]: Detected virtualization kvm. Jan 23 19:23:32.999352 systemd[1]: Detected architecture x86-64. Jan 23 19:23:32.999367 systemd[1]: Detected first boot. Jan 23 19:23:32.999382 systemd[1]: Initializing machine ID from VM UUID. Jan 23 19:23:32.999397 zram_generator::config[1119]: No configuration found. Jan 23 19:23:32.999414 kernel: Guest personality initialized and is inactive Jan 23 19:23:32.999437 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 19:23:32.999451 kernel: Initialized host personality Jan 23 19:23:32.999466 kernel: NET: Registered PF_VSOCK protocol family Jan 23 19:23:32.999481 systemd[1]: Populated /etc with preset unit settings. Jan 23 19:23:32.999497 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 19:23:32.999513 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 19:23:32.999528 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 19:23:32.999544 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 19:23:32.999562 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 19:23:32.999579 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 19:23:32.999695 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 19:23:32.999713 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Jan 23 19:23:32.999729 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 19:23:32.999746 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 19:23:32.999763 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 19:23:32.999942 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 19:23:32.999964 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 19:23:32.999984 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 19:23:33.000000 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 19:23:33.000015 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 19:23:33.000036 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 19:23:33.000052 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 19:23:33.000068 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 19:23:33.000083 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 19:23:33.000099 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 19:23:33.000117 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 19:23:33.000132 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 19:23:33.000148 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 19:23:33.000163 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 19:23:33.000179 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 19:23:33.000194 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 19:23:33.000209 systemd[1]: Reached target slices.target - Slice Units. Jan 23 19:23:33.000224 systemd[1]: Reached target swap.target - Swaps. Jan 23 19:23:33.000241 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 19:23:33.000260 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 19:23:33.000276 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 19:23:33.000291 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 19:23:33.000306 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 19:23:33.000321 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 19:23:33.000337 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 19:23:33.000352 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 19:23:33.000367 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 19:23:33.000388 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 19:23:33.000407 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:23:33.000422 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 19:23:33.000437 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Jan 23 19:23:33.000452 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 19:23:33.000468 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 19:23:33.000484 systemd[1]: Reached target machines.target - Containers. Jan 23 19:23:33.000499 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 19:23:33.000514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 19:23:33.000533 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 19:23:33.000549 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 19:23:33.000566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 19:23:33.000582 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 19:23:33.000700 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 19:23:33.000719 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 19:23:33.000735 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 19:23:33.000751 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 19:23:33.000766 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 19:23:33.000949 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 19:23:33.000968 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 19:23:33.000983 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 19:23:33.001000 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 19:23:33.001015 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 19:23:33.001030 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 19:23:33.001045 kernel: fuse: init (API version 7.41) Jan 23 19:23:33.001060 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 19:23:33.001079 kernel: loop: module loaded Jan 23 19:23:33.001094 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 19:23:33.001109 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 19:23:33.001152 systemd-journald[1204]: Collecting audit messages is disabled. Jan 23 19:23:33.001182 systemd-journald[1204]: Journal started Jan 23 19:23:33.001213 systemd-journald[1204]: Runtime Journal (/run/log/journal/93ee8d8939054737ade346787d57c85b) is 6M, max 48.1M, 42.1M free. Jan 23 19:23:30.859706 systemd[1]: Queued start job for default target multi-user.target. Jan 23 19:23:30.897077 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 23 19:23:30.900287 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 19:23:30.901241 systemd[1]: systemd-journald.service: Consumed 4.436s CPU time. 
Jan 23 19:23:33.029298 kernel: ACPI: bus type drm_connector registered Jan 23 19:23:33.029366 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 19:23:33.087052 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 19:23:33.087129 systemd[1]: Stopped verity-setup.service. Jan 23 19:23:33.128082 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:23:33.154306 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 19:23:33.167733 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 19:23:33.187704 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 19:23:33.201703 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 19:23:33.214563 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 19:23:33.231065 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 19:23:33.245133 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 19:23:33.261135 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 19:23:33.284153 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 19:23:33.306709 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 19:23:33.309497 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 19:23:33.327462 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 19:23:33.328362 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 19:23:33.343294 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 19:23:33.344307 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 19:23:33.361362 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 19:23:33.362373 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 19:23:33.388287 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 19:23:33.389077 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 19:23:33.409404 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 19:23:33.413193 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 19:23:33.436511 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 19:23:33.466253 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 19:23:33.485733 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 19:23:33.512262 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 19:23:33.538559 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 19:23:33.580048 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 19:23:33.601992 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 19:23:33.635538 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 19:23:33.658347 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 19:23:33.658401 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 23 19:23:33.675219 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 19:23:33.711289 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 19:23:33.724576 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 19:23:33.764236 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 19:23:33.779986 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 19:23:33.806511 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 19:23:33.834125 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 19:23:33.850230 systemd-journald[1204]: Time spent on flushing to /var/log/journal/93ee8d8939054737ade346787d57c85b is 29.659ms for 1069 entries. Jan 23 19:23:33.850230 systemd-journald[1204]: System Journal (/var/log/journal/93ee8d8939054737ade346787d57c85b) is 8M, max 195.6M, 187.6M free. Jan 23 19:23:33.927067 systemd-journald[1204]: Received client request to flush runtime journal. Jan 23 19:23:33.886484 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 19:23:33.894710 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 19:23:33.941498 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 19:23:33.969391 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 19:23:34.003286 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 19:23:34.023452 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 19:23:34.054953 kernel: loop0: detected capacity change from 0 to 110984 Jan 23 19:23:34.056028 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 19:23:34.097565 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 19:23:34.138383 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 19:23:34.159166 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 19:23:34.188574 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:23:34.232004 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 19:23:34.254270 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Jan 23 19:23:34.255026 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Jan 23 19:23:34.266123 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 19:23:34.301178 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 19:23:34.369577 kernel: loop1: detected capacity change from 0 to 219144 Jan 23 19:23:34.397228 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 19:23:34.407579 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 19:23:34.526265 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 19:23:34.559949 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 23 19:23:34.611768 kernel: loop2: detected capacity change from 0 to 128560 Jan 23 19:23:34.661264 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 23 19:23:34.662039 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 23 19:23:34.680762 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 19:23:34.770254 kernel: loop3: detected capacity change from 0 to 110984 Jan 23 19:23:34.886728 kernel: loop4: detected capacity change from 0 to 219144 Jan 23 19:23:35.102098 kernel: loop5: detected capacity change from 0 to 128560 Jan 23 19:23:35.244342 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 19:23:35.267166 (sd-merge)[1265]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 23 19:23:35.268334 (sd-merge)[1265]: Merged extensions into '/usr'. Jan 23 19:23:35.273084 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 19:23:35.311146 systemd[1]: Reload requested from client PID 1240 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 19:23:35.311255 systemd[1]: Reloading... Jan 23 19:23:35.444721 systemd-udevd[1267]: Using default interface naming scheme 'v255'. Jan 23 19:23:35.517277 zram_generator::config[1291]: No configuration found. Jan 23 19:23:36.040535 ldconfig[1234]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 19:23:36.135470 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 19:23:36.143579 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 19:23:36.150046 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 19:23:36.151092 systemd[1]: Reloading finished in 838 ms. Jan 23 19:23:36.171164 kernel: ACPI: button: Power Button [PWRF] Jan 23 19:23:36.209313 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 19:23:36.234392 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 19:23:36.259442 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 19:23:36.345614 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 19:23:36.421177 systemd[1]: Starting ensure-sysext.service... Jan 23 19:23:36.439517 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 19:23:36.478273 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 19:23:36.511270 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 19:23:36.570177 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 23 19:23:36.578074 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 19:23:36.596060 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 19:23:36.619115 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 19:23:36.641234 systemd[1]: Reload requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)... Jan 23 19:23:36.641352 systemd[1]: Reloading... Jan 23 19:23:36.685740 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Jan 23 19:23:36.688168 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 19:23:36.689070 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 19:23:36.690423 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 19:23:36.692289 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 19:23:36.692768 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Jan 23 19:23:36.693067 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Jan 23 19:23:36.707170 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 19:23:36.707185 systemd-tmpfiles[1385]: Skipping /boot Jan 23 19:23:36.752616 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 19:23:36.752752 systemd-tmpfiles[1385]: Skipping /boot Jan 23 19:23:36.951290 zram_generator::config[1413]: No configuration found. Jan 23 19:23:37.644474 systemd[1]: Reloading finished in 999 ms. Jan 23 19:23:37.989470 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 19:23:38.052322 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:23:38.062244 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 19:23:38.085258 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 19:23:38.106263 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 19:23:38.256105 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 19:23:38.274524 kernel: kvm_amd: TSC scaling supported Jan 23 19:23:38.274626 kernel: kvm_amd: Nested Virtualization enabled Jan 23 19:23:38.274767 kernel: kvm_amd: Nested Paging enabled Jan 23 19:23:38.293066 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 23 19:23:38.293148 kernel: kvm_amd: PMU virtualization is disabled Jan 23 19:23:38.326519 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 19:23:38.349172 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 19:23:38.367159 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 19:23:38.367327 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 19:23:38.376069 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 19:23:38.442634 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 19:23:38.465292 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 19:23:38.493509 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 19:23:38.538199 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 23 19:23:38.557048 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:23:38.576301 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 19:23:38.578267 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 19:23:38.600638 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 19:23:38.607324 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 19:23:38.616442 augenrules[1481]: No rules Jan 23 19:23:38.636407 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 19:23:38.639344 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 19:23:38.662624 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 19:23:38.663326 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 19:23:38.691009 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 19:23:38.718777 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 19:23:38.781050 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 19:23:38.826438 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:23:38.842165 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 19:23:38.851037 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 19:23:38.857076 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 19:23:38.943493 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 19:23:38.961287 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 19:23:38.988534 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 19:23:39.001243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 19:23:39.001552 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 19:23:39.032451 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 19:23:39.046032 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 19:23:39.046206 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:23:39.059590 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 19:23:39.092778 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 19:23:39.127762 augenrules[1495]: /sbin/augenrules: No change Jan 23 19:23:39.106600 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 19:23:39.107219 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 19:23:39.124400 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 23 19:23:39.124966 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 19:23:39.164628 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 19:23:39.168283 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 19:23:39.183030 augenrules[1526]: No rules Jan 23 19:23:39.222498 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 19:23:39.223390 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 19:23:39.256305 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 19:23:39.256610 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 19:23:39.274632 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 19:23:39.316389 systemd[1]: Finished ensure-sysext.service. Jan 23 19:23:39.403292 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 19:23:39.403394 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 19:23:39.418560 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 19:23:39.719116 systemd-networkd[1384]: lo: Link UP Jan 23 19:23:39.719128 systemd-networkd[1384]: lo: Gained carrier Jan 23 19:23:39.723178 systemd-networkd[1384]: Enumeration completed Jan 23 19:23:39.723382 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 19:23:39.735371 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 19:23:39.735385 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 19:23:39.740037 systemd-networkd[1384]: eth0: Link UP Jan 23 19:23:39.740256 systemd-networkd[1384]: eth0: Gained carrier Jan 23 19:23:39.740279 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 19:23:39.748236 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 19:23:39.767622 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 19:23:39.782038 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 19:23:39.792376 systemd-resolved[1473]: Positive Trust Anchors: Jan 23 19:23:39.793620 systemd-resolved[1473]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 19:23:39.794248 systemd-resolved[1473]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 19:23:39.803565 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 19:23:39.804229 systemd-resolved[1473]: Defaulting to hostname 'linux'. Jan 23 19:23:39.818593 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
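systemd-networkd brought up lo and matched eth0 against the catch-all zz-default.network (hence the "potentially unpredictable interface name" warning), while systemd-resolved loaded its DNSSEC positive trust anchor, the root zone's DS record, plus negative anchors for private and special-use zones. Runtime state for both is a command away (standard tools, not taken from this log):

    networkctl status eth0        # carrier, addresses, and which .network file matched
    resolvectl status             # per-link DNS servers and DNSSEC mode
    resolvectl query example.com  # exercise the full resolution path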
Jan 23 19:23:39.820255 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 19:23:39.822399 systemd-timesyncd[1539]: Network configuration changed, trying to establish connection. Jan 23 19:23:39.832960 systemd-timesyncd[1539]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 23 19:23:39.833039 systemd-timesyncd[1539]: Initial clock synchronization to Fri 2026-01-23 19:23:40.120176 UTC. Jan 23 19:23:39.833558 systemd[1]: Reached target network.target - Network. Jan 23 19:23:39.843234 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 19:23:39.856318 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 19:23:39.867463 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 19:23:39.880760 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 19:23:39.896254 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 19:23:39.909531 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 19:23:39.925277 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 19:23:39.939334 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 19:23:39.949377 kernel: EDAC MC: Ver: 3.0.0 Jan 23 19:23:39.960551 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 19:23:39.960977 systemd[1]: Reached target paths.target - Path Units. Jan 23 19:23:39.972145 systemd[1]: Reached target timers.target - Timer Units. Jan 23 19:23:39.992518 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 19:23:40.015775 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 19:23:40.037806 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 19:23:40.054757 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 19:23:40.073440 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 19:23:40.126517 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 19:23:40.149784 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 19:23:40.180590 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 19:23:40.206512 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 19:23:40.232588 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 19:23:40.257386 systemd[1]: Reached target basic.target - Basic System. Jan 23 19:23:40.272731 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 19:23:40.273228 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 19:23:40.283576 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 19:23:40.324300 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 19:23:40.353431 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 19:23:40.386226 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
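eth0 acquired 10.0.0.117/16 over DHCPv4 and systemd-timesyncd immediately reached the gateway's NTP service, stepping the clock forward by roughly 0.3 s to 19:23:40.12 UTC. Two quick checks for the time side (assuming a reasonably recent systemd):

    timedatectl timesync-status   # server, stratum, offset, jitter
    timedatectl show-timesync     # the same data as key=value pairs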
Jan 23 19:23:40.413508 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 19:23:40.432756 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 19:23:40.437537 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 19:23:40.472154 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 19:23:40.494721 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 19:23:40.502000 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Refreshing passwd entry cache Jan 23 19:23:40.501615 oslogin_cache_refresh[1553]: Refreshing passwd entry cache Jan 23 19:23:40.516441 jq[1551]: false Jan 23 19:23:40.522442 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 19:23:40.550162 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Failure getting users, quitting Jan 23 19:23:40.550162 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 19:23:40.550089 oslogin_cache_refresh[1553]: Failure getting users, quitting Jan 23 19:23:40.550332 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Refreshing group entry cache Jan 23 19:23:40.550121 oslogin_cache_refresh[1553]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 19:23:40.550198 oslogin_cache_refresh[1553]: Refreshing group entry cache Jan 23 19:23:40.551217 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 19:23:40.579645 oslogin_cache_refresh[1553]: Failure getting groups, quitting Jan 23 19:23:40.570417 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 19:23:40.586395 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Failure getting groups, quitting Jan 23 19:23:40.586395 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 19:23:40.579665 oslogin_cache_refresh[1553]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 19:23:40.583398 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 19:23:40.584554 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 19:23:40.587743 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 19:23:40.595283 extend-filesystems[1552]: Found /dev/vda6 Jan 23 19:23:40.604126 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 19:23:40.647777 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 19:23:40.655165 extend-filesystems[1552]: Found /dev/vda9 Jan 23 19:23:40.676204 extend-filesystems[1552]: Checking size of /dev/vda9 Jan 23 19:23:40.676319 jq[1563]: true Jan 23 19:23:40.699384 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 19:23:40.700304 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 19:23:40.701124 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
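The oslogin_cache_refresh failures are Google's OS Login NSS cache refresher finding no users or groups to fetch; on this QEMU guest there is no GCE metadata service to answer, so an empty cache is written and the stale .bak removed (that explanation is an inference, the log itself does not say why the fetch failed). Local account resolution is unaffected:

    grep '^passwd' /etc/nsswitch.conf   # which NSS sources are consulted, in order
    getent passwd core                  # the local user that logs in later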
Jan 23 19:23:40.702183 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 19:23:40.719230 extend-filesystems[1552]: Resized partition /dev/vda9 Jan 23 19:23:40.746685 extend-filesystems[1576]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 19:23:40.782659 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 23 19:23:40.728189 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 19:23:40.817391 update_engine[1562]: I20260123 19:23:40.747445 1562 main.cc:92] Flatcar Update Engine starting Jan 23 19:23:40.803481 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 19:23:40.826063 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 19:23:40.826680 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 19:23:40.878796 (ntainerd)[1585]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 19:23:40.888669 jq[1584]: true Jan 23 19:23:40.929537 tar[1582]: linux-amd64/LICENSE Jan 23 19:23:40.998742 tar[1582]: linux-amd64/helm Jan 23 19:23:40.959315 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 19:23:40.998451 systemd-logind[1561]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 19:23:40.998482 systemd-logind[1561]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 19:23:41.000129 systemd-logind[1561]: New seat seat0. Jan 23 19:23:41.003206 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 19:23:41.053196 dbus-daemon[1549]: [system] SELinux support is enabled Jan 23 19:23:41.054204 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 19:23:41.066146 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 23 19:23:41.156024 update_engine[1562]: I20260123 19:23:41.074414 1562 update_check_scheduler.cc:74] Next update check in 5m42s Jan 23 19:23:41.092099 dbus-daemon[1549]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 19:23:41.078564 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 19:23:41.078599 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 19:23:41.105158 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 19:23:41.105193 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 19:23:41.119714 systemd[1]: Started update-engine.service - Update Engine. Jan 23 19:23:41.134603 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 19:23:41.163522 systemd-networkd[1384]: eth0: Gained IPv6LL Jan 23 19:23:41.166269 extend-filesystems[1576]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 19:23:41.166269 extend-filesystems[1576]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 23 19:23:41.166269 extend-filesystems[1576]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
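Two things complete here: update_engine schedules its first update check ("Next update check in 5m42s"), and extend-filesystems grows the root ext4 online from 553472 to 1864699 4 KiB blocks using resize2fs, per the resize2fs 1.47.3 banner above. A sketch of poking both by hand, assuming the stock Flatcar clients:

    update_engine_client -status   # current update state and version
    locksmithctl status            # the reboot strategy locksmithd announces below
    resize2fs /dev/vda9            # manual equivalent of the online grow (no-op once sized)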
Jan 23 19:23:41.237109 extend-filesystems[1552]: Resized filesystem in /dev/vda9 Jan 23 19:23:41.237207 bash[1612]: Updated "/home/core/.ssh/authorized_keys" Jan 23 19:23:41.187272 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 19:23:41.200374 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 19:23:41.201475 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 19:23:41.271549 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 19:23:41.311470 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 19:23:41.330696 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 23 19:23:41.349723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:23:41.381655 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 19:23:41.398361 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 19:23:41.443748 locksmithd[1613]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 19:23:41.539156 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 23 19:23:41.539505 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 23 19:23:41.553088 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 19:23:41.587781 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 19:23:41.605208 containerd[1585]: time="2026-01-23T19:23:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 19:23:41.609073 containerd[1585]: time="2026-01-23T19:23:41.608105213Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 19:23:41.633169 containerd[1585]: time="2026-01-23T19:23:41.633115201Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.334µs" Jan 23 19:23:41.633328 containerd[1585]: time="2026-01-23T19:23:41.633305220Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 19:23:41.633417 containerd[1585]: time="2026-01-23T19:23:41.633396695Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 19:23:41.634031 containerd[1585]: time="2026-01-23T19:23:41.634007603Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 19:23:41.634120 containerd[1585]: time="2026-01-23T19:23:41.634102479Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 19:23:41.634211 containerd[1585]: time="2026-01-23T19:23:41.634192528Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 19:23:41.634361 containerd[1585]: time="2026-01-23T19:23:41.634339321Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 19:23:41.634433 containerd[1585]: time="2026-01-23T19:23:41.634416670Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs 
type=io.containerd.snapshotter.v1 Jan 23 19:23:41.635304 containerd[1585]: time="2026-01-23T19:23:41.635279475Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 19:23:41.635379 containerd[1585]: time="2026-01-23T19:23:41.635362466Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 19:23:41.635443 containerd[1585]: time="2026-01-23T19:23:41.635424501Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 19:23:41.635527 containerd[1585]: time="2026-01-23T19:23:41.635508949Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 19:23:41.635722 containerd[1585]: time="2026-01-23T19:23:41.635701479Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 19:23:41.636375 containerd[1585]: time="2026-01-23T19:23:41.636352979Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 19:23:41.636482 containerd[1585]: time="2026-01-23T19:23:41.636461195Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 19:23:41.636566 containerd[1585]: time="2026-01-23T19:23:41.636546067Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 19:23:41.637272 containerd[1585]: time="2026-01-23T19:23:41.637108694Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 19:23:41.637587 containerd[1585]: time="2026-01-23T19:23:41.637565244Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 19:23:41.637772 containerd[1585]: time="2026-01-23T19:23:41.637745209Z" level=info msg="metadata content store policy set" policy=shared Jan 23 19:23:41.659568 sshd_keygen[1583]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 19:23:41.668085 containerd[1585]: time="2026-01-23T19:23:41.668033924Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 19:23:41.669313 containerd[1585]: time="2026-01-23T19:23:41.669287020Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 19:23:41.672343 containerd[1585]: time="2026-01-23T19:23:41.669394751Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 19:23:41.672343 containerd[1585]: time="2026-01-23T19:23:41.669419862Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 19:23:41.672343 containerd[1585]: time="2026-01-23T19:23:41.669450853Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 19:23:41.672343 containerd[1585]: time="2026-01-23T19:23:41.669467532Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 19:23:41.672343 containerd[1585]: 
time="2026-01-23T19:23:41.669485854Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 19:23:41.672343 containerd[1585]: time="2026-01-23T19:23:41.669500776Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 19:23:41.672343 containerd[1585]: time="2026-01-23T19:23:41.669521960Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 19:23:41.672343 containerd[1585]: time="2026-01-23T19:23:41.669536386Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 19:23:41.672343 containerd[1585]: time="2026-01-23T19:23:41.669547847Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 19:23:41.672343 containerd[1585]: time="2026-01-23T19:23:41.669564019Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 19:23:41.679364 containerd[1585]: time="2026-01-23T19:23:41.679263316Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 19:23:41.679364 containerd[1585]: time="2026-01-23T19:23:41.679336852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 19:23:41.679364 containerd[1585]: time="2026-01-23T19:23:41.679361715Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 19:23:41.679615 containerd[1585]: time="2026-01-23T19:23:41.679377432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 19:23:41.679615 containerd[1585]: time="2026-01-23T19:23:41.679393202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 19:23:41.679615 containerd[1585]: time="2026-01-23T19:23:41.679411783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 19:23:41.679615 containerd[1585]: time="2026-01-23T19:23:41.679429681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 19:23:41.679615 containerd[1585]: time="2026-01-23T19:23:41.679442723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 19:23:41.679615 containerd[1585]: time="2026-01-23T19:23:41.679458543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 19:23:41.679615 containerd[1585]: time="2026-01-23T19:23:41.679474396Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 19:23:41.679615 containerd[1585]: time="2026-01-23T19:23:41.679489194Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 19:23:41.679615 containerd[1585]: time="2026-01-23T19:23:41.679554679Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 19:23:41.679615 containerd[1585]: time="2026-01-23T19:23:41.679578653Z" level=info msg="Start snapshots syncer" Jan 23 19:23:41.679615 containerd[1585]: time="2026-01-23T19:23:41.679617200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 19:23:41.684532 containerd[1585]: time="2026-01-23T19:23:41.683161768Z" 
level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 19:23:41.684532 containerd[1585]: time="2026-01-23T19:23:41.683366875Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 19:23:41.685031 containerd[1585]: time="2026-01-23T19:23:41.683443790Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 19:23:41.685031 containerd[1585]: time="2026-01-23T19:23:41.683634997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 19:23:41.685031 containerd[1585]: time="2026-01-23T19:23:41.683672416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 19:23:41.685031 containerd[1585]: time="2026-01-23T19:23:41.683693817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 19:23:41.685031 containerd[1585]: time="2026-01-23T19:23:41.683711737Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 19:23:41.685031 containerd[1585]: time="2026-01-23T19:23:41.683731247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 19:23:41.685031 containerd[1585]: time="2026-01-23T19:23:41.683751635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 19:23:41.685031 containerd[1585]: time="2026-01-23T19:23:41.683769823Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 19:23:41.688638 containerd[1585]: time="2026-01-23T19:23:41.683806167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 
19:23:41.688638 containerd[1585]: time="2026-01-23T19:23:41.687204377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 19:23:41.688638 containerd[1585]: time="2026-01-23T19:23:41.687236206Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 19:23:41.688638 containerd[1585]: time="2026-01-23T19:23:41.687300792Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 19:23:41.688638 containerd[1585]: time="2026-01-23T19:23:41.687432766Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 19:23:41.688638 containerd[1585]: time="2026-01-23T19:23:41.687450737Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 19:23:41.688638 containerd[1585]: time="2026-01-23T19:23:41.687469069Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 19:23:41.688638 containerd[1585]: time="2026-01-23T19:23:41.687486534Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 19:23:41.688638 containerd[1585]: time="2026-01-23T19:23:41.687502861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 19:23:41.688638 containerd[1585]: time="2026-01-23T19:23:41.687534483Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 19:23:41.688638 containerd[1585]: time="2026-01-23T19:23:41.687568977Z" level=info msg="runtime interface created" Jan 23 19:23:41.688638 containerd[1585]: time="2026-01-23T19:23:41.687579001Z" level=info msg="created NRI interface" Jan 23 19:23:41.688638 containerd[1585]: time="2026-01-23T19:23:41.687591474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 19:23:41.688638 containerd[1585]: time="2026-01-23T19:23:41.687621938Z" level=info msg="Connect containerd service" Jan 23 19:23:41.688638 containerd[1585]: time="2026-01-23T19:23:41.687657641Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 19:23:41.690650 containerd[1585]: time="2026-01-23T19:23:41.690521229Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 19:23:41.766518 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 19:23:41.800607 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 19:23:41.818278 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:39658.service - OpenSSH per-connection server daemon (10.0.0.1:39658). Jan 23 19:23:41.904103 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 19:23:41.904669 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 19:23:41.929163 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 19:23:41.968375 tar[1582]: linux-amd64/README.md Jan 23 19:23:42.044657 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
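containerd's CRI plugin initialized without a pod network, hence the "no network config found in /etc/cni/net.d" error; that clears once a CNI add-on installs a configuration. For reference, a minimal hand-rolled bridge network of the kind the plugin would load looks like this (name and subnet are illustrative, not from this host):

    mkdir -p /etc/cni/net.d
    cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "ranges": [ [ { "subnet": "10.88.0.0/16" } ] ] }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF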
Jan 23 19:23:42.065283 containerd[1585]: time="2026-01-23T19:23:42.065059645Z" level=info msg="Start subscribing containerd event" Jan 23 19:23:42.065283 containerd[1585]: time="2026-01-23T19:23:42.065217326Z" level=info msg="Start recovering state" Jan 23 19:23:42.065440 containerd[1585]: time="2026-01-23T19:23:42.065340736Z" level=info msg="Start event monitor" Jan 23 19:23:42.065440 containerd[1585]: time="2026-01-23T19:23:42.065358793Z" level=info msg="Start cni network conf syncer for default" Jan 23 19:23:42.065440 containerd[1585]: time="2026-01-23T19:23:42.065367677Z" level=info msg="Start streaming server" Jan 23 19:23:42.065440 containerd[1585]: time="2026-01-23T19:23:42.065379968Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 19:23:42.065440 containerd[1585]: time="2026-01-23T19:23:42.065389779Z" level=info msg="runtime interface starting up..." Jan 23 19:23:42.065440 containerd[1585]: time="2026-01-23T19:23:42.065397222Z" level=info msg="starting plugins..." Jan 23 19:23:42.065440 containerd[1585]: time="2026-01-23T19:23:42.065415577Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 19:23:42.071386 containerd[1585]: time="2026-01-23T19:23:42.067012744Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 19:23:42.071386 containerd[1585]: time="2026-01-23T19:23:42.067087060Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 19:23:42.071386 containerd[1585]: time="2026-01-23T19:23:42.067160181Z" level=info msg="containerd successfully booted in 0.465839s" Jan 23 19:23:42.075358 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 19:23:42.093122 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 19:23:42.117592 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 19:23:42.144380 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 19:23:42.165272 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 19:23:42.369142 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 39658 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:23:42.377278 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:23:42.401319 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 19:23:42.416574 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 19:23:42.456031 systemd-logind[1561]: New session 1 of user core. Jan 23 19:23:42.504166 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 19:23:42.531021 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 19:23:42.571142 (systemd)[1687]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 19:23:42.591636 systemd-logind[1561]: New session c1 of user core. Jan 23 19:23:42.974101 systemd[1687]: Queued start job for default target default.target. Jan 23 19:23:42.986178 systemd[1687]: Created slice app.slice - User Application Slice. Jan 23 19:23:42.986216 systemd[1687]: Reached target paths.target - Paths. Jan 23 19:23:42.988163 systemd[1687]: Reached target timers.target - Timers. Jan 23 19:23:42.995015 systemd[1687]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 19:23:43.070660 systemd[1687]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
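The first SSH login pulls in the whole per-user stack: a user slice, user-runtime-dir@500 for /run/user/500, and a dedicated user manager (user@500.service) that reports its own startup time separately from PID 1. logind exposes all of it:

    loginctl list-sessions            # sessions, users, seats, TTYs
    loginctl user-status 500          # the user's manager and session tree
    systemctl status user@500.service # the per-user systemd instance itself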
Jan 23 19:23:43.073748 systemd[1687]: Reached target sockets.target - Sockets. Jan 23 19:23:43.075127 systemd[1687]: Reached target basic.target - Basic System. Jan 23 19:23:43.076346 systemd[1687]: Reached target default.target - Main User Target. Jan 23 19:23:43.076366 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 19:23:43.076397 systemd[1687]: Startup finished in 433ms. Jan 23 19:23:43.108635 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 19:23:43.245108 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:39722.service - OpenSSH per-connection server daemon (10.0.0.1:39722). Jan 23 19:23:43.388482 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 39722 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:23:43.398594 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:23:43.424371 systemd-logind[1561]: New session 2 of user core. Jan 23 19:23:43.431312 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 19:23:43.547062 sshd[1701]: Connection closed by 10.0.0.1 port 39722 Jan 23 19:23:43.549708 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Jan 23 19:23:43.561471 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:39722.service: Deactivated successfully. Jan 23 19:23:43.566637 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 19:23:43.574298 systemd-logind[1561]: Session 2 logged out. Waiting for processes to exit. Jan 23 19:23:43.584593 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:39734.service - OpenSSH per-connection server daemon (10.0.0.1:39734). Jan 23 19:23:43.611422 systemd-logind[1561]: Removed session 2. Jan 23 19:23:43.738167 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 39734 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:23:43.742350 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:23:43.761696 systemd-logind[1561]: New session 3 of user core. Jan 23 19:23:43.778305 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 19:23:43.897525 sshd[1710]: Connection closed by 10.0.0.1 port 39734 Jan 23 19:23:43.898514 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Jan 23 19:23:43.911370 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:39734.service: Deactivated successfully. Jan 23 19:23:43.915783 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 19:23:43.920262 systemd-logind[1561]: Session 3 logged out. Waiting for processes to exit. Jan 23 19:23:43.925708 systemd-logind[1561]: Removed session 3. Jan 23 19:23:44.368060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:23:44.383750 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 19:23:44.397780 systemd[1]: Startup finished in 8.454s (kernel) + 19.284s (initrd) + 15.922s (userspace) = 43.661s. 
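The boot summary sums the kernel, initrd, and userspace phases; the total is computed from microsecond-precision raw values, so it can differ from the sum of the rounded terms by a millisecond, as it does here (43.660 vs 43.661). systemd-analyze breaks the same number down further:

    systemd-analyze                 # the same kernel/initrd/userspace split
    systemd-analyze blame | head    # slowest units this boot
    systemd-analyze critical-chain  # the dependency chain that gated multi-user.target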
Jan 23 19:23:44.403286 (kubelet)[1719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:23:45.840958 kubelet[1719]: E0123 19:23:45.840656 1719 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:23:45.857712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:23:45.858339 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:23:45.860043 systemd[1]: kubelet.service: Consumed 1.527s CPU time, 259.5M memory peak. Jan 23 19:23:54.064619 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:55414.service - OpenSSH per-connection server daemon (10.0.0.1:55414). Jan 23 19:23:54.235886 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 55414 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:23:54.239208 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:23:54.276751 systemd-logind[1561]: New session 4 of user core. Jan 23 19:23:54.309975 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 19:23:54.419527 sshd[1737]: Connection closed by 10.0.0.1 port 55414 Jan 23 19:23:54.423673 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Jan 23 19:23:54.447214 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:34832.service - OpenSSH per-connection server daemon (10.0.0.1:34832). Jan 23 19:23:54.448178 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:55414.service: Deactivated successfully. Jan 23 19:23:54.452691 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 19:23:54.457714 systemd-logind[1561]: Session 4 logged out. Waiting for processes to exit. Jan 23 19:23:54.469551 systemd-logind[1561]: Removed session 4. Jan 23 19:23:54.584406 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 34832 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:23:54.588322 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:23:54.613237 systemd-logind[1561]: New session 5 of user core. Jan 23 19:23:54.628578 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 19:23:54.738522 sshd[1746]: Connection closed by 10.0.0.1 port 34832 Jan 23 19:23:54.745627 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Jan 23 19:23:54.764465 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:34832.service: Deactivated successfully. Jan 23 19:23:54.770689 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 19:23:54.783004 systemd-logind[1561]: Session 5 logged out. Waiting for processes to exit. Jan 23 19:23:54.790438 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:34840.service - OpenSSH per-connection server daemon (10.0.0.1:34840). Jan 23 19:23:54.802096 systemd-logind[1561]: Removed session 5. Jan 23 19:23:54.952943 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 34840 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:23:54.959669 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:23:55.004210 systemd-logind[1561]: New session 6 of user core. 
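kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; kubeadm writes that file during init or join, so the crash-and-restart cycle seen through the rest of this log is expected on a node that has not been bootstrapped. The file kubeadm eventually drops is a KubeletConfiguration document, roughly of this shape (a sketch, not this node's real config):

    cat /var/lib/kubelet/config.yaml
    # apiVersion: kubelet.config.k8s.io/v1beta1
    # kind: KubeletConfiguration
    # cgroupDriver: systemd
    # ...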
Jan 23 19:23:55.041389 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 19:23:55.148309 sshd[1755]: Connection closed by 10.0.0.1 port 34840 Jan 23 19:23:55.156546 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Jan 23 19:23:55.179522 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:34842.service - OpenSSH per-connection server daemon (10.0.0.1:34842). Jan 23 19:23:55.182234 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:34840.service: Deactivated successfully. Jan 23 19:23:55.192576 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 19:23:55.226473 systemd-logind[1561]: Session 6 logged out. Waiting for processes to exit. Jan 23 19:23:55.243007 systemd-logind[1561]: Removed session 6. Jan 23 19:23:55.322564 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 34842 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:23:55.325698 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:23:55.358550 systemd-logind[1561]: New session 7 of user core. Jan 23 19:23:55.368774 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 19:23:55.517376 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 19:23:55.518177 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:23:55.576265 sudo[1765]: pam_unix(sudo:session): session closed for user root Jan 23 19:23:55.598183 sshd[1764]: Connection closed by 10.0.0.1 port 34842 Jan 23 19:23:55.601050 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Jan 23 19:23:55.621307 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:34842.service: Deactivated successfully. Jan 23 19:23:55.626324 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 19:23:55.632388 systemd-logind[1561]: Session 7 logged out. Waiting for processes to exit. Jan 23 19:23:55.634687 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:34848.service - OpenSSH per-connection server daemon (10.0.0.1:34848). Jan 23 19:23:55.643453 systemd-logind[1561]: Removed session 7. Jan 23 19:23:55.829253 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 34848 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:23:55.832437 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:23:55.865196 systemd-logind[1561]: New session 8 of user core. Jan 23 19:23:55.867319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 19:23:55.897338 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 19:23:55.908590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:23:56.007705 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 19:23:56.008680 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:23:56.053499 sudo[1779]: pam_unix(sudo:session): session closed for user root Jan 23 19:23:56.079255 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 19:23:56.080321 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:23:56.118713 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
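Session 8 shows provisioning steps running under sudo: enable SELinux enforcement, remove the default audit rule fragments, and restart audit-rules.service, each invocation logged with its exact command line. The same trail can be recovered from the journal afterwards:

    journalctl -b _COMM=sudo --no-pager    # every sudo invocation this boot
    journalctl -b -u audit-rules.service   # the rule loader's restart history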
Jan 23 19:23:56.302080 augenrules[1803]: No rules Jan 23 19:23:56.302251 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 19:23:56.303051 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 19:23:56.305353 sudo[1778]: pam_unix(sudo:session): session closed for user root Jan 23 19:23:56.311244 sshd[1775]: Connection closed by 10.0.0.1 port 34848 Jan 23 19:23:56.310523 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Jan 23 19:23:56.341353 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:34848.service: Deactivated successfully. Jan 23 19:23:56.345610 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 19:23:56.357710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:23:56.361260 systemd-logind[1561]: Session 8 logged out. Waiting for processes to exit. Jan 23 19:23:56.370487 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:34862.service - OpenSSH per-connection server daemon (10.0.0.1:34862). Jan 23 19:23:56.377406 systemd-logind[1561]: Removed session 8. Jan 23 19:23:56.400349 (kubelet)[1812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:23:56.519176 sshd[1816]: Accepted publickey for core from 10.0.0.1 port 34862 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:23:56.525358 sshd-session[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:23:56.561546 systemd-logind[1561]: New session 9 of user core. Jan 23 19:23:56.571554 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 19:23:56.702060 kubelet[1812]: E0123 19:23:56.701740 1812 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:23:56.707016 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 19:23:56.707651 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:23:56.742392 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:23:56.743116 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:23:56.744173 systemd[1]: kubelet.service: Consumed 504ms CPU time, 110.9M memory peak. Jan 23 19:23:58.138571 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 19:23:58.164403 (dockerd)[1849]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 19:23:59.440471 dockerd[1849]: time="2026-01-23T19:23:59.436634827Z" level=info msg="Starting up" Jan 23 19:23:59.456496 dockerd[1849]: time="2026-01-23T19:23:59.454546073Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 19:23:59.626739 dockerd[1849]: time="2026-01-23T19:23:59.625413722Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 19:23:59.785344 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3353499406-merged.mount: Deactivated successfully. Jan 23 19:23:59.841635 systemd[1]: var-lib-docker-metacopy\x2dcheck387023553-merged.mount: Deactivated successfully. 
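Before choosing a storage driver, dockerd mounts short-lived probe directories, the check-overlayfs-support and metacopy-check mounts systemd just cleaned up, to test what the backing ext4 supports. Once the daemon finishes starting (it does below), the outcome is queryable:

    docker info --format '{{.Driver}}'            # expected: overlay2
    docker version --format '{{.Server.Version}}' # expected: 28.0.4 per the log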
Jan 23 19:23:59.977039 dockerd[1849]: time="2026-01-23T19:23:59.976459685Z" level=info msg="Loading containers: start." Jan 23 19:24:00.095484 kernel: Initializing XFRM netlink socket Jan 23 19:24:03.051617 systemd-networkd[1384]: docker0: Link UP Jan 23 19:24:03.088269 dockerd[1849]: time="2026-01-23T19:24:03.087604747Z" level=info msg="Loading containers: done." Jan 23 19:24:03.188661 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1382868224-merged.mount: Deactivated successfully. Jan 23 19:24:03.197079 dockerd[1849]: time="2026-01-23T19:24:03.195503849Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 19:24:03.197079 dockerd[1849]: time="2026-01-23T19:24:03.195716941Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 19:24:03.197079 dockerd[1849]: time="2026-01-23T19:24:03.196130921Z" level=info msg="Initializing buildkit" Jan 23 19:24:03.535726 dockerd[1849]: time="2026-01-23T19:24:03.532471158Z" level=info msg="Completed buildkit initialization" Jan 23 19:24:03.570155 dockerd[1849]: time="2026-01-23T19:24:03.569339172Z" level=info msg="Daemon has completed initialization" Jan 23 19:24:03.570155 dockerd[1849]: time="2026-01-23T19:24:03.569534410Z" level=info msg="API listen on /run/docker.sock" Jan 23 19:24:03.571554 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 19:24:06.791742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 19:24:06.800098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:24:07.233593 containerd[1585]: time="2026-01-23T19:24:07.231034738Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 23 19:24:07.347257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:24:07.352268 kernel: hrtimer: interrupt took 6185161 ns Jan 23 19:24:07.384623 (kubelet)[2076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:24:07.576945 kubelet[2076]: E0123 19:24:07.576052 2076 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:24:07.587653 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:24:07.588235 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:24:07.591410 systemd[1]: kubelet.service: Consumed 461ms CPU time, 110.7M memory peak. Jan 23 19:24:08.535142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount777980239.mount: Deactivated successfully. 
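The PullImage line above comes from containerd's CRI plugin, so the image lands in the k8s.io namespace and is visible both to containerd's own CLI and to CRI tooling (generic commands, nothing Flatcar-specific):

    ctr -n k8s.io images ls                              # images in the Kubernetes namespace
    crictl pull registry.k8s.io/kube-apiserver:v1.34.3   # the same pull, driven by hand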
Jan 23 19:24:17.420593 containerd[1585]: time="2026-01-23T19:24:17.419739464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:17.423514 containerd[1585]: time="2026-01-23T19:24:17.423475335Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Jan 23 19:24:17.429148 containerd[1585]: time="2026-01-23T19:24:17.429067707Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:17.435157 containerd[1585]: time="2026-01-23T19:24:17.435067281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:17.438235 containerd[1585]: time="2026-01-23T19:24:17.436987215Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 10.205908186s" Jan 23 19:24:17.438235 containerd[1585]: time="2026-01-23T19:24:17.437036752Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 23 19:24:17.440025 containerd[1585]: time="2026-01-23T19:24:17.439761085Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 23 19:24:17.791352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 19:24:17.797935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:24:18.198287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:24:18.213533 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:24:18.338202 kubelet[2152]: E0123 19:24:18.337956 2152 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:24:18.343235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:24:18.343489 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:24:18.344284 systemd[1]: kubelet.service: Consumed 360ms CPU time, 110.5M memory peak. 
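A completed pull logs the repo tag, the immutable repo digest, the size, and the wall-clock duration, 10.2 s for the apiserver image above; the digest is the value to pin when reproducibility matters. kubelet, meanwhile, is still crash-looping on the missing config, and systemd keeps count:

    crictl images --digests              # confirm what was pulled, by digest
    systemctl show kubelet -p NRestarts  # restart counter tracked by systemd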
Jan 23 19:24:21.266516 containerd[1585]: time="2026-01-23T19:24:21.266196720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:21.271914 containerd[1585]: time="2026-01-23T19:24:21.271728417Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Jan 23 19:24:21.275686 containerd[1585]: time="2026-01-23T19:24:21.275564995Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:21.287772 containerd[1585]: time="2026-01-23T19:24:21.287255683Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 3.846746643s" Jan 23 19:24:21.287772 containerd[1585]: time="2026-01-23T19:24:21.287301417Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 23 19:24:21.287772 containerd[1585]: time="2026-01-23T19:24:21.287332306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:21.289917 containerd[1585]: time="2026-01-23T19:24:21.289277143Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 23 19:24:23.372756 containerd[1585]: time="2026-01-23T19:24:23.372391271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:23.374688 containerd[1585]: time="2026-01-23T19:24:23.374655264Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 23 19:24:23.377652 containerd[1585]: time="2026-01-23T19:24:23.377517118Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:23.384658 containerd[1585]: time="2026-01-23T19:24:23.384504976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:23.386302 containerd[1585]: time="2026-01-23T19:24:23.385759433Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 2.096448364s" Jan 23 19:24:23.386302 containerd[1585]: time="2026-01-23T19:24:23.386034875Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 23 19:24:23.388475 
containerd[1585]: time="2026-01-23T19:24:23.387748949Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 23 19:24:25.244400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount222122439.mount: Deactivated successfully. Jan 23 19:24:26.117079 containerd[1585]: time="2026-01-23T19:24:26.116998098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:26.124576 containerd[1585]: time="2026-01-23T19:24:26.121020421Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 23 19:24:26.127713 containerd[1585]: time="2026-01-23T19:24:26.127279644Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:26.137724 containerd[1585]: time="2026-01-23T19:24:26.137361823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:26.142614 containerd[1585]: time="2026-01-23T19:24:26.141483063Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 2.753368353s" Jan 23 19:24:26.142614 containerd[1585]: time="2026-01-23T19:24:26.142183410Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 23 19:24:26.147365 containerd[1585]: time="2026-01-23T19:24:26.145730324Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 23 19:24:26.719561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782645613.mount: Deactivated successfully. Jan 23 19:24:26.762290 update_engine[1562]: I20260123 19:24:26.762058 1562 update_attempter.cc:509] Updating boot flags... Jan 23 19:24:28.542280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 19:24:28.547007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:24:29.303406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:24:29.333437 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:24:29.796422 kubelet[2250]: E0123 19:24:29.796318 2250 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:24:29.808755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:24:29.809579 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:24:29.810657 systemd[1]: kubelet.service: Consumed 1.016s CPU time, 110.3M memory peak. 
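The "Referenced but unset environment variable" warnings point at the same root cause: the unit's drop-in expands $KUBELET_KUBEADM_ARGS and $KUBELET_EXTRA_ARGS from environment files that have not been created yet at this point in the boot. A rough sketch of the stock kubeadm drop-in this behavior matches (upstream defaults, not read from this host; the kubelet binary path on Flatcar may differ):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (assumed upstream layout)
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # kubeadm-flags.env defines $KUBELET_KUBEADM_ARGS and is written at init time,
    # hence the "referenced but unset" warnings before that point.
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS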
Jan 23 19:24:32.614961 containerd[1585]: time="2026-01-23T19:24:32.613877722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:32.620515 containerd[1585]: time="2026-01-23T19:24:32.619624816Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 23 19:24:32.624277 containerd[1585]: time="2026-01-23T19:24:32.624071391Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:32.634938 containerd[1585]: time="2026-01-23T19:24:32.634362202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:32.636155 containerd[1585]: time="2026-01-23T19:24:32.636073783Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 6.489950688s" Jan 23 19:24:32.636155 containerd[1585]: time="2026-01-23T19:24:32.636120951Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 23 19:24:32.639425 containerd[1585]: time="2026-01-23T19:24:32.638756887Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 23 19:24:33.441196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount146653674.mount: Deactivated successfully. 
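The var-lib-containerd-tmpmounts-* units that keep appearing are transient mount units containerd creates while unpacking image layers; each is torn down once the snapshot is committed, hence "Deactivated successfully". The unit name is just a systemd-escaped mount path and can be decoded with systemd-escape:

    # systemd mount-unit names escape "/" and "-"; --unescape with --path
    # recovers the mount point (example uses the unit from the entry above)
    systemd-escape -u -p 'var-lib-containerd-tmpmounts-containerd\x2dmount146653674'
    # -> /var/lib/containerd/tmpmounts/containerd-mount146653674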
Jan 23 19:24:33.484016 containerd[1585]: time="2026-01-23T19:24:33.483399934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:33.489129 containerd[1585]: time="2026-01-23T19:24:33.489089522Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 23 19:24:33.496246 containerd[1585]: time="2026-01-23T19:24:33.495720561Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:33.503078 containerd[1585]: time="2026-01-23T19:24:33.502643135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:33.505321 containerd[1585]: time="2026-01-23T19:24:33.504350010Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 865.431695ms" Jan 23 19:24:33.505321 containerd[1585]: time="2026-01-23T19:24:33.504450840Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 23 19:24:33.507218 containerd[1585]: time="2026-01-23T19:24:33.507188911Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 23 19:24:34.352588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269646698.mount: Deactivated successfully. Jan 23 19:24:40.041066 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 19:24:40.047084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:24:40.572489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:24:40.588759 (kubelet)[2323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:24:40.800141 kubelet[2323]: E0123 19:24:40.799968 2323 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:24:40.806740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:24:40.807411 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:24:40.808763 systemd[1]: kubelet.service: Consumed 452ms CPU time, 108.5M memory peak. 
Jan 23 19:24:49.486282 containerd[1585]: time="2026-01-23T19:24:49.483353745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:49.490737 containerd[1585]: time="2026-01-23T19:24:49.490669828Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 23 19:24:49.494306 containerd[1585]: time="2026-01-23T19:24:49.494248488Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:49.503743 containerd[1585]: time="2026-01-23T19:24:49.503253871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:24:49.505986 containerd[1585]: time="2026-01-23T19:24:49.505648338Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 15.998351073s" Jan 23 19:24:49.505986 containerd[1585]: time="2026-01-23T19:24:49.505737304Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 23 19:24:51.041604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 23 19:24:51.047433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:24:52.388357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:24:52.415282 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:24:53.053060 kubelet[2366]: E0123 19:24:53.052719 2366 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:24:53.084317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:24:53.084695 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:24:53.087622 systemd[1]: kubelet.service: Consumed 978ms CPU time, 110.1M memory peak. Jan 23 19:24:57.325927 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:24:57.326174 systemd[1]: kubelet.service: Consumed 978ms CPU time, 110.1M memory peak. Jan 23 19:24:57.344688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:24:57.501692 systemd[1]: Reload requested from client PID 2383 ('systemctl') (unit session-9.scope)... Jan 23 19:24:57.501890 systemd[1]: Reloading... Jan 23 19:24:57.849962 zram_generator::config[2427]: No configuration found. Jan 23 19:24:58.676983 systemd[1]: Reloading finished in 1171 ms. Jan 23 19:24:58.887902 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 19:24:58.888099 systemd[1]: kubelet.service: Failed with result 'signal'. 
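With etcd pulled, the full control-plane image set is now local; the pull order seen above (apiserver, controller-manager, scheduler, proxy, coredns, pause, etcd) is the set that `kubeadm config images pull` fetches. The 19:24:57 entries then mark the turning point: the failing unit is stopped and systemd reloads its unit definitions ("Reload requested from client PID 2383 ('systemctl')"), i.e. the node's configuration has been written and the service definition refreshed. A plausible command sequence behind these entries (an assumption; only the reload and the stop/start are directly visible in the log):

    kubeadm config images pull --kubernetes-version v1.34.3
    systemctl daemon-reload
    systemctl restart kubelet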
Jan 23 19:24:58.890566 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:24:58.893464 systemd[1]: kubelet.service: Consumed 346ms CPU time, 98.2M memory peak. Jan 23 19:24:58.906571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:24:59.521419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:24:59.761646 (kubelet)[2473]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 19:25:00.557074 kubelet[2473]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 19:25:00.557074 kubelet[2473]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:25:00.560364 kubelet[2473]: I0123 19:25:00.559094 2473 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 19:25:02.237128 kubelet[2473]: I0123 19:25:02.236744 2473 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 19:25:02.239649 kubelet[2473]: I0123 19:25:02.237174 2473 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 19:25:02.239649 kubelet[2473]: I0123 19:25:02.237219 2473 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 19:25:02.239649 kubelet[2473]: I0123 19:25:02.237229 2473 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 19:25:02.239649 kubelet[2473]: I0123 19:25:02.237687 2473 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 19:25:02.256729 kubelet[2473]: I0123 19:25:02.256686 2473 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 19:25:02.268615 kubelet[2473]: E0123 19:25:02.268393 2473 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 19:25:02.292953 kubelet[2473]: I0123 19:25:02.290241 2473 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 19:25:02.316470 kubelet[2473]: I0123 19:25:02.316058 2473 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 19:25:02.317093 kubelet[2473]: I0123 19:25:02.316641 2473 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 19:25:02.317093 kubelet[2473]: I0123 19:25:02.316663 2473 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 19:25:02.317093 kubelet[2473]: I0123 19:25:02.316989 2473 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 19:25:02.317093 kubelet[2473]: I0123 19:25:02.317058 2473 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 19:25:02.319643 kubelet[2473]: I0123 19:25:02.317216 2473 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 19:25:02.325248 kubelet[2473]: I0123 19:25:02.324918 2473 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:25:02.325322 kubelet[2473]: I0123 19:25:02.325298 2473 kubelet.go:475] "Attempting to sync node with API server" Jan 23 19:25:02.325364 kubelet[2473]: I0123 19:25:02.325324 2473 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 19:25:02.325364 kubelet[2473]: I0123 19:25:02.325360 2473 kubelet.go:387] "Adding apiserver pod source" Jan 23 19:25:02.325576 kubelet[2473]: I0123 19:25:02.325461 2473 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 19:25:02.327434 kubelet[2473]: E0123 19:25:02.327207 2473 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 19:25:02.339909 kubelet[2473]: E0123 19:25:02.338316 2473 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 
10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 19:25:02.340764 kubelet[2473]: I0123 19:25:02.340719 2473 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 19:25:02.350303 kubelet[2473]: I0123 19:25:02.348446 2473 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 19:25:02.350303 kubelet[2473]: I0123 19:25:02.348493 2473 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 19:25:02.350303 kubelet[2473]: W0123 19:25:02.348569 2473 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 19:25:02.377951 kubelet[2473]: I0123 19:25:02.375447 2473 server.go:1262] "Started kubelet" Jan 23 19:25:02.377951 kubelet[2473]: I0123 19:25:02.377140 2473 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 19:25:02.384529 kubelet[2473]: I0123 19:25:02.384499 2473 server.go:310] "Adding debug handlers to kubelet server" Jan 23 19:25:02.392610 kubelet[2473]: I0123 19:25:02.392577 2473 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 19:25:02.397290 kubelet[2473]: I0123 19:25:02.397091 2473 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 19:25:02.398085 kubelet[2473]: I0123 19:25:02.397372 2473 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 19:25:02.398085 kubelet[2473]: I0123 19:25:02.397457 2473 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 19:25:02.398320 kubelet[2473]: I0123 19:25:02.398300 2473 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 19:25:02.403910 kubelet[2473]: I0123 19:25:02.402429 2473 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 19:25:02.403910 kubelet[2473]: E0123 19:25:02.402739 2473 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:25:02.404141 kubelet[2473]: I0123 19:25:02.403775 2473 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 19:25:02.404347 kubelet[2473]: I0123 19:25:02.404332 2473 reconciler.go:29] "Reconciler: start to sync state" Jan 23 19:25:02.405194 kubelet[2473]: I0123 19:25:02.405175 2473 factory.go:223] Registration of the systemd container factory successfully Jan 23 19:25:02.405290 kubelet[2473]: E0123 19:25:02.405207 2473 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 19:25:02.405717 kubelet[2473]: I0123 19:25:02.405329 2473 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 19:25:02.406488 kubelet[2473]: E0123 19:25:02.405366 2473 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms" Jan 23 19:25:02.414376 kubelet[2473]: E0123 19:25:02.402111 2473 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d72a9d5052b89 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 19:25:02.375398281 +0000 UTC m=+2.520658934,LastTimestamp:2026-01-23 19:25:02.375398281 +0000 UTC m=+2.520658934,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 19:25:02.419182 kubelet[2473]: I0123 19:25:02.419148 2473 factory.go:223] Registration of the containerd container factory successfully Jan 23 19:25:02.426091 kubelet[2473]: E0123 19:25:02.420364 2473 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 19:25:02.504211 kubelet[2473]: E0123 19:25:02.503636 2473 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:25:02.540246 kubelet[2473]: I0123 19:25:02.540199 2473 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 19:25:02.540476 kubelet[2473]: I0123 19:25:02.540459 2473 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 19:25:02.540558 kubelet[2473]: I0123 19:25:02.540545 2473 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:25:02.556286 kubelet[2473]: I0123 19:25:02.556251 2473 policy_none.go:49] "None policy: Start" Jan 23 19:25:02.556504 kubelet[2473]: I0123 19:25:02.556483 2473 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 19:25:02.556600 kubelet[2473]: I0123 19:25:02.556578 2473 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 19:25:02.567113 kubelet[2473]: I0123 19:25:02.566923 2473 policy_none.go:47] "Start" Jan 23 19:25:02.592260 kubelet[2473]: I0123 19:25:02.591587 2473 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 19:25:02.593346 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 19:25:02.602629 kubelet[2473]: I0123 19:25:02.602601 2473 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 19:25:02.602629 kubelet[2473]: I0123 19:25:02.602627 2473 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 19:25:02.602918 kubelet[2473]: I0123 19:25:02.602667 2473 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 19:25:02.602975 kubelet[2473]: E0123 19:25:02.602918 2473 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 19:25:02.605211 kubelet[2473]: E0123 19:25:02.605154 2473 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 19:25:02.611968 kubelet[2473]: E0123 19:25:02.605315 2473 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:25:02.612517 kubelet[2473]: E0123 19:25:02.611937 2473 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms" Jan 23 19:25:02.634731 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 19:25:02.669937 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 19:25:02.676939 kubelet[2473]: E0123 19:25:02.676403 2473 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 19:25:02.676939 kubelet[2473]: I0123 19:25:02.676657 2473 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 19:25:02.676939 kubelet[2473]: I0123 19:25:02.676676 2473 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 19:25:02.677923 kubelet[2473]: I0123 19:25:02.677733 2473 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 19:25:02.685758 kubelet[2473]: E0123 19:25:02.685532 2473 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 19:25:02.685758 kubelet[2473]: E0123 19:25:02.685590 2473 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 19:25:02.707439 kubelet[2473]: I0123 19:25:02.705994 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a76119fb92df5f04f634275a3a9b646a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a76119fb92df5f04f634275a3a9b646a\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:25:02.708573 kubelet[2473]: I0123 19:25:02.708482 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a76119fb92df5f04f634275a3a9b646a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a76119fb92df5f04f634275a3a9b646a\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:25:02.709547 kubelet[2473]: I0123 19:25:02.708526 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a76119fb92df5f04f634275a3a9b646a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a76119fb92df5f04f634275a3a9b646a\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:25:02.761352 systemd[1]: Created slice kubepods-burstable-poda76119fb92df5f04f634275a3a9b646a.slice - libcontainer container kubepods-burstable-poda76119fb92df5f04f634275a3a9b646a.slice. Jan 23 19:25:02.783445 kubelet[2473]: I0123 19:25:02.781766 2473 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:25:02.790542 kubelet[2473]: E0123 19:25:02.790411 2473 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Jan 23 19:25:02.790981 kubelet[2473]: E0123 19:25:02.790771 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:25:02.819568 kubelet[2473]: I0123 19:25:02.818356 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:02.819568 kubelet[2473]: I0123 19:25:02.818460 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 23 19:25:02.819568 kubelet[2473]: I0123 19:25:02.818922 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:02.819568 kubelet[2473]: I0123 19:25:02.819021 2473 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:02.819568 kubelet[2473]: I0123 19:25:02.819257 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:02.820189 kubelet[2473]: I0123 19:25:02.819410 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:02.835230 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 23 19:25:02.852029 kubelet[2473]: E0123 19:25:02.851982 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:25:02.857018 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Jan 23 19:25:02.865269 kubelet[2473]: E0123 19:25:02.864616 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:25:03.001244 kubelet[2473]: I0123 19:25:03.000993 2473 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:25:03.003413 kubelet[2473]: E0123 19:25:03.002239 2473 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Jan 23 19:25:03.021571 kubelet[2473]: E0123 19:25:03.020307 2473 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms" Jan 23 19:25:03.104254 kubelet[2473]: E0123 19:25:03.104042 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:03.106712 containerd[1585]: time="2026-01-23T19:25:03.106275008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a76119fb92df5f04f634275a3a9b646a,Namespace:kube-system,Attempt:0,}" Jan 23 19:25:03.154266 kubelet[2473]: E0123 19:25:03.154024 2473 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 
19:25:03.166628 kubelet[2473]: E0123 19:25:03.165505 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:03.171469 containerd[1585]: time="2026-01-23T19:25:03.168944807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 23 19:25:03.180155 kubelet[2473]: E0123 19:25:03.177987 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:03.180278 containerd[1585]: time="2026-01-23T19:25:03.179276359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 23 19:25:03.406619 kubelet[2473]: I0123 19:25:03.405939 2473 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:25:03.406619 kubelet[2473]: E0123 19:25:03.406391 2473 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Jan 23 19:25:03.522767 kubelet[2473]: E0123 19:25:03.522709 2473 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 19:25:03.738020 kubelet[2473]: E0123 19:25:03.737480 2473 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 19:25:03.746031 kubelet[2473]: E0123 19:25:03.745893 2473 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 19:25:03.827256 kubelet[2473]: E0123 19:25:03.826928 2473 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="1.6s" Jan 23 19:25:03.865770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3796480038.mount: Deactivated successfully. 
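The recurring "Nameserver limits exceeded" warnings are cosmetic: resolv.conf semantics allow at most three nameserver entries, so the kubelet truncates the node's list when building pod DNS configuration and logs the line it actually applied. A resolv.conf consistent with these messages would look like the following (the fourth entry is hypothetical; only its existence, not its value, can be inferred from the log):

    # Hypothetical /etc/resolv.conf matching the warnings above
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 192.0.2.1   # anything past the third entry is dropped, triggering the warning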
Jan 23 19:25:03.892041 containerd[1585]: time="2026-01-23T19:25:03.891710489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:25:03.897273 containerd[1585]: time="2026-01-23T19:25:03.897082471Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 23 19:25:03.903695 containerd[1585]: time="2026-01-23T19:25:03.903321340Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:25:03.910639 containerd[1585]: time="2026-01-23T19:25:03.909606802Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:25:03.917091 containerd[1585]: time="2026-01-23T19:25:03.916887668Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:25:03.919773 containerd[1585]: time="2026-01-23T19:25:03.919513343Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 19:25:03.924255 containerd[1585]: time="2026-01-23T19:25:03.922570380Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 19:25:03.937934 containerd[1585]: time="2026-01-23T19:25:03.935095398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:25:03.937934 containerd[1585]: time="2026-01-23T19:25:03.937621238Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 740.665413ms" Jan 23 19:25:03.941694 containerd[1585]: time="2026-01-23T19:25:03.941336396Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 819.774709ms" Jan 23 19:25:03.946010 containerd[1585]: time="2026-01-23T19:25:03.945648636Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 768.317333ms" Jan 23 19:25:04.048086 containerd[1585]: time="2026-01-23T19:25:04.047987050Z" level=info msg="connecting to shim 96d30fa6e418efe951b4352cb0a9c25c8d460cc4ea4c9d141a28996b5783e900" address="unix:///run/containerd/s/27072bb1d8118e286c0e689b38480fab8f9e5da19e085507c66f17c5884ab5f4" namespace=k8s.io protocol=ttrpc version=3 Jan 
23 19:25:04.073118 containerd[1585]: time="2026-01-23T19:25:04.072943664Z" level=info msg="connecting to shim 47652c7188e07861918339d15a60b09405844521ce3bef96fe64d508aa75bfc5" address="unix:///run/containerd/s/8a7764dd6886ecc62d347a70c69dee56587c3f11cf14cf5df371973872e961ea" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:25:04.082589 containerd[1585]: time="2026-01-23T19:25:04.082419781Z" level=info msg="connecting to shim ce5c6b24fd72e0a756c1889a7591981f067303eea5b4f6be7c4e97fdc4b8b797" address="unix:///run/containerd/s/019ef8de85a4de1a495507249183e939e555c7341cf85e57e6ef125562f4f191" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:25:04.126304 systemd[1]: Started cri-containerd-96d30fa6e418efe951b4352cb0a9c25c8d460cc4ea4c9d141a28996b5783e900.scope - libcontainer container 96d30fa6e418efe951b4352cb0a9c25c8d460cc4ea4c9d141a28996b5783e900. Jan 23 19:25:04.135046 systemd[1]: Started cri-containerd-47652c7188e07861918339d15a60b09405844521ce3bef96fe64d508aa75bfc5.scope - libcontainer container 47652c7188e07861918339d15a60b09405844521ce3bef96fe64d508aa75bfc5. Jan 23 19:25:04.210495 kubelet[2473]: I0123 19:25:04.210021 2473 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:25:04.210945 systemd[1]: Started cri-containerd-ce5c6b24fd72e0a756c1889a7591981f067303eea5b4f6be7c4e97fdc4b8b797.scope - libcontainer container ce5c6b24fd72e0a756c1889a7591981f067303eea5b4f6be7c4e97fdc4b8b797. Jan 23 19:25:04.213665 kubelet[2473]: E0123 19:25:04.212976 2473 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Jan 23 19:25:04.335678 containerd[1585]: time="2026-01-23T19:25:04.335186691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"47652c7188e07861918339d15a60b09405844521ce3bef96fe64d508aa75bfc5\"" Jan 23 19:25:04.343510 kubelet[2473]: E0123 19:25:04.343179 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:04.344746 containerd[1585]: time="2026-01-23T19:25:04.344495656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"96d30fa6e418efe951b4352cb0a9c25c8d460cc4ea4c9d141a28996b5783e900\"" Jan 23 19:25:04.348265 kubelet[2473]: E0123 19:25:04.348047 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:04.374063 containerd[1585]: time="2026-01-23T19:25:04.371465155Z" level=info msg="CreateContainer within sandbox \"47652c7188e07861918339d15a60b09405844521ce3bef96fe64d508aa75bfc5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 19:25:04.375320 containerd[1585]: time="2026-01-23T19:25:04.375277169Z" level=info msg="CreateContainer within sandbox \"96d30fa6e418efe951b4352cb0a9c25c8d460cc4ea4c9d141a28996b5783e900\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 19:25:04.379665 containerd[1585]: time="2026-01-23T19:25:04.379445985Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a76119fb92df5f04f634275a3a9b646a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce5c6b24fd72e0a756c1889a7591981f067303eea5b4f6be7c4e97fdc4b8b797\"" Jan 23 19:25:04.382050 kubelet[2473]: E0123 19:25:04.381726 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:04.397497 containerd[1585]: time="2026-01-23T19:25:04.397462582Z" level=info msg="CreateContainer within sandbox \"ce5c6b24fd72e0a756c1889a7591981f067303eea5b4f6be7c4e97fdc4b8b797\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 19:25:04.401559 kubelet[2473]: E0123 19:25:04.401451 2473 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 19:25:04.418479 containerd[1585]: time="2026-01-23T19:25:04.417533132Z" level=info msg="Container 5d4c5ffb0e52affdf79b498caf2d13a3e44dfabbf7606173dbf87ece8a9e4072: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:25:04.433249 containerd[1585]: time="2026-01-23T19:25:04.433031244Z" level=info msg="Container 53823ee4a28317fa80293ab002d6038c8e01834d580f1cf571ba1033eda7cd9d: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:25:04.453962 containerd[1585]: time="2026-01-23T19:25:04.453699690Z" level=info msg="Container 60019795f89cd57e81048dbde9dbd6331bddb25647dda5d398acb46b27aae449: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:25:04.460718 containerd[1585]: time="2026-01-23T19:25:04.460405847Z" level=info msg="CreateContainer within sandbox \"47652c7188e07861918339d15a60b09405844521ce3bef96fe64d508aa75bfc5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5d4c5ffb0e52affdf79b498caf2d13a3e44dfabbf7606173dbf87ece8a9e4072\"" Jan 23 19:25:04.464501 containerd[1585]: time="2026-01-23T19:25:04.464114942Z" level=info msg="StartContainer for \"5d4c5ffb0e52affdf79b498caf2d13a3e44dfabbf7606173dbf87ece8a9e4072\"" Jan 23 19:25:04.466593 containerd[1585]: time="2026-01-23T19:25:04.466565049Z" level=info msg="CreateContainer within sandbox \"96d30fa6e418efe951b4352cb0a9c25c8d460cc4ea4c9d141a28996b5783e900\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"53823ee4a28317fa80293ab002d6038c8e01834d580f1cf571ba1033eda7cd9d\"" Jan 23 19:25:04.467180 containerd[1585]: time="2026-01-23T19:25:04.467158388Z" level=info msg="connecting to shim 5d4c5ffb0e52affdf79b498caf2d13a3e44dfabbf7606173dbf87ece8a9e4072" address="unix:///run/containerd/s/8a7764dd6886ecc62d347a70c69dee56587c3f11cf14cf5df371973872e961ea" protocol=ttrpc version=3 Jan 23 19:25:04.470484 containerd[1585]: time="2026-01-23T19:25:04.469908194Z" level=info msg="StartContainer for \"53823ee4a28317fa80293ab002d6038c8e01834d580f1cf571ba1033eda7cd9d\"" Jan 23 19:25:04.473082 containerd[1585]: time="2026-01-23T19:25:04.472777481Z" level=info msg="connecting to shim 53823ee4a28317fa80293ab002d6038c8e01834d580f1cf571ba1033eda7cd9d" address="unix:///run/containerd/s/27072bb1d8118e286c0e689b38480fab8f9e5da19e085507c66f17c5884ab5f4" protocol=ttrpc version=3 Jan 23 19:25:04.479401 containerd[1585]: time="2026-01-23T19:25:04.479077917Z" level=info 
msg="CreateContainer within sandbox \"ce5c6b24fd72e0a756c1889a7591981f067303eea5b4f6be7c4e97fdc4b8b797\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"60019795f89cd57e81048dbde9dbd6331bddb25647dda5d398acb46b27aae449\"" Jan 23 19:25:04.480103 containerd[1585]: time="2026-01-23T19:25:04.480076284Z" level=info msg="StartContainer for \"60019795f89cd57e81048dbde9dbd6331bddb25647dda5d398acb46b27aae449\"" Jan 23 19:25:04.482029 containerd[1585]: time="2026-01-23T19:25:04.482003079Z" level=info msg="connecting to shim 60019795f89cd57e81048dbde9dbd6331bddb25647dda5d398acb46b27aae449" address="unix:///run/containerd/s/019ef8de85a4de1a495507249183e939e555c7341cf85e57e6ef125562f4f191" protocol=ttrpc version=3 Jan 23 19:25:04.522090 systemd[1]: Started cri-containerd-5d4c5ffb0e52affdf79b498caf2d13a3e44dfabbf7606173dbf87ece8a9e4072.scope - libcontainer container 5d4c5ffb0e52affdf79b498caf2d13a3e44dfabbf7606173dbf87ece8a9e4072. Jan 23 19:25:04.529957 systemd[1]: Started cri-containerd-53823ee4a28317fa80293ab002d6038c8e01834d580f1cf571ba1033eda7cd9d.scope - libcontainer container 53823ee4a28317fa80293ab002d6038c8e01834d580f1cf571ba1033eda7cd9d. Jan 23 19:25:04.554209 systemd[1]: Started cri-containerd-60019795f89cd57e81048dbde9dbd6331bddb25647dda5d398acb46b27aae449.scope - libcontainer container 60019795f89cd57e81048dbde9dbd6331bddb25647dda5d398acb46b27aae449. Jan 23 19:25:04.692307 containerd[1585]: time="2026-01-23T19:25:04.690911556Z" level=info msg="StartContainer for \"53823ee4a28317fa80293ab002d6038c8e01834d580f1cf571ba1033eda7cd9d\" returns successfully" Jan 23 19:25:04.699715 containerd[1585]: time="2026-01-23T19:25:04.699656239Z" level=info msg="StartContainer for \"5d4c5ffb0e52affdf79b498caf2d13a3e44dfabbf7606173dbf87ece8a9e4072\" returns successfully" Jan 23 19:25:04.730964 containerd[1585]: time="2026-01-23T19:25:04.730093246Z" level=info msg="StartContainer for \"60019795f89cd57e81048dbde9dbd6331bddb25647dda5d398acb46b27aae449\" returns successfully" Jan 23 19:25:05.690487 kubelet[2473]: E0123 19:25:05.690269 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:25:05.691133 kubelet[2473]: E0123 19:25:05.690577 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:05.711916 kubelet[2473]: E0123 19:25:05.711635 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:25:05.712031 kubelet[2473]: E0123 19:25:05.712020 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:05.724269 kubelet[2473]: E0123 19:25:05.724172 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:25:05.724489 kubelet[2473]: E0123 19:25:05.724433 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:05.816746 kubelet[2473]: I0123 19:25:05.816636 2473 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:25:06.715413 kubelet[2473]: E0123 
19:25:06.715378 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:25:06.717017 kubelet[2473]: E0123 19:25:06.716998 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:06.724182 kubelet[2473]: E0123 19:25:06.723382 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:25:06.724182 kubelet[2473]: E0123 19:25:06.723603 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:06.724182 kubelet[2473]: E0123 19:25:06.724015 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:25:06.724182 kubelet[2473]: E0123 19:25:06.724124 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:07.723489 kubelet[2473]: E0123 19:25:07.723369 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:25:07.724707 kubelet[2473]: E0123 19:25:07.723693 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:07.724707 kubelet[2473]: E0123 19:25:07.724144 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:25:07.724707 kubelet[2473]: E0123 19:25:07.724249 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:08.726163 kubelet[2473]: E0123 19:25:08.725578 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:25:08.726997 kubelet[2473]: E0123 19:25:08.726174 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:09.124922 kubelet[2473]: E0123 19:25:09.124448 2473 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 23 19:25:09.214435 kubelet[2473]: I0123 19:25:09.214367 2473 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 19:25:09.223008 kubelet[2473]: I0123 19:25:09.221022 2473 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 19:25:09.231288 kubelet[2473]: E0123 19:25:09.230023 2473 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188d72a9d5052b89 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 19:25:02.375398281 +0000 UTC m=+2.520658934,LastTimestamp:2026-01-23 19:25:02.375398281 +0000 UTC m=+2.520658934,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 19:25:09.277523 kubelet[2473]: E0123 19:25:09.277095 2473 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 23 19:25:09.278402 kubelet[2473]: E0123 19:25:09.277776 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:09.305317 kubelet[2473]: I0123 19:25:09.303629 2473 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 19:25:09.309474 kubelet[2473]: E0123 19:25:09.309445 2473 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 23 19:25:09.309603 kubelet[2473]: I0123 19:25:09.309587 2473 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:09.313188 kubelet[2473]: E0123 19:25:09.313160 2473 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:09.314851 kubelet[2473]: I0123 19:25:09.313363 2473 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 19:25:09.316439 kubelet[2473]: E0123 19:25:09.316412 2473 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 23 19:25:09.335628 kubelet[2473]: I0123 19:25:09.335438 2473 apiserver.go:52] "Watching apiserver" Jan 23 19:25:09.407050 kubelet[2473]: I0123 19:25:09.406016 2473 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 19:25:10.996598 kubelet[2473]: I0123 19:25:10.996562 2473 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:11.021097 kubelet[2473]: E0123 19:25:11.020509 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:11.764980 kubelet[2473]: E0123 19:25:11.763435 2473 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:12.685578 systemd[1]: Reload requested from client PID 2769 ('systemctl') (unit session-9.scope)... Jan 23 19:25:12.688073 systemd[1]: Reloading... 
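The "no PriorityClass with name system-node-critical" rejections here are transient: kubeadm's static pods request that class, and the just-started API server creates the built-in priority classes in a post-start bootstrap step moments later, after which the mirror pods are accepted (as the successful pod startup tracking below shows). The object being waited on is, in effect, the following built-in resource (2000001000 is the upstream value for system-node-critical):

    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: system-node-critical
    value: 2000001000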
Jan 23 19:25:13.206953 kubelet[2473]: I0123 19:25:13.202127 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.202000333 podStartE2EDuration="2.202000333s" podCreationTimestamp="2026-01-23 19:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:25:13.201665351 +0000 UTC m=+13.346926044" watchObservedRunningTime="2026-01-23 19:25:13.202000333 +0000 UTC m=+13.347260986" Jan 23 19:25:13.596201 zram_generator::config[2811]: No configuration found. Jan 23 19:25:14.155499 systemd[1]: Reloading finished in 1455 ms. Jan 23 19:25:14.227232 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:25:14.243923 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 19:25:14.244383 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:25:14.244488 systemd[1]: kubelet.service: Consumed 4.366s CPU time, 127M memory peak. Jan 23 19:25:14.251066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:25:14.744388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:25:14.779702 (kubelet)[2859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 19:25:15.044174 kubelet[2859]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 19:25:15.044174 kubelet[2859]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:25:15.044174 kubelet[2859]: I0123 19:25:15.041958 2859 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 19:25:15.087670 kubelet[2859]: I0123 19:25:15.084945 2859 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 19:25:15.087670 kubelet[2859]: I0123 19:25:15.084981 2859 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 19:25:15.087670 kubelet[2859]: I0123 19:25:15.085013 2859 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 19:25:15.087670 kubelet[2859]: I0123 19:25:15.085021 2859 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 19:25:15.088084 kubelet[2859]: I0123 19:25:15.088045 2859 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 19:25:15.138543 kubelet[2859]: I0123 19:25:15.137601 2859 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 19:25:15.142717 kubelet[2859]: I0123 19:25:15.142664 2859 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 19:25:15.162064 kubelet[2859]: I0123 19:25:15.161971 2859 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 19:25:15.176498 kubelet[2859]: I0123 19:25:15.175508 2859 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 19:25:15.178602 kubelet[2859]: I0123 19:25:15.177730 2859 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 19:25:15.178602 kubelet[2859]: I0123 19:25:15.177935 2859 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 19:25:15.178602 kubelet[2859]: I0123 19:25:15.178120 2859 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 19:25:15.178602 kubelet[2859]: I0123 19:25:15.178135 2859 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 19:25:15.179305 kubelet[2859]: I0123 19:25:15.178170 2859 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 19:25:15.181088 kubelet[2859]: I0123 19:25:15.180982 2859 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:25:15.181549 kubelet[2859]: I0123 19:25:15.181448 2859 kubelet.go:475] "Attempting to sync node with API server" Jan 23 19:25:15.181604 kubelet[2859]: I0123 19:25:15.181554 2859 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 19:25:15.181604 kubelet[2859]: I0123 19:25:15.181584 2859 kubelet.go:387] "Adding apiserver pod source" Jan 23 19:25:15.182046 kubelet[2859]: I0123 19:25:15.181609 2859 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 19:25:15.192972 kubelet[2859]: I0123 19:25:15.189392 2859 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 19:25:15.195758 kubelet[2859]: I0123 19:25:15.194606 2859 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 19:25:15.195758 kubelet[2859]: I0123 19:25:15.194648 2859 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 19:25:15.217189 sudo[2875]: 
root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 19:25:15.218041 sudo[2875]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 19:25:15.244755 kubelet[2859]: I0123 19:25:15.244650 2859 server.go:1262] "Started kubelet" Jan 23 19:25:15.260002 kubelet[2859]: I0123 19:25:15.255398 2859 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 19:25:15.265629 kubelet[2859]: I0123 19:25:15.255906 2859 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 19:25:15.266633 kubelet[2859]: I0123 19:25:15.266457 2859 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 19:25:15.267095 kubelet[2859]: I0123 19:25:15.255945 2859 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 19:25:15.267095 kubelet[2859]: I0123 19:25:15.267078 2859 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 19:25:15.268005 kubelet[2859]: I0123 19:25:15.267505 2859 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 19:25:15.269715 kubelet[2859]: I0123 19:25:15.268774 2859 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 19:25:15.280757 kubelet[2859]: I0123 19:25:15.260031 2859 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 19:25:15.280757 kubelet[2859]: I0123 19:25:15.273609 2859 reconciler.go:29] "Reconciler: start to sync state" Jan 23 19:25:15.280757 kubelet[2859]: I0123 19:25:15.280104 2859 server.go:310] "Adding debug handlers to kubelet server" Jan 23 19:25:15.293619 kubelet[2859]: I0123 19:25:15.293512 2859 factory.go:223] Registration of the systemd container factory successfully Jan 23 19:25:15.293720 kubelet[2859]: I0123 19:25:15.293680 2859 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 19:25:15.301626 kubelet[2859]: E0123 19:25:15.301392 2859 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 19:25:15.311944 kubelet[2859]: I0123 19:25:15.309102 2859 factory.go:223] Registration of the containerd container factory successfully Jan 23 19:25:15.369921 kubelet[2859]: I0123 19:25:15.369606 2859 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 19:25:15.382602 kubelet[2859]: I0123 19:25:15.379777 2859 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 19:25:15.382757 kubelet[2859]: I0123 19:25:15.381600 2859 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 19:25:15.384507 kubelet[2859]: I0123 19:25:15.384321 2859 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 19:25:15.384507 kubelet[2859]: E0123 19:25:15.384381 2859 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 19:25:15.478516 kubelet[2859]: I0123 19:25:15.477934 2859 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 19:25:15.478516 kubelet[2859]: I0123 19:25:15.478044 2859 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 19:25:15.478516 kubelet[2859]: I0123 19:25:15.478074 2859 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:25:15.478516 kubelet[2859]: I0123 19:25:15.478470 2859 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 19:25:15.478516 kubelet[2859]: I0123 19:25:15.478483 2859 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 19:25:15.478516 kubelet[2859]: I0123 19:25:15.478514 2859 policy_none.go:49] "None policy: Start" Jan 23 19:25:15.478516 kubelet[2859]: I0123 19:25:15.478528 2859 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 19:25:15.487069 kubelet[2859]: I0123 19:25:15.479032 2859 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 19:25:15.487069 kubelet[2859]: I0123 19:25:15.480664 2859 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 23 19:25:15.487069 kubelet[2859]: I0123 19:25:15.480679 2859 policy_none.go:47] "Start" Jan 23 19:25:15.491648 kubelet[2859]: E0123 19:25:15.491514 2859 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 19:25:15.511703 kubelet[2859]: E0123 19:25:15.511585 2859 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 19:25:15.512135 kubelet[2859]: I0123 19:25:15.512067 2859 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 19:25:15.512135 kubelet[2859]: I0123 19:25:15.512089 2859 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 19:25:15.512885 kubelet[2859]: I0123 19:25:15.512729 2859 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 19:25:15.519194 kubelet[2859]: E0123 19:25:15.519092 2859 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 19:25:15.700597 kubelet[2859]: I0123 19:25:15.699148 2859 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 19:25:15.700597 kubelet[2859]: I0123 19:25:15.700368 2859 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:15.705433 kubelet[2859]: I0123 19:25:15.704621 2859 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 19:25:15.732160 kubelet[2859]: I0123 19:25:15.731998 2859 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:25:15.789979 kubelet[2859]: I0123 19:25:15.789682 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a76119fb92df5f04f634275a3a9b646a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a76119fb92df5f04f634275a3a9b646a\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:25:15.789979 kubelet[2859]: I0123 19:25:15.789750 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:15.790439 kubelet[2859]: I0123 19:25:15.789776 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:15.790439 kubelet[2859]: I0123 19:25:15.790020 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:15.790439 kubelet[2859]: I0123 19:25:15.790035 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:15.790439 kubelet[2859]: I0123 19:25:15.790051 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 23 19:25:15.790439 kubelet[2859]: I0123 19:25:15.790068 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:15.790637 
kubelet[2859]: I0123 19:25:15.790081 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a76119fb92df5f04f634275a3a9b646a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a76119fb92df5f04f634275a3a9b646a\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:25:15.790637 kubelet[2859]: I0123 19:25:15.790093 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a76119fb92df5f04f634275a3a9b646a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a76119fb92df5f04f634275a3a9b646a\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:25:15.828572 kubelet[2859]: I0123 19:25:15.828419 2859 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 23 19:25:15.828572 kubelet[2859]: I0123 19:25:15.828553 2859 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 19:25:15.833649 kubelet[2859]: E0123 19:25:15.833410 2859 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:16.120153 kubelet[2859]: E0123 19:25:16.120095 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:16.126510 kubelet[2859]: E0123 19:25:16.121493 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:16.138537 kubelet[2859]: E0123 19:25:16.134563 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:16.188724 kubelet[2859]: I0123 19:25:16.188671 2859 apiserver.go:52] "Watching apiserver" Jan 23 19:25:16.270778 kubelet[2859]: I0123 19:25:16.270664 2859 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 19:25:16.271498 sudo[2875]: pam_unix(sudo:session): session closed for user root Jan 23 19:25:16.439654 kubelet[2859]: I0123 19:25:16.438260 2859 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 19:25:16.439654 kubelet[2859]: I0123 19:25:16.439019 2859 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:16.444555 kubelet[2859]: E0123 19:25:16.444147 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:16.493432 kubelet[2859]: E0123 19:25:16.493098 2859 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 23 19:25:16.493432 kubelet[2859]: E0123 19:25:16.493410 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:16.505237 kubelet[2859]: E0123 19:25:16.503946 2859 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" 
pod="kube-system/kube-controller-manager-localhost" Jan 23 19:25:16.505237 kubelet[2859]: E0123 19:25:16.504179 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:16.581710 kubelet[2859]: I0123 19:25:16.581505 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.581483994 podStartE2EDuration="1.581483994s" podCreationTimestamp="2026-01-23 19:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:25:16.567575571 +0000 UTC m=+1.768928262" watchObservedRunningTime="2026-01-23 19:25:16.581483994 +0000 UTC m=+1.782836684" Jan 23 19:25:16.836119 kubelet[2859]: I0123 19:25:16.835648 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8334331719999999 podStartE2EDuration="1.833433172s" podCreationTimestamp="2026-01-23 19:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:25:16.666439013 +0000 UTC m=+1.867791713" watchObservedRunningTime="2026-01-23 19:25:16.833433172 +0000 UTC m=+2.034785862" Jan 23 19:25:17.439731 kubelet[2859]: E0123 19:25:17.439003 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:17.440399 kubelet[2859]: E0123 19:25:17.440019 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:17.443635 kubelet[2859]: E0123 19:25:17.441260 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:18.267640 kubelet[2859]: I0123 19:25:18.260360 2859 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 19:25:18.284924 containerd[1585]: time="2026-01-23T19:25:18.283170869Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 19:25:18.292044 kubelet[2859]: I0123 19:25:18.286949 2859 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 19:25:18.482393 kubelet[2859]: E0123 19:25:18.481953 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:19.486960 systemd[1]: Created slice kubepods-besteffort-pod037945d5_13ca_4dab_be76_ba146507b44a.slice - libcontainer container kubepods-besteffort-pod037945d5_13ca_4dab_be76_ba146507b44a.slice. Jan 23 19:25:19.518402 kubelet[2859]: E0123 19:25:19.517494 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:19.549617 systemd[1]: Created slice kubepods-burstable-pod5cbb13b0_35a7_4d1f_baba_b2b78a040c8e.slice - libcontainer container kubepods-burstable-pod5cbb13b0_35a7_4d1f_baba_b2b78a040c8e.slice. 
Jan 23 19:25:19.567277 kubelet[2859]: I0123 19:25:19.566213 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/037945d5-13ca-4dab-be76-ba146507b44a-kube-proxy\") pod \"kube-proxy-wr8w2\" (UID: \"037945d5-13ca-4dab-be76-ba146507b44a\") " pod="kube-system/kube-proxy-wr8w2" Jan 23 19:25:19.567277 kubelet[2859]: I0123 19:25:19.566328 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/037945d5-13ca-4dab-be76-ba146507b44a-xtables-lock\") pod \"kube-proxy-wr8w2\" (UID: \"037945d5-13ca-4dab-be76-ba146507b44a\") " pod="kube-system/kube-proxy-wr8w2" Jan 23 19:25:19.567277 kubelet[2859]: I0123 19:25:19.566361 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-xtables-lock\") pod \"cilium-r57jb\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " pod="kube-system/cilium-r57jb" Jan 23 19:25:19.567277 kubelet[2859]: I0123 19:25:19.566391 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/037945d5-13ca-4dab-be76-ba146507b44a-lib-modules\") pod \"kube-proxy-wr8w2\" (UID: \"037945d5-13ca-4dab-be76-ba146507b44a\") " pod="kube-system/kube-proxy-wr8w2" Jan 23 19:25:19.567277 kubelet[2859]: I0123 19:25:19.566415 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-bpf-maps\") pod \"cilium-r57jb\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " pod="kube-system/cilium-r57jb" Jan 23 19:25:19.567277 kubelet[2859]: I0123 19:25:19.566438 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-cilium-cgroup\") pod \"cilium-r57jb\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " pod="kube-system/cilium-r57jb" Jan 23 19:25:19.568347 kubelet[2859]: I0123 19:25:19.566461 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-etc-cni-netd\") pod \"cilium-r57jb\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " pod="kube-system/cilium-r57jb" Jan 23 19:25:19.568347 kubelet[2859]: I0123 19:25:19.566485 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-lib-modules\") pod \"cilium-r57jb\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " pod="kube-system/cilium-r57jb" Jan 23 19:25:19.568347 kubelet[2859]: I0123 19:25:19.566511 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-cilium-config-path\") pod \"cilium-r57jb\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " pod="kube-system/cilium-r57jb" Jan 23 19:25:19.568347 kubelet[2859]: I0123 19:25:19.566545 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-cilium-run\") pod \"cilium-r57jb\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " pod="kube-system/cilium-r57jb" Jan 23 19:25:19.568347 kubelet[2859]: I0123 19:25:19.566569 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-hostproc\") pod \"cilium-r57jb\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " pod="kube-system/cilium-r57jb" Jan 23 19:25:19.568347 kubelet[2859]: I0123 19:25:19.566591 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-cni-path\") pod \"cilium-r57jb\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " pod="kube-system/cilium-r57jb" Jan 23 19:25:19.568529 kubelet[2859]: I0123 19:25:19.566624 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-hubble-tls\") pod \"cilium-r57jb\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " pod="kube-system/cilium-r57jb" Jan 23 19:25:19.568529 kubelet[2859]: I0123 19:25:19.566648 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4hvn\" (UniqueName: \"kubernetes.io/projected/037945d5-13ca-4dab-be76-ba146507b44a-kube-api-access-m4hvn\") pod \"kube-proxy-wr8w2\" (UID: \"037945d5-13ca-4dab-be76-ba146507b44a\") " pod="kube-system/kube-proxy-wr8w2" Jan 23 19:25:19.568529 kubelet[2859]: I0123 19:25:19.566674 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-clustermesh-secrets\") pod \"cilium-r57jb\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " pod="kube-system/cilium-r57jb" Jan 23 19:25:19.568529 kubelet[2859]: I0123 19:25:19.566699 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-host-proc-sys-net\") pod \"cilium-r57jb\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " pod="kube-system/cilium-r57jb" Jan 23 19:25:19.568529 kubelet[2859]: I0123 19:25:19.566725 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-host-proc-sys-kernel\") pod \"cilium-r57jb\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " pod="kube-system/cilium-r57jb" Jan 23 19:25:19.568683 kubelet[2859]: I0123 19:25:19.566751 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58x4t\" (UniqueName: \"kubernetes.io/projected/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-kube-api-access-58x4t\") pod \"cilium-r57jb\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " pod="kube-system/cilium-r57jb" Jan 23 19:25:19.645365 systemd[1]: Created slice kubepods-besteffort-podef6cf0cb_401f_41f1_8fc5_1db19e184d24.slice - libcontainer container kubepods-besteffort-podef6cf0cb_401f_41f1_8fc5_1db19e184d24.slice. 
Jan 23 19:25:19.668644 kubelet[2859]: I0123 19:25:19.668337 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8ndm\" (UniqueName: \"kubernetes.io/projected/ef6cf0cb-401f-41f1-8fc5-1db19e184d24-kube-api-access-h8ndm\") pod \"cilium-operator-6f9c7c5859-hvjtd\" (UID: \"ef6cf0cb-401f-41f1-8fc5-1db19e184d24\") " pod="kube-system/cilium-operator-6f9c7c5859-hvjtd" Jan 23 19:25:19.668644 kubelet[2859]: I0123 19:25:19.668392 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef6cf0cb-401f-41f1-8fc5-1db19e184d24-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-hvjtd\" (UID: \"ef6cf0cb-401f-41f1-8fc5-1db19e184d24\") " pod="kube-system/cilium-operator-6f9c7c5859-hvjtd" Jan 23 19:25:19.842517 kubelet[2859]: E0123 19:25:19.842468 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:19.851319 containerd[1585]: time="2026-01-23T19:25:19.851256685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wr8w2,Uid:037945d5-13ca-4dab-be76-ba146507b44a,Namespace:kube-system,Attempt:0,}" Jan 23 19:25:19.879623 kubelet[2859]: E0123 19:25:19.878468 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:19.886761 containerd[1585]: time="2026-01-23T19:25:19.886648115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r57jb,Uid:5cbb13b0-35a7-4d1f-baba-b2b78a040c8e,Namespace:kube-system,Attempt:0,}" Jan 23 19:25:19.964029 kubelet[2859]: E0123 19:25:19.963757 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:19.973017 containerd[1585]: time="2026-01-23T19:25:19.972199927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-hvjtd,Uid:ef6cf0cb-401f-41f1-8fc5-1db19e184d24,Namespace:kube-system,Attempt:0,}" Jan 23 19:25:20.039653 containerd[1585]: time="2026-01-23T19:25:20.038682619Z" level=info msg="connecting to shim de94837dadb3850385870df23db3e0ee3998fea9c9ee14feb3481a1629e14678" address="unix:///run/containerd/s/ff6903736c0fdba5e06e31a581e0152c63977df2bd42bd2311827225c205d119" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:25:20.077509 containerd[1585]: time="2026-01-23T19:25:20.077444032Z" level=info msg="connecting to shim 6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c" address="unix:///run/containerd/s/779ccccc04d9931a3948dd319dbc18617021f68aef2a857a49dc68a22e7278a8" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:25:20.208761 containerd[1585]: time="2026-01-23T19:25:20.206193250Z" level=info msg="connecting to shim 43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f" address="unix:///run/containerd/s/334901df7b5980518310ebcfb80cd095a18362ced69d55cdc1487556192c407b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:25:20.246240 sudo[1827]: pam_unix(sudo:session): session closed for user root Jan 23 19:25:20.256923 sshd[1826]: Connection closed by 10.0.0.1 port 34862 Jan 23 19:25:20.254052 sshd-session[1816]: pam_unix(sshd:session): session closed for user core Jan 23 19:25:20.273486 systemd[1]: 
sshd@8-10.0.0.117:22-10.0.0.1:34862.service: Deactivated successfully. Jan 23 19:25:20.291541 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 19:25:20.292215 systemd[1]: session-9.scope: Consumed 11.637s CPU time, 265.4M memory peak. Jan 23 19:25:20.299244 systemd-logind[1561]: Session 9 logged out. Waiting for processes to exit. Jan 23 19:25:20.317200 systemd-logind[1561]: Removed session 9. Jan 23 19:25:20.358666 systemd[1]: Started cri-containerd-6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c.scope - libcontainer container 6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c. Jan 23 19:25:20.560167 systemd[1]: Started cri-containerd-de94837dadb3850385870df23db3e0ee3998fea9c9ee14feb3481a1629e14678.scope - libcontainer container de94837dadb3850385870df23db3e0ee3998fea9c9ee14feb3481a1629e14678. Jan 23 19:25:20.625575 systemd[1]: Started cri-containerd-43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f.scope - libcontainer container 43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f. Jan 23 19:25:21.061476 containerd[1585]: time="2026-01-23T19:25:21.056480075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-hvjtd,Uid:ef6cf0cb-401f-41f1-8fc5-1db19e184d24,Namespace:kube-system,Attempt:0,} returns sandbox id \"43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f\"" Jan 23 19:25:21.089204 kubelet[2859]: E0123 19:25:21.088365 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:21.153460 containerd[1585]: time="2026-01-23T19:25:21.153354179Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 19:25:21.186432 containerd[1585]: time="2026-01-23T19:25:21.185587291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r57jb,Uid:5cbb13b0-35a7-4d1f-baba-b2b78a040c8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\"" Jan 23 19:25:21.193958 kubelet[2859]: E0123 19:25:21.193445 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:21.239218 containerd[1585]: time="2026-01-23T19:25:21.239129600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wr8w2,Uid:037945d5-13ca-4dab-be76-ba146507b44a,Namespace:kube-system,Attempt:0,} returns sandbox id \"de94837dadb3850385870df23db3e0ee3998fea9c9ee14feb3481a1629e14678\"" Jan 23 19:25:21.251591 kubelet[2859]: E0123 19:25:21.251279 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:21.297909 containerd[1585]: time="2026-01-23T19:25:21.297358702Z" level=info msg="CreateContainer within sandbox \"de94837dadb3850385870df23db3e0ee3998fea9c9ee14feb3481a1629e14678\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 19:25:21.359654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2722944974.mount: Deactivated successfully. 
Jan 23 19:25:21.362450 containerd[1585]: time="2026-01-23T19:25:21.361324716Z" level=info msg="Container 58433ac0b3757737623e43a90f99264643f8e0a60dd4f27aa015173f1ff912e3: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:25:21.410540 containerd[1585]: time="2026-01-23T19:25:21.410495148Z" level=info msg="CreateContainer within sandbox \"de94837dadb3850385870df23db3e0ee3998fea9c9ee14feb3481a1629e14678\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"58433ac0b3757737623e43a90f99264643f8e0a60dd4f27aa015173f1ff912e3\"" Jan 23 19:25:21.418335 containerd[1585]: time="2026-01-23T19:25:21.418223992Z" level=info msg="StartContainer for \"58433ac0b3757737623e43a90f99264643f8e0a60dd4f27aa015173f1ff912e3\"" Jan 23 19:25:21.422651 containerd[1585]: time="2026-01-23T19:25:21.422335757Z" level=info msg="connecting to shim 58433ac0b3757737623e43a90f99264643f8e0a60dd4f27aa015173f1ff912e3" address="unix:///run/containerd/s/ff6903736c0fdba5e06e31a581e0152c63977df2bd42bd2311827225c205d119" protocol=ttrpc version=3 Jan 23 19:25:21.535315 systemd[1]: Started cri-containerd-58433ac0b3757737623e43a90f99264643f8e0a60dd4f27aa015173f1ff912e3.scope - libcontainer container 58433ac0b3757737623e43a90f99264643f8e0a60dd4f27aa015173f1ff912e3. Jan 23 19:25:21.872497 containerd[1585]: time="2026-01-23T19:25:21.864713240Z" level=info msg="StartContainer for \"58433ac0b3757737623e43a90f99264643f8e0a60dd4f27aa015173f1ff912e3\" returns successfully" Jan 23 19:25:22.316149 kubelet[2859]: E0123 19:25:22.314463 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:22.485994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4056801420.mount: Deactivated successfully. 
Jan 23 19:25:22.578527 kubelet[2859]: E0123 19:25:22.577457 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:22.578527 kubelet[2859]: E0123 19:25:22.577525 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:22.603645 kubelet[2859]: E0123 19:25:22.600542 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:22.736719 kubelet[2859]: I0123 19:25:22.736541 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wr8w2" podStartSLOduration=3.736515247 podStartE2EDuration="3.736515247s" podCreationTimestamp="2026-01-23 19:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:25:22.730532885 +0000 UTC m=+7.931885595" watchObservedRunningTime="2026-01-23 19:25:22.736515247 +0000 UTC m=+7.937867967" Jan 23 19:25:23.603015 kubelet[2859]: E0123 19:25:23.601375 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:23.603015 kubelet[2859]: E0123 19:25:23.602273 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:25.689323 containerd[1585]: time="2026-01-23T19:25:25.688671306Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:25:25.691874 containerd[1585]: time="2026-01-23T19:25:25.691603105Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 23 19:25:25.694517 containerd[1585]: time="2026-01-23T19:25:25.694360048Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:25:25.695906 containerd[1585]: time="2026-01-23T19:25:25.695623513Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.542209527s" Jan 23 19:25:25.695906 containerd[1585]: time="2026-01-23T19:25:25.695658045Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 23 19:25:25.702353 containerd[1585]: time="2026-01-23T19:25:25.701064353Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" 
Jan 23 19:25:25.714611 containerd[1585]: time="2026-01-23T19:25:25.714008053Z" level=info msg="CreateContainer within sandbox \"43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 19:25:25.785900 containerd[1585]: time="2026-01-23T19:25:25.783361023Z" level=info msg="Container 0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:25:25.829929 containerd[1585]: time="2026-01-23T19:25:25.828936079Z" level=info msg="CreateContainer within sandbox \"43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718\"" Jan 23 19:25:25.830077 containerd[1585]: time="2026-01-23T19:25:25.829977534Z" level=info msg="StartContainer for \"0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718\"" Jan 23 19:25:25.834269 containerd[1585]: time="2026-01-23T19:25:25.834118490Z" level=info msg="connecting to shim 0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718" address="unix:///run/containerd/s/334901df7b5980518310ebcfb80cd095a18362ced69d55cdc1487556192c407b" protocol=ttrpc version=3 Jan 23 19:25:25.922118 systemd[1]: Started cri-containerd-0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718.scope - libcontainer container 0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718. Jan 23 19:25:26.073309 containerd[1585]: time="2026-01-23T19:25:26.073216636Z" level=info msg="StartContainer for \"0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718\" returns successfully" Jan 23 19:25:26.633726 kubelet[2859]: E0123 19:25:26.633503 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:27.637257 kubelet[2859]: E0123 19:25:27.637222 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:25:46.992474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517031839.mount: Deactivated successfully. 
Jan 23 19:26:01.740745 containerd[1585]: time="2026-01-23T19:26:01.739252137Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:26:01.740745 containerd[1585]: time="2026-01-23T19:26:01.744207925Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 23 19:26:01.750003 containerd[1585]: time="2026-01-23T19:26:01.749089527Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:26:01.758198 containerd[1585]: time="2026-01-23T19:26:01.758157480Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 36.056885632s" Jan 23 19:26:01.758314 containerd[1585]: time="2026-01-23T19:26:01.758291893Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 23 19:26:01.786305 containerd[1585]: time="2026-01-23T19:26:01.786125440Z" level=info msg="CreateContainer within sandbox \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 19:26:01.874145 containerd[1585]: time="2026-01-23T19:26:01.868334498Z" level=info msg="Container 22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:26:01.914057 containerd[1585]: time="2026-01-23T19:26:01.913468571Z" level=info msg="CreateContainer within sandbox \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d\"" Jan 23 19:26:01.921050 containerd[1585]: time="2026-01-23T19:26:01.916163848Z" level=info msg="StartContainer for \"22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d\"" Jan 23 19:26:01.922559 containerd[1585]: time="2026-01-23T19:26:01.922526701Z" level=info msg="connecting to shim 22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d" address="unix:///run/containerd/s/779ccccc04d9931a3948dd319dbc18617021f68aef2a857a49dc68a22e7278a8" protocol=ttrpc version=3 Jan 23 19:26:02.081164 systemd[1]: Started cri-containerd-22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d.scope - libcontainer container 22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d. Jan 23 19:26:02.445563 containerd[1585]: time="2026-01-23T19:26:02.439532958Z" level=info msg="StartContainer for \"22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d\" returns successfully" Jan 23 19:26:02.536468 systemd[1]: cri-containerd-22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d.scope: Deactivated successfully. 
Jan 23 19:26:02.563708 containerd[1585]: time="2026-01-23T19:26:02.563092392Z" level=info msg="received container exit event container_id:\"22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d\" id:\"22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d\" pid:3351 exited_at:{seconds:1769196362 nanos:557691158}" Jan 23 19:26:02.764131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d-rootfs.mount: Deactivated successfully. Jan 23 19:26:03.010377 kubelet[2859]: E0123 19:26:03.008359 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:03.117522 kubelet[2859]: I0123 19:26:03.117461 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-hvjtd" podStartSLOduration=39.569532795 podStartE2EDuration="44.117363268s" podCreationTimestamp="2026-01-23 19:25:19 +0000 UTC" firstStartedPulling="2026-01-23 19:25:21.152125346 +0000 UTC m=+6.353478036" lastFinishedPulling="2026-01-23 19:25:25.699955809 +0000 UTC m=+10.901308509" observedRunningTime="2026-01-23 19:25:26.79477121 +0000 UTC m=+11.996123910" watchObservedRunningTime="2026-01-23 19:26:03.117363268 +0000 UTC m=+48.318715998" Jan 23 19:26:04.034544 kubelet[2859]: E0123 19:26:04.034055 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:04.050021 containerd[1585]: time="2026-01-23T19:26:04.049973113Z" level=info msg="CreateContainer within sandbox \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 19:26:04.173136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount359659739.mount: Deactivated successfully. Jan 23 19:26:04.186658 containerd[1585]: time="2026-01-23T19:26:04.185754883Z" level=info msg="Container 11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:26:04.219984 containerd[1585]: time="2026-01-23T19:26:04.219934050Z" level=info msg="CreateContainer within sandbox \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1\"" Jan 23 19:26:04.225022 containerd[1585]: time="2026-01-23T19:26:04.224584860Z" level=info msg="StartContainer for \"11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1\"" Jan 23 19:26:04.226772 containerd[1585]: time="2026-01-23T19:26:04.226329864Z" level=info msg="connecting to shim 11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1" address="unix:///run/containerd/s/779ccccc04d9931a3948dd319dbc18617021f68aef2a857a49dc68a22e7278a8" protocol=ttrpc version=3 Jan 23 19:26:04.331128 systemd[1]: Started cri-containerd-11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1.scope - libcontainer container 11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1. 
Jan 23 19:26:04.554982 containerd[1585]: time="2026-01-23T19:26:04.552972483Z" level=info msg="StartContainer for \"11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1\" returns successfully" Jan 23 19:26:04.629753 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 19:26:04.630396 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:26:04.634297 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 19:26:04.650032 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 19:26:04.664192 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 19:26:04.679141 systemd[1]: cri-containerd-11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1.scope: Deactivated successfully. Jan 23 19:26:04.684662 containerd[1585]: time="2026-01-23T19:26:04.684623026Z" level=info msg="received container exit event container_id:\"11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1\" id:\"11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1\" pid:3393 exited_at:{seconds:1769196364 nanos:684213586}" Jan 23 19:26:04.743045 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:26:05.058995 kubelet[2859]: E0123 19:26:05.057777 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:05.078079 containerd[1585]: time="2026-01-23T19:26:05.075316650Z" level=info msg="CreateContainer within sandbox \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 19:26:05.137107 containerd[1585]: time="2026-01-23T19:26:05.137060549Z" level=info msg="Container cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:26:05.159381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1-rootfs.mount: Deactivated successfully. Jan 23 19:26:05.185304 containerd[1585]: time="2026-01-23T19:26:05.185126369Z" level=info msg="CreateContainer within sandbox \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c\"" Jan 23 19:26:05.189096 containerd[1585]: time="2026-01-23T19:26:05.188670071Z" level=info msg="StartContainer for \"cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c\"" Jan 23 19:26:05.196351 containerd[1585]: time="2026-01-23T19:26:05.196320605Z" level=info msg="connecting to shim cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c" address="unix:///run/containerd/s/779ccccc04d9931a3948dd319dbc18617021f68aef2a857a49dc68a22e7278a8" protocol=ttrpc version=3 Jan 23 19:26:05.295710 systemd[1]: Started cri-containerd-cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c.scope - libcontainer container cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c. 
Jan 23 19:26:05.593744 containerd[1585]: time="2026-01-23T19:26:05.593698846Z" level=info msg="StartContainer for \"cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c\" returns successfully" Jan 23 19:26:05.594374 systemd[1]: cri-containerd-cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c.scope: Deactivated successfully. Jan 23 19:26:05.603109 containerd[1585]: time="2026-01-23T19:26:05.603078682Z" level=info msg="received container exit event container_id:\"cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c\" id:\"cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c\" pid:3440 exited_at:{seconds:1769196365 nanos:601606450}" Jan 23 19:26:05.767416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c-rootfs.mount: Deactivated successfully. Jan 23 19:26:06.092077 kubelet[2859]: E0123 19:26:06.088692 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:06.177105 containerd[1585]: time="2026-01-23T19:26:06.177058372Z" level=info msg="CreateContainer within sandbox \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 19:26:06.268375 containerd[1585]: time="2026-01-23T19:26:06.268332174Z" level=info msg="Container ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:26:06.297564 containerd[1585]: time="2026-01-23T19:26:06.297163603Z" level=info msg="CreateContainer within sandbox \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78\"" Jan 23 19:26:06.303439 containerd[1585]: time="2026-01-23T19:26:06.303117622Z" level=info msg="StartContainer for \"ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78\"" Jan 23 19:26:06.304772 containerd[1585]: time="2026-01-23T19:26:06.304569337Z" level=info msg="connecting to shim ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78" address="unix:///run/containerd/s/779ccccc04d9931a3948dd319dbc18617021f68aef2a857a49dc68a22e7278a8" protocol=ttrpc version=3 Jan 23 19:26:06.417410 systemd[1]: Started cri-containerd-ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78.scope - libcontainer container ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78. Jan 23 19:26:06.611351 systemd[1]: cri-containerd-ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78.scope: Deactivated successfully. Jan 23 19:26:06.624667 containerd[1585]: time="2026-01-23T19:26:06.624443689Z" level=info msg="received container exit event container_id:\"ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78\" id:\"ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78\" pid:3479 exited_at:{seconds:1769196366 nanos:623080000}" Jan 23 19:26:06.626282 containerd[1585]: time="2026-01-23T19:26:06.626143522Z" level=info msg="StartContainer for \"ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78\" returns successfully" Jan 23 19:26:06.736438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78-rootfs.mount: Deactivated successfully. 
Jan 23 19:26:07.128411 kubelet[2859]: E0123 19:26:07.116646 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:07.167651 containerd[1585]: time="2026-01-23T19:26:07.167609235Z" level=info msg="CreateContainer within sandbox \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 19:26:07.281485 containerd[1585]: time="2026-01-23T19:26:07.281362130Z" level=info msg="Container 6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:26:07.306468 containerd[1585]: time="2026-01-23T19:26:07.306280534Z" level=info msg="CreateContainer within sandbox \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16\"" Jan 23 19:26:07.314746 containerd[1585]: time="2026-01-23T19:26:07.314705398Z" level=info msg="StartContainer for \"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16\"" Jan 23 19:26:07.316483 containerd[1585]: time="2026-01-23T19:26:07.316450498Z" level=info msg="connecting to shim 6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16" address="unix:///run/containerd/s/779ccccc04d9931a3948dd319dbc18617021f68aef2a857a49dc68a22e7278a8" protocol=ttrpc version=3 Jan 23 19:26:07.384233 systemd[1]: Started cri-containerd-6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16.scope - libcontainer container 6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16. Jan 23 19:26:07.628033 containerd[1585]: time="2026-01-23T19:26:07.627649646Z" level=info msg="StartContainer for \"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16\" returns successfully" Jan 23 19:26:08.053001 kubelet[2859]: I0123 19:26:08.052714 2859 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 19:26:08.199290 kubelet[2859]: E0123 19:26:08.196051 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:08.296161 systemd[1]: Created slice kubepods-burstable-podea091457_a088_43ec_a8e3_827ce857c75d.slice - libcontainer container kubepods-burstable-podea091457_a088_43ec_a8e3_827ce857c75d.slice. 
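[Editor's note] The recurring kubelet "Nameserver limits exceeded" errors reflect the classic resolv.conf cap of three nameservers: the node lists more than three, so kubelet applies the first three (1.1.1.1 1.0.0.1 8.8.8.8, per the applied line) and warns that the rest were omitted. A rough sketch of that truncation, assuming the cap of 3; the fourth entry in the example input is a placeholder, since the log only shows the three survivors:

    // Truncate a nameserver list to the resolver limit, mirroring the
    // dns.go:154 warning above. The cap of 3 matches the applied line.
    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // classic resolv.conf resolver limit

    func applyNameserverLimit(ns []string) ([]string, bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true // extras are dropped and a warning is logged
    }

    func main() {
        // "8.8.4.4" is a hypothetical fourth entry for illustration only.
        applied, truncated := applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
        if truncated {
            fmt.Println("Nameserver limits exceeded; applied:", strings.Join(applied, " "))
        }
    }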
Jan 23 19:26:08.321193 kubelet[2859]: I0123 19:26:08.319749 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bvkp\" (UniqueName: \"kubernetes.io/projected/ea091457-a088-43ec-a8e3-827ce857c75d-kube-api-access-8bvkp\") pod \"coredns-66bc5c9577-rw2bf\" (UID: \"ea091457-a088-43ec-a8e3-827ce857c75d\") " pod="kube-system/coredns-66bc5c9577-rw2bf" Jan 23 19:26:08.321408 kubelet[2859]: I0123 19:26:08.321386 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea091457-a088-43ec-a8e3-827ce857c75d-config-volume\") pod \"coredns-66bc5c9577-rw2bf\" (UID: \"ea091457-a088-43ec-a8e3-827ce857c75d\") " pod="kube-system/coredns-66bc5c9577-rw2bf" Jan 23 19:26:08.326672 kubelet[2859]: I0123 19:26:08.326473 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r57jb" podStartSLOduration=8.778477956 podStartE2EDuration="49.326457587s" podCreationTimestamp="2026-01-23 19:25:19 +0000 UTC" firstStartedPulling="2026-01-23 19:25:21.215480557 +0000 UTC m=+6.416833247" lastFinishedPulling="2026-01-23 19:26:01.763460187 +0000 UTC m=+46.964812878" observedRunningTime="2026-01-23 19:26:08.309227155 +0000 UTC m=+53.510579845" watchObservedRunningTime="2026-01-23 19:26:08.326457587 +0000 UTC m=+53.527810278" Jan 23 19:26:08.338252 systemd[1]: Created slice kubepods-burstable-pod135155aa_20c0_4417_b625_062f54c808fa.slice - libcontainer container kubepods-burstable-pod135155aa_20c0_4417_b625_062f54c808fa.slice. Jan 23 19:26:08.427949 kubelet[2859]: I0123 19:26:08.425165 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/135155aa-20c0-4417-b625-062f54c808fa-config-volume\") pod \"coredns-66bc5c9577-kwfmj\" (UID: \"135155aa-20c0-4417-b625-062f54c808fa\") " pod="kube-system/coredns-66bc5c9577-kwfmj" Jan 23 19:26:08.427949 kubelet[2859]: I0123 19:26:08.425208 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pktrs\" (UniqueName: \"kubernetes.io/projected/135155aa-20c0-4417-b625-062f54c808fa-kube-api-access-pktrs\") pod \"coredns-66bc5c9577-kwfmj\" (UID: \"135155aa-20c0-4417-b625-062f54c808fa\") " pod="kube-system/coredns-66bc5c9577-kwfmj" Jan 23 19:26:08.643497 kubelet[2859]: E0123 19:26:08.643378 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:08.652087 containerd[1585]: time="2026-01-23T19:26:08.648285732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rw2bf,Uid:ea091457-a088-43ec-a8e3-827ce857c75d,Namespace:kube-system,Attempt:0,}" Jan 23 19:26:08.679155 kubelet[2859]: E0123 19:26:08.677294 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:08.681153 containerd[1585]: time="2026-01-23T19:26:08.679660354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kwfmj,Uid:135155aa-20c0-4417-b625-062f54c808fa,Namespace:kube-system,Attempt:0,}" Jan 23 19:26:09.200174 kubelet[2859]: E0123 19:26:09.200033 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:10.216558 kubelet[2859]: E0123 19:26:10.212542 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:12.237003 systemd-networkd[1384]: cilium_host: Link UP Jan 23 19:26:12.249349 systemd-networkd[1384]: cilium_net: Link UP Jan 23 19:26:12.252935 systemd-networkd[1384]: cilium_net: Gained carrier Jan 23 19:26:12.253308 systemd-networkd[1384]: cilium_host: Gained carrier Jan 23 19:26:12.512195 systemd-networkd[1384]: cilium_net: Gained IPv6LL Jan 23 19:26:12.843497 systemd-networkd[1384]: cilium_host: Gained IPv6LL Jan 23 19:26:12.923145 systemd-networkd[1384]: cilium_vxlan: Link UP Jan 23 19:26:12.923437 systemd-networkd[1384]: cilium_vxlan: Gained carrier Jan 23 19:26:13.582931 kernel: NET: Registered PF_ALG protocol family Jan 23 19:26:14.058274 systemd-networkd[1384]: cilium_vxlan: Gained IPv6LL Jan 23 19:26:15.540077 systemd-networkd[1384]: lxc_health: Link UP Jan 23 19:26:15.570528 systemd-networkd[1384]: lxc_health: Gained carrier Jan 23 19:26:15.844933 systemd-networkd[1384]: lxce773af94fd59: Link UP Jan 23 19:26:15.883389 kubelet[2859]: E0123 19:26:15.883313 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:15.908067 kernel: eth0: renamed from tmp6e839 Jan 23 19:26:15.910746 systemd-networkd[1384]: lxce773af94fd59: Gained carrier Jan 23 19:26:16.103959 kernel: eth0: renamed from tmpceb1b Jan 23 19:26:16.103756 systemd-networkd[1384]: lxc2034f77d692b: Link UP Jan 23 19:26:16.131437 systemd-networkd[1384]: lxc2034f77d692b: Gained carrier Jan 23 19:26:16.266752 kubelet[2859]: E0123 19:26:16.266717 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:16.821126 systemd-networkd[1384]: lxc_health: Gained IPv6LL Jan 23 19:26:17.278322 kubelet[2859]: E0123 19:26:17.276396 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:17.324068 systemd-networkd[1384]: lxce773af94fd59: Gained IPv6LL Jan 23 19:26:18.156552 systemd-networkd[1384]: lxc2034f77d692b: Gained IPv6LL Jan 23 19:26:25.472917 containerd[1585]: time="2026-01-23T19:26:25.472604679Z" level=info msg="connecting to shim 6e83940c6abcbd910c5024770a9561cff53f59cd24b40fb46b78b89b349efb21" address="unix:///run/containerd/s/e5795c5bf255bd39fbeeb2d4d21f3264cff0cf16cc3b86aa5e4631def685d48a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:26:25.540896 containerd[1585]: time="2026-01-23T19:26:25.540670314Z" level=info msg="connecting to shim ceb1bd1810a413b70c22d89bdc9e5ce884509c3ab8b85899f4645a0c5d02d795" address="unix:///run/containerd/s/161e85ebc8fd462ecf0ce7952ed9541d2d0d8ec8e83660d09fc5898553d3c641" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:26:25.586266 systemd[1]: Started cri-containerd-6e83940c6abcbd910c5024770a9561cff53f59cd24b40fb46b78b89b349efb21.scope - libcontainer container 6e83940c6abcbd910c5024770a9561cff53f59cd24b40fb46b78b89b349efb21. 
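[Editor's note] The cilium_host/cilium_net/cilium_vxlan "Link UP" and "Gained carrier" records above are the agent creating devices over netlink, which systemd-networkd then observes. A sketch of the same kind of operation with the vishvananda/netlink package (requires root); the device name mirrors the log, while the VNI and UDP port are placeholders chosen for illustration:

    // Create a VXLAN link and bring it up; raising the link is what
    // systemd-networkd reports as "Gained carrier".
    package main

    import "github.com/vishvananda/netlink"

    func main() {
        vx := &netlink.Vxlan{
            LinkAttrs: netlink.LinkAttrs{Name: "cilium_vxlan"}, // name from the log
            VxlanId:   42,   // placeholder VNI
            Port:      8472, // placeholder; a common VXLAN UDP port
        }
        if err := netlink.LinkAdd(vx); err != nil {
            panic(err)
        }
        if err := netlink.LinkSetUp(vx); err != nil {
            panic(err)
        }
    }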
Jan 23 19:26:25.648223 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 19:26:25.664951 systemd[1]: Started cri-containerd-ceb1bd1810a413b70c22d89bdc9e5ce884509c3ab8b85899f4645a0c5d02d795.scope - libcontainer container ceb1bd1810a413b70c22d89bdc9e5ce884509c3ab8b85899f4645a0c5d02d795. Jan 23 19:26:25.763635 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 19:26:25.821382 containerd[1585]: time="2026-01-23T19:26:25.821121942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rw2bf,Uid:ea091457-a088-43ec-a8e3-827ce857c75d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e83940c6abcbd910c5024770a9561cff53f59cd24b40fb46b78b89b349efb21\"" Jan 23 19:26:25.825905 kubelet[2859]: E0123 19:26:25.825309 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:25.841525 containerd[1585]: time="2026-01-23T19:26:25.841416808Z" level=info msg="CreateContainer within sandbox \"6e83940c6abcbd910c5024770a9561cff53f59cd24b40fb46b78b89b349efb21\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 19:26:25.930564 containerd[1585]: time="2026-01-23T19:26:25.930302456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kwfmj,Uid:135155aa-20c0-4417-b625-062f54c808fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"ceb1bd1810a413b70c22d89bdc9e5ce884509c3ab8b85899f4645a0c5d02d795\"" Jan 23 19:26:25.935033 kubelet[2859]: E0123 19:26:25.934706 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:25.939010 containerd[1585]: time="2026-01-23T19:26:25.938971890Z" level=info msg="Container 63ce20f6dbc4d839735eb7b4bd8b724d49dd22ccce49d9b87039173c2f3eb0e2: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:26:25.952084 containerd[1585]: time="2026-01-23T19:26:25.951716837Z" level=info msg="CreateContainer within sandbox \"ceb1bd1810a413b70c22d89bdc9e5ce884509c3ab8b85899f4645a0c5d02d795\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 19:26:25.987372 containerd[1585]: time="2026-01-23T19:26:25.983980698Z" level=info msg="CreateContainer within sandbox \"6e83940c6abcbd910c5024770a9561cff53f59cd24b40fb46b78b89b349efb21\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"63ce20f6dbc4d839735eb7b4bd8b724d49dd22ccce49d9b87039173c2f3eb0e2\"" Jan 23 19:26:25.987924 containerd[1585]: time="2026-01-23T19:26:25.987767771Z" level=info msg="StartContainer for \"63ce20f6dbc4d839735eb7b4bd8b724d49dd22ccce49d9b87039173c2f3eb0e2\"" Jan 23 19:26:25.991754 containerd[1585]: time="2026-01-23T19:26:25.990650535Z" level=info msg="connecting to shim 63ce20f6dbc4d839735eb7b4bd8b724d49dd22ccce49d9b87039173c2f3eb0e2" address="unix:///run/containerd/s/e5795c5bf255bd39fbeeb2d4d21f3264cff0cf16cc3b86aa5e4631def685d48a" protocol=ttrpc version=3 Jan 23 19:26:26.029996 containerd[1585]: time="2026-01-23T19:26:26.028771149Z" level=info msg="Container 91756265c9da6706fe5128f76027b017684b215fbf635ddc742a99d060b6a993: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:26:26.060008 containerd[1585]: time="2026-01-23T19:26:26.059903382Z" level=info msg="CreateContainer within sandbox 
\"ceb1bd1810a413b70c22d89bdc9e5ce884509c3ab8b85899f4645a0c5d02d795\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"91756265c9da6706fe5128f76027b017684b215fbf635ddc742a99d060b6a993\"" Jan 23 19:26:26.073588 containerd[1585]: time="2026-01-23T19:26:26.070960226Z" level=info msg="StartContainer for \"91756265c9da6706fe5128f76027b017684b215fbf635ddc742a99d060b6a993\"" Jan 23 19:26:26.077986 containerd[1585]: time="2026-01-23T19:26:26.077950334Z" level=info msg="connecting to shim 91756265c9da6706fe5128f76027b017684b215fbf635ddc742a99d060b6a993" address="unix:///run/containerd/s/161e85ebc8fd462ecf0ce7952ed9541d2d0d8ec8e83660d09fc5898553d3c641" protocol=ttrpc version=3 Jan 23 19:26:26.098548 systemd[1]: Started cri-containerd-63ce20f6dbc4d839735eb7b4bd8b724d49dd22ccce49d9b87039173c2f3eb0e2.scope - libcontainer container 63ce20f6dbc4d839735eb7b4bd8b724d49dd22ccce49d9b87039173c2f3eb0e2. Jan 23 19:26:26.173483 systemd[1]: Started cri-containerd-91756265c9da6706fe5128f76027b017684b215fbf635ddc742a99d060b6a993.scope - libcontainer container 91756265c9da6706fe5128f76027b017684b215fbf635ddc742a99d060b6a993. Jan 23 19:26:26.265882 containerd[1585]: time="2026-01-23T19:26:26.259143600Z" level=info msg="StartContainer for \"63ce20f6dbc4d839735eb7b4bd8b724d49dd22ccce49d9b87039173c2f3eb0e2\" returns successfully" Jan 23 19:26:26.363392 kubelet[2859]: E0123 19:26:26.362978 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:26.408659 containerd[1585]: time="2026-01-23T19:26:26.408160084Z" level=info msg="StartContainer for \"91756265c9da6706fe5128f76027b017684b215fbf635ddc742a99d060b6a993\" returns successfully" Jan 23 19:26:26.435994 kubelet[2859]: I0123 19:26:26.435715 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rw2bf" podStartSLOduration=67.435692929 podStartE2EDuration="1m7.435692929s" podCreationTimestamp="2026-01-23 19:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:26:26.426933297 +0000 UTC m=+71.628286037" watchObservedRunningTime="2026-01-23 19:26:26.435692929 +0000 UTC m=+71.637045639" Jan 23 19:26:26.444020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4064648000.mount: Deactivated successfully. 
Jan 23 19:26:27.402545 kubelet[2859]: E0123 19:26:27.396360 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:27.402545 kubelet[2859]: E0123 19:26:27.397415 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:27.564440 kubelet[2859]: I0123 19:26:27.562204 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kwfmj" podStartSLOduration=68.562185382 podStartE2EDuration="1m8.562185382s" podCreationTimestamp="2026-01-23 19:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:26:27.561545918 +0000 UTC m=+72.762898618" watchObservedRunningTime="2026-01-23 19:26:27.562185382 +0000 UTC m=+72.763538073" Jan 23 19:26:28.412193 kubelet[2859]: E0123 19:26:28.405179 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:28.417455 kubelet[2859]: E0123 19:26:28.413131 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:35.388381 kubelet[2859]: E0123 19:26:35.387436 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:38.391295 kubelet[2859]: E0123 19:26:38.389661 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:41.389914 kubelet[2859]: E0123 19:26:41.389266 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:50.390180 kubelet[2859]: E0123 19:26:50.387358 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:26:58.398002 kubelet[2859]: E0123 19:26:58.397633 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:27:31.387341 kubelet[2859]: E0123 19:27:31.387301 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:27:41.408946 kubelet[2859]: E0123 19:27:41.408687 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:27:48.393014 kubelet[2859]: E0123 19:27:48.390561 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:27:59.400547 kubelet[2859]: E0123 19:27:59.398470 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:03.391233 kubelet[2859]: E0123 19:28:03.390999 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:05.392597 kubelet[2859]: E0123 19:28:05.392028 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:11.401685 kubelet[2859]: E0123 19:28:11.401648 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:17.395926 kubelet[2859]: E0123 19:28:17.395310 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:17.793606 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:53358.service - OpenSSH per-connection server daemon (10.0.0.1:53358). Jan 23 19:28:18.163009 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 53358 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:28:18.168512 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:28:18.197589 systemd-logind[1561]: New session 10 of user core. Jan 23 19:28:18.203143 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 19:28:19.206617 sshd[4211]: Connection closed by 10.0.0.1 port 53358 Jan 23 19:28:19.204657 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Jan 23 19:28:19.242509 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:53358.service: Deactivated successfully. Jan 23 19:28:19.253711 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 19:28:19.264475 systemd-logind[1561]: Session 10 logged out. Waiting for processes to exit. Jan 23 19:28:19.271073 systemd-logind[1561]: Removed session 10. Jan 23 19:28:24.278534 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:53370.service - OpenSSH per-connection server daemon (10.0.0.1:53370). Jan 23 19:28:24.533258 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 53370 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:28:24.539589 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:28:24.556646 systemd-logind[1561]: New session 11 of user core. Jan 23 19:28:24.574201 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 19:28:25.090705 sshd[4231]: Connection closed by 10.0.0.1 port 53370 Jan 23 19:28:25.089986 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Jan 23 19:28:25.128661 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:53370.service: Deactivated successfully. Jan 23 19:28:25.171378 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 19:28:25.199095 systemd-logind[1561]: Session 11 logged out. Waiting for processes to exit. Jan 23 19:28:25.201650 systemd-logind[1561]: Removed session 11. Jan 23 19:28:30.137963 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:60244.service - OpenSSH per-connection server daemon (10.0.0.1:60244). 
Jan 23 19:28:30.427205 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 60244 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:28:30.431914 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:28:30.463983 systemd-logind[1561]: New session 12 of user core. Jan 23 19:28:30.484658 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 19:28:31.092058 sshd[4248]: Connection closed by 10.0.0.1 port 60244 Jan 23 19:28:31.094557 sshd-session[4245]: pam_unix(sshd:session): session closed for user core Jan 23 19:28:31.117335 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:60244.service: Deactivated successfully. Jan 23 19:28:31.130870 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 19:28:31.139481 systemd-logind[1561]: Session 12 logged out. Waiting for processes to exit. Jan 23 19:28:31.156330 systemd-logind[1561]: Removed session 12. Jan 23 19:28:36.171604 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:56584.service - OpenSSH per-connection server daemon (10.0.0.1:56584). Jan 23 19:28:36.479923 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 56584 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:28:36.485300 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:28:36.508555 systemd-logind[1561]: New session 13 of user core. Jan 23 19:28:36.519431 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 19:28:36.941459 sshd[4267]: Connection closed by 10.0.0.1 port 56584 Jan 23 19:28:36.943151 sshd-session[4264]: pam_unix(sshd:session): session closed for user core Jan 23 19:28:36.983938 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:56584.service: Deactivated successfully. Jan 23 19:28:36.991583 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 19:28:37.014287 systemd-logind[1561]: Session 13 logged out. Waiting for processes to exit. Jan 23 19:28:37.029109 systemd-logind[1561]: Removed session 13. Jan 23 19:28:39.387590 kubelet[2859]: E0123 19:28:39.387040 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:41.992362 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:56598.service - OpenSSH per-connection server daemon (10.0.0.1:56598). Jan 23 19:28:42.185960 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 56598 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:28:42.189304 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:28:42.219057 systemd-logind[1561]: New session 14 of user core. Jan 23 19:28:42.243537 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 19:28:42.680520 sshd[4288]: Connection closed by 10.0.0.1 port 56598 Jan 23 19:28:42.680493 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Jan 23 19:28:42.696569 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:56598.service: Deactivated successfully. Jan 23 19:28:42.706372 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 19:28:42.715122 systemd-logind[1561]: Session 14 logged out. Waiting for processes to exit. Jan 23 19:28:42.728644 systemd-logind[1561]: Removed session 14. 
Jan 23 19:28:47.392056 kubelet[2859]: E0123 19:28:47.391942 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:28:47.723013 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:59662.service - OpenSSH per-connection server daemon (10.0.0.1:59662). Jan 23 19:28:48.162416 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 59662 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:28:48.168701 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:28:48.199393 systemd-logind[1561]: New session 15 of user core. Jan 23 19:28:48.245932 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 19:28:48.751885 sshd[4305]: Connection closed by 10.0.0.1 port 59662 Jan 23 19:28:48.755401 sshd-session[4302]: pam_unix(sshd:session): session closed for user core Jan 23 19:28:48.762997 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:59662.service: Deactivated successfully. Jan 23 19:28:48.769529 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 19:28:48.779560 systemd-logind[1561]: Session 15 logged out. Waiting for processes to exit. Jan 23 19:28:48.784146 systemd-logind[1561]: Removed session 15. Jan 23 19:28:53.808349 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:59668.service - OpenSSH per-connection server daemon (10.0.0.1:59668). Jan 23 19:28:54.047956 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 59668 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:28:54.057737 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:28:54.093412 systemd-logind[1561]: New session 16 of user core. Jan 23 19:28:54.120738 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 19:28:54.702543 sshd[4324]: Connection closed by 10.0.0.1 port 59668 Jan 23 19:28:54.705219 sshd-session[4321]: pam_unix(sshd:session): session closed for user core Jan 23 19:28:54.717615 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:59668.service: Deactivated successfully. Jan 23 19:28:54.732226 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 19:28:54.736114 systemd-logind[1561]: Session 16 logged out. Waiting for processes to exit. Jan 23 19:28:54.744378 systemd-logind[1561]: Removed session 16. Jan 23 19:28:59.760201 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:40058.service - OpenSSH per-connection server daemon (10.0.0.1:40058). Jan 23 19:28:59.906936 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 40058 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:28:59.918385 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:28:59.941634 systemd-logind[1561]: New session 17 of user core. Jan 23 19:28:59.967482 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 19:29:00.501051 sshd[4342]: Connection closed by 10.0.0.1 port 40058 Jan 23 19:29:00.502149 sshd-session[4339]: pam_unix(sshd:session): session closed for user core Jan 23 19:29:00.536714 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:40058.service: Deactivated successfully. Jan 23 19:29:00.554440 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 19:29:00.558038 systemd-logind[1561]: Session 17 logged out. Waiting for processes to exit. Jan 23 19:29:00.566043 systemd-logind[1561]: Removed session 17. 
Jan 23 19:29:04.386910 kubelet[2859]: E0123 19:29:04.386349 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:05.553309 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:46732.service - OpenSSH per-connection server daemon (10.0.0.1:46732). Jan 23 19:29:05.711409 sshd[4356]: Accepted publickey for core from 10.0.0.1 port 46732 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:29:05.720539 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:29:05.746891 systemd-logind[1561]: New session 18 of user core. Jan 23 19:29:05.758596 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 19:29:06.237612 sshd[4359]: Connection closed by 10.0.0.1 port 46732 Jan 23 19:29:06.239180 sshd-session[4356]: pam_unix(sshd:session): session closed for user core Jan 23 19:29:06.257644 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:46732.service: Deactivated successfully. Jan 23 19:29:06.261272 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 19:29:06.270453 systemd-logind[1561]: Session 18 logged out. Waiting for processes to exit. Jan 23 19:29:06.276334 systemd-logind[1561]: Removed session 18. Jan 23 19:29:07.457260 kubelet[2859]: E0123 19:29:07.456390 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:08.387065 kubelet[2859]: E0123 19:29:08.385356 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:11.246136 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:46734.service - OpenSSH per-connection server daemon (10.0.0.1:46734). Jan 23 19:29:11.453336 sshd[4373]: Accepted publickey for core from 10.0.0.1 port 46734 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:29:11.460984 sshd-session[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:29:11.487968 systemd-logind[1561]: New session 19 of user core. Jan 23 19:29:11.503548 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 19:29:11.939534 sshd[4376]: Connection closed by 10.0.0.1 port 46734 Jan 23 19:29:11.940582 sshd-session[4373]: pam_unix(sshd:session): session closed for user core Jan 23 19:29:11.963505 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:46734.service: Deactivated successfully. Jan 23 19:29:11.970713 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 19:29:11.987431 systemd-logind[1561]: Session 19 logged out. Waiting for processes to exit. Jan 23 19:29:12.006296 systemd-logind[1561]: Removed session 19. Jan 23 19:29:17.004489 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:51156.service - OpenSSH per-connection server daemon (10.0.0.1:51156). Jan 23 19:29:17.295561 sshd[4393]: Accepted publickey for core from 10.0.0.1 port 51156 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:29:17.295476 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:29:17.315950 systemd-logind[1561]: New session 20 of user core. Jan 23 19:29:17.345074 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 23 19:29:17.674562 sshd[4396]: Connection closed by 10.0.0.1 port 51156 Jan 23 19:29:17.675964 sshd-session[4393]: pam_unix(sshd:session): session closed for user core Jan 23 19:29:17.685536 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:51156.service: Deactivated successfully. Jan 23 19:29:17.689460 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 19:29:17.694625 systemd-logind[1561]: Session 20 logged out. Waiting for processes to exit. Jan 23 19:29:17.697416 systemd-logind[1561]: Removed session 20. Jan 23 19:29:22.711772 systemd[1]: Started sshd@20-10.0.0.117:22-10.0.0.1:51170.service - OpenSSH per-connection server daemon (10.0.0.1:51170). Jan 23 19:29:22.933893 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 51170 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:29:22.942380 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:29:22.997413 systemd-logind[1561]: New session 21 of user core. Jan 23 19:29:23.015259 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 19:29:23.449362 sshd[4416]: Connection closed by 10.0.0.1 port 51170 Jan 23 19:29:23.448982 sshd-session[4411]: pam_unix(sshd:session): session closed for user core Jan 23 19:29:23.472277 systemd[1]: sshd@20-10.0.0.117:22-10.0.0.1:51170.service: Deactivated successfully. Jan 23 19:29:23.479513 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 19:29:23.487933 systemd-logind[1561]: Session 21 logged out. Waiting for processes to exit. Jan 23 19:29:23.494924 systemd-logind[1561]: Removed session 21. Jan 23 19:29:23.737144 update_engine[1562]: I20260123 19:29:23.736402 1562 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 23 19:29:23.737144 update_engine[1562]: I20260123 19:29:23.736523 1562 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 23 19:29:23.737144 update_engine[1562]: I20260123 19:29:23.737030 1562 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 23 19:29:23.739087 update_engine[1562]: I20260123 19:29:23.737926 1562 omaha_request_params.cc:62] Current group set to stable Jan 23 19:29:23.742277 update_engine[1562]: I20260123 19:29:23.741660 1562 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 23 19:29:23.742277 update_engine[1562]: I20260123 19:29:23.741693 1562 update_attempter.cc:643] Scheduling an action processor start. 
Jan 23 19:29:23.742277 update_engine[1562]: I20260123 19:29:23.741719 1562 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 19:29:23.742277 update_engine[1562]: I20260123 19:29:23.741767 1562 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 23 19:29:23.742277 update_engine[1562]: I20260123 19:29:23.741982 1562 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 19:29:23.742277 update_engine[1562]: I20260123 19:29:23.741997 1562 omaha_request_action.cc:272] Request: Jan 23 19:29:23.742277 update_engine[1562]: [request XML body not captured] Jan 23 19:29:23.742277 update_engine[1562]: I20260123 19:29:23.742010 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:29:23.752191 locksmithd[1613]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 23 19:29:23.764354 update_engine[1562]: I20260123 19:29:23.764148 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:29:23.765705 update_engine[1562]: I20260123 19:29:23.765455 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 19:29:23.790359 update_engine[1562]: E20260123 19:29:23.789188 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:29:23.790359 update_engine[1562]: I20260123 19:29:23.789387 1562 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 23 19:29:24.391472 kubelet[2859]: E0123 19:29:24.391095 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:28.502231 systemd[1]: Started sshd@21-10.0.0.117:22-10.0.0.1:56872.service - OpenSSH per-connection server daemon (10.0.0.1:56872). Jan 23 19:29:28.681361 sshd[4430]: Accepted publickey for core from 10.0.0.1 port 56872 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:29:28.691209 sshd-session[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:29:28.719668 systemd-logind[1561]: New session 22 of user core. Jan 23 19:29:28.727471 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 19:29:29.108594 sshd[4433]: Connection closed by 10.0.0.1 port 56872 Jan 23 19:29:29.109632 sshd-session[4430]: pam_unix(sshd:session): session closed for user core Jan 23 19:29:29.128906 systemd[1]: sshd@21-10.0.0.117:22-10.0.0.1:56872.service: Deactivated successfully. Jan 23 19:29:29.137895 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 19:29:29.162374 systemd-logind[1561]: Session 22 logged out. Waiting for processes to exit. Jan 23 19:29:29.165132 systemd-logind[1561]: Removed session 22. Jan 23 19:29:33.742019 update_engine[1562]: I20260123 19:29:33.740687 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:29:33.742019 update_engine[1562]: I20260123 19:29:33.741072 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:29:33.742019 update_engine[1562]: I20260123 19:29:33.741688 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 23 19:29:33.763923 update_engine[1562]: E20260123 19:29:33.763288 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:29:33.763923 update_engine[1562]: I20260123 19:29:33.763494 1562 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 23 19:29:34.154192 systemd[1]: Started sshd@22-10.0.0.117:22-10.0.0.1:56878.service - OpenSSH per-connection server daemon (10.0.0.1:56878). Jan 23 19:29:34.383033 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 56878 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:29:34.389975 sshd-session[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:29:34.419425 systemd-logind[1561]: New session 23 of user core. Jan 23 19:29:34.455248 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 19:29:34.907128 sshd[4450]: Connection closed by 10.0.0.1 port 56878 Jan 23 19:29:34.907676 sshd-session[4447]: pam_unix(sshd:session): session closed for user core Jan 23 19:29:34.943316 systemd[1]: sshd@22-10.0.0.117:22-10.0.0.1:56878.service: Deactivated successfully. Jan 23 19:29:34.956328 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 19:29:34.959949 systemd-logind[1561]: Session 23 logged out. Waiting for processes to exit. Jan 23 19:29:34.970682 systemd-logind[1561]: Removed session 23. Jan 23 19:29:36.389887 kubelet[2859]: E0123 19:29:36.385989 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:39.953683 systemd[1]: Started sshd@23-10.0.0.117:22-10.0.0.1:53568.service - OpenSSH per-connection server daemon (10.0.0.1:53568). Jan 23 19:29:40.248027 sshd[4464]: Accepted publickey for core from 10.0.0.1 port 53568 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:29:40.253276 sshd-session[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:29:40.300538 systemd-logind[1561]: New session 24 of user core. Jan 23 19:29:40.328894 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 19:29:40.951454 sshd[4467]: Connection closed by 10.0.0.1 port 53568 Jan 23 19:29:40.955367 sshd-session[4464]: pam_unix(sshd:session): session closed for user core Jan 23 19:29:40.998045 systemd[1]: sshd@23-10.0.0.117:22-10.0.0.1:53568.service: Deactivated successfully. Jan 23 19:29:41.039301 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 19:29:41.049350 systemd-logind[1561]: Session 24 logged out. Waiting for processes to exit. Jan 23 19:29:41.062162 systemd-logind[1561]: Removed session 24. Jan 23 19:29:41.405279 kubelet[2859]: E0123 19:29:41.401249 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:43.739887 update_engine[1562]: I20260123 19:29:43.739498 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:29:43.739887 update_engine[1562]: I20260123 19:29:43.739670 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:29:43.741484 update_engine[1562]: I20260123 19:29:43.740511 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 19:29:43.766124 update_engine[1562]: E20260123 19:29:43.761284 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:29:43.766124 update_engine[1562]: I20260123 19:29:43.762725 1562 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 23 19:29:45.999492 systemd[1]: Started sshd@24-10.0.0.117:22-10.0.0.1:40278.service - OpenSSH per-connection server daemon (10.0.0.1:40278). Jan 23 19:29:46.219697 sshd[4481]: Accepted publickey for core from 10.0.0.1 port 40278 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:29:46.219188 sshd-session[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:29:46.247219 systemd-logind[1561]: New session 25 of user core. Jan 23 19:29:46.268744 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 19:29:46.799415 sshd[4484]: Connection closed by 10.0.0.1 port 40278 Jan 23 19:29:46.797496 sshd-session[4481]: pam_unix(sshd:session): session closed for user core Jan 23 19:29:46.820993 systemd[1]: sshd@24-10.0.0.117:22-10.0.0.1:40278.service: Deactivated successfully. Jan 23 19:29:46.830578 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 19:29:46.838340 systemd-logind[1561]: Session 25 logged out. Waiting for processes to exit. Jan 23 19:29:46.850643 systemd-logind[1561]: Removed session 25. Jan 23 19:29:51.858353 systemd[1]: Started sshd@25-10.0.0.117:22-10.0.0.1:40294.service - OpenSSH per-connection server daemon (10.0.0.1:40294). Jan 23 19:29:52.180652 sshd[4498]: Accepted publickey for core from 10.0.0.1 port 40294 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:29:52.188540 sshd-session[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:29:52.220768 systemd-logind[1561]: New session 26 of user core. Jan 23 19:29:52.235156 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 19:29:52.837471 sshd[4501]: Connection closed by 10.0.0.1 port 40294 Jan 23 19:29:52.839345 sshd-session[4498]: pam_unix(sshd:session): session closed for user core Jan 23 19:29:52.856653 systemd[1]: sshd@25-10.0.0.117:22-10.0.0.1:40294.service: Deactivated successfully. Jan 23 19:29:52.862691 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 19:29:52.874327 systemd-logind[1561]: Session 26 logged out. Waiting for processes to exit. Jan 23 19:29:52.877392 systemd-logind[1561]: Removed session 26. Jan 23 19:29:53.743659 update_engine[1562]: I20260123 19:29:53.742408 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:29:53.743659 update_engine[1562]: I20260123 19:29:53.742578 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:29:53.743659 update_engine[1562]: I20260123 19:29:53.743324 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 19:29:53.762944 update_engine[1562]: E20260123 19:29:53.761009 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:29:53.762944 update_engine[1562]: I20260123 19:29:53.761234 1562 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 19:29:53.762944 update_engine[1562]: I20260123 19:29:53.761253 1562 omaha_request_action.cc:617] Omaha request response: Jan 23 19:29:53.762944 update_engine[1562]: E20260123 19:29:53.761371 1562 omaha_request_action.cc:636] Omaha request network transfer failed. 
Jan 23 19:29:53.762944 update_engine[1562]: I20260123 19:29:53.761399 1562 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 23 19:29:53.762944 update_engine[1562]: I20260123 19:29:53.761409 1562 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 19:29:53.762944 update_engine[1562]: I20260123 19:29:53.761418 1562 update_attempter.cc:306] Processing Done. Jan 23 19:29:53.762944 update_engine[1562]: E20260123 19:29:53.761442 1562 update_attempter.cc:619] Update failed. Jan 23 19:29:53.762944 update_engine[1562]: I20260123 19:29:53.761452 1562 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 23 19:29:53.762944 update_engine[1562]: I20260123 19:29:53.761461 1562 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 23 19:29:53.762944 update_engine[1562]: I20260123 19:29:53.761469 1562 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 23 19:29:53.762944 update_engine[1562]: I20260123 19:29:53.761615 1562 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 19:29:53.762944 update_engine[1562]: I20260123 19:29:53.761648 1562 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 19:29:53.762944 update_engine[1562]: I20260123 19:29:53.761659 1562 omaha_request_action.cc:272] Request: Jan 23 19:29:53.762944 update_engine[1562]: [request XML body not captured] Jan 23 19:29:53.763640 update_engine[1562]: I20260123 19:29:53.761669 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:29:53.763640 update_engine[1562]: I20260123 19:29:53.761699 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:29:53.763640 update_engine[1562]: I20260123 19:29:53.762398 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 19:29:53.771084 locksmithd[1613]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 23 19:29:53.798075 update_engine[1562]: E20260123 19:29:53.797622 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:29:53.798075 update_engine[1562]: I20260123 19:29:53.797940 1562 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 19:29:53.798075 update_engine[1562]: I20260123 19:29:53.797956 1562 omaha_request_action.cc:617] Omaha request response: Jan 23 19:29:53.798075 update_engine[1562]: I20260123 19:29:53.797977 1562 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 19:29:53.798075 update_engine[1562]: I20260123 19:29:53.797987 1562 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 19:29:53.798075 update_engine[1562]: I20260123 19:29:53.797996 1562 update_attempter.cc:306] Processing Done. Jan 23 19:29:53.798075 update_engine[1562]: I20260123 19:29:53.798008 1562 update_attempter.cc:310] Error event sent.
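[Editor's note] The update_engine failure above is self-inflicted by configuration: the Omaha endpoint is literally the string "disabled", so name resolution can never succeed; the fetcher retries three times roughly ten seconds apart, then converts the failure to error code 2000 (kActionCodeOmahaErrorInHTTPResponse), sends an error event, and schedules the next check. A sketch of that bounded-retry shape; the retry cap and interval are read off the log timestamps rather than taken from update_engine's source:

    // Fetch/retry pattern visible in the libcurl_http_fetcher records.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func fetchOnce(host string) error {
        // Stand-in for the libcurl transfer; "disabled" never resolves.
        return errors.New("Could not resolve host: " + host)
    }

    func main() {
        const maxRetries = 3                   // "retry 1".."retry 3" in the log
        const retryInterval = 10 * time.Second // attempts above are ~10s apart

        for attempt := 0; ; attempt++ {
            err := fetchOnce("disabled")
            if err == nil {
                return
            }
            if attempt == maxRetries {
                // Fourth consecutive failure: give up and report upward.
                fmt.Println("Transfer resulted in an error:", err)
                fmt.Println("Omaha request network transfer failed.")
                return
            }
            fmt.Printf("No HTTP response, retry %d\n", attempt+1)
            time.Sleep(retryInterval)
        }
    }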
Jan 23 19:29:53.798075 update_engine[1562]: I20260123 19:29:53.798023 1562 update_check_scheduler.cc:74] Next update check in 41m3s Jan 23 19:29:53.799363 locksmithd[1613]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 23 19:29:55.417455 kubelet[2859]: E0123 19:29:55.403774 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:29:57.887954 systemd[1]: Started sshd@26-10.0.0.117:22-10.0.0.1:48674.service - OpenSSH per-connection server daemon (10.0.0.1:48674). Jan 23 19:29:58.213593 sshd[4518]: Accepted publickey for core from 10.0.0.1 port 48674 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:29:58.237502 sshd-session[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:29:58.289750 systemd-logind[1561]: New session 27 of user core. Jan 23 19:29:58.316646 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 23 19:29:59.024658 sshd[4521]: Connection closed by 10.0.0.1 port 48674 Jan 23 19:29:59.026149 sshd-session[4518]: pam_unix(sshd:session): session closed for user core Jan 23 19:29:59.055238 systemd[1]: sshd@26-10.0.0.117:22-10.0.0.1:48674.service: Deactivated successfully. Jan 23 19:29:59.073627 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 19:29:59.092464 systemd-logind[1561]: Session 27 logged out. Waiting for processes to exit. Jan 23 19:29:59.096064 systemd-logind[1561]: Removed session 27. Jan 23 19:30:04.054924 systemd[1]: Started sshd@27-10.0.0.117:22-10.0.0.1:48688.service - OpenSSH per-connection server daemon (10.0.0.1:48688). Jan 23 19:30:04.247253 sshd[4536]: Accepted publickey for core from 10.0.0.1 port 48688 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:30:04.252335 sshd-session[4536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:30:04.276311 systemd-logind[1561]: New session 28 of user core. Jan 23 19:30:04.288135 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 23 19:30:04.378102 containerd[1585]: time="2026-01-23T19:30:04.338173760Z" level=warning msg="container event discarded" container=47652c7188e07861918339d15a60b09405844521ce3bef96fe64d508aa75bfc5 type=CONTAINER_CREATED_EVENT Jan 23 19:30:04.378102 containerd[1585]: time="2026-01-23T19:30:04.377980375Z" level=warning msg="container event discarded" container=47652c7188e07861918339d15a60b09405844521ce3bef96fe64d508aa75bfc5 type=CONTAINER_STARTED_EVENT Jan 23 19:30:04.423102 containerd[1585]: time="2026-01-23T19:30:04.422983615Z" level=warning msg="container event discarded" container=96d30fa6e418efe951b4352cb0a9c25c8d460cc4ea4c9d141a28996b5783e900 type=CONTAINER_CREATED_EVENT Jan 23 19:30:04.423102 containerd[1585]: time="2026-01-23T19:30:04.423042125Z" level=warning msg="container event discarded" container=96d30fa6e418efe951b4352cb0a9c25c8d460cc4ea4c9d141a28996b5783e900 type=CONTAINER_STARTED_EVENT Jan 23 19:30:04.423102 containerd[1585]: time="2026-01-23T19:30:04.423055630Z" level=warning msg="container event discarded" container=ce5c6b24fd72e0a756c1889a7591981f067303eea5b4f6be7c4e97fdc4b8b797 type=CONTAINER_CREATED_EVENT Jan 23 19:30:04.423102 containerd[1585]: time="2026-01-23T19:30:04.423065809Z" level=warning msg="container event discarded" container=ce5c6b24fd72e0a756c1889a7591981f067303eea5b4f6be7c4e97fdc4b8b797 type=CONTAINER_STARTED_EVENT Jan 23 19:30:04.471519 containerd[1585]: time="2026-01-23T19:30:04.471353315Z" level=warning msg="container event discarded" container=5d4c5ffb0e52affdf79b498caf2d13a3e44dfabbf7606173dbf87ece8a9e4072 type=CONTAINER_CREATED_EVENT Jan 23 19:30:04.471519 containerd[1585]: time="2026-01-23T19:30:04.471409520Z" level=warning msg="container event discarded" container=53823ee4a28317fa80293ab002d6038c8e01834d580f1cf571ba1033eda7cd9d type=CONTAINER_CREATED_EVENT Jan 23 19:30:04.489959 containerd[1585]: time="2026-01-23T19:30:04.488233473Z" level=warning msg="container event discarded" container=60019795f89cd57e81048dbde9dbd6331bddb25647dda5d398acb46b27aae449 type=CONTAINER_CREATED_EVENT Jan 23 19:30:04.690270 containerd[1585]: time="2026-01-23T19:30:04.690120938Z" level=warning msg="container event discarded" container=53823ee4a28317fa80293ab002d6038c8e01834d580f1cf571ba1033eda7cd9d type=CONTAINER_STARTED_EVENT Jan 23 19:30:04.690270 containerd[1585]: time="2026-01-23T19:30:04.690165923Z" level=warning msg="container event discarded" container=5d4c5ffb0e52affdf79b498caf2d13a3e44dfabbf7606173dbf87ece8a9e4072 type=CONTAINER_STARTED_EVENT Jan 23 19:30:04.728147 containerd[1585]: time="2026-01-23T19:30:04.728077464Z" level=warning msg="container event discarded" container=60019795f89cd57e81048dbde9dbd6331bddb25647dda5d398acb46b27aae449 type=CONTAINER_STARTED_EVENT Jan 23 19:30:04.756746 sshd[4539]: Connection closed by 10.0.0.1 port 48688 Jan 23 19:30:04.762088 sshd-session[4536]: pam_unix(sshd:session): session closed for user core Jan 23 19:30:04.808438 systemd[1]: Started sshd@28-10.0.0.117:22-10.0.0.1:49030.service - OpenSSH per-connection server daemon (10.0.0.1:49030). Jan 23 19:30:04.809318 systemd[1]: sshd@27-10.0.0.117:22-10.0.0.1:48688.service: Deactivated successfully. Jan 23 19:30:04.821179 systemd[1]: session-28.scope: Deactivated successfully. Jan 23 19:30:04.826279 systemd-logind[1561]: Session 28 logged out. Waiting for processes to exit. Jan 23 19:30:04.848238 systemd-logind[1561]: Removed session 28. 
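[Editor's note] The bursts of "container event discarded" warnings above appear to be containerd replaying lifecycle events for containers created back at 19:25-19:26 with nothing left to consume them. For illustration only, a minimal consumer of containerd's event stream using the v1 Go client (stock socket path assumed); a subscriber like this receives the same envelopes as they are published:

    // Subscribe to containerd's event stream and print each envelope.
    package main

    import (
        "context"
        "fmt"

        containerd "github.com/containerd/containerd"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        envelopes, errs := client.Subscribe(context.Background())
        for {
            select {
            case e := <-envelopes:
                // Topics such as /containers/create and /tasks/start carry
                // the same lifecycle facts as the discarded events above.
                fmt.Println(e.Timestamp, e.Namespace, e.Topic)
            case err := <-errs:
                panic(err)
            }
        }
    }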
Jan 23 19:30:05.081091 sshd[4550]: Accepted publickey for core from 10.0.0.1 port 49030 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:30:05.085340 sshd-session[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:30:05.127643 systemd-logind[1561]: New session 29 of user core. Jan 23 19:30:05.136443 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 23 19:30:06.001027 sshd[4556]: Connection closed by 10.0.0.1 port 49030 Jan 23 19:30:06.005413 sshd-session[4550]: pam_unix(sshd:session): session closed for user core Jan 23 19:30:06.051756 systemd[1]: sshd@28-10.0.0.117:22-10.0.0.1:49030.service: Deactivated successfully. Jan 23 19:30:06.066671 systemd[1]: session-29.scope: Deactivated successfully. Jan 23 19:30:06.076380 systemd-logind[1561]: Session 29 logged out. Waiting for processes to exit. Jan 23 19:30:06.087931 systemd[1]: Started sshd@29-10.0.0.117:22-10.0.0.1:49042.service - OpenSSH per-connection server daemon (10.0.0.1:49042). Jan 23 19:30:06.108348 systemd-logind[1561]: Removed session 29. Jan 23 19:30:06.336480 sshd[4568]: Accepted publickey for core from 10.0.0.1 port 49042 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:30:06.345249 sshd-session[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:30:06.382411 systemd-logind[1561]: New session 30 of user core. Jan 23 19:30:06.393332 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 23 19:30:06.794933 sshd[4571]: Connection closed by 10.0.0.1 port 49042 Jan 23 19:30:06.793644 sshd-session[4568]: pam_unix(sshd:session): session closed for user core Jan 23 19:30:06.814693 systemd[1]: sshd@29-10.0.0.117:22-10.0.0.1:49042.service: Deactivated successfully. Jan 23 19:30:06.825022 systemd[1]: session-30.scope: Deactivated successfully. Jan 23 19:30:06.839040 systemd-logind[1561]: Session 30 logged out. Waiting for processes to exit. Jan 23 19:30:06.847268 systemd-logind[1561]: Removed session 30. Jan 23 19:30:11.829525 systemd[1]: Started sshd@30-10.0.0.117:22-10.0.0.1:49056.service - OpenSSH per-connection server daemon (10.0.0.1:49056). Jan 23 19:30:11.986138 sshd[4584]: Accepted publickey for core from 10.0.0.1 port 49056 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:30:11.983591 sshd-session[4584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:30:11.996924 systemd-logind[1561]: New session 31 of user core. Jan 23 19:30:12.013400 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 23 19:30:12.473646 sshd[4587]: Connection closed by 10.0.0.1 port 49056 Jan 23 19:30:12.474111 sshd-session[4584]: pam_unix(sshd:session): session closed for user core Jan 23 19:30:12.498505 systemd[1]: sshd@30-10.0.0.117:22-10.0.0.1:49056.service: Deactivated successfully. Jan 23 19:30:12.509397 systemd[1]: session-31.scope: Deactivated successfully. Jan 23 19:30:12.514636 systemd-logind[1561]: Session 31 logged out. Waiting for processes to exit. Jan 23 19:30:12.519246 systemd-logind[1561]: Removed session 31. 
Jan 23 19:30:13.387296 kubelet[2859]: E0123 19:30:13.387254 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:16.389205 kubelet[2859]: E0123 19:30:16.389106 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:17.504538 systemd[1]: Started sshd@31-10.0.0.117:22-10.0.0.1:33294.service - OpenSSH per-connection server daemon (10.0.0.1:33294). Jan 23 19:30:17.667776 sshd[4603]: Accepted publickey for core from 10.0.0.1 port 33294 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:30:17.674215 sshd-session[4603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:30:17.696977 systemd-logind[1561]: New session 32 of user core. Jan 23 19:30:17.712561 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 23 19:30:18.118353 sshd[4606]: Connection closed by 10.0.0.1 port 33294 Jan 23 19:30:18.118326 sshd-session[4603]: pam_unix(sshd:session): session closed for user core Jan 23 19:30:18.138354 systemd[1]: sshd@31-10.0.0.117:22-10.0.0.1:33294.service: Deactivated successfully. Jan 23 19:30:18.148093 systemd[1]: session-32.scope: Deactivated successfully. Jan 23 19:30:18.157053 systemd-logind[1561]: Session 32 logged out. Waiting for processes to exit. Jan 23 19:30:18.161499 systemd-logind[1561]: Removed session 32. Jan 23 19:30:18.401238 kubelet[2859]: E0123 19:30:18.390736 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:21.069426 containerd[1585]: time="2026-01-23T19:30:21.066903299Z" level=warning msg="container event discarded" container=43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f type=CONTAINER_CREATED_EVENT Jan 23 19:30:21.069426 containerd[1585]: time="2026-01-23T19:30:21.067064664Z" level=warning msg="container event discarded" container=43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f type=CONTAINER_STARTED_EVENT Jan 23 19:30:21.196601 containerd[1585]: time="2026-01-23T19:30:21.196474330Z" level=warning msg="container event discarded" container=6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c type=CONTAINER_CREATED_EVENT Jan 23 19:30:21.196601 containerd[1585]: time="2026-01-23T19:30:21.196539483Z" level=warning msg="container event discarded" container=6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c type=CONTAINER_STARTED_EVENT Jan 23 19:30:21.253174 containerd[1585]: time="2026-01-23T19:30:21.251150935Z" level=warning msg="container event discarded" container=de94837dadb3850385870df23db3e0ee3998fea9c9ee14feb3481a1629e14678 type=CONTAINER_CREATED_EVENT Jan 23 19:30:21.253174 containerd[1585]: time="2026-01-23T19:30:21.251244833Z" level=warning msg="container event discarded" container=de94837dadb3850385870df23db3e0ee3998fea9c9ee14feb3481a1629e14678 type=CONTAINER_STARTED_EVENT Jan 23 19:30:21.416172 containerd[1585]: time="2026-01-23T19:30:21.415936134Z" level=warning msg="container event discarded" container=58433ac0b3757737623e43a90f99264643f8e0a60dd4f27aa015173f1ff912e3 type=CONTAINER_CREATED_EVENT Jan 23 19:30:21.885849 containerd[1585]: time="2026-01-23T19:30:21.883964807Z" level=warning msg="container event discarded" 
container=58433ac0b3757737623e43a90f99264643f8e0a60dd4f27aa015173f1ff912e3 type=CONTAINER_STARTED_EVENT Jan 23 19:30:23.161421 systemd[1]: Started sshd@32-10.0.0.117:22-10.0.0.1:33310.service - OpenSSH per-connection server daemon (10.0.0.1:33310). Jan 23 19:30:23.317263 sshd[4621]: Accepted publickey for core from 10.0.0.1 port 33310 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:30:23.323718 sshd-session[4621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:30:23.346608 systemd-logind[1561]: New session 33 of user core. Jan 23 19:30:23.377578 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 23 19:30:23.765402 sshd[4624]: Connection closed by 10.0.0.1 port 33310 Jan 23 19:30:23.766173 sshd-session[4621]: pam_unix(sshd:session): session closed for user core Jan 23 19:30:23.782480 systemd[1]: sshd@32-10.0.0.117:22-10.0.0.1:33310.service: Deactivated successfully. Jan 23 19:30:23.785916 systemd[1]: session-33.scope: Deactivated successfully. Jan 23 19:30:23.788960 systemd-logind[1561]: Session 33 logged out. Waiting for processes to exit. Jan 23 19:30:23.799622 systemd-logind[1561]: Removed session 33. Jan 23 19:30:24.396201 kubelet[2859]: E0123 19:30:24.395966 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:25.837684 containerd[1585]: time="2026-01-23T19:30:25.837457003Z" level=warning msg="container event discarded" container=0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718 type=CONTAINER_CREATED_EVENT Jan 23 19:30:26.082675 containerd[1585]: time="2026-01-23T19:30:26.082593190Z" level=warning msg="container event discarded" container=0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718 type=CONTAINER_STARTED_EVENT Jan 23 19:30:28.795702 systemd[1]: Started sshd@33-10.0.0.117:22-10.0.0.1:48328.service - OpenSSH per-connection server daemon (10.0.0.1:48328). Jan 23 19:30:28.937447 sshd[4638]: Accepted publickey for core from 10.0.0.1 port 48328 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:30:28.942401 sshd-session[4638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:30:28.964396 systemd-logind[1561]: New session 34 of user core. Jan 23 19:30:28.982086 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 23 19:30:29.245383 sshd[4641]: Connection closed by 10.0.0.1 port 48328 Jan 23 19:30:29.248924 sshd-session[4638]: pam_unix(sshd:session): session closed for user core Jan 23 19:30:29.267994 systemd[1]: sshd@33-10.0.0.117:22-10.0.0.1:48328.service: Deactivated successfully. Jan 23 19:30:29.274570 systemd[1]: session-34.scope: Deactivated successfully. Jan 23 19:30:29.286907 systemd-logind[1561]: Session 34 logged out. Waiting for processes to exit. Jan 23 19:30:29.293403 systemd-logind[1561]: Removed session 34. Jan 23 19:30:34.263253 systemd[1]: Started sshd@34-10.0.0.117:22-10.0.0.1:48340.service - OpenSSH per-connection server daemon (10.0.0.1:48340). Jan 23 19:30:34.368601 sshd[4655]: Accepted publickey for core from 10.0.0.1 port 48340 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:30:34.374026 sshd-session[4655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:30:34.392710 systemd-logind[1561]: New session 35 of user core. 
Jan 23 19:30:34.412309 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 23 19:30:34.657076 sshd[4658]: Connection closed by 10.0.0.1 port 48340 Jan 23 19:30:34.657033 sshd-session[4655]: pam_unix(sshd:session): session closed for user core Jan 23 19:30:34.666286 systemd[1]: sshd@34-10.0.0.117:22-10.0.0.1:48340.service: Deactivated successfully. Jan 23 19:30:34.670376 systemd[1]: session-35.scope: Deactivated successfully. Jan 23 19:30:34.673183 systemd-logind[1561]: Session 35 logged out. Waiting for processes to exit. Jan 23 19:30:34.681888 systemd-logind[1561]: Removed session 35. Jan 23 19:30:39.679259 systemd[1]: Started sshd@35-10.0.0.117:22-10.0.0.1:44588.service - OpenSSH per-connection server daemon (10.0.0.1:44588). Jan 23 19:30:39.790971 sshd[4671]: Accepted publickey for core from 10.0.0.1 port 44588 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:30:39.794492 sshd-session[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:30:39.816911 systemd-logind[1561]: New session 36 of user core. Jan 23 19:30:39.830329 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 23 19:30:40.100748 sshd[4674]: Connection closed by 10.0.0.1 port 44588 Jan 23 19:30:40.101443 sshd-session[4671]: pam_unix(sshd:session): session closed for user core Jan 23 19:30:40.110122 systemd[1]: sshd@35-10.0.0.117:22-10.0.0.1:44588.service: Deactivated successfully. Jan 23 19:30:40.116415 systemd[1]: session-36.scope: Deactivated successfully. Jan 23 19:30:40.119757 systemd-logind[1561]: Session 36 logged out. Waiting for processes to exit. Jan 23 19:30:40.125084 systemd-logind[1561]: Removed session 36. Jan 23 19:30:45.136069 systemd[1]: Started sshd@36-10.0.0.117:22-10.0.0.1:54628.service - OpenSSH per-connection server daemon (10.0.0.1:54628). Jan 23 19:30:45.317985 sshd[4687]: Accepted publickey for core from 10.0.0.1 port 54628 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:30:45.321532 sshd-session[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:30:45.346059 systemd-logind[1561]: New session 37 of user core. Jan 23 19:30:45.354981 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 23 19:30:45.684483 sshd[4690]: Connection closed by 10.0.0.1 port 54628 Jan 23 19:30:45.686281 sshd-session[4687]: pam_unix(sshd:session): session closed for user core Jan 23 19:30:45.700122 systemd[1]: sshd@36-10.0.0.117:22-10.0.0.1:54628.service: Deactivated successfully. Jan 23 19:30:45.703565 systemd[1]: session-37.scope: Deactivated successfully. Jan 23 19:30:45.707401 systemd-logind[1561]: Session 37 logged out. Waiting for processes to exit. Jan 23 19:30:45.711024 systemd-logind[1561]: Removed session 37. Jan 23 19:30:48.386366 kubelet[2859]: E0123 19:30:48.386218 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:50.740372 systemd[1]: Started sshd@37-10.0.0.117:22-10.0.0.1:54638.service - OpenSSH per-connection server daemon (10.0.0.1:54638). Jan 23 19:30:51.002143 sshd[4703]: Accepted publickey for core from 10.0.0.1 port 54638 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:30:51.007493 sshd-session[4703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:30:51.059663 systemd-logind[1561]: New session 38 of user core. 
Jan 23 19:30:51.072586 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 23 19:30:51.406585 kubelet[2859]: E0123 19:30:51.406548 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:30:51.548905 sshd[4706]: Connection closed by 10.0.0.1 port 54638 Jan 23 19:30:51.553345 sshd-session[4703]: pam_unix(sshd:session): session closed for user core Jan 23 19:30:51.570509 systemd[1]: sshd@37-10.0.0.117:22-10.0.0.1:54638.service: Deactivated successfully. Jan 23 19:30:51.576347 systemd[1]: session-38.scope: Deactivated successfully. Jan 23 19:30:51.581558 systemd-logind[1561]: Session 38 logged out. Waiting for processes to exit. Jan 23 19:30:51.592386 systemd-logind[1561]: Removed session 38. Jan 23 19:30:56.595451 systemd[1]: Started sshd@38-10.0.0.117:22-10.0.0.1:33260.service - OpenSSH per-connection server daemon (10.0.0.1:33260). Jan 23 19:30:56.899975 sshd[4722]: Accepted publickey for core from 10.0.0.1 port 33260 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:30:56.904971 sshd-session[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:30:56.980648 systemd-logind[1561]: New session 39 of user core. Jan 23 19:30:57.042095 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 23 19:30:57.683739 sshd[4725]: Connection closed by 10.0.0.1 port 33260 Jan 23 19:30:57.680144 sshd-session[4722]: pam_unix(sshd:session): session closed for user core Jan 23 19:30:57.693135 systemd[1]: sshd@38-10.0.0.117:22-10.0.0.1:33260.service: Deactivated successfully. Jan 23 19:30:57.710695 systemd[1]: session-39.scope: Deactivated successfully. Jan 23 19:30:57.742754 systemd-logind[1561]: Session 39 logged out. Waiting for processes to exit. Jan 23 19:30:57.750304 systemd-logind[1561]: Removed session 39. Jan 23 19:31:01.927235 containerd[1585]: time="2026-01-23T19:31:01.922238708Z" level=warning msg="container event discarded" container=22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d type=CONTAINER_CREATED_EVENT Jan 23 19:31:02.386730 kubelet[2859]: E0123 19:31:02.386691 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:31:02.449654 containerd[1585]: time="2026-01-23T19:31:02.447193517Z" level=warning msg="container event discarded" container=22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d type=CONTAINER_STARTED_EVENT Jan 23 19:31:02.750584 systemd[1]: Started sshd@39-10.0.0.117:22-10.0.0.1:33274.service - OpenSSH per-connection server daemon (10.0.0.1:33274). Jan 23 19:31:02.979906 sshd[4738]: Accepted publickey for core from 10.0.0.1 port 33274 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:31:02.982969 sshd-session[4738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:03.010258 systemd-logind[1561]: New session 40 of user core. Jan 23 19:31:03.024204 systemd[1]: Started session-40.scope - Session 40 of User core. 
Jan 23 19:31:03.058189 containerd[1585]: time="2026-01-23T19:31:03.058087848Z" level=warning msg="container event discarded" container=22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d type=CONTAINER_STOPPED_EVENT Jan 23 19:31:03.328765 sshd[4741]: Connection closed by 10.0.0.1 port 33274 Jan 23 19:31:03.330245 sshd-session[4738]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:03.338956 systemd[1]: sshd@39-10.0.0.117:22-10.0.0.1:33274.service: Deactivated successfully. Jan 23 19:31:03.348542 systemd[1]: session-40.scope: Deactivated successfully. Jan 23 19:31:03.351982 systemd-logind[1561]: Session 40 logged out. Waiting for processes to exit. Jan 23 19:31:03.357528 systemd-logind[1561]: Removed session 40. Jan 23 19:31:04.229295 containerd[1585]: time="2026-01-23T19:31:04.229104472Z" level=warning msg="container event discarded" container=11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1 type=CONTAINER_CREATED_EVENT Jan 23 19:31:04.564156 containerd[1585]: time="2026-01-23T19:31:04.563004817Z" level=warning msg="container event discarded" container=11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1 type=CONTAINER_STARTED_EVENT Jan 23 19:31:04.878184 containerd[1585]: time="2026-01-23T19:31:04.877970480Z" level=warning msg="container event discarded" container=11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1 type=CONTAINER_STOPPED_EVENT Jan 23 19:31:05.194773 containerd[1585]: time="2026-01-23T19:31:05.194282333Z" level=warning msg="container event discarded" container=cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c type=CONTAINER_CREATED_EVENT Jan 23 19:31:05.594677 containerd[1585]: time="2026-01-23T19:31:05.593635478Z" level=warning msg="container event discarded" container=cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c type=CONTAINER_STARTED_EVENT Jan 23 19:31:05.835449 containerd[1585]: time="2026-01-23T19:31:05.835155392Z" level=warning msg="container event discarded" container=cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c type=CONTAINER_STOPPED_EVENT Jan 23 19:31:06.302414 containerd[1585]: time="2026-01-23T19:31:06.302322718Z" level=warning msg="container event discarded" container=ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78 type=CONTAINER_CREATED_EVENT Jan 23 19:31:06.636266 containerd[1585]: time="2026-01-23T19:31:06.636091298Z" level=warning msg="container event discarded" container=ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78 type=CONTAINER_STARTED_EVENT Jan 23 19:31:06.810122 containerd[1585]: time="2026-01-23T19:31:06.809907526Z" level=warning msg="container event discarded" container=ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78 type=CONTAINER_STOPPED_EVENT Jan 23 19:31:07.326290 containerd[1585]: time="2026-01-23T19:31:07.314145035Z" level=warning msg="container event discarded" container=6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16 type=CONTAINER_CREATED_EVENT Jan 23 19:31:07.644982 containerd[1585]: time="2026-01-23T19:31:07.644327933Z" level=warning msg="container event discarded" container=6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16 type=CONTAINER_STARTED_EVENT Jan 23 19:31:08.379107 systemd[1]: Started sshd@40-10.0.0.117:22-10.0.0.1:54996.service - OpenSSH per-connection server daemon (10.0.0.1:54996). 
Jan 23 19:31:08.389372 kubelet[2859]: E0123 19:31:08.386951 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:31:08.701032 sshd[4755]: Accepted publickey for core from 10.0.0.1 port 54996 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:31:08.701553 sshd-session[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:08.750202 systemd-logind[1561]: New session 41 of user core. Jan 23 19:31:08.775604 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 23 19:31:09.241097 sshd[4758]: Connection closed by 10.0.0.1 port 54996 Jan 23 19:31:09.242122 sshd-session[4755]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:09.257461 systemd[1]: sshd@40-10.0.0.117:22-10.0.0.1:54996.service: Deactivated successfully. Jan 23 19:31:09.264611 systemd[1]: session-41.scope: Deactivated successfully. Jan 23 19:31:09.268085 systemd-logind[1561]: Session 41 logged out. Waiting for processes to exit. Jan 23 19:31:09.284966 systemd-logind[1561]: Removed session 41. Jan 23 19:31:14.284571 systemd[1]: Started sshd@41-10.0.0.117:22-10.0.0.1:55008.service - OpenSSH per-connection server daemon (10.0.0.1:55008). Jan 23 19:31:14.537030 sshd[4771]: Accepted publickey for core from 10.0.0.1 port 55008 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:31:14.547084 sshd-session[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:14.576967 systemd-logind[1561]: New session 42 of user core. Jan 23 19:31:14.581122 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 23 19:31:15.082229 sshd[4774]: Connection closed by 10.0.0.1 port 55008 Jan 23 19:31:15.080083 sshd-session[4771]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:15.100183 systemd[1]: sshd@41-10.0.0.117:22-10.0.0.1:55008.service: Deactivated successfully. Jan 23 19:31:15.111418 systemd[1]: session-42.scope: Deactivated successfully. Jan 23 19:31:15.141599 systemd-logind[1561]: Session 42 logged out. Waiting for processes to exit. Jan 23 19:31:15.160147 systemd-logind[1561]: Removed session 42. Jan 23 19:31:18.390083 kubelet[2859]: E0123 19:31:18.386444 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:31:20.118469 systemd[1]: Started sshd@42-10.0.0.117:22-10.0.0.1:45170.service - OpenSSH per-connection server daemon (10.0.0.1:45170). Jan 23 19:31:20.309380 sshd[4789]: Accepted publickey for core from 10.0.0.1 port 45170 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:31:20.322568 sshd-session[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:20.357670 systemd-logind[1561]: New session 43 of user core. Jan 23 19:31:20.376341 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 23 19:31:20.717158 sshd[4792]: Connection closed by 10.0.0.1 port 45170 Jan 23 19:31:20.720243 sshd-session[4789]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:20.743751 systemd[1]: sshd@42-10.0.0.117:22-10.0.0.1:45170.service: Deactivated successfully. Jan 23 19:31:20.749745 systemd[1]: session-43.scope: Deactivated successfully. Jan 23 19:31:20.768769 systemd-logind[1561]: Session 43 logged out. Waiting for processes to exit. 
Jan 23 19:31:20.777505 systemd-logind[1561]: Removed session 43. Jan 23 19:31:25.749730 systemd[1]: Started sshd@43-10.0.0.117:22-10.0.0.1:49016.service - OpenSSH per-connection server daemon (10.0.0.1:49016). Jan 23 19:31:25.833271 containerd[1585]: time="2026-01-23T19:31:25.831663977Z" level=warning msg="container event discarded" container=6e83940c6abcbd910c5024770a9561cff53f59cd24b40fb46b78b89b349efb21 type=CONTAINER_CREATED_EVENT Jan 23 19:31:25.833271 containerd[1585]: time="2026-01-23T19:31:25.832961037Z" level=warning msg="container event discarded" container=6e83940c6abcbd910c5024770a9561cff53f59cd24b40fb46b78b89b349efb21 type=CONTAINER_STARTED_EVENT Jan 23 19:31:25.918530 sshd[4807]: Accepted publickey for core from 10.0.0.1 port 49016 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:31:25.925604 sshd-session[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:25.941484 containerd[1585]: time="2026-01-23T19:31:25.941261239Z" level=warning msg="container event discarded" container=ceb1bd1810a413b70c22d89bdc9e5ce884509c3ab8b85899f4645a0c5d02d795 type=CONTAINER_CREATED_EVENT Jan 23 19:31:25.941484 containerd[1585]: time="2026-01-23T19:31:25.941443304Z" level=warning msg="container event discarded" container=ceb1bd1810a413b70c22d89bdc9e5ce884509c3ab8b85899f4645a0c5d02d795 type=CONTAINER_STARTED_EVENT Jan 23 19:31:25.958633 systemd-logind[1561]: New session 44 of user core. Jan 23 19:31:25.964713 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 23 19:31:25.992769 containerd[1585]: time="2026-01-23T19:31:25.992605067Z" level=warning msg="container event discarded" container=63ce20f6dbc4d839735eb7b4bd8b724d49dd22ccce49d9b87039173c2f3eb0e2 type=CONTAINER_CREATED_EVENT Jan 23 19:31:26.067117 containerd[1585]: time="2026-01-23T19:31:26.066661300Z" level=warning msg="container event discarded" container=91756265c9da6706fe5128f76027b017684b215fbf635ddc742a99d060b6a993 type=CONTAINER_CREATED_EVENT Jan 23 19:31:26.268768 containerd[1585]: time="2026-01-23T19:31:26.268479914Z" level=warning msg="container event discarded" container=63ce20f6dbc4d839735eb7b4bd8b724d49dd22ccce49d9b87039173c2f3eb0e2 type=CONTAINER_STARTED_EVENT Jan 23 19:31:26.372998 containerd[1585]: time="2026-01-23T19:31:26.371901155Z" level=warning msg="container event discarded" container=91756265c9da6706fe5128f76027b017684b215fbf635ddc742a99d060b6a993 type=CONTAINER_STARTED_EVENT Jan 23 19:31:26.422474 sshd[4810]: Connection closed by 10.0.0.1 port 49016 Jan 23 19:31:26.427923 sshd-session[4807]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:26.444944 systemd[1]: sshd@43-10.0.0.117:22-10.0.0.1:49016.service: Deactivated successfully. Jan 23 19:31:26.448676 systemd[1]: session-44.scope: Deactivated successfully. Jan 23 19:31:26.466468 systemd-logind[1561]: Session 44 logged out. Waiting for processes to exit. Jan 23 19:31:26.472604 systemd-logind[1561]: Removed session 44. Jan 23 19:31:31.466034 systemd[1]: Started sshd@44-10.0.0.117:22-10.0.0.1:49020.service - OpenSSH per-connection server daemon (10.0.0.1:49020). Jan 23 19:31:31.816346 sshd[4824]: Accepted publickey for core from 10.0.0.1 port 49020 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:31:31.819201 sshd-session[4824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:31.861972 systemd-logind[1561]: New session 45 of user core. 
Jan 23 19:31:31.878218 systemd[1]: Started session-45.scope - Session 45 of User core. Jan 23 19:31:32.312686 sshd[4828]: Connection closed by 10.0.0.1 port 49020 Jan 23 19:31:32.313231 sshd-session[4824]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:32.342286 systemd[1]: sshd@44-10.0.0.117:22-10.0.0.1:49020.service: Deactivated successfully. Jan 23 19:31:32.347731 systemd[1]: session-45.scope: Deactivated successfully. Jan 23 19:31:32.351139 systemd-logind[1561]: Session 45 logged out. Waiting for processes to exit. Jan 23 19:31:32.358991 systemd-logind[1561]: Removed session 45. Jan 23 19:31:37.351415 systemd[1]: Started sshd@45-10.0.0.117:22-10.0.0.1:39674.service - OpenSSH per-connection server daemon (10.0.0.1:39674). Jan 23 19:31:37.392697 kubelet[2859]: E0123 19:31:37.391454 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:31:37.394622 kubelet[2859]: E0123 19:31:37.393222 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:31:37.562663 sshd[4842]: Accepted publickey for core from 10.0.0.1 port 39674 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:31:37.562172 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:37.590344 systemd-logind[1561]: New session 46 of user core. Jan 23 19:31:37.601712 systemd[1]: Started session-46.scope - Session 46 of User core. Jan 23 19:31:37.957639 sshd[4846]: Connection closed by 10.0.0.1 port 39674 Jan 23 19:31:37.959310 sshd-session[4842]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:37.977062 systemd[1]: sshd@45-10.0.0.117:22-10.0.0.1:39674.service: Deactivated successfully. Jan 23 19:31:37.987096 systemd[1]: session-46.scope: Deactivated successfully. Jan 23 19:31:37.997371 systemd-logind[1561]: Session 46 logged out. Waiting for processes to exit. Jan 23 19:31:38.018019 systemd-logind[1561]: Removed session 46. Jan 23 19:31:43.003346 systemd[1]: Started sshd@46-10.0.0.117:22-10.0.0.1:39678.service - OpenSSH per-connection server daemon (10.0.0.1:39678). Jan 23 19:31:43.158001 sshd[4859]: Accepted publickey for core from 10.0.0.1 port 39678 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:31:43.161654 sshd-session[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:43.187441 systemd-logind[1561]: New session 47 of user core. Jan 23 19:31:43.195272 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 23 19:31:43.534662 sshd[4862]: Connection closed by 10.0.0.1 port 39678 Jan 23 19:31:43.534662 sshd-session[4859]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:43.553440 systemd[1]: sshd@46-10.0.0.117:22-10.0.0.1:39678.service: Deactivated successfully. Jan 23 19:31:43.564251 systemd[1]: session-47.scope: Deactivated successfully. Jan 23 19:31:43.576960 systemd-logind[1561]: Session 47 logged out. Waiting for processes to exit. Jan 23 19:31:43.580637 systemd[1]: Started sshd@47-10.0.0.117:22-10.0.0.1:39680.service - OpenSSH per-connection server daemon (10.0.0.1:39680). Jan 23 19:31:43.597377 systemd-logind[1561]: Removed session 47. 
Jan 23 19:31:43.735115 sshd[4876]: Accepted publickey for core from 10.0.0.1 port 39680 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:31:43.735544 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:43.760159 systemd-logind[1561]: New session 48 of user core. Jan 23 19:31:43.773454 systemd[1]: Started session-48.scope - Session 48 of User core. Jan 23 19:31:44.709254 sshd[4880]: Connection closed by 10.0.0.1 port 39680 Jan 23 19:31:44.712455 sshd-session[4876]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:44.737981 systemd[1]: sshd@47-10.0.0.117:22-10.0.0.1:39680.service: Deactivated successfully. Jan 23 19:31:44.749094 systemd[1]: session-48.scope: Deactivated successfully. Jan 23 19:31:44.754343 systemd-logind[1561]: Session 48 logged out. Waiting for processes to exit. Jan 23 19:31:44.766496 systemd[1]: Started sshd@48-10.0.0.117:22-10.0.0.1:36842.service - OpenSSH per-connection server daemon (10.0.0.1:36842). Jan 23 19:31:44.774041 systemd-logind[1561]: Removed session 48. Jan 23 19:31:44.939336 sshd[4892]: Accepted publickey for core from 10.0.0.1 port 36842 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:31:44.941357 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:44.956048 systemd-logind[1561]: New session 49 of user core. Jan 23 19:31:44.979037 systemd[1]: Started session-49.scope - Session 49 of User core. Jan 23 19:31:46.683335 sshd[4895]: Connection closed by 10.0.0.1 port 36842 Jan 23 19:31:46.684356 sshd-session[4892]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:46.699742 systemd[1]: sshd@48-10.0.0.117:22-10.0.0.1:36842.service: Deactivated successfully. Jan 23 19:31:46.705131 systemd[1]: session-49.scope: Deactivated successfully. Jan 23 19:31:46.720750 systemd-logind[1561]: Session 49 logged out. Waiting for processes to exit. Jan 23 19:31:46.732188 systemd[1]: Started sshd@49-10.0.0.117:22-10.0.0.1:36850.service - OpenSSH per-connection server daemon (10.0.0.1:36850). Jan 23 19:31:46.739373 systemd-logind[1561]: Removed session 49. Jan 23 19:31:46.902982 sshd[4914]: Accepted publickey for core from 10.0.0.1 port 36850 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:31:46.907446 sshd-session[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:46.944312 systemd-logind[1561]: New session 50 of user core. Jan 23 19:31:46.963459 systemd[1]: Started session-50.scope - Session 50 of User core. Jan 23 19:31:47.917023 sshd[4917]: Connection closed by 10.0.0.1 port 36850 Jan 23 19:31:47.917147 sshd-session[4914]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:47.956053 systemd[1]: sshd@49-10.0.0.117:22-10.0.0.1:36850.service: Deactivated successfully. Jan 23 19:31:47.963459 systemd[1]: session-50.scope: Deactivated successfully. Jan 23 19:31:47.979054 systemd-logind[1561]: Session 50 logged out. Waiting for processes to exit. Jan 23 19:31:47.997663 systemd[1]: Started sshd@50-10.0.0.117:22-10.0.0.1:36860.service - OpenSSH per-connection server daemon (10.0.0.1:36860). Jan 23 19:31:48.012199 systemd-logind[1561]: Removed session 50. 
Jan 23 19:31:48.176757 sshd[4930]: Accepted publickey for core from 10.0.0.1 port 36860 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:31:48.179503 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:48.203712 systemd-logind[1561]: New session 51 of user core. Jan 23 19:31:48.211261 systemd[1]: Started session-51.scope - Session 51 of User core. Jan 23 19:31:48.590384 sshd[4933]: Connection closed by 10.0.0.1 port 36860 Jan 23 19:31:48.587149 sshd-session[4930]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:48.620398 systemd[1]: sshd@50-10.0.0.117:22-10.0.0.1:36860.service: Deactivated successfully. Jan 23 19:31:48.626408 systemd[1]: session-51.scope: Deactivated successfully. Jan 23 19:31:48.632985 systemd-logind[1561]: Session 51 logged out. Waiting for processes to exit. Jan 23 19:31:48.641413 systemd-logind[1561]: Removed session 51. Jan 23 19:31:49.390966 kubelet[2859]: E0123 19:31:49.390721 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:31:52.386004 kubelet[2859]: E0123 19:31:52.385744 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:31:53.621278 systemd[1]: Started sshd@51-10.0.0.117:22-10.0.0.1:36876.service - OpenSSH per-connection server daemon (10.0.0.1:36876). Jan 23 19:31:53.772496 sshd[4950]: Accepted publickey for core from 10.0.0.1 port 36876 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:31:53.779077 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:53.814426 systemd-logind[1561]: New session 52 of user core. Jan 23 19:31:53.840969 systemd[1]: Started session-52.scope - Session 52 of User core. Jan 23 19:31:54.268748 sshd[4953]: Connection closed by 10.0.0.1 port 36876 Jan 23 19:31:54.272556 sshd-session[4950]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:54.287111 systemd[1]: sshd@51-10.0.0.117:22-10.0.0.1:36876.service: Deactivated successfully. Jan 23 19:31:54.287714 systemd-logind[1561]: Session 52 logged out. Waiting for processes to exit. Jan 23 19:31:54.298580 systemd[1]: session-52.scope: Deactivated successfully. Jan 23 19:31:54.316486 systemd-logind[1561]: Removed session 52. Jan 23 19:31:59.309988 systemd[1]: Started sshd@52-10.0.0.117:22-10.0.0.1:40588.service - OpenSSH per-connection server daemon (10.0.0.1:40588). Jan 23 19:31:59.515520 sshd[4967]: Accepted publickey for core from 10.0.0.1 port 40588 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:31:59.518761 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:31:59.554425 systemd-logind[1561]: New session 53 of user core. Jan 23 19:31:59.570433 systemd[1]: Started session-53.scope - Session 53 of User core. Jan 23 19:31:59.876299 sshd[4970]: Connection closed by 10.0.0.1 port 40588 Jan 23 19:31:59.876211 sshd-session[4967]: pam_unix(sshd:session): session closed for user core Jan 23 19:31:59.887308 systemd[1]: sshd@52-10.0.0.117:22-10.0.0.1:40588.service: Deactivated successfully. Jan 23 19:31:59.898586 systemd[1]: session-53.scope: Deactivated successfully. Jan 23 19:31:59.905306 systemd-logind[1561]: Session 53 logged out. Waiting for processes to exit. 
Jan 23 19:31:59.917679 systemd-logind[1561]: Removed session 53. Jan 23 19:32:04.920609 systemd[1]: Started sshd@53-10.0.0.117:22-10.0.0.1:59798.service - OpenSSH per-connection server daemon (10.0.0.1:59798). Jan 23 19:32:05.238675 sshd[4983]: Accepted publickey for core from 10.0.0.1 port 59798 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:32:05.243394 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:05.275460 systemd-logind[1561]: New session 54 of user core. Jan 23 19:32:05.292662 systemd[1]: Started session-54.scope - Session 54 of User core. Jan 23 19:32:05.850521 sshd[4986]: Connection closed by 10.0.0.1 port 59798 Jan 23 19:32:05.854204 sshd-session[4983]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:05.890194 systemd[1]: sshd@53-10.0.0.117:22-10.0.0.1:59798.service: Deactivated successfully. Jan 23 19:32:05.902415 systemd[1]: session-54.scope: Deactivated successfully. Jan 23 19:32:05.907654 systemd-logind[1561]: Session 54 logged out. Waiting for processes to exit. Jan 23 19:32:05.926166 systemd-logind[1561]: Removed session 54. Jan 23 19:32:10.881223 systemd[1]: Started sshd@54-10.0.0.117:22-10.0.0.1:59814.service - OpenSSH per-connection server daemon (10.0.0.1:59814). Jan 23 19:32:11.082279 sshd[4999]: Accepted publickey for core from 10.0.0.1 port 59814 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:32:11.085518 sshd-session[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:11.124040 systemd-logind[1561]: New session 55 of user core. Jan 23 19:32:11.149197 systemd[1]: Started session-55.scope - Session 55 of User core. Jan 23 19:32:11.692327 sshd[5002]: Connection closed by 10.0.0.1 port 59814 Jan 23 19:32:11.699147 sshd-session[4999]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:11.708342 systemd[1]: sshd@54-10.0.0.117:22-10.0.0.1:59814.service: Deactivated successfully. Jan 23 19:32:11.719544 systemd[1]: session-55.scope: Deactivated successfully. Jan 23 19:32:11.729747 systemd-logind[1561]: Session 55 logged out. Waiting for processes to exit. Jan 23 19:32:11.735497 systemd-logind[1561]: Removed session 55. Jan 23 19:32:15.406643 kubelet[2859]: E0123 19:32:15.405921 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:16.741202 systemd[1]: Started sshd@55-10.0.0.117:22-10.0.0.1:42524.service - OpenSSH per-connection server daemon (10.0.0.1:42524). Jan 23 19:32:16.902353 sshd[5018]: Accepted publickey for core from 10.0.0.1 port 42524 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:32:16.905400 sshd-session[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:16.927278 systemd-logind[1561]: New session 56 of user core. Jan 23 19:32:16.946291 systemd[1]: Started session-56.scope - Session 56 of User core. Jan 23 19:32:17.271368 sshd[5021]: Connection closed by 10.0.0.1 port 42524 Jan 23 19:32:17.269301 sshd-session[5018]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:17.277180 systemd[1]: sshd@55-10.0.0.117:22-10.0.0.1:42524.service: Deactivated successfully. Jan 23 19:32:17.281147 systemd[1]: session-56.scope: Deactivated successfully. Jan 23 19:32:17.283655 systemd-logind[1561]: Session 56 logged out. Waiting for processes to exit. 
Jan 23 19:32:17.287251 systemd-logind[1561]: Removed session 56. Jan 23 19:32:19.387020 kubelet[2859]: E0123 19:32:19.386491 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:19.387701 kubelet[2859]: E0123 19:32:19.387353 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:22.302067 systemd[1]: Started sshd@56-10.0.0.117:22-10.0.0.1:42534.service - OpenSSH per-connection server daemon (10.0.0.1:42534). Jan 23 19:32:22.394067 kubelet[2859]: E0123 19:32:22.393951 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:22.482612 sshd[5034]: Accepted publickey for core from 10.0.0.1 port 42534 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:32:22.486459 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:22.504491 systemd-logind[1561]: New session 57 of user core. Jan 23 19:32:22.517127 systemd[1]: Started session-57.scope - Session 57 of User core. Jan 23 19:32:22.802658 sshd[5037]: Connection closed by 10.0.0.1 port 42534 Jan 23 19:32:22.805135 sshd-session[5034]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:22.819678 systemd[1]: sshd@56-10.0.0.117:22-10.0.0.1:42534.service: Deactivated successfully. Jan 23 19:32:22.827277 systemd[1]: session-57.scope: Deactivated successfully. Jan 23 19:32:22.833741 systemd-logind[1561]: Session 57 logged out. Waiting for processes to exit. Jan 23 19:32:22.844496 systemd-logind[1561]: Removed session 57. Jan 23 19:32:27.835088 systemd[1]: Started sshd@57-10.0.0.117:22-10.0.0.1:39548.service - OpenSSH per-connection server daemon (10.0.0.1:39548). Jan 23 19:32:27.973266 sshd[5054]: Accepted publickey for core from 10.0.0.1 port 39548 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:32:27.976417 sshd-session[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:27.994411 systemd-logind[1561]: New session 58 of user core. Jan 23 19:32:28.012520 systemd[1]: Started session-58.scope - Session 58 of User core. Jan 23 19:32:28.487437 sshd[5057]: Connection closed by 10.0.0.1 port 39548 Jan 23 19:32:28.490388 sshd-session[5054]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:28.506517 systemd[1]: sshd@57-10.0.0.117:22-10.0.0.1:39548.service: Deactivated successfully. Jan 23 19:32:28.506718 systemd-logind[1561]: Session 58 logged out. Waiting for processes to exit. Jan 23 19:32:28.514475 systemd[1]: session-58.scope: Deactivated successfully. Jan 23 19:32:28.519485 systemd-logind[1561]: Removed session 58. Jan 23 19:32:33.519526 systemd[1]: Started sshd@58-10.0.0.117:22-10.0.0.1:39560.service - OpenSSH per-connection server daemon (10.0.0.1:39560). Jan 23 19:32:33.801545 sshd[5072]: Accepted publickey for core from 10.0.0.1 port 39560 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:32:33.811255 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:33.873991 systemd-logind[1561]: New session 59 of user core. Jan 23 19:32:33.908736 systemd[1]: Started session-59.scope - Session 59 of User core. 
Jan 23 19:32:34.782401 sshd[5075]: Connection closed by 10.0.0.1 port 39560 Jan 23 19:32:34.779732 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:34.810642 systemd[1]: sshd@58-10.0.0.117:22-10.0.0.1:39560.service: Deactivated successfully. Jan 23 19:32:34.833664 systemd[1]: session-59.scope: Deactivated successfully. Jan 23 19:32:34.846340 systemd-logind[1561]: Session 59 logged out. Waiting for processes to exit. Jan 23 19:32:34.862329 systemd-logind[1561]: Removed session 59. Jan 23 19:32:38.396326 kubelet[2859]: E0123 19:32:38.390174 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:39.802203 systemd[1]: Started sshd@59-10.0.0.117:22-10.0.0.1:41002.service - OpenSSH per-connection server daemon (10.0.0.1:41002). Jan 23 19:32:39.969558 sshd[5088]: Accepted publickey for core from 10.0.0.1 port 41002 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:32:39.974289 sshd-session[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:40.007210 systemd-logind[1561]: New session 60 of user core. Jan 23 19:32:40.023535 systemd[1]: Started session-60.scope - Session 60 of User core. Jan 23 19:32:40.415720 sshd[5091]: Connection closed by 10.0.0.1 port 41002 Jan 23 19:32:40.414521 sshd-session[5088]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:40.446730 systemd[1]: sshd@59-10.0.0.117:22-10.0.0.1:41002.service: Deactivated successfully. Jan 23 19:32:40.452268 systemd[1]: session-60.scope: Deactivated successfully. Jan 23 19:32:40.455185 systemd-logind[1561]: Session 60 logged out. Waiting for processes to exit. Jan 23 19:32:40.464483 systemd[1]: Started sshd@60-10.0.0.117:22-10.0.0.1:41004.service - OpenSSH per-connection server daemon (10.0.0.1:41004). Jan 23 19:32:40.469597 systemd-logind[1561]: Removed session 60. Jan 23 19:32:40.621153 sshd[5104]: Accepted publickey for core from 10.0.0.1 port 41004 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:32:40.626561 sshd-session[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:40.657105 systemd-logind[1561]: New session 61 of user core. Jan 23 19:32:40.670036 systemd[1]: Started session-61.scope - Session 61 of User core. Jan 23 19:32:42.639345 containerd[1585]: time="2026-01-23T19:32:42.639040208Z" level=info msg="StopContainer for \"0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718\" with timeout 30 (s)" Jan 23 19:32:42.660228 containerd[1585]: time="2026-01-23T19:32:42.660105419Z" level=info msg="Stop container \"0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718\" with signal terminated" Jan 23 19:32:42.738333 systemd[1]: cri-containerd-0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718.scope: Deactivated successfully. Jan 23 19:32:42.739314 systemd[1]: cri-containerd-0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718.scope: Consumed 2.526s CPU time, 29.7M memory peak, 460K read from disk, 4K written to disk. 
Jan 23 19:32:42.743326 containerd[1585]: time="2026-01-23T19:32:42.741550684Z" level=info msg="received container exit event container_id:\"0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718\" id:\"0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718\" pid:3284 exited_at:{seconds:1769196762 nanos:740082267}" Jan 23 19:32:42.770985 containerd[1585]: time="2026-01-23T19:32:42.770561585Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 19:32:42.787463 containerd[1585]: time="2026-01-23T19:32:42.787265504Z" level=info msg="StopContainer for \"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16\" with timeout 2 (s)" Jan 23 19:32:42.792316 containerd[1585]: time="2026-01-23T19:32:42.789628799Z" level=info msg="Stop container \"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16\" with signal terminated" Jan 23 19:32:42.844202 systemd-networkd[1384]: lxc_health: Link DOWN Jan 23 19:32:42.844214 systemd-networkd[1384]: lxc_health: Lost carrier Jan 23 19:32:42.890234 containerd[1585]: time="2026-01-23T19:32:42.889729355Z" level=info msg="received container exit event container_id:\"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16\" id:\"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16\" pid:3517 exited_at:{seconds:1769196762 nanos:889051685}" Jan 23 19:32:42.900320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718-rootfs.mount: Deactivated successfully. Jan 23 19:32:42.902618 systemd[1]: cri-containerd-6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16.scope: Deactivated successfully. Jan 23 19:32:42.906998 systemd[1]: cri-containerd-6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16.scope: Consumed 19.678s CPU time, 127.7M memory peak, 176K read from disk, 13.3M written to disk. Jan 23 19:32:42.954770 containerd[1585]: time="2026-01-23T19:32:42.954650196Z" level=info msg="StopContainer for \"0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718\" returns successfully" Jan 23 19:32:42.962424 containerd[1585]: time="2026-01-23T19:32:42.962386900Z" level=info msg="StopPodSandbox for \"43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f\"" Jan 23 19:32:42.966994 containerd[1585]: time="2026-01-23T19:32:42.966754965Z" level=info msg="Container to stop \"0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:32:42.992490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16-rootfs.mount: Deactivated successfully. Jan 23 19:32:42.994692 systemd[1]: cri-containerd-43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f.scope: Deactivated successfully. 
Jan 23 19:32:43.005306 containerd[1585]: time="2026-01-23T19:32:43.004746174Z" level=info msg="received sandbox exit event container_id:\"43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f\" id:\"43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f\" exit_status:137 exited_at:{seconds:1769196763 nanos:3983289}" monitor_name=podsandbox Jan 23 19:32:43.034074 containerd[1585]: time="2026-01-23T19:32:43.033999141Z" level=info msg="StopContainer for \"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16\" returns successfully" Jan 23 19:32:43.034739 containerd[1585]: time="2026-01-23T19:32:43.034683265Z" level=info msg="StopPodSandbox for \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\"" Jan 23 19:32:43.035118 containerd[1585]: time="2026-01-23T19:32:43.034762190Z" level=info msg="Container to stop \"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:32:43.035118 containerd[1585]: time="2026-01-23T19:32:43.034777207Z" level=info msg="Container to stop \"22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:32:43.035118 containerd[1585]: time="2026-01-23T19:32:43.034961126Z" level=info msg="Container to stop \"11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:32:43.035118 containerd[1585]: time="2026-01-23T19:32:43.034972857Z" level=info msg="Container to stop \"cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:32:43.035118 containerd[1585]: time="2026-01-23T19:32:43.034985812Z" level=info msg="Container to stop \"ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:32:43.052604 systemd[1]: cri-containerd-6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c.scope: Deactivated successfully. Jan 23 19:32:43.064631 containerd[1585]: time="2026-01-23T19:32:43.064154375Z" level=info msg="received sandbox exit event container_id:\"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" id:\"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" exit_status:137 exited_at:{seconds:1769196763 nanos:63607660}" monitor_name=podsandbox Jan 23 19:32:43.091499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f-rootfs.mount: Deactivated successfully. Jan 23 19:32:43.119690 containerd[1585]: time="2026-01-23T19:32:43.119147155Z" level=info msg="shim disconnected" id=43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f namespace=k8s.io Jan 23 19:32:43.119690 containerd[1585]: time="2026-01-23T19:32:43.119180256Z" level=warning msg="cleaning up after shim disconnected" id=43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f namespace=k8s.io Jan 23 19:32:43.119690 containerd[1585]: time="2026-01-23T19:32:43.119192527Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 19:32:43.128666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c-rootfs.mount: Deactivated successfully. 
Jan 23 19:32:43.150493 containerd[1585]: time="2026-01-23T19:32:43.150040789Z" level=info msg="shim disconnected" id=6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c namespace=k8s.io Jan 23 19:32:43.150493 containerd[1585]: time="2026-01-23T19:32:43.150086884Z" level=warning msg="cleaning up after shim disconnected" id=6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c namespace=k8s.io Jan 23 19:32:43.150493 containerd[1585]: time="2026-01-23T19:32:43.150098796Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 19:32:43.203444 containerd[1585]: time="2026-01-23T19:32:43.203396880Z" level=info msg="TearDown network for sandbox \"43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f\" successfully" Jan 23 19:32:43.205748 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f-shm.mount: Deactivated successfully. Jan 23 19:32:43.206344 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c-shm.mount: Deactivated successfully. Jan 23 19:32:43.208005 containerd[1585]: time="2026-01-23T19:32:43.207689308Z" level=info msg="StopPodSandbox for \"43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f\" returns successfully" Jan 23 19:32:43.212749 containerd[1585]: time="2026-01-23T19:32:43.208358159Z" level=info msg="TearDown network for sandbox \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" successfully" Jan 23 19:32:43.212749 containerd[1585]: time="2026-01-23T19:32:43.208438166Z" level=info msg="StopPodSandbox for \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" returns successfully" Jan 23 19:32:43.221229 containerd[1585]: time="2026-01-23T19:32:43.220745189Z" level=info msg="received sandbox container exit event sandbox_id:\"43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f\" exit_status:137 exited_at:{seconds:1769196763 nanos:3983289}" monitor_name=criService Jan 23 19:32:43.221729 containerd[1585]: time="2026-01-23T19:32:43.221614509Z" level=info msg="received sandbox container exit event sandbox_id:\"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" exit_status:137 exited_at:{seconds:1769196763 nanos:63607660}" monitor_name=criService Jan 23 19:32:43.341490 kubelet[2859]: I0123 19:32:43.341351 2859 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-lib-modules\") pod \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " Jan 23 19:32:43.341490 kubelet[2859]: I0123 19:32:43.341493 2859 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef6cf0cb-401f-41f1-8fc5-1db19e184d24-cilium-config-path\") pod \"ef6cf0cb-401f-41f1-8fc5-1db19e184d24\" (UID: \"ef6cf0cb-401f-41f1-8fc5-1db19e184d24\") " Jan 23 19:32:43.348298 kubelet[2859]: I0123 19:32:43.341526 2859 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-etc-cni-netd\") pod \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " Jan 23 19:32:43.348298 kubelet[2859]: I0123 19:32:43.341556 2859 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-clustermesh-secrets\") pod \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " Jan 23 19:32:43.348298 kubelet[2859]: I0123 19:32:43.341586 2859 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-cilium-config-path\") pod \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " Jan 23 19:32:43.348298 kubelet[2859]: I0123 19:32:43.341606 2859 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58x4t\" (UniqueName: \"kubernetes.io/projected/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-kube-api-access-58x4t\") pod \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " Jan 23 19:32:43.348298 kubelet[2859]: I0123 19:32:43.341628 2859 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8ndm\" (UniqueName: \"kubernetes.io/projected/ef6cf0cb-401f-41f1-8fc5-1db19e184d24-kube-api-access-h8ndm\") pod \"ef6cf0cb-401f-41f1-8fc5-1db19e184d24\" (UID: \"ef6cf0cb-401f-41f1-8fc5-1db19e184d24\") " Jan 23 19:32:43.348298 kubelet[2859]: I0123 19:32:43.341722 2859 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-hubble-tls\") pod \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " Jan 23 19:32:43.348524 kubelet[2859]: I0123 19:32:43.341747 2859 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-host-proc-sys-kernel\") pod \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " Jan 23 19:32:43.348524 kubelet[2859]: I0123 19:32:43.341771 2859 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-bpf-maps\") pod \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " Jan 23 19:32:43.348524 kubelet[2859]: I0123 19:32:43.342004 2859 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-host-proc-sys-net\") pod \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " Jan 23 19:32:43.348524 kubelet[2859]: I0123 19:32:43.342028 2859 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-cilium-run\") pod \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " Jan 23 19:32:43.348524 kubelet[2859]: I0123 19:32:43.342048 2859 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-cni-path\") pod \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " Jan 23 19:32:43.348524 kubelet[2859]: I0123 19:32:43.342069 2859 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-xtables-lock\") pod \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " Jan 23 19:32:43.350606 kubelet[2859]: I0123 19:32:43.342086 2859 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-hostproc\") pod \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " Jan 23 19:32:43.350606 kubelet[2859]: I0123 19:32:43.342105 2859 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-cilium-cgroup\") pod \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\" (UID: \"5cbb13b0-35a7-4d1f-baba-b2b78a040c8e\") " Jan 23 19:32:43.350606 kubelet[2859]: I0123 19:32:43.342201 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e" (UID: "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:32:43.350606 kubelet[2859]: I0123 19:32:43.342583 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e" (UID: "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:32:43.350606 kubelet[2859]: I0123 19:32:43.342647 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e" (UID: "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:32:43.350968 kubelet[2859]: I0123 19:32:43.345613 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e" (UID: "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:32:43.350968 kubelet[2859]: I0123 19:32:43.347338 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e" (UID: "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:32:43.350968 kubelet[2859]: I0123 19:32:43.347472 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e" (UID: "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:32:43.350968 kubelet[2859]: I0123 19:32:43.347504 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e" (UID: "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:32:43.350968 kubelet[2859]: I0123 19:32:43.347528 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-cni-path" (OuterVolumeSpecName: "cni-path") pod "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e" (UID: "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:32:43.351140 kubelet[2859]: I0123 19:32:43.347552 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e" (UID: "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:32:43.351140 kubelet[2859]: I0123 19:32:43.347576 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-hostproc" (OuterVolumeSpecName: "hostproc") pod "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e" (UID: "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:32:43.357559 kubelet[2859]: I0123 19:32:43.357437 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef6cf0cb-401f-41f1-8fc5-1db19e184d24-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ef6cf0cb-401f-41f1-8fc5-1db19e184d24" (UID: "ef6cf0cb-401f-41f1-8fc5-1db19e184d24"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 19:32:43.369630 kubelet[2859]: I0123 19:32:43.369513 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e" (UID: "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 19:32:43.374205 kubelet[2859]: I0123 19:32:43.374134 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6cf0cb-401f-41f1-8fc5-1db19e184d24-kube-api-access-h8ndm" (OuterVolumeSpecName: "kube-api-access-h8ndm") pod "ef6cf0cb-401f-41f1-8fc5-1db19e184d24" (UID: "ef6cf0cb-401f-41f1-8fc5-1db19e184d24"). InnerVolumeSpecName "kube-api-access-h8ndm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:32:43.376431 kubelet[2859]: I0123 19:32:43.375352 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e" (UID: "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:32:43.376431 kubelet[2859]: I0123 19:32:43.375449 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e" (UID: "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 19:32:43.376431 kubelet[2859]: I0123 19:32:43.375542 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-kube-api-access-58x4t" (OuterVolumeSpecName: "kube-api-access-58x4t") pod "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e" (UID: "5cbb13b0-35a7-4d1f-baba-b2b78a040c8e"). InnerVolumeSpecName "kube-api-access-58x4t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:32:43.405094 systemd[1]: Removed slice kubepods-burstable-pod5cbb13b0_35a7_4d1f_baba_b2b78a040c8e.slice - libcontainer container kubepods-burstable-pod5cbb13b0_35a7_4d1f_baba_b2b78a040c8e.slice. Jan 23 19:32:43.406522 systemd[1]: kubepods-burstable-pod5cbb13b0_35a7_4d1f_baba_b2b78a040c8e.slice: Consumed 20.201s CPU time, 128.2M memory peak, 374K read from disk, 13.3M written to disk. Jan 23 19:32:43.413537 systemd[1]: Removed slice kubepods-besteffort-podef6cf0cb_401f_41f1_8fc5_1db19e184d24.slice - libcontainer container kubepods-besteffort-podef6cf0cb_401f_41f1_8fc5_1db19e184d24.slice. Jan 23 19:32:43.413678 systemd[1]: kubepods-besteffort-podef6cf0cb_401f_41f1_8fc5_1db19e184d24.slice: Consumed 2.625s CPU time, 29.9M memory peak, 460K read from disk, 4K written to disk. Jan 23 19:32:43.442607 kubelet[2859]: I0123 19:32:43.442474 2859 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 23 19:32:43.442607 kubelet[2859]: I0123 19:32:43.442595 2859 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 23 19:32:43.442607 kubelet[2859]: I0123 19:32:43.442612 2859 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 23 19:32:43.443009 kubelet[2859]: I0123 19:32:43.442625 2859 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 23 19:32:43.443009 kubelet[2859]: I0123 19:32:43.442637 2859 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 23 19:32:43.444087 kubelet[2859]: I0123 19:32:43.442649 2859 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 23 19:32:43.444591 kubelet[2859]: I0123 19:32:43.443223 2859 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/ef6cf0cb-401f-41f1-8fc5-1db19e184d24-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 23 19:32:43.444591 kubelet[2859]: I0123 19:32:43.444401 2859 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 23 19:32:43.444591 kubelet[2859]: I0123 19:32:43.444419 2859 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 23 19:32:43.444591 kubelet[2859]: I0123 19:32:43.444432 2859 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 23 19:32:43.444591 kubelet[2859]: I0123 19:32:43.444446 2859 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-58x4t\" (UniqueName: \"kubernetes.io/projected/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-kube-api-access-58x4t\") on node \"localhost\" DevicePath \"\"" Jan 23 19:32:43.444591 kubelet[2859]: I0123 19:32:43.444460 2859 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h8ndm\" (UniqueName: \"kubernetes.io/projected/ef6cf0cb-401f-41f1-8fc5-1db19e184d24-kube-api-access-h8ndm\") on node \"localhost\" DevicePath \"\"" Jan 23 19:32:43.444591 kubelet[2859]: I0123 19:32:43.444472 2859 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 23 19:32:43.444591 kubelet[2859]: I0123 19:32:43.444484 2859 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 23 19:32:43.445093 kubelet[2859]: I0123 19:32:43.444496 2859 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 23 19:32:43.445093 kubelet[2859]: I0123 19:32:43.444508 2859 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 23 19:32:43.895211 systemd[1]: var-lib-kubelet-pods-ef6cf0cb\x2d401f\x2d41f1\x2d8fc5\x2d1db19e184d24-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh8ndm.mount: Deactivated successfully. Jan 23 19:32:43.895349 systemd[1]: var-lib-kubelet-pods-5cbb13b0\x2d35a7\x2d4d1f\x2dbaba\x2db2b78a040c8e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d58x4t.mount: Deactivated successfully. Jan 23 19:32:43.895444 systemd[1]: var-lib-kubelet-pods-5cbb13b0\x2d35a7\x2d4d1f\x2dbaba\x2db2b78a040c8e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 19:32:43.895537 systemd[1]: var-lib-kubelet-pods-5cbb13b0\x2d35a7\x2d4d1f\x2dbaba\x2db2b78a040c8e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 23 19:32:44.043571 kubelet[2859]: I0123 19:32:44.040712 2859 scope.go:117] "RemoveContainer" containerID="0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718" Jan 23 19:32:44.075244 containerd[1585]: time="2026-01-23T19:32:44.075199919Z" level=info msg="RemoveContainer for \"0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718\"" Jan 23 19:32:44.149589 containerd[1585]: time="2026-01-23T19:32:44.149297885Z" level=info msg="RemoveContainer for \"0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718\" returns successfully" Jan 23 19:32:44.154697 kubelet[2859]: I0123 19:32:44.150441 2859 scope.go:117] "RemoveContainer" containerID="6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16" Jan 23 19:32:44.169612 containerd[1585]: time="2026-01-23T19:32:44.169440186Z" level=info msg="RemoveContainer for \"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16\"" Jan 23 19:32:44.207327 containerd[1585]: time="2026-01-23T19:32:44.205667573Z" level=info msg="RemoveContainer for \"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16\" returns successfully" Jan 23 19:32:44.219418 kubelet[2859]: I0123 19:32:44.219096 2859 scope.go:117] "RemoveContainer" containerID="ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78" Jan 23 19:32:44.285685 containerd[1585]: time="2026-01-23T19:32:44.285059532Z" level=info msg="RemoveContainer for \"ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78\"" Jan 23 19:32:44.328532 containerd[1585]: time="2026-01-23T19:32:44.322703883Z" level=info msg="RemoveContainer for \"ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78\" returns successfully" Jan 23 19:32:44.338508 kubelet[2859]: I0123 19:32:44.335292 2859 scope.go:117] "RemoveContainer" containerID="cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c" Jan 23 19:32:44.364345 containerd[1585]: time="2026-01-23T19:32:44.361714632Z" level=info msg="RemoveContainer for \"cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c\"" Jan 23 19:32:44.424246 containerd[1585]: time="2026-01-23T19:32:44.423458933Z" level=info msg="RemoveContainer for \"cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c\" returns successfully" Jan 23 19:32:44.427407 kubelet[2859]: I0123 19:32:44.426677 2859 scope.go:117] "RemoveContainer" containerID="11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1" Jan 23 19:32:44.486441 containerd[1585]: time="2026-01-23T19:32:44.485448435Z" level=info msg="RemoveContainer for \"11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1\"" Jan 23 19:32:44.558408 sshd[5107]: Connection closed by 10.0.0.1 port 41004 Jan 23 19:32:44.560236 containerd[1585]: time="2026-01-23T19:32:44.549576021Z" level=info msg="RemoveContainer for \"11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1\" returns successfully" Jan 23 19:32:44.561141 sshd-session[5104]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:44.564690 kubelet[2859]: I0123 19:32:44.564532 2859 scope.go:117] "RemoveContainer" containerID="22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d" Jan 23 19:32:44.590249 containerd[1585]: time="2026-01-23T19:32:44.588229476Z" level=info msg="RemoveContainer for \"22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d\"" Jan 23 19:32:44.630296 systemd[1]: sshd@60-10.0.0.117:22-10.0.0.1:41004.service: Deactivated successfully. Jan 23 19:32:44.648601 systemd[1]: session-61.scope: Deactivated successfully. 
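Annotation: each "RemoveContainer ... returns successfully" pair above is kubelet's container garbage collection issuing the CRI RemoveContainer RPC to containerd. A minimal standalone sketch of the same call, assuming the default containerd socket path (an illustration of the RPC, not kubelet's own code):

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumes containerd's CRI socket at its default location.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // Container ID copied from the log entries above.
        _, err = rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{
            ContainerId: "0675ab8626f7f51a34e0ec7cc65e5d3b809493fd28fec0183f83c1acb5a64718",
        })
        if err != nil {
            log.Fatal(err)
        }
    }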
Jan 23 19:32:44.658379 systemd-logind[1561]: Session 61 logged out. Waiting for processes to exit. Jan 23 19:32:44.691428 systemd[1]: Started sshd@61-10.0.0.117:22-10.0.0.1:40802.service - OpenSSH per-connection server daemon (10.0.0.1:40802). Jan 23 19:32:44.698440 containerd[1585]: time="2026-01-23T19:32:44.697737047Z" level=info msg="RemoveContainer for \"22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d\" returns successfully" Jan 23 19:32:44.704031 kubelet[2859]: I0123 19:32:44.703744 2859 scope.go:117] "RemoveContainer" containerID="6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16" Jan 23 19:32:44.712109 systemd-logind[1561]: Removed session 61. Jan 23 19:32:44.737065 containerd[1585]: time="2026-01-23T19:32:44.709175828Z" level=error msg="ContainerStatus for \"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16\": not found" Jan 23 19:32:44.737188 kubelet[2859]: E0123 19:32:44.735495 2859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16\": not found" containerID="6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16" Jan 23 19:32:44.737188 kubelet[2859]: I0123 19:32:44.735546 2859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16"} err="failed to get container status \"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c7c95c7c1ff0f7015041306cb4fe64a37e351b32d9aa7664988dbcabc031d16\": not found" Jan 23 19:32:44.737188 kubelet[2859]: I0123 19:32:44.735599 2859 scope.go:117] "RemoveContainer" containerID="ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78" Jan 23 19:32:44.741474 containerd[1585]: time="2026-01-23T19:32:44.741352427Z" level=error msg="ContainerStatus for \"ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78\": not found" Jan 23 19:32:44.741694 kubelet[2859]: E0123 19:32:44.741598 2859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78\": not found" containerID="ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78" Jan 23 19:32:44.741763 kubelet[2859]: I0123 19:32:44.741706 2859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78"} err="failed to get container status \"ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab6f81238405fc8165a5949a0796b8124e1c16da6d6f2aaadf4faec38b010e78\": not found" Jan 23 19:32:44.741763 kubelet[2859]: I0123 19:32:44.741733 2859 scope.go:117] "RemoveContainer" containerID="cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c" Jan 23 19:32:44.743165 containerd[1585]: 
time="2026-01-23T19:32:44.743088402Z" level=error msg="ContainerStatus for \"cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c\": not found" Jan 23 19:32:44.745066 kubelet[2859]: E0123 19:32:44.744710 2859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c\": not found" containerID="cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c" Jan 23 19:32:44.745066 kubelet[2859]: I0123 19:32:44.745043 2859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c"} err="failed to get container status \"cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c\": rpc error: code = NotFound desc = an error occurred when try to find container \"cee635ecb42b067ebfa1eb1b71e86e1217bb1e4e11167f5d1eac8b59c112f28c\": not found" Jan 23 19:32:44.745150 kubelet[2859]: I0123 19:32:44.745069 2859 scope.go:117] "RemoveContainer" containerID="11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1" Jan 23 19:32:44.745348 containerd[1585]: time="2026-01-23T19:32:44.745287759Z" level=error msg="ContainerStatus for \"11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1\": not found" Jan 23 19:32:44.754074 kubelet[2859]: E0123 19:32:44.747392 2859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1\": not found" containerID="11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1" Jan 23 19:32:44.754074 kubelet[2859]: I0123 19:32:44.747429 2859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1"} err="failed to get container status \"11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"11fdf0491cd568db8ef969e6c4c1ac6593772fcee63c699b026417bbfab396d1\": not found" Jan 23 19:32:44.754074 kubelet[2859]: I0123 19:32:44.747448 2859 scope.go:117] "RemoveContainer" containerID="22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d" Jan 23 19:32:44.762374 containerd[1585]: time="2026-01-23T19:32:44.756261061Z" level=error msg="ContainerStatus for \"22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d\": not found" Jan 23 19:32:44.763329 kubelet[2859]: E0123 19:32:44.763296 2859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d\": not found" containerID="22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d" Jan 23 19:32:44.764484 kubelet[2859]: I0123 19:32:44.764111 2859 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d"} err="failed to get container status \"22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d\": rpc error: code = NotFound desc = an error occurred when try to find container \"22597bffe98fd0e1143b502ba0545c7956a854c1335c948a329391a0a69b0a9d\": not found" Jan 23 19:32:44.971607 sshd[5251]: Accepted publickey for core from 10.0.0.1 port 40802 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:32:44.984470 sshd-session[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:45.023442 systemd-logind[1561]: New session 62 of user core. Jan 23 19:32:45.039372 systemd[1]: Started session-62.scope - Session 62 of User core. Jan 23 19:32:45.412576 kubelet[2859]: I0123 19:32:45.412533 2859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cbb13b0-35a7-4d1f-baba-b2b78a040c8e" path="/var/lib/kubelet/pods/5cbb13b0-35a7-4d1f-baba-b2b78a040c8e/volumes" Jan 23 19:32:45.422108 kubelet[2859]: I0123 19:32:45.418642 2859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef6cf0cb-401f-41f1-8fc5-1db19e184d24" path="/var/lib/kubelet/pods/ef6cf0cb-401f-41f1-8fc5-1db19e184d24/volumes" Jan 23 19:32:46.221497 kubelet[2859]: E0123 19:32:46.220341 2859 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 19:32:47.898614 sshd[5254]: Connection closed by 10.0.0.1 port 40802 Jan 23 19:32:47.901141 sshd-session[5251]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:47.943373 systemd[1]: sshd@61-10.0.0.117:22-10.0.0.1:40802.service: Deactivated successfully. Jan 23 19:32:47.957621 systemd[1]: session-62.scope: Deactivated successfully. Jan 23 19:32:47.970565 systemd-logind[1561]: Session 62 logged out. Waiting for processes to exit. Jan 23 19:32:48.015343 systemd[1]: Started sshd@62-10.0.0.117:22-10.0.0.1:40806.service - OpenSSH per-connection server daemon (10.0.0.1:40806). Jan 23 19:32:48.025422 systemd-logind[1561]: Removed session 62. 
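Annotation: the NotFound errors above are expected, not failures. Kubelet asks for ContainerStatus on IDs it has just removed, and the runtime answers with gRPC code NotFound, which kubelet treats as "already gone". A sketch of that check using the standard gRPC status helpers:

    package main

    import (
        "errors"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // isNotFound mirrors the benign case in the log: a ContainerStatus RPC
    // failing with code NotFound after the container was already removed.
    func isNotFound(err error) bool {
        return status.Code(err) == codes.NotFound
    }

    func main() {
        // Simulated runtime error in the same shape as the log messages.
        err := status.Error(codes.NotFound, "an error occurred when try to find container")
        fmt.Println(isNotFound(err))                   // true
        fmt.Println(isNotFound(errors.New("timeout"))) // false (code Unknown)
    }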
Jan 23 19:32:48.346179 kubelet[2859]: I0123 19:32:48.345273 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/254f7b61-4843-48ee-9899-92d2d77b1f98-cilium-run\") pod \"cilium-scc24\" (UID: \"254f7b61-4843-48ee-9899-92d2d77b1f98\") " pod="kube-system/cilium-scc24" Jan 23 19:32:48.346179 kubelet[2859]: I0123 19:32:48.345341 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/254f7b61-4843-48ee-9899-92d2d77b1f98-lib-modules\") pod \"cilium-scc24\" (UID: \"254f7b61-4843-48ee-9899-92d2d77b1f98\") " pod="kube-system/cilium-scc24" Jan 23 19:32:48.346179 kubelet[2859]: I0123 19:32:48.345374 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/254f7b61-4843-48ee-9899-92d2d77b1f98-bpf-maps\") pod \"cilium-scc24\" (UID: \"254f7b61-4843-48ee-9899-92d2d77b1f98\") " pod="kube-system/cilium-scc24" Jan 23 19:32:48.346179 kubelet[2859]: I0123 19:32:48.345402 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/254f7b61-4843-48ee-9899-92d2d77b1f98-hostproc\") pod \"cilium-scc24\" (UID: \"254f7b61-4843-48ee-9899-92d2d77b1f98\") " pod="kube-system/cilium-scc24" Jan 23 19:32:48.346179 kubelet[2859]: I0123 19:32:48.345446 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/254f7b61-4843-48ee-9899-92d2d77b1f98-cilium-cgroup\") pod \"cilium-scc24\" (UID: \"254f7b61-4843-48ee-9899-92d2d77b1f98\") " pod="kube-system/cilium-scc24" Jan 23 19:32:48.346179 kubelet[2859]: I0123 19:32:48.345474 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/254f7b61-4843-48ee-9899-92d2d77b1f98-clustermesh-secrets\") pod \"cilium-scc24\" (UID: \"254f7b61-4843-48ee-9899-92d2d77b1f98\") " pod="kube-system/cilium-scc24" Jan 23 19:32:48.347081 kubelet[2859]: I0123 19:32:48.345502 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/254f7b61-4843-48ee-9899-92d2d77b1f98-cilium-config-path\") pod \"cilium-scc24\" (UID: \"254f7b61-4843-48ee-9899-92d2d77b1f98\") " pod="kube-system/cilium-scc24" Jan 23 19:32:48.347081 kubelet[2859]: I0123 19:32:48.345530 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/254f7b61-4843-48ee-9899-92d2d77b1f98-xtables-lock\") pod \"cilium-scc24\" (UID: \"254f7b61-4843-48ee-9899-92d2d77b1f98\") " pod="kube-system/cilium-scc24" Jan 23 19:32:48.347081 kubelet[2859]: I0123 19:32:48.345565 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/254f7b61-4843-48ee-9899-92d2d77b1f98-host-proc-sys-net\") pod \"cilium-scc24\" (UID: \"254f7b61-4843-48ee-9899-92d2d77b1f98\") " pod="kube-system/cilium-scc24" Jan 23 19:32:48.347081 kubelet[2859]: I0123 19:32:48.345587 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/254f7b61-4843-48ee-9899-92d2d77b1f98-host-proc-sys-kernel\") pod \"cilium-scc24\" (UID: \"254f7b61-4843-48ee-9899-92d2d77b1f98\") " pod="kube-system/cilium-scc24" Jan 23 19:32:48.347081 kubelet[2859]: I0123 19:32:48.345616 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/254f7b61-4843-48ee-9899-92d2d77b1f98-hubble-tls\") pod \"cilium-scc24\" (UID: \"254f7b61-4843-48ee-9899-92d2d77b1f98\") " pod="kube-system/cilium-scc24" Jan 23 19:32:48.347081 kubelet[2859]: I0123 19:32:48.345649 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/254f7b61-4843-48ee-9899-92d2d77b1f98-etc-cni-netd\") pod \"cilium-scc24\" (UID: \"254f7b61-4843-48ee-9899-92d2d77b1f98\") " pod="kube-system/cilium-scc24" Jan 23 19:32:48.347308 kubelet[2859]: I0123 19:32:48.345678 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/254f7b61-4843-48ee-9899-92d2d77b1f98-cni-path\") pod \"cilium-scc24\" (UID: \"254f7b61-4843-48ee-9899-92d2d77b1f98\") " pod="kube-system/cilium-scc24" Jan 23 19:32:48.347308 kubelet[2859]: I0123 19:32:48.345709 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/254f7b61-4843-48ee-9899-92d2d77b1f98-cilium-ipsec-secrets\") pod \"cilium-scc24\" (UID: \"254f7b61-4843-48ee-9899-92d2d77b1f98\") " pod="kube-system/cilium-scc24" Jan 23 19:32:48.347308 kubelet[2859]: I0123 19:32:48.345737 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txh4f\" (UniqueName: \"kubernetes.io/projected/254f7b61-4843-48ee-9899-92d2d77b1f98-kube-api-access-txh4f\") pod \"cilium-scc24\" (UID: \"254f7b61-4843-48ee-9899-92d2d77b1f98\") " pod="kube-system/cilium-scc24" Jan 23 19:32:48.382538 systemd[1]: Created slice kubepods-burstable-pod254f7b61_4843_48ee_9899_92d2d77b1f98.slice - libcontainer container kubepods-burstable-pod254f7b61_4843_48ee_9899_92d2d77b1f98.slice. Jan 23 19:32:48.510192 sshd[5266]: Accepted publickey for core from 10.0.0.1 port 40806 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:32:48.512538 sshd-session[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:48.595252 systemd-logind[1561]: New session 63 of user core. Jan 23 19:32:48.619546 systemd[1]: Started session-63.scope - Session 63 of User core. Jan 23 19:32:48.743052 kubelet[2859]: E0123 19:32:48.742210 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:48.748516 containerd[1585]: time="2026-01-23T19:32:48.747994460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-scc24,Uid:254f7b61-4843-48ee-9899-92d2d77b1f98,Namespace:kube-system,Attempt:0,}" Jan 23 19:32:48.815100 sshd[5274]: Connection closed by 10.0.0.1 port 40806 Jan 23 19:32:48.813437 sshd-session[5266]: pam_unix(sshd:session): session closed for user core Jan 23 19:32:48.901387 systemd[1]: sshd@62-10.0.0.117:22-10.0.0.1:40806.service: Deactivated successfully. 
Jan 23 19:32:48.904551 containerd[1585]: time="2026-01-23T19:32:48.904143492Z" level=info msg="connecting to shim 4a78bd79877fc451c9ad7a31b701ed01eb16d82d60178e34500142ba98316332" address="unix:///run/containerd/s/1cd7014c2b90024556cecb8a64a015202ec8134048986d6a8f256292771b9802" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:32:48.915545 systemd[1]: session-63.scope: Deactivated successfully. Jan 23 19:32:48.934209 systemd-logind[1561]: Session 63 logged out. Waiting for processes to exit. Jan 23 19:32:48.957178 systemd[1]: Started sshd@63-10.0.0.117:22-10.0.0.1:40814.service - OpenSSH per-connection server daemon (10.0.0.1:40814). Jan 23 19:32:48.967151 systemd-logind[1561]: Removed session 63. Jan 23 19:32:49.155417 systemd[1]: Started cri-containerd-4a78bd79877fc451c9ad7a31b701ed01eb16d82d60178e34500142ba98316332.scope - libcontainer container 4a78bd79877fc451c9ad7a31b701ed01eb16d82d60178e34500142ba98316332. Jan 23 19:32:49.183269 sshd[5294]: Accepted publickey for core from 10.0.0.1 port 40814 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:32:49.189556 sshd-session[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:32:49.218620 systemd-logind[1561]: New session 64 of user core. Jan 23 19:32:49.229416 systemd[1]: Started session-64.scope - Session 64 of User core. Jan 23 19:32:49.377456 containerd[1585]: time="2026-01-23T19:32:49.377314046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-scc24,Uid:254f7b61-4843-48ee-9899-92d2d77b1f98,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a78bd79877fc451c9ad7a31b701ed01eb16d82d60178e34500142ba98316332\"" Jan 23 19:32:49.386133 kubelet[2859]: E0123 19:32:49.384250 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:49.473076 containerd[1585]: time="2026-01-23T19:32:49.459156318Z" level=info msg="CreateContainer within sandbox \"4a78bd79877fc451c9ad7a31b701ed01eb16d82d60178e34500142ba98316332\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 19:32:49.573126 containerd[1585]: time="2026-01-23T19:32:49.572472663Z" level=info msg="Container 5fb0fee6dfa596f8b972b52005757e38dde1c6343933fea784fed72fae63d9d9: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:32:49.637111 containerd[1585]: time="2026-01-23T19:32:49.635567457Z" level=info msg="CreateContainer within sandbox \"4a78bd79877fc451c9ad7a31b701ed01eb16d82d60178e34500142ba98316332\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5fb0fee6dfa596f8b972b52005757e38dde1c6343933fea784fed72fae63d9d9\"" Jan 23 19:32:49.645357 containerd[1585]: time="2026-01-23T19:32:49.643445149Z" level=info msg="StartContainer for \"5fb0fee6dfa596f8b972b52005757e38dde1c6343933fea784fed72fae63d9d9\"" Jan 23 19:32:49.703492 containerd[1585]: time="2026-01-23T19:32:49.698404889Z" level=info msg="connecting to shim 5fb0fee6dfa596f8b972b52005757e38dde1c6343933fea784fed72fae63d9d9" address="unix:///run/containerd/s/1cd7014c2b90024556cecb8a64a015202ec8134048986d6a8f256292771b9802" protocol=ttrpc version=3 Jan 23 19:32:49.856110 systemd[1]: Started cri-containerd-5fb0fee6dfa596f8b972b52005757e38dde1c6343933fea784fed72fae63d9d9.scope - libcontainer container 5fb0fee6dfa596f8b972b52005757e38dde1c6343933fea784fed72fae63d9d9. 
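Annotation: "connecting to shim ... protocol=ttrpc version=3" shows containerd reaching its v2 runtime shim over ttrpc, a stripped-down gRPC variant, on a per-sandbox unix socket. A minimal sketch of opening such a client with github.com/containerd/ttrpc, using the socket path from the log (connection setup only; no shim methods are invoked):

    package main

    import (
        "log"
        "net"

        "github.com/containerd/ttrpc"
    )

    func main() {
        // Shim address copied from the "connecting to shim" entry above.
        conn, err := net.Dial("unix", "/run/containerd/s/1cd7014c2b90024556cecb8a64a015202ec8134048986d6a8f256292771b9802")
        if err != nil {
            log.Fatal(err)
        }
        client := ttrpc.NewClient(conn)
        defer client.Close()
        log.Println("connected to shim over ttrpc")
    }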
Jan 23 19:32:50.110959 containerd[1585]: time="2026-01-23T19:32:50.109660113Z" level=info msg="StartContainer for \"5fb0fee6dfa596f8b972b52005757e38dde1c6343933fea784fed72fae63d9d9\" returns successfully" Jan 23 19:32:50.187148 systemd[1]: cri-containerd-5fb0fee6dfa596f8b972b52005757e38dde1c6343933fea784fed72fae63d9d9.scope: Deactivated successfully. Jan 23 19:32:50.205613 containerd[1585]: time="2026-01-23T19:32:50.205556411Z" level=info msg="received container exit event container_id:\"5fb0fee6dfa596f8b972b52005757e38dde1c6343933fea784fed72fae63d9d9\" id:\"5fb0fee6dfa596f8b972b52005757e38dde1c6343933fea784fed72fae63d9d9\" pid:5352 exited_at:{seconds:1769196770 nanos:199391338}" Jan 23 19:32:50.368551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fb0fee6dfa596f8b972b52005757e38dde1c6343933fea784fed72fae63d9d9-rootfs.mount: Deactivated successfully. Jan 23 19:32:50.418261 kubelet[2859]: E0123 19:32:50.417044 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:51.225241 kubelet[2859]: E0123 19:32:51.225178 2859 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 19:32:51.462900 kubelet[2859]: E0123 19:32:51.461070 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:51.534137 containerd[1585]: time="2026-01-23T19:32:51.531306441Z" level=info msg="CreateContainer within sandbox \"4a78bd79877fc451c9ad7a31b701ed01eb16d82d60178e34500142ba98316332\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 19:32:51.841427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount796214252.mount: Deactivated successfully. Jan 23 19:32:51.870298 containerd[1585]: time="2026-01-23T19:32:51.870219297Z" level=info msg="Container a7d754450dff92652dc6a8b49d5df994ca393604435fd99f62c3ac419f7a7f9d: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:32:51.919640 containerd[1585]: time="2026-01-23T19:32:51.919575366Z" level=info msg="CreateContainer within sandbox \"4a78bd79877fc451c9ad7a31b701ed01eb16d82d60178e34500142ba98316332\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a7d754450dff92652dc6a8b49d5df994ca393604435fd99f62c3ac419f7a7f9d\"" Jan 23 19:32:51.927922 containerd[1585]: time="2026-01-23T19:32:51.921290005Z" level=info msg="StartContainer for \"a7d754450dff92652dc6a8b49d5df994ca393604435fd99f62c3ac419f7a7f9d\"" Jan 23 19:32:51.927922 containerd[1585]: time="2026-01-23T19:32:51.924645931Z" level=info msg="connecting to shim a7d754450dff92652dc6a8b49d5df994ca393604435fd99f62c3ac419f7a7f9d" address="unix:///run/containerd/s/1cd7014c2b90024556cecb8a64a015202ec8134048986d6a8f256292771b9802" protocol=ttrpc version=3 Jan 23 19:32:52.082318 systemd[1]: Started cri-containerd-a7d754450dff92652dc6a8b49d5df994ca393604435fd99f62c3ac419f7a7f9d.scope - libcontainer container a7d754450dff92652dc6a8b49d5df994ca393604435fd99f62c3ac419f7a7f9d. 
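Annotation: the recurring dns.go:154 warnings mean the node's /etc/resolv.conf lists more nameservers than the resolver limit of three, so kubelet applies only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8 here). A sketch reproducing the check, assuming the standard resolv.conf location:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const maxNS = 3 // the classic resolver limit kubelet enforces
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNS {
            // This is the condition behind the "Nameserver limits
            // exceeded" warnings in the journal above.
            fmt.Printf("%d nameservers found, only applied: %v\n",
                len(servers), servers[:maxNS])
        }
    }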
Jan 23 19:32:52.355090 containerd[1585]: time="2026-01-23T19:32:52.354112552Z" level=info msg="StartContainer for \"a7d754450dff92652dc6a8b49d5df994ca393604435fd99f62c3ac419f7a7f9d\" returns successfully" Jan 23 19:32:52.391062 kubelet[2859]: E0123 19:32:52.389199 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:52.492250 systemd[1]: cri-containerd-a7d754450dff92652dc6a8b49d5df994ca393604435fd99f62c3ac419f7a7f9d.scope: Deactivated successfully. Jan 23 19:32:52.509900 containerd[1585]: time="2026-01-23T19:32:52.507266817Z" level=info msg="received container exit event container_id:\"a7d754450dff92652dc6a8b49d5df994ca393604435fd99f62c3ac419f7a7f9d\" id:\"a7d754450dff92652dc6a8b49d5df994ca393604435fd99f62c3ac419f7a7f9d\" pid:5397 exited_at:{seconds:1769196772 nanos:493686103}" Jan 23 19:32:52.539899 kubelet[2859]: E0123 19:32:52.527007 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:52.774550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7d754450dff92652dc6a8b49d5df994ca393604435fd99f62c3ac419f7a7f9d-rootfs.mount: Deactivated successfully. Jan 23 19:32:53.568930 kubelet[2859]: E0123 19:32:53.552153 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:53.662065 containerd[1585]: time="2026-01-23T19:32:53.660171780Z" level=info msg="CreateContainer within sandbox \"4a78bd79877fc451c9ad7a31b701ed01eb16d82d60178e34500142ba98316332\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 19:32:53.775873 containerd[1585]: time="2026-01-23T19:32:53.768205550Z" level=info msg="Container 4b7534f459f646b17bbeabd81fcc5ee004007a60eaaebc45ccb073f790dbb467: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:32:53.782652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4153905355.mount: Deactivated successfully. Jan 23 19:32:53.825974 containerd[1585]: time="2026-01-23T19:32:53.825042745Z" level=info msg="CreateContainer within sandbox \"4a78bd79877fc451c9ad7a31b701ed01eb16d82d60178e34500142ba98316332\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4b7534f459f646b17bbeabd81fcc5ee004007a60eaaebc45ccb073f790dbb467\"" Jan 23 19:32:53.827448 containerd[1585]: time="2026-01-23T19:32:53.827035824Z" level=info msg="StartContainer for \"4b7534f459f646b17bbeabd81fcc5ee004007a60eaaebc45ccb073f790dbb467\"" Jan 23 19:32:53.831658 containerd[1585]: time="2026-01-23T19:32:53.831055225Z" level=info msg="connecting to shim 4b7534f459f646b17bbeabd81fcc5ee004007a60eaaebc45ccb073f790dbb467" address="unix:///run/containerd/s/1cd7014c2b90024556cecb8a64a015202ec8134048986d6a8f256292771b9802" protocol=ttrpc version=3 Jan 23 19:32:53.920161 systemd[1]: Started cri-containerd-4b7534f459f646b17bbeabd81fcc5ee004007a60eaaebc45ccb073f790dbb467.scope - libcontainer container 4b7534f459f646b17bbeabd81fcc5ee004007a60eaaebc45ccb073f790dbb467. Jan 23 19:32:54.228561 systemd[1]: cri-containerd-4b7534f459f646b17bbeabd81fcc5ee004007a60eaaebc45ccb073f790dbb467.scope: Deactivated successfully. 
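Annotation: the container exit events carry protobuf-style {seconds, nanos} timestamps. Converting the values from the a7d754... event above back to wall-clock time shows they line up with the surrounding journal timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // seconds/nanos copied from the exit event in the log above
        t := time.Unix(1769196772, 493686103).UTC()
        fmt.Println(t) // 2026-01-23 19:32:52.493686103 +0000 UTC
    }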
Jan 23 19:32:54.234102 containerd[1585]: time="2026-01-23T19:32:54.231067253Z" level=info msg="StartContainer for \"4b7534f459f646b17bbeabd81fcc5ee004007a60eaaebc45ccb073f790dbb467\" returns successfully" Jan 23 19:32:54.249382 containerd[1585]: time="2026-01-23T19:32:54.249090290Z" level=info msg="received container exit event container_id:\"4b7534f459f646b17bbeabd81fcc5ee004007a60eaaebc45ccb073f790dbb467\" id:\"4b7534f459f646b17bbeabd81fcc5ee004007a60eaaebc45ccb073f790dbb467\" pid:5444 exited_at:{seconds:1769196774 nanos:247573837}" Jan 23 19:32:54.358368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b7534f459f646b17bbeabd81fcc5ee004007a60eaaebc45ccb073f790dbb467-rootfs.mount: Deactivated successfully. Jan 23 19:32:54.598633 kubelet[2859]: E0123 19:32:54.597610 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:54.628930 containerd[1585]: time="2026-01-23T19:32:54.626407224Z" level=info msg="CreateContainer within sandbox \"4a78bd79877fc451c9ad7a31b701ed01eb16d82d60178e34500142ba98316332\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 19:32:54.658902 containerd[1585]: time="2026-01-23T19:32:54.658612324Z" level=info msg="Container 16b76ae5b4900292b1fdbd5b0a260ae225236d40f5949badcbf5da903345c955: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:32:54.688453 containerd[1585]: time="2026-01-23T19:32:54.688338039Z" level=info msg="CreateContainer within sandbox \"4a78bd79877fc451c9ad7a31b701ed01eb16d82d60178e34500142ba98316332\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"16b76ae5b4900292b1fdbd5b0a260ae225236d40f5949badcbf5da903345c955\"" Jan 23 19:32:54.696547 containerd[1585]: time="2026-01-23T19:32:54.692991532Z" level=info msg="StartContainer for \"16b76ae5b4900292b1fdbd5b0a260ae225236d40f5949badcbf5da903345c955\"" Jan 23 19:32:54.700209 containerd[1585]: time="2026-01-23T19:32:54.698680351Z" level=info msg="connecting to shim 16b76ae5b4900292b1fdbd5b0a260ae225236d40f5949badcbf5da903345c955" address="unix:///run/containerd/s/1cd7014c2b90024556cecb8a64a015202ec8134048986d6a8f256292771b9802" protocol=ttrpc version=3 Jan 23 19:32:54.785108 systemd[1]: Started cri-containerd-16b76ae5b4900292b1fdbd5b0a260ae225236d40f5949badcbf5da903345c955.scope - libcontainer container 16b76ae5b4900292b1fdbd5b0a260ae225236d40f5949badcbf5da903345c955. Jan 23 19:32:54.796379 kubelet[2859]: I0123 19:32:54.795966 2859 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T19:32:54Z","lastTransitionTime":"2026-01-23T19:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 23 19:32:54.926203 systemd[1]: cri-containerd-16b76ae5b4900292b1fdbd5b0a260ae225236d40f5949badcbf5da903345c955.scope: Deactivated successfully. 
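Annotation: the setters.go entry is kubelet flipping the node's Ready condition to False because the CNI plugin is not initialized yet (the replacement cilium-scc24 pod is still starting its init containers). A sketch decoding the logged condition JSON with a minimal local struct (not the client-go type, just enough fields for this message):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // nodeCondition holds the subset of fields present in the logged
    // condition; extra fields like lastHeartbeatTime are ignored by
    // encoding/json.
    type nodeCondition struct {
        Type    string `json:"type"`
        Status  string `json:"status"`
        Reason  string `json:"reason"`
        Message string `json:"message"`
    }

    func main() {
        raw := `{"type":"Ready","status":"False","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason) // Ready=False (KubeletNotReady)
    }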
Jan 23 19:32:54.933023 containerd[1585]: time="2026-01-23T19:32:54.932282565Z" level=info msg="received container exit event container_id:\"16b76ae5b4900292b1fdbd5b0a260ae225236d40f5949badcbf5da903345c955\" id:\"16b76ae5b4900292b1fdbd5b0a260ae225236d40f5949badcbf5da903345c955\" pid:5483 exited_at:{seconds:1769196774 nanos:928575850}" Jan 23 19:32:54.940768 containerd[1585]: time="2026-01-23T19:32:54.940276955Z" level=info msg="StartContainer for \"16b76ae5b4900292b1fdbd5b0a260ae225236d40f5949badcbf5da903345c955\" returns successfully" Jan 23 19:32:55.041157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16b76ae5b4900292b1fdbd5b0a260ae225236d40f5949badcbf5da903345c955-rootfs.mount: Deactivated successfully. Jan 23 19:32:55.636256 kubelet[2859]: E0123 19:32:55.636135 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:55.655452 containerd[1585]: time="2026-01-23T19:32:55.655108925Z" level=info msg="CreateContainer within sandbox \"4a78bd79877fc451c9ad7a31b701ed01eb16d82d60178e34500142ba98316332\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 19:32:55.751411 containerd[1585]: time="2026-01-23T19:32:55.751291076Z" level=info msg="Container 82865da89a75e4e1dca16923735447f4b56f46c5897743e98e27eaf1b0565911: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:32:55.785004 containerd[1585]: time="2026-01-23T19:32:55.782558898Z" level=info msg="CreateContainer within sandbox \"4a78bd79877fc451c9ad7a31b701ed01eb16d82d60178e34500142ba98316332\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"82865da89a75e4e1dca16923735447f4b56f46c5897743e98e27eaf1b0565911\"" Jan 23 19:32:55.785004 containerd[1585]: time="2026-01-23T19:32:55.783772872Z" level=info msg="StartContainer for \"82865da89a75e4e1dca16923735447f4b56f46c5897743e98e27eaf1b0565911\"" Jan 23 19:32:55.789339 containerd[1585]: time="2026-01-23T19:32:55.787775905Z" level=info msg="connecting to shim 82865da89a75e4e1dca16923735447f4b56f46c5897743e98e27eaf1b0565911" address="unix:///run/containerd/s/1cd7014c2b90024556cecb8a64a015202ec8134048986d6a8f256292771b9802" protocol=ttrpc version=3 Jan 23 19:32:55.897298 systemd[1]: Started cri-containerd-82865da89a75e4e1dca16923735447f4b56f46c5897743e98e27eaf1b0565911.scope - libcontainer container 82865da89a75e4e1dca16923735447f4b56f46c5897743e98e27eaf1b0565911. 
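Annotation: each cilium init step above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) follows the same pattern: StartContainer returns, the cri-containerd scope deactivates, and an exit event is emitted; only cilium-agent keeps running. A sketch that polls the CRI ContainerStatus RPC until a container reaches CONTAINER_EXITED, assuming the default containerd socket (illustrative only; kubelet consumes exit events rather than polling):

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // clean-cilium-state container ID from the log above.
        id := "16b76ae5b4900292b1fdbd5b0a260ae225236d40f5949badcbf5da903345c955"
        for {
            ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
            resp, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
            cancel()
            if err != nil {
                log.Fatal(err) // NotFound once kubelet garbage-collects it
            }
            if resp.Status.State == runtimeapi.ContainerState_CONTAINER_EXITED {
                log.Printf("exited with code %d", resp.Status.ExitCode)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }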
Jan 23 19:32:56.095501 containerd[1585]: time="2026-01-23T19:32:56.095023690Z" level=info msg="StartContainer for \"82865da89a75e4e1dca16923735447f4b56f46c5897743e98e27eaf1b0565911\" returns successfully" Jan 23 19:32:56.241508 kubelet[2859]: E0123 19:32:56.240174 2859 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 19:32:56.677422 kubelet[2859]: E0123 19:32:56.677364 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:32:57.319069 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jan 23 19:32:57.702293 kubelet[2859]: E0123 19:32:57.702109 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:33:04.971229 systemd-networkd[1384]: lxc_health: Link UP Jan 23 19:33:05.003975 systemd-networkd[1384]: lxc_health: Gained carrier Jan 23 19:33:06.774215 kubelet[2859]: E0123 19:33:06.771479 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:33:06.865767 systemd-networkd[1384]: lxc_health: Gained IPv6LL Jan 23 19:33:06.933912 kubelet[2859]: I0123 19:33:06.933534 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-scc24" podStartSLOduration=19.933514339 podStartE2EDuration="19.933514339s" podCreationTimestamp="2026-01-23 19:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:32:56.751131843 +0000 UTC m=+461.952484533" watchObservedRunningTime="2026-01-23 19:33:06.933514339 +0000 UTC m=+472.134867029" Jan 23 19:33:07.391715 kubelet[2859]: E0123 19:33:07.385290 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:33:07.829075 kubelet[2859]: E0123 19:33:07.829038 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:33:08.848938 kubelet[2859]: E0123 19:33:08.848100 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:33:10.268697 kubelet[2859]: E0123 19:33:10.268492 2859 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34850->127.0.0.1:34939: write tcp 127.0.0.1:34850->127.0.0.1:34939: write: broken pipe Jan 23 19:33:11.391774 kubelet[2859]: E0123 19:33:11.389396 2859 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:33:15.474198 containerd[1585]: time="2026-01-23T19:33:15.473443570Z" level=info msg="StopPodSandbox for \"43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f\"" Jan 23 19:33:15.474198 containerd[1585]: time="2026-01-23T19:33:15.473704193Z" level=info msg="TearDown network for sandbox 
\"43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f\" successfully" Jan 23 19:33:15.474198 containerd[1585]: time="2026-01-23T19:33:15.473723700Z" level=info msg="StopPodSandbox for \"43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f\" returns successfully" Jan 23 19:33:15.479577 containerd[1585]: time="2026-01-23T19:33:15.476377452Z" level=info msg="RemovePodSandbox for \"43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f\"" Jan 23 19:33:15.479577 containerd[1585]: time="2026-01-23T19:33:15.476415471Z" level=info msg="Forcibly stopping sandbox \"43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f\"" Jan 23 19:33:15.480093 containerd[1585]: time="2026-01-23T19:33:15.480065457Z" level=info msg="TearDown network for sandbox \"43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f\" successfully" Jan 23 19:33:15.488226 containerd[1585]: time="2026-01-23T19:33:15.488195225Z" level=info msg="Ensure that sandbox 43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f in task-service has been cleanup successfully" Jan 23 19:33:15.542461 containerd[1585]: time="2026-01-23T19:33:15.541059943Z" level=info msg="RemovePodSandbox \"43fc4fff5c2e95d798a57af090a3fd536ef6cefa86ded2ebe11699d2a904a97f\" returns successfully" Jan 23 19:33:15.547733 containerd[1585]: time="2026-01-23T19:33:15.545441424Z" level=info msg="StopPodSandbox for \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\"" Jan 23 19:33:15.549649 containerd[1585]: time="2026-01-23T19:33:15.548201402Z" level=info msg="TearDown network for sandbox \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" successfully" Jan 23 19:33:15.549649 containerd[1585]: time="2026-01-23T19:33:15.548300104Z" level=info msg="StopPodSandbox for \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" returns successfully" Jan 23 19:33:15.559713 containerd[1585]: time="2026-01-23T19:33:15.554342246Z" level=info msg="RemovePodSandbox for \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\"" Jan 23 19:33:15.559713 containerd[1585]: time="2026-01-23T19:33:15.554390484Z" level=info msg="Forcibly stopping sandbox \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\"" Jan 23 19:33:15.559713 containerd[1585]: time="2026-01-23T19:33:15.554577450Z" level=info msg="TearDown network for sandbox \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" successfully" Jan 23 19:33:15.566394 containerd[1585]: time="2026-01-23T19:33:15.563992116Z" level=info msg="Ensure that sandbox 6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c in task-service has been cleanup successfully" Jan 23 19:33:15.629029 containerd[1585]: time="2026-01-23T19:33:15.628979645Z" level=info msg="RemovePodSandbox \"6a621823aca61031167f9ec304d1abb82c8d0b825027d1adce4b1b3f1baf6d0c\" returns successfully" Jan 23 19:33:16.369043 sshd[5325]: Connection closed by 10.0.0.1 port 40814 Jan 23 19:33:16.373317 sshd-session[5294]: pam_unix(sshd:session): session closed for user core Jan 23 19:33:16.396298 systemd[1]: sshd@63-10.0.0.117:22-10.0.0.1:40814.service: Deactivated successfully. Jan 23 19:33:16.402011 systemd[1]: session-64.scope: Deactivated successfully. Jan 23 19:33:16.409968 systemd-logind[1561]: Session 64 logged out. Waiting for processes to exit. Jan 23 19:33:16.435935 systemd-logind[1561]: Removed session 64.