Jan 23 18:59:08.641411 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026
Jan 23 18:59:08.641574 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:59:08.641584 kernel: BIOS-provided physical RAM map:
Jan 23 18:59:08.641595 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 18:59:08.641601 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 23 18:59:08.641607 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 23 18:59:08.641615 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 23 18:59:08.641621 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 23 18:59:08.641670 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 23 18:59:08.641677 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 23 18:59:08.641683 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 23 18:59:08.641689 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 23 18:59:08.641699 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 23 18:59:08.641705 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 23 18:59:08.641714 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 23 18:59:08.641725 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 23 18:59:08.641792 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 23 18:59:08.641811 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 23 18:59:08.641822 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 23 18:59:08.641831 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 23 18:59:08.641840 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 23 18:59:08.641848 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 23 18:59:08.641857 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 18:59:08.641866 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 18:59:08.641875 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 18:59:08.641886 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 18:59:08.641897 kernel: NX (Execute Disable) protection: active
Jan 23 18:59:08.641905 kernel: APIC: Static calls initialized
Jan 23 18:59:08.641916 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 23 18:59:08.641923 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 23 18:59:08.641930 kernel: extended physical RAM map:
Jan 23 18:59:08.641937 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 18:59:08.642003 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 23 18:59:08.642010 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 23 18:59:08.642016 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 23 18:59:08.642023 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 23 18:59:08.642029 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 23 18:59:08.642036 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 23 18:59:08.642043 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 23 18:59:08.642140 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 23 18:59:08.642154 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 23 18:59:08.642161 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 23 18:59:08.642169 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 23 18:59:08.642176 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 23 18:59:08.642186 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 23 18:59:08.642193 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 23 18:59:08.642200 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 23 18:59:08.642207 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 23 18:59:08.642214 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 23 18:59:08.642221 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 23 18:59:08.642228 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 23 18:59:08.642297 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 23 18:59:08.642309 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 23 18:59:08.642321 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 23 18:59:08.642328 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 18:59:08.642339 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 18:59:08.642347 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 18:59:08.642403 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 18:59:08.642455 kernel: efi: EFI v2.7 by EDK II
Jan 23 18:59:08.642502 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 23 18:59:08.642548 kernel: random: crng init done
Jan 23 18:59:08.642555 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 23 18:59:08.642599 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 23 18:59:08.642607 kernel: secureboot: Secure boot disabled
Jan 23 18:59:08.642614 kernel: SMBIOS 2.8 present.
Jan 23 18:59:08.642625 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 23 18:59:08.642644 kernel: DMI: Memory slots populated: 1/1
Jan 23 18:59:08.642655 kernel: Hypervisor detected: KVM
Jan 23 18:59:08.642665 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 23 18:59:08.642676 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 18:59:08.642687 kernel: kvm-clock: using sched offset of 24399344800 cycles
Jan 23 18:59:08.642699 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 18:59:08.642711 kernel: tsc: Detected 2445.426 MHz processor
Jan 23 18:59:08.642723 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 18:59:08.642735 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 18:59:08.642747 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 23 18:59:08.642758 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 18:59:08.642773 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 18:59:08.642785 kernel: Using GB pages for direct mapping
Jan 23 18:59:08.642796 kernel: ACPI: Early table checksum verification disabled
Jan 23 18:59:08.642808 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 23 18:59:08.642821 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 23 18:59:08.642834 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:59:08.642843 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:59:08.642852 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 23 18:59:08.642862 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:59:08.642991 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:59:08.643004 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:59:08.643012 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:59:08.643019 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 23 18:59:08.643026 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 23 18:59:08.643033 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 23 18:59:08.643041 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 23 18:59:08.643048 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 23 18:59:08.643171 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 23 18:59:08.643180 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 23 18:59:08.643188 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 23 18:59:08.643195 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 23 18:59:08.643203 kernel: No NUMA configuration found
Jan 23 18:59:08.643213 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 23 18:59:08.643225 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 23 18:59:08.643301 kernel: Zone ranges:
Jan 23 18:59:08.643309 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 18:59:08.643316 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 23 18:59:08.643329 kernel: Normal empty
Jan 23 18:59:08.643342 kernel: Device empty
Jan 23 18:59:08.643353 kernel: Movable zone start for each node
Jan 23 18:59:08.643362 kernel: Early memory node ranges
Jan 23 18:59:08.643369 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 18:59:08.643433 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 23 18:59:08.643443 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 23 18:59:08.643450 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 23 18:59:08.643456 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 23 18:59:08.643467 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 23 18:59:08.643474 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 23 18:59:08.643481 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 23 18:59:08.643492 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 23 18:59:08.643564 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 18:59:08.643594 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 18:59:08.643606 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 23 18:59:08.643613 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 18:59:08.643620 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 23 18:59:08.643628 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 23 18:59:08.643635 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 23 18:59:08.643642 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 23 18:59:08.643652 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 23 18:59:08.643659 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 18:59:08.643671 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 18:59:08.643684 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 18:59:08.643694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 18:59:08.643713 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 18:59:08.643725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 18:59:08.643732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 18:59:08.643739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 18:59:08.643747 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 18:59:08.643754 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 18:59:08.643761 kernel: TSC deadline timer available
Jan 23 18:59:08.643768 kernel: CPU topo: Max. logical packages: 1
Jan 23 18:59:08.643775 kernel: CPU topo: Max. logical dies: 1
Jan 23 18:59:08.643792 kernel: CPU topo: Max. dies per package: 1
Jan 23 18:59:08.643803 kernel: CPU topo: Max. threads per core: 1
Jan 23 18:59:08.643816 kernel: CPU topo: Num. cores per package: 4
Jan 23 18:59:08.643828 kernel: CPU topo: Num. threads per package: 4
Jan 23 18:59:08.643836 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 23 18:59:08.643843 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 18:59:08.643850 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 18:59:08.643857 kernel: kvm-guest: setup PV sched yield
Jan 23 18:59:08.643864 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 23 18:59:08.643875 kernel: Booting paravirtualized kernel on KVM
Jan 23 18:59:08.643882 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 18:59:08.643890 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 23 18:59:08.643903 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 23 18:59:08.643916 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 23 18:59:08.643926 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 23 18:59:08.643936 kernel: kvm-guest: PV spinlocks enabled
Jan 23 18:59:08.643945 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 18:59:08.644023 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:59:08.644043 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 18:59:08.644167 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 18:59:08.644184 kernel: Fallback order for Node 0: 0
Jan 23 18:59:08.644194 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 23 18:59:08.644204 kernel: Policy zone: DMA32
Jan 23 18:59:08.644214 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 18:59:08.644224 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 23 18:59:08.644305 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 18:59:08.644325 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 18:59:08.644335 kernel: Dynamic Preempt: voluntary
Jan 23 18:59:08.644345 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 18:59:08.644355 kernel: rcu: RCU event tracing is enabled.
Jan 23 18:59:08.644368 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 23 18:59:08.644380 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 18:59:08.644390 kernel: Rude variant of Tasks RCU enabled.
Jan 23 18:59:08.644397 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 18:59:08.644405 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 18:59:08.644416 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 23 18:59:08.644537 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 18:59:08.644548 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 18:59:08.644555 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 18:59:08.644563 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 23 18:59:08.644570 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 18:59:08.644577 kernel: Console: colour dummy device 80x25
Jan 23 18:59:08.644584 kernel: printk: legacy console [ttyS0] enabled
Jan 23 18:59:08.644592 kernel: ACPI: Core revision 20240827
Jan 23 18:59:08.644603 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 18:59:08.644610 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 18:59:08.644617 kernel: x2apic enabled
Jan 23 18:59:08.644624 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 18:59:08.644631 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 18:59:08.644643 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 18:59:08.644657 kernel: kvm-guest: setup PV IPIs
Jan 23 18:59:08.644667 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 18:59:08.644680 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 23 18:59:08.644698 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 23 18:59:08.644711 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 18:59:08.644724 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 18:59:08.644736 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 18:59:08.644750 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 18:59:08.644763 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 18:59:08.644777 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 18:59:08.644788 kernel: Speculative Store Bypass: Vulnerable
Jan 23 18:59:08.644800 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 18:59:08.644818 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 18:59:08.644952 kernel: active return thunk: srso_alias_return_thunk
Jan 23 18:59:08.644970 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 18:59:08.644985 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 18:59:08.644998 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 18:59:08.645012 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 18:59:08.645023 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 18:59:08.645033 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 18:59:08.645048 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 18:59:08.645188 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 23 18:59:08.645201 kernel: Freeing SMP alternatives memory: 32K
Jan 23 18:59:08.645213 kernel: pid_max: default: 32768 minimum: 301
Jan 23 18:59:08.645223 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 18:59:08.645653 kernel: landlock: Up and running.
Jan 23 18:59:08.645668 kernel: SELinux: Initializing.
Jan 23 18:59:08.645680 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 18:59:08.645692 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 18:59:08.645710 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 23 18:59:08.645722 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 23 18:59:08.645734 kernel: signal: max sigframe size: 1776
Jan 23 18:59:08.645745 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 18:59:08.645757 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 18:59:08.645769 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 18:59:08.645780 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 18:59:08.645792 kernel: smp: Bringing up secondary CPUs ...
Jan 23 18:59:08.645803 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 18:59:08.645818 kernel: .... node #0, CPUs: #1 #2 #3
Jan 23 18:59:08.645829 kernel: smp: Brought up 1 node, 4 CPUs
Jan 23 18:59:08.645841 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 23 18:59:08.645918 kernel: Memory: 2414472K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145388K reserved, 0K cma-reserved)
Jan 23 18:59:08.645931 kernel: devtmpfs: initialized
Jan 23 18:59:08.645942 kernel: x86/mm: Memory block size: 128MB
Jan 23 18:59:08.645953 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 23 18:59:08.645964 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 23 18:59:08.645975 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 23 18:59:08.645991 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 23 18:59:08.646002 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 23 18:59:08.646014 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 23 18:59:08.646025 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 18:59:08.646036 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 23 18:59:08.646047 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 18:59:08.646184 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 18:59:08.646200 kernel: audit: initializing netlink subsys (disabled)
Jan 23 18:59:08.646212 kernel: audit: type=2000 audit(1769194732.858:1): state=initialized audit_enabled=0 res=1
Jan 23 18:59:08.646228 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 18:59:08.646313 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 18:59:08.646323 kernel: cpuidle: using governor menu
Jan 23 18:59:08.646333 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 18:59:08.646343 kernel: dca service started, version 1.12.1
Jan 23 18:59:08.646354 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 23 18:59:08.646366 kernel: PCI: Using configuration type 1 for base access
Jan 23 18:59:08.646379 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 18:59:08.646390 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 18:59:08.646407 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 18:59:08.646418 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 18:59:08.646429 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 18:59:08.646440 kernel: ACPI: Added _OSI(Module Device)
Jan 23 18:59:08.646451 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 18:59:08.646462 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 18:59:08.646472 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 18:59:08.646483 kernel: ACPI: Interpreter enabled
Jan 23 18:59:08.646494 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 18:59:08.646509 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 18:59:08.646522 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 18:59:08.646533 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 18:59:08.646546 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 18:59:08.646558 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 18:59:08.647502 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 18:59:08.647736 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 18:59:08.647939 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 18:59:08.647956 kernel: PCI host bridge to bus 0000:00
Jan 23 18:59:08.648391 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 18:59:08.648584 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 18:59:08.648770 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 18:59:08.648971 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 23 18:59:08.649385 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 23 18:59:08.649598 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 23 18:59:08.649876 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 18:59:08.650411 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 18:59:08.650905 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 18:59:08.651334 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 23 18:59:08.651547 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 23 18:59:08.651759 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 23 18:59:08.651972 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 18:59:08.652394 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 11718 usecs
Jan 23 18:59:08.652630 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 18:59:08.652845 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 23 18:59:08.653185 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 23 18:59:08.653501 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 23 18:59:08.653740 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 18:59:08.654047 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 23 18:59:08.654479 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 23 18:59:08.654711 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 23 18:59:08.654947 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 18:59:08.655378 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 23 18:59:08.655595 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 23 18:59:08.655827 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 23 18:59:08.656045 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 23 18:59:08.656562 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 18:59:08.656779 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 18:59:08.657017 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 18:59:08.657458 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 23 18:59:08.657784 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 23 18:59:08.658421 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 18:59:08.658648 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 23 18:59:08.658671 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 18:59:08.658684 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 18:59:08.658698 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 18:59:08.658710 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 18:59:08.658722 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 18:59:08.658735 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 18:59:08.658756 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 18:59:08.658767 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 18:59:08.658779 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 18:59:08.658790 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 18:59:08.658801 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 18:59:08.658813 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 18:59:08.658826 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 18:59:08.658837 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 18:59:08.658847 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 18:59:08.658862 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 18:59:08.658871 kernel: iommu: Default domain type: Translated
Jan 23 18:59:08.658883 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 18:59:08.658896 kernel: efivars: Registered efivars operations
Jan 23 18:59:08.658907 kernel: PCI: Using ACPI for IRQ routing
Jan 23 18:59:08.658917 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 18:59:08.658927 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 23 18:59:08.658937 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 23 18:59:08.658948 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 23 18:59:08.658966 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 23 18:59:08.658976 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 23 18:59:08.658988 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 23 18:59:08.659000 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 23 18:59:08.659011 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 23 18:59:08.659426 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 18:59:08.659635 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 18:59:08.659828 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 18:59:08.659851 kernel: vgaarb: loaded
Jan 23 18:59:08.659863 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 18:59:08.659874 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 18:59:08.659886 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 18:59:08.659899 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 18:59:08.659912 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 18:59:08.659924 kernel: pnp: PnP ACPI init
Jan 23 18:59:08.660355 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 23 18:59:08.660383 kernel: pnp: PnP ACPI: found 6 devices
Jan 23 18:59:08.660396 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 18:59:08.660407 kernel: NET: Registered PF_INET protocol family
Jan 23 18:59:08.660419 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 18:59:08.660430 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 18:59:08.660442 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 18:59:08.660480 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 18:59:08.660496 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 18:59:08.660508 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 18:59:08.660524 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 18:59:08.660536 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 18:59:08.660550 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 18:59:08.660563 kernel: NET: Registered PF_XDP protocol family
Jan 23 18:59:08.660785 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 23 18:59:08.660989 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 23 18:59:08.661422 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 18:59:08.661615 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 18:59:08.661957 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 18:59:08.662500 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 23 18:59:08.662932 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 23 18:59:08.663340 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 23 18:59:08.663364 kernel: PCI: CLS 0 bytes, default 64
Jan 23 18:59:08.663375 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 23 18:59:08.663386 kernel: Initialise system trusted keyrings
Jan 23 18:59:08.663403 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 18:59:08.663420 kernel: Key type asymmetric registered
Jan 23 18:59:08.663432 kernel: Asymmetric key parser 'x509' registered
Jan 23 18:59:08.663442 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 18:59:08.663453 kernel: io scheduler mq-deadline registered
Jan 23 18:59:08.663464 kernel: io scheduler kyber registered
Jan 23 18:59:08.663474 kernel: io scheduler bfq registered
Jan 23 18:59:08.663486 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 18:59:08.663501 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 18:59:08.663512 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 18:59:08.663529 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 23 18:59:08.663542 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 18:59:08.663556 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 18:59:08.663570 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 18:59:08.663584 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 18:59:08.663596 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 18:59:08.664042 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 23 18:59:08.664658 kernel: rtc_cmos 00:04: registered as rtc0
Jan 23 18:59:08.664680 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 18:59:08.664869 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T18:59:06 UTC (1769194746)
Jan 23 18:59:08.665050 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 23 18:59:08.665179 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 18:59:08.665191 kernel: efifb: probing for efifb
Jan 23 18:59:08.665203 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 23 18:59:08.665220 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 23 18:59:08.665294 kernel: efifb: scrolling: redraw
Jan 23 18:59:08.665308 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 18:59:08.665321 kernel: Console: switching to colour frame buffer device 160x50
Jan 23 18:59:08.665332 kernel: fb0: EFI VGA frame buffer device
Jan 23 18:59:08.665344 kernel: pstore: Using crash dump compression: deflate
Jan 23 18:59:08.665355 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 18:59:08.665366 kernel: NET: Registered PF_INET6 protocol family
Jan 23 18:59:08.665379 kernel: Segment Routing with IPv6
Jan 23 18:59:08.665394 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 18:59:08.665406 kernel: NET: Registered PF_PACKET protocol family
Jan 23 18:59:08.665417 kernel: Key type dns_resolver registered
Jan 23 18:59:08.665429 kernel: IPI shorthand broadcast: enabled
Jan 23 18:59:08.665440 kernel: sched_clock: Marking stable (12897065471, 1328078084)->(15353709736, -1128566181)
Jan 23 18:59:08.665452 kernel: registered taskstats version 1
Jan 23 18:59:08.665463 kernel: Loading compiled-in X.509 certificates
Jan 23 18:59:08.665474 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6'
Jan 23 18:59:08.665486 kernel: Demotion targets for Node 0: null
Jan 23 18:59:08.665500 kernel: Key type .fscrypt registered
Jan 23 18:59:08.665513 kernel: Key type fscrypt-provisioning registered
Jan 23 18:59:08.665526 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 18:59:08.665540 kernel: ima: Allocated hash algorithm: sha1
Jan 23 18:59:08.665552 kernel: ima: No architecture policies found
Jan 23 18:59:08.665562 kernel: clk: Disabling unused clocks
Jan 23 18:59:08.665572 kernel: Warning: unable to open an initial console.
Jan 23 18:59:08.665583 kernel: Freeing unused kernel image (initmem) memory: 46200K
Jan 23 18:59:08.665593 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 18:59:08.665611 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 18:59:08.665624 kernel: Run /init as init process
Jan 23 18:59:08.665634 kernel: with arguments:
Jan 23 18:59:08.665645 kernel: /init
Jan 23 18:59:08.665655 kernel: with environment:
Jan 23 18:59:08.665665 kernel: HOME=/
Jan 23 18:59:08.665677 kernel: TERM=linux
Jan 23 18:59:08.665743 systemd[1]: Successfully made /usr/ read-only.
Jan 23 18:59:08.665763 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 18:59:08.665782 systemd[1]: Detected virtualization kvm.
Jan 23 18:59:08.665792 systemd[1]: Detected architecture x86-64.
Jan 23 18:59:08.665803 systemd[1]: Running in initrd.
Jan 23 18:59:08.665813 systemd[1]: No hostname configured, using default hostname.
Jan 23 18:59:08.665826 systemd[1]: Hostname set to <localhost>.
Jan 23 18:59:08.665840 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 18:59:08.665851 systemd[1]: Queued start job for default target initrd.target.
Jan 23 18:59:08.665866 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 18:59:08.665877 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 18:59:08.665892 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 18:59:08.665907 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 18:59:08.665919 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 18:59:08.665935 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 18:59:08.665951 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 18:59:08.665971 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 18:59:08.665985 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 18:59:08.665997 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 18:59:08.666010 systemd[1]: Reached target paths.target - Path Units.
Jan 23 18:59:08.666022 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 18:59:08.666035 systemd[1]: Reached target swap.target - Swaps.
Jan 23 18:59:08.666047 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 18:59:08.666202 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 18:59:08.666226 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 18:59:08.666309 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 18:59:08.666321 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 18:59:08.666333 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 18:59:08.666350 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 18:59:08.666363 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 18:59:08.666375 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 18:59:08.666387 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 18:59:08.666399 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 18:59:08.666416 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 18:59:08.666430 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 18:59:08.666443 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 18:59:08.666456 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 18:59:08.666470 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 18:59:08.666484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:59:08.666497 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 18:59:08.666515 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 18:59:08.666530 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 18:59:08.666546 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 18:59:08.666561 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:59:08.666575 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 18:59:08.666762 systemd-journald[203]: Collecting audit messages is disabled.
Jan 23 18:59:08.666809 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 18:59:08.666825 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 18:59:08.666841 systemd-journald[203]: Journal started
Jan 23 18:59:08.666931 systemd-journald[203]: Runtime Journal (/run/log/journal/0fbb1d908ca54ac8841cfacf21c86629) is 6M, max 48.1M, 42.1M free.
Jan 23 18:59:08.575325 systemd-modules-load[204]: Inserted module 'overlay'
Jan 23 18:59:08.708647 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 18:59:08.706364 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 18:59:08.782879 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 18:59:08.801407 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 18:59:08.826981 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:59:08.860885 systemd-tmpfiles[217]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 18:59:08.893751 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:59:08.950390 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 18:59:08.966589 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:59:09.032937 kernel: Bridge firewalling registered
Jan 23 18:59:09.032919 systemd-modules-load[204]: Inserted module 'br_netfilter'
Jan 23 18:59:09.036601 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 18:59:09.083310 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 18:59:09.171595 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:59:09.219020 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 18:59:09.563009 systemd-resolved[269]: Positive Trust Anchors:
Jan 23 18:59:09.563320 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 18:59:09.563362 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 18:59:09.578933 systemd-resolved[269]: Defaulting to hostname 'linux'.
Jan 23 18:59:09.586418 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 18:59:09.643632 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 18:59:09.687786 kernel: SCSI subsystem initialized
Jan 23 18:59:09.714797 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 18:59:09.757321 kernel: iscsi: registered transport (tcp)
Jan 23 18:59:09.820569 kernel: iscsi: registered transport (qla4xxx)
Jan 23 18:59:09.820673 kernel: QLogic iSCSI HBA Driver
Jan 23 18:59:09.930508 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 18:59:10.013723 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 18:59:10.047637 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 18:59:10.314228 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 18:59:10.331576 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 18:59:10.535454 kernel: raid6: avx2x4 gen() 15300 MB/s
Jan 23 18:59:10.563428 kernel: raid6: avx2x2 gen() 14986 MB/s
Jan 23 18:59:10.589971 kernel: raid6: avx2x1 gen() 7595 MB/s
Jan 23 18:59:10.590208 kernel: raid6: using algorithm avx2x4 gen() 15300 MB/s
Jan 23 18:59:10.618391 kernel: raid6: .... xor() 4716 MB/s, rmw enabled
Jan 23 18:59:10.618497 kernel: raid6: using avx2x2 recovery algorithm
Jan 23 18:59:10.667606 kernel: xor: automatically using best checksumming function avx
Jan 23 18:59:11.720342 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 18:59:11.760922 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 18:59:11.790044 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 18:59:11.880644 systemd-udevd[452]: Using default interface naming scheme 'v255'.
Jan 23 18:59:11.904345 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 18:59:11.953578 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 18:59:12.095169 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
Jan 23 18:59:12.283355 kernel: hrtimer: interrupt took 4621516 ns
Jan 23 18:59:12.306390 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 18:59:12.324239 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 18:59:12.547532 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 18:59:12.562878 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 18:59:12.729609 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 23 18:59:12.750625 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 18:59:12.818880 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 18:59:12.882033 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 23 18:59:12.883418 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 18:59:12.883441 kernel: GPT:9289727 != 19775487
Jan 23 18:59:12.883457 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 18:59:12.883473 kernel: GPT:9289727 != 19775487
Jan 23 18:59:12.883488 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 18:59:12.883504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 18:59:12.819793 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:59:12.853681 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:59:12.900045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:59:12.928790 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 18:59:13.025946 kernel: libata version 3.00 loaded.
Jan 23 18:59:13.073747 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:59:13.137269 kernel: AES CTR mode by8 optimization enabled
Jan 23 18:59:13.137408 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 23 18:59:13.150230 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 18:59:13.164204 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 18:59:13.163960 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 23 18:59:13.204489 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 23 18:59:13.260606 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 18:59:13.295490 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 18:59:13.295933 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 18:59:13.296384 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 18:59:13.312682 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 23 18:59:13.409792 kernel: scsi host0: ahci
Jan 23 18:59:13.410462 kernel: scsi host1: ahci
Jan 23 18:59:13.410747 kernel: scsi host2: ahci
Jan 23 18:59:13.411025 kernel: scsi host3: ahci
Jan 23 18:59:13.411584 kernel: scsi host4: ahci
Jan 23 18:59:13.411845 kernel: scsi host5: ahci
Jan 23 18:59:13.412263 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Jan 23 18:59:13.413272 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Jan 23 18:59:13.413362 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Jan 23 18:59:13.413375 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Jan 23 18:59:13.413386 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Jan 23 18:59:13.413397 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Jan 23 18:59:13.337573 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 23 18:59:13.469986 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 18:59:13.559449 disk-uuid[617]: Primary Header is updated.
Jan 23 18:59:13.559449 disk-uuid[617]: Secondary Entries is updated.
Jan 23 18:59:13.559449 disk-uuid[617]: Secondary Header is updated.
Jan 23 18:59:13.598726 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 18:59:13.745234 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 23 18:59:13.764268 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 18:59:13.785739 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 18:59:13.785799 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 18:59:13.795460 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 18:59:13.818211 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 18:59:13.818348 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 18:59:13.818452 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 23 18:59:13.818468 kernel: ata3.00: applying bridge limits
Jan 23 18:59:13.832651 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 18:59:13.833508 kernel: ata3.00: configured for UDMA/100
Jan 23 18:59:13.844209 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 23 18:59:13.941552 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 23 18:59:13.942020 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 18:59:13.969494 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 23 18:59:14.641242 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 18:59:14.651809 disk-uuid[618]: The operation has completed successfully.
Jan 23 18:59:14.755394 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 18:59:14.755644 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 18:59:14.779007 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 18:59:14.854686 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 18:59:14.870740 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 18:59:14.887945 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 18:59:14.903715 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 18:59:14.942934 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 18:59:15.022474 sh[642]: Success
Jan 23 18:59:15.047815 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 18:59:15.106235 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 18:59:15.106796 kernel: device-mapper: uevent: version 1.0.3
Jan 23 18:59:15.121530 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 18:59:15.209566 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 23 18:59:15.351203 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 18:59:15.374659 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 18:59:15.446429 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 18:59:15.524274 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (661)
Jan 23 18:59:15.524398 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841
Jan 23 18:59:15.524414 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:59:15.590225 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 18:59:15.590428 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 18:59:15.597023 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 18:59:15.603539 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 18:59:15.606698 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 18:59:15.609686 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 18:59:15.674893 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 18:59:15.760535 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (690)
Jan 23 18:59:15.760684 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:59:15.783697 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:59:15.818794 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 18:59:15.818882 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 18:59:15.850000 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:59:15.865929 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 18:59:15.881737 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 18:59:20.612216 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 4628352021 wd_nsec: 4628307924 Jan 23 18:59:20.772911 ignition[747]: Ignition 2.22.0 Jan 23 18:59:20.774155 ignition[747]: Stage: fetch-offline Jan 23 18:59:20.774216 ignition[747]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:59:20.774307 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:59:20.774676 ignition[747]: parsed url from cmdline: "" Jan 23 18:59:20.774683 ignition[747]: no config URL provided Jan 23 18:59:20.774692 ignition[747]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:59:20.774768 ignition[747]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:59:20.774961 ignition[747]: op(1): [started] loading QEMU firmware config module Jan 23 18:59:20.774969 ignition[747]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 23 18:59:20.847554 ignition[747]: op(1): [finished] loading QEMU firmware config module Jan 23 18:59:20.908463 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:59:20.971651 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:59:21.209648 systemd-networkd[839]: lo: Link UP Jan 23 18:59:21.209996 systemd-networkd[839]: lo: Gained carrier Jan 23 18:59:21.216446 systemd-networkd[839]: Enumeration completed Jan 23 18:59:21.220193 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:59:21.224196 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:59:21.224204 systemd-networkd[839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:59:21.243462 systemd-networkd[839]: eth0: Link UP Jan 23 18:59:21.244481 systemd-networkd[839]: eth0: Gained carrier Jan 23 18:59:21.244503 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:59:21.250962 systemd[1]: Reached target network.target - Network. Jan 23 18:59:21.422594 systemd-networkd[839]: eth0: DHCPv4 address 10.0.0.46/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 18:59:21.985677 ignition[747]: parsing config with SHA512: 4a525be495cff99289283dccae99375a1d2dc831ec5f95f7dc29964aacd5fc6775e1d5edf641e9f8724a66366e5549fb7bec7323db0a5a4f8d4fbf5f881e75ba Jan 23 18:59:22.031853 unknown[747]: fetched base config from "system" Jan 23 18:59:22.031871 unknown[747]: fetched user config from "qemu" Jan 23 18:59:22.032902 ignition[747]: fetch-offline: fetch-offline passed Jan 23 18:59:22.033007 ignition[747]: Ignition finished successfully Jan 23 18:59:22.086462 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:59:22.114469 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 23 18:59:22.120020 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
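The fetch-offline stage above probes a fixed sequence of config sources — base.d, the platform dir, a cmdline URL, /usr/lib/ignition/user.ign, and finally the QEMU firmware config device — then logs the SHA512 of whatever it parses. A hedged sketch of that lookup order (the fw_cfg sysfs path is an assumption about how the qemu_fw_cfg module exposes the blob; real Ignition uses dedicated fetchers rather than plain file reads):

```python
# Hedged sketch of the fetch-offline lookup order walked through above. Paths
# mirror the log messages; the fw_cfg sysfs path is an assumption about how the
# qemu_fw_cfg module exposes the Ignition blob.
import hashlib
import json
import os

SEARCH_PATHS = [
    "/usr/lib/ignition/user.ign",  # baked-in user config, tried first here
    "/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw",  # QEMU fw_cfg
]

def fetch_offline():
    for path in SEARCH_PATHS:
        if os.path.exists(path):
            with open(path, "rb") as f:
                raw = f.read()
            print(f"parsing config with SHA512: {hashlib.sha512(raw).hexdigest()}")
            return json.loads(raw)
    print("no config URL provided")
    return None
```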
Jan 23 18:59:22.526584 ignition[844]: Ignition 2.22.0 Jan 23 18:59:22.527047 ignition[844]: Stage: kargs Jan 23 18:59:22.567334 ignition[844]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:59:22.567595 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:59:22.574980 ignition[844]: kargs: kargs passed Jan 23 18:59:22.575269 ignition[844]: Ignition finished successfully Jan 23 18:59:22.607283 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 18:59:22.651047 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 18:59:22.863602 ignition[852]: Ignition 2.22.0 Jan 23 18:59:22.863713 ignition[852]: Stage: disks Jan 23 18:59:22.866565 ignition[852]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:59:22.866589 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:59:22.929648 ignition[852]: disks: disks passed Jan 23 18:59:22.929971 ignition[852]: Ignition finished successfully Jan 23 18:59:22.943310 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 18:59:22.980573 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 18:59:22.992932 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 18:59:22.993248 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:59:23.026843 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:59:23.083972 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:59:23.109950 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 18:59:23.218222 systemd-networkd[839]: eth0: Gained IPv6LL Jan 23 18:59:23.306983 systemd-fsck[863]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 23 18:59:23.329364 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 18:59:23.381668 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 18:59:24.531606 kernel: EXT4-fs (vda9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none. Jan 23 18:59:24.534554 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 18:59:24.545706 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 18:59:24.595935 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:59:24.625775 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 18:59:24.637921 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 18:59:24.638003 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 18:59:24.638047 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:59:24.749995 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 18:59:24.843800 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (872) Jan 23 18:59:24.843887 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:59:24.843908 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:59:24.797798 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
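For scale, the systemd-fsck summary above reads as used/total inodes and used/total blocks on ROOT; a quick check:

```python
# Utilization implied by the fsck summary "15/553520 files, 52789/553472
# blocks" (used/total inodes and used/total blocks on the ROOT filesystem):
inodes_used, inodes_total = 15, 553_520
blocks_used, blocks_total = 52_789, 553_472
print(f"inodes: {100 * inodes_used / inodes_total:.3f}% used")  # ~0.003%
print(f"blocks: {100 * blocks_used / blocks_total:.1f}% used")  # ~9.5%
```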
Jan 23 18:59:24.955902 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:59:24.962979 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:59:24.987688 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:59:25.804210 initrd-setup-root[896]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 18:59:25.877834 initrd-setup-root[903]: cut: /sysroot/etc/group: No such file or directory Jan 23 18:59:25.914852 initrd-setup-root[910]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 18:59:25.971553 initrd-setup-root[917]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 18:59:27.133535 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 18:59:27.173615 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 18:59:27.216353 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 18:59:27.371808 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 18:59:27.395846 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:59:27.440289 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 18:59:28.305291 ignition[986]: INFO : Ignition 2.22.0 Jan 23 18:59:28.305291 ignition[986]: INFO : Stage: mount Jan 23 18:59:28.325809 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:59:28.325809 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:59:28.388775 ignition[986]: INFO : mount: mount passed Jan 23 18:59:28.388775 ignition[986]: INFO : Ignition finished successfully Jan 23 18:59:28.401775 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 18:59:28.449283 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 18:59:28.588651 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:59:28.766040 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (999) Jan 23 18:59:28.816973 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:59:28.817696 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:59:28.910213 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:59:28.910752 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:59:28.979005 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 18:59:29.721860 ignition[1016]: INFO : Ignition 2.22.0 Jan 23 18:59:29.812731 ignition[1016]: INFO : Stage: files Jan 23 18:59:29.838535 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:59:29.838535 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:59:29.838535 ignition[1016]: DEBUG : files: compiled without relabeling support, skipping Jan 23 18:59:29.900528 ignition[1016]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 18:59:29.900528 ignition[1016]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 18:59:29.900528 ignition[1016]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 18:59:29.900528 ignition[1016]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 18:59:29.900528 ignition[1016]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 18:59:29.899007 unknown[1016]: wrote ssh authorized keys file for user: core Jan 23 18:59:30.001893 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 18:59:30.001893 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 23 18:59:30.116200 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 18:59:30.672300 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 18:59:30.693647 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 18:59:30.693647 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 23 18:59:30.977202 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 23 18:59:32.122404 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 18:59:32.122404 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 23 18:59:32.166636 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 18:59:32.166636 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:59:32.203964 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:59:32.203964 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:59:32.203964 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:59:32.203964 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:59:32.203964 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 
18:59:32.203964 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:59:32.203964 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:59:32.203964 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 18:59:32.203964 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 18:59:32.203964 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 18:59:32.203964 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 23 18:59:32.449600 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 23 18:59:40.503390 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 18:59:40.503390 ignition[1016]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 23 18:59:40.545613 ignition[1016]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:59:40.545613 ignition[1016]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:59:40.545613 ignition[1016]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 23 18:59:40.545613 ignition[1016]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 23 18:59:40.545613 ignition[1016]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 18:59:40.545613 ignition[1016]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 18:59:40.545613 ignition[1016]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 23 18:59:40.545613 ignition[1016]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 23 18:59:41.068903 ignition[1016]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 18:59:41.128787 ignition[1016]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 18:59:41.179836 ignition[1016]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 23 18:59:41.179836 ignition[1016]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 23 18:59:41.179836 ignition[1016]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 18:59:41.179836 ignition[1016]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:59:41.179836 ignition[1016]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:59:41.179836 ignition[1016]: INFO : files: files passed Jan 23 18:59:41.179836 ignition[1016]: INFO : Ignition finished successfully Jan 23 18:59:41.236648 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 18:59:41.337647 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 18:59:41.399841 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 18:59:41.423869 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 18:59:41.424375 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 18:59:41.505777 initrd-setup-root-after-ignition[1045]: grep: /sysroot/oem/oem-release: No such file or directory Jan 23 18:59:41.534515 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:59:41.534515 initrd-setup-root-after-ignition[1047]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:59:41.597376 initrd-setup-root-after-ignition[1051]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:59:41.559474 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:59:41.584363 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 18:59:41.632386 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 18:59:41.838265 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 18:59:41.840740 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 18:59:41.884387 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 18:59:41.896988 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 18:59:41.932272 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 18:59:41.935466 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 18:59:42.174348 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:59:42.208032 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 18:59:42.286780 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:59:42.298403 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:59:42.329746 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 18:59:42.361025 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 18:59:42.361931 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:59:42.426499 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 18:59:42.441464 systemd[1]: Stopped target basic.target - Basic System. Jan 23 18:59:42.465865 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 18:59:42.479427 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:59:42.509396 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 18:59:42.543211 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
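The files stage logged above is driven entirely by the machine's Ignition config: each createFiles op writes or fetches one path, and each systemd op installs a unit or flips a preset. A hypothetical, heavily trimmed config of the same shape (Ignition v3 field names; the spec version and unit body are placeholders, not read from this machine):

```python
# Hypothetical Ignition config of the shape that would drive the files stage
# above: one remote file fetched into /opt, one unit enabled, one preset
# disabled. Values marked as placeholders are assumptions for illustration.
import json

config = {
    "ignition": {"version": "3.4.0"},  # placeholder spec version
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"
                },
            }
        ]
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Service]\nType=oneshot\n# placeholder body\n",
            },
            {"name": "coreos-metadata.service", "enabled": False},
        ]
    },
}

print(json.dumps(config, indent=2))
```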
Jan 23 18:59:42.601024 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 18:59:42.640802 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:59:42.673539 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 18:59:42.708417 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 18:59:42.721480 systemd[1]: Stopped target swap.target - Swaps. Jan 23 18:59:42.776007 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 18:59:42.776442 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:59:42.811504 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:59:42.828696 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:59:42.878690 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 18:59:42.892758 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:59:42.906034 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 18:59:42.906411 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 18:59:42.938776 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 18:59:42.938991 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:59:42.960430 systemd[1]: Stopped target paths.target - Path Units. Jan 23 18:59:43.002934 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 18:59:43.008360 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:59:43.046813 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 18:59:43.061908 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 18:59:43.103468 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 18:59:43.103830 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:59:43.119652 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 18:59:43.119806 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:59:43.133347 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 18:59:43.134352 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:59:43.170933 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 18:59:43.172362 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 18:59:43.185738 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 18:59:43.218425 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 18:59:43.289224 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 18:59:43.290027 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:59:43.338944 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 18:59:43.340425 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:59:43.383799 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 18:59:43.384033 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 23 18:59:44.022676 ignition[1071]: INFO : Ignition 2.22.0 Jan 23 18:59:44.022676 ignition[1071]: INFO : Stage: umount Jan 23 18:59:44.043307 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:59:44.043307 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:59:44.043569 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 18:59:44.071830 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 18:59:44.072221 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 18:59:44.113721 ignition[1071]: INFO : umount: umount passed Jan 23 18:59:44.113721 ignition[1071]: INFO : Ignition finished successfully Jan 23 18:59:44.143565 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 18:59:44.143958 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 18:59:44.182023 systemd[1]: Stopped target network.target - Network. Jan 23 18:59:44.193007 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 18:59:44.193415 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 18:59:44.202908 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 18:59:44.203033 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 18:59:44.226014 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 18:59:44.226266 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 18:59:44.233876 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 18:59:44.233982 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 18:59:44.257034 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 18:59:44.257287 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 18:59:44.286042 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 18:59:44.286553 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 18:59:44.359887 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 18:59:44.360185 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 18:59:44.411411 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 18:59:44.411984 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 18:59:44.412325 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 18:59:44.489036 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 18:59:44.506445 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 18:59:44.528937 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 18:59:44.529261 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:59:44.544189 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 18:59:44.554825 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 18:59:44.554910 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:59:44.573245 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 18:59:44.573339 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:59:44.636917 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 18:59:44.650222 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 23 18:59:44.679568 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 18:59:44.681970 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:59:44.716242 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:59:44.740866 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 18:59:44.740993 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:59:44.781524 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 18:59:44.796010 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:59:44.826281 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 18:59:44.826533 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 18:59:44.837559 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 18:59:44.837721 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:59:44.877384 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 18:59:44.877494 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:59:44.900764 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 18:59:44.900848 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 18:59:44.924454 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 18:59:44.924582 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:59:44.958010 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 18:59:44.964177 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 18:59:44.964255 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:59:45.022912 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 18:59:45.023028 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:59:45.090568 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 18:59:45.090831 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 18:59:45.126857 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 18:59:45.127047 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:59:45.130432 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:59:45.130506 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:59:45.175206 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 18:59:45.175311 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 23 18:59:45.175389 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 18:59:45.175483 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:59:45.176595 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 18:59:45.176955 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jan 23 18:59:45.191712 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 18:59:45.191961 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 18:59:45.238286 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 18:59:45.316224 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 18:59:45.441888 systemd[1]: Switching root. Jan 23 18:59:45.515837 systemd-journald[203]: Journal stopped Jan 23 18:59:52.965994 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Jan 23 18:59:52.966293 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 18:59:52.966318 kernel: SELinux: policy capability open_perms=1 Jan 23 18:59:52.966340 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 18:59:52.966356 kernel: SELinux: policy capability always_check_network=0 Jan 23 18:59:52.966374 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 18:59:52.966392 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 18:59:52.966407 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 18:59:52.966429 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 18:59:52.966443 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 18:59:52.966466 kernel: audit: type=1403 audit(1769194786.190:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 18:59:52.966591 systemd[1]: Successfully loaded SELinux policy in 386.560ms. Jan 23 18:59:52.966618 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 26.125ms. Jan 23 18:59:52.966641 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:59:52.966658 systemd[1]: Detected virtualization kvm. Jan 23 18:59:52.966673 systemd[1]: Detected architecture x86-64. Jan 23 18:59:52.966785 systemd[1]: Detected first boot. Jan 23 18:59:52.966805 systemd[1]: Initializing machine ID from VM UUID. Jan 23 18:59:52.966821 zram_generator::config[1116]: No configuration found. Jan 23 18:59:52.966843 kernel: Guest personality initialized and is inactive Jan 23 18:59:52.966859 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 18:59:52.966962 kernel: Initialized host personality Jan 23 18:59:52.966980 kernel: NET: Registered PF_VSOCK protocol family Jan 23 18:59:52.966995 systemd[1]: Populated /etc with preset unit settings. Jan 23 18:59:52.967012 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 18:59:52.967030 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 18:59:52.967047 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 18:59:52.967207 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 18:59:52.967229 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 18:59:52.967326 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 18:59:52.967346 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 18:59:52.967378 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
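"Initializing machine ID from VM UUID" above refers to systemd deriving /etc/machine-id from the hypervisor-provided DMI product UUID on first boot. A minimal sketch under that assumption (real systemd also copes with SMBIOS endianness quirks and credential overrides that this ignores):

```python
# Minimal sketch of "Initializing machine ID from VM UUID": on KVM, systemd can
# seed /etc/machine-id from the DMI product UUID. The sysfs path below is the
# standard location, but treat it as an assumption of this sketch.
def machine_id_from_vm_uuid(path: str = "/sys/class/dmi/id/product_uuid") -> str:
    with open(path) as f:
        raw = f.read().strip()
    return raw.replace("-", "").lower()  # machine-id format: 32 lowercase hex chars
```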
Jan 23 18:59:52.967483 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 18:59:52.967500 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 18:59:52.967518 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 18:59:52.967546 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 18:59:52.967569 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:59:52.967585 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:59:52.967605 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 18:59:52.967621 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 18:59:52.967784 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 18:59:52.967806 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:59:52.967823 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 18:59:52.967842 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:59:52.967858 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:59:52.967874 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 18:59:52.967976 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 18:59:52.967993 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 18:59:52.968009 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 18:59:52.968029 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:59:52.968048 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:59:52.968215 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:59:52.968234 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:59:52.968332 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 18:59:52.968350 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 18:59:52.968374 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 18:59:52.968393 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:59:52.968412 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:59:52.968429 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:59:52.968446 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 18:59:52.968541 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 18:59:52.968564 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 18:59:52.968580 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 18:59:52.968596 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:59:52.968617 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 18:59:52.968633 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Jan 23 18:59:52.968650 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 18:59:52.968667 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 18:59:52.968768 systemd[1]: Reached target machines.target - Containers. Jan 23 18:59:52.968789 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 18:59:52.968807 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:59:52.968825 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:59:52.968915 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 18:59:52.968937 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:59:52.969028 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:59:52.969047 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:59:52.969210 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 18:59:52.969231 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:59:52.969249 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 18:59:52.969266 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 18:59:52.969364 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 18:59:52.969384 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 18:59:52.969401 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 18:59:52.978808 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:59:52.979607 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:59:52.979630 kernel: ACPI: bus type drm_connector registered Jan 23 18:59:52.979648 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:59:52.979772 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:59:52.979798 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 18:59:52.979816 kernel: fuse: init (API version 7.41) Jan 23 18:59:52.979835 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 18:59:52.979853 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:59:52.980457 systemd-journald[1201]: Collecting audit messages is disabled. Jan 23 18:59:52.980516 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 18:59:52.980536 systemd-journald[1201]: Journal started Jan 23 18:59:52.980782 systemd-journald[1201]: Runtime Journal (/run/log/journal/0fbb1d908ca54ac8841cfacf21c86629) is 6M, max 48.1M, 42.1M free. Jan 23 18:59:50.392350 systemd[1]: Queued start job for default target multi-user.target. Jan 23 18:59:50.480542 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 23 18:59:50.482973 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jan 23 18:59:50.487945 systemd[1]: systemd-journald.service: Consumed 2.776s CPU time. Jan 23 18:59:53.016249 systemd[1]: Stopped verity-setup.service. Jan 23 18:59:53.101221 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:59:53.144652 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:59:53.183856 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 18:59:53.226583 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 18:59:53.271469 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 18:59:53.293980 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 18:59:53.384859 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 18:59:53.484826 kernel: loop: module loaded Jan 23 18:59:53.488844 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 18:59:53.506488 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 18:59:53.534207 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:59:53.551454 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 18:59:53.552969 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 18:59:53.577411 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:59:53.578386 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:59:53.592268 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:59:53.593370 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:59:53.614986 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:59:53.615601 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:59:53.631674 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 18:59:53.633839 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 18:59:53.664885 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:59:53.672246 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:59:53.702988 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:59:53.745036 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:59:53.792235 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 18:59:53.828960 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 18:59:53.991617 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:59:54.027626 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 18:59:54.074394 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 18:59:54.100976 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 18:59:54.101300 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:59:54.128000 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 18:59:54.180667 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 23 18:59:54.199575 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:59:54.209050 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 18:59:54.244543 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 18:59:54.267539 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:59:54.271522 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 18:59:54.285031 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:59:54.295390 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:59:54.296567 systemd-journald[1201]: Time spent on flushing to /var/log/journal/0fbb1d908ca54ac8841cfacf21c86629 is 125.598ms for 1067 entries. Jan 23 18:59:54.296567 systemd-journald[1201]: System Journal (/var/log/journal/0fbb1d908ca54ac8841cfacf21c86629) is 8M, max 195.6M, 187.6M free. Jan 23 18:59:54.588318 systemd-journald[1201]: Received client request to flush runtime journal. Jan 23 18:59:54.332434 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 18:59:54.377279 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 18:59:54.408835 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:59:54.439563 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 18:59:54.465485 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 18:59:54.740697 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 18:59:54.781279 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 18:59:54.835465 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:59:54.874377 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 18:59:54.900205 kernel: loop0: detected capacity change from 0 to 110984 Jan 23 18:59:54.912429 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 18:59:55.189380 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Jan 23 18:59:55.191684 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Jan 23 18:59:55.343946 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 18:59:55.361614 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 18:59:55.402341 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 18:59:55.406449 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 18:59:55.430603 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 18:59:55.500498 kernel: loop1: detected capacity change from 0 to 224512 Jan 23 18:59:55.882387 kernel: loop2: detected capacity change from 0 to 128560 Jan 23 18:59:56.234675 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 18:59:56.310476 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 23 18:59:57.624372 kernel: loop3: detected capacity change from 0 to 110984 Jan 23 18:59:57.896335 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jan 23 18:59:57.896561 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jan 23 18:59:57.960925 kernel: loop4: detected capacity change from 0 to 224512 Jan 23 18:59:58.013624 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:59:58.382212 kernel: loop5: detected capacity change from 0 to 128560 Jan 23 18:59:58.543289 (sd-merge)[1261]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 23 18:59:58.544948 (sd-merge)[1261]: Merged extensions into '/usr'. Jan 23 18:59:58.579395 systemd[1]: Reload requested from client PID 1235 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 18:59:58.579496 systemd[1]: Reloading... Jan 23 18:59:59.364331 zram_generator::config[1293]: No configuration found. Jan 23 18:59:59.888206 ldconfig[1230]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 19:00:00.308550 systemd[1]: Reloading finished in 1727 ms. Jan 23 19:00:00.398927 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 19:00:00.411340 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 19:00:00.490478 systemd[1]: Starting ensure-sysext.service... Jan 23 19:00:00.506576 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 19:00:00.625961 systemd[1]: Reload requested from client PID 1327 ('systemctl') (unit ensure-sysext.service)... Jan 23 19:00:00.626263 systemd[1]: Reloading... Jan 23 19:00:00.812909 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 19:00:00.814346 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 19:00:00.814932 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 19:00:00.815626 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 19:00:00.821368 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 19:00:00.822052 systemd-tmpfiles[1328]: ACLs are not supported, ignoring. Jan 23 19:00:00.822312 systemd-tmpfiles[1328]: ACLs are not supported, ignoring. Jan 23 19:00:00.850644 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 19:00:00.850668 systemd-tmpfiles[1328]: Skipping /boot Jan 23 19:00:00.921317 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 19:00:00.921343 systemd-tmpfiles[1328]: Skipping /boot Jan 23 19:00:01.008618 zram_generator::config[1367]: No configuration found. Jan 23 19:00:01.977681 systemd[1]: Reloading finished in 1350 ms. Jan 23 19:00:02.026424 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 19:00:02.088017 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 19:00:02.269570 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 19:00:02.304690 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
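The (sd-merge) lines above are systemd-sysext merging the containerd-flatcar, docker-flatcar, and kubernetes extension images into /usr, which is why a full daemon reload follows. A rough sketch of the discovery half only (directory list per the documented sysext search path; the actual overlayfs merge is omitted):

```python
# Rough sketch of the discovery step behind the (sd-merge) lines: systemd-sysext
# scans its search path for extension images or symlinks (like the
# /etc/extensions/kubernetes.raw link written by Ignition earlier) and overlays
# each one onto /usr and /opt. The merge itself is not shown.
import os

SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions() -> list[str]:
    found = []
    for d in SYSEXT_DIRS:
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            full = os.path.join(d, name)
            if name.endswith(".raw") or os.path.isdir(full):
                found.append(full)
    return found

if __name__ == "__main__":
    print(discover_extensions())
```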
Jan 23 19:00:02.331716 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 19:00:02.369702 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 19:00:02.392702 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 19:00:02.419551 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 19:00:02.479032 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 19:00:02.575288 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 19:00:02.622660 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:00:02.630430 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 19:00:02.651353 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 19:00:02.683498 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 19:00:02.719459 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 19:00:02.731664 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 19:00:02.735335 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 19:00:02.742008 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 19:00:02.770714 systemd-udevd[1404]: Using default interface naming scheme 'v255'. Jan 23 19:00:02.781327 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 19:00:02.794565 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:00:02.801417 augenrules[1425]: No rules Jan 23 19:00:02.806727 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 19:00:02.807583 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 19:00:02.822252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 19:00:02.823464 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 19:00:02.839458 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 19:00:02.845341 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 19:00:02.878466 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 19:00:02.881395 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 19:00:02.916885 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 19:00:03.014931 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 19:00:03.037702 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 19:00:03.082359 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:00:03.086912 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 23 19:00:03.096908 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 19:00:03.102367 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 19:00:03.129501 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 19:00:03.178309 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 19:00:03.199549 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 19:00:03.211234 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 19:00:03.211306 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 19:00:03.234621 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 19:00:03.245906 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 19:00:03.245979 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:00:03.246747 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 19:00:03.265952 systemd[1]: Finished ensure-sysext.service.
Jan 23 19:00:03.273967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 19:00:03.274438 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 19:00:03.288582 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 19:00:03.290333 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 19:00:03.320722 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 19:00:03.321468 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 19:00:03.334399 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 19:00:03.334743 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 19:00:03.368546 augenrules[1453]: /sbin/augenrules: No change
Jan 23 19:00:03.370322 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 19:00:03.370412 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 19:00:03.379304 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 23 19:00:03.462911 augenrules[1498]: No rules
Jan 23 19:00:03.469648 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 19:00:03.470967 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 19:00:03.500297 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 19:00:03.940470 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 19:00:03.972442 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 23 19:00:03.975521 systemd-resolved[1403]: Positive Trust Anchors:
Jan 23 19:00:03.975547 systemd-resolved[1403]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 19:00:03.975589 systemd-resolved[1403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 19:00:03.987400 kernel: ACPI: button: Power Button [PWRF]
Jan 23 19:00:03.991555 systemd-resolved[1403]: Defaulting to hostname 'linux'.
Jan 23 19:00:03.995037 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 19:00:04.008411 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 19:00:04.028510 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 19:00:04.044504 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 19:00:04.120256 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 23 19:00:04.129475 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 23 19:00:04.149943 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 23 19:00:04.193335 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 23 19:00:04.195318 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 19:00:04.195583 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 19:00:04.198993 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 19:00:04.237293 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 19:00:04.239378 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 19:00:04.239791 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 19:00:04.241248 systemd[1]: Reached target paths.target - Path Units.
Jan 23 19:00:04.258617 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 19:00:04.259627 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 19:00:04.260500 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 19:00:04.283264 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 19:00:04.307960 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 19:00:04.352636 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 19:00:04.388784 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 19:00:04.402666 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 19:00:04.422019 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 19:00:04.455734 systemd-networkd[1472]: lo: Link UP
Jan 23 19:00:04.455896 systemd-networkd[1472]: lo: Gained carrier
Jan 23 19:00:04.469703 systemd-networkd[1472]: Enumeration completed
Jan 23 19:00:04.483673 systemd-networkd[1472]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:00:04.483690 systemd-networkd[1472]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 19:00:04.483727 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 19:00:04.490234 systemd-networkd[1472]: eth0: Link UP
Jan 23 19:00:04.494972 systemd-networkd[1472]: eth0: Gained carrier
Jan 23 19:00:04.495240 systemd-networkd[1472]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:00:04.511445 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 19:00:04.554346 systemd-networkd[1472]: eth0: DHCPv4 address 10.0.0.46/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 23 19:00:04.556382 systemd-timesyncd[1492]: Network configuration changed, trying to establish connection.
Jan 23 19:00:04.559004 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 19:00:05.718627 systemd-timesyncd[1492]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 23 19:00:05.718712 systemd-timesyncd[1492]: Initial clock synchronization to Fri 2026-01-23 19:00:05.714099 UTC.
Jan 23 19:00:05.718790 systemd-resolved[1403]: Clock change detected. Flushing caches.
Jan 23 19:00:05.729263 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 19:00:05.766796 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 19:00:05.912608 systemd[1]: Reached target network.target - Network.
Jan 23 19:00:05.928874 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 19:00:05.947635 systemd[1]: Reached target basic.target - Basic System.
Jan 23 19:00:05.961051 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 19:00:05.961525 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 19:00:05.976762 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 19:00:06.007060 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 19:00:06.028346 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 19:00:06.069510 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 19:00:06.103930 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 19:00:06.125356 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 19:00:06.171549 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 23 19:00:06.182103 jq[1539]: false
Jan 23 19:00:06.188618 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 19:00:06.351581 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 19:00:06.378558 extend-filesystems[1540]: Found /dev/vda6
Jan 23 19:00:06.378558 extend-filesystems[1540]: Found /dev/vda9
Jan 23 19:00:06.420976 extend-filesystems[1540]: Checking size of /dev/vda9
Jan 23 19:00:06.432261 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing passwd entry cache
Jan 23 19:00:06.408253 oslogin_cache_refresh[1541]: Refreshing passwd entry cache
Jan 23 19:00:06.409698 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 19:00:06.482784 extend-filesystems[1540]: Resized partition /dev/vda9
Jan 23 19:00:06.512873 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting users, quitting
Jan 23 19:00:06.512873 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 19:00:06.512873 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing group entry cache
Jan 23 19:00:06.505664 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 19:00:06.488022 oslogin_cache_refresh[1541]: Failure getting users, quitting
Jan 23 19:00:06.488056 oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 19:00:06.488238 oslogin_cache_refresh[1541]: Refreshing group entry cache
Jan 23 19:00:06.538893 extend-filesystems[1560]: resize2fs 1.47.3 (8-Jul-2025)
Jan 23 19:00:06.557069 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting groups, quitting
Jan 23 19:00:06.557069 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 19:00:06.546735 oslogin_cache_refresh[1541]: Failure getting groups, quitting
Jan 23 19:00:06.546760 oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 19:00:06.565049 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 19:00:06.583106 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 23 19:00:06.607012 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 19:00:06.659111 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 19:00:06.697113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:00:06.732084 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 19:00:06.741644 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 19:00:06.745322 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 19:00:06.829899 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 23 19:00:06.792288 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 19:00:06.837796 extend-filesystems[1560]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 23 19:00:06.837796 extend-filesystems[1560]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 23 19:00:06.837796 extend-filesystems[1560]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 23 19:00:06.912094 extend-filesystems[1540]: Resized filesystem in /dev/vda9
Jan 23 19:00:06.849849 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 19:00:06.900796 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 19:00:06.902562 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 19:00:06.903367 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 19:00:06.912085 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 19:00:06.926634 systemd-networkd[1472]: eth0: Gained IPv6LL
Jan 23 19:00:06.933930 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 23 19:00:06.934709 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 23 19:00:06.938025 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 19:00:06.938720 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 19:00:06.973614 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 19:00:07.002799 jq[1570]: true
Jan 23 19:00:07.008727 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 19:00:07.081634 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 19:00:07.228628 tar[1575]: linux-amd64/LICENSE
Jan 23 19:00:07.228628 tar[1575]: linux-amd64/helm
Jan 23 19:00:07.260046 (ntainerd)[1579]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 23 19:00:07.282355 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 23 19:00:07.285960 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 19:00:07.300559 jq[1578]: true
Jan 23 19:00:07.302093 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 23 19:00:07.326536 update_engine[1569]: I20260123 19:00:07.325899 1569 main.cc:92] Flatcar Update Engine starting
Jan 23 19:00:07.344882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:00:07.365502 sshd_keygen[1571]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 23 19:00:07.367965 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 19:00:07.369761 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 19:00:07.510911 dbus-daemon[1537]: [system] SELinux support is enabled
Jan 23 19:00:07.512874 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 23 19:00:07.517328 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 23 19:00:07.522625 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 23 19:00:07.523285 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 23 19:00:07.523313 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 23 19:00:07.566964 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:00:07.573785 update_engine[1569]: I20260123 19:00:07.573721 1569 update_check_scheduler.cc:74] Next update check in 5m47s
Jan 23 19:00:07.588325 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 23 19:00:07.589091 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 23 19:00:07.624357 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 23 19:00:07.636121 systemd[1]: Started update-engine.service - Update Engine.
Jan 23 19:00:07.643894 systemd-logind[1561]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 23 19:00:07.643935 systemd-logind[1561]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 23 19:00:07.652599 systemd-logind[1561]: New seat seat0.
Jan 23 19:00:07.668840 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 23 19:00:07.680272 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 19:00:07.684937 systemd[1]: Started sshd@0-10.0.0.46:22-10.0.0.1:43092.service - OpenSSH per-connection server daemon (10.0.0.1:43092).
Jan 23 19:00:07.773692 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 19:00:07.788247 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 23 19:00:07.805853 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 23 19:00:07.942099 systemd[1]: issuegen.service: Deactivated successfully.
Jan 23 19:00:07.943290 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 23 19:00:07.970924 bash[1640]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 19:00:07.996317 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 23 19:00:08.048228 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 23 19:00:08.069265 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 23 19:00:08.672015 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 23 19:00:08.754961 kernel: kvm_amd: TSC scaling supported
Jan 23 19:00:08.771942 kernel: kvm_amd: Nested Virtualization enabled
Jan 23 19:00:08.771982 kernel: kvm_amd: Nested Paging enabled
Jan 23 19:00:08.772003 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 23 19:00:08.772023 kernel: kvm_amd: PMU virtualization is disabled
Jan 23 19:00:08.756136 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 23 19:00:08.776658 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 23 19:00:08.795314 systemd[1]: Reached target getty.target - Login Prompts.
Jan 23 19:00:08.935746 locksmithd[1635]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 23 19:00:09.336898 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 43092 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:00:09.345971 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:00:09.421797 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 23 19:00:09.666660 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 23 19:00:09.917321 systemd-logind[1561]: New session 1 of user core.
Jan 23 19:00:10.042596 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 23 19:00:10.068566 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 23 19:00:10.282842 (systemd)[1664]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 23 19:00:10.305843 systemd-logind[1561]: New session c1 of user core.
Jan 23 19:00:10.409978 kernel: EDAC MC: Ver: 3.0.0
Jan 23 19:00:10.973023 systemd[1664]: Queued start job for default target default.target.
Jan 23 19:00:10.981988 systemd[1664]: Created slice app.slice - User Application Slice.
Jan 23 19:00:10.982020 systemd[1664]: Reached target paths.target - Paths.
Jan 23 19:00:10.983664 systemd[1664]: Reached target timers.target - Timers.
Jan 23 19:00:11.005712 systemd[1664]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 23 19:00:11.075804 systemd[1664]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 23 19:00:11.078281 systemd[1664]: Reached target sockets.target - Sockets.
Jan 23 19:00:11.078556 systemd[1664]: Reached target basic.target - Basic System.
Jan 23 19:00:11.078651 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 23 19:00:11.078979 systemd[1664]: Reached target default.target - Main User Target.
Jan 23 19:00:11.079023 systemd[1664]: Startup finished in 642ms.
Jan 23 19:00:11.116560 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 23 19:00:11.276891 systemd[1]: Started sshd@1-10.0.0.46:22-10.0.0.1:43106.service - OpenSSH per-connection server daemon (10.0.0.1:43106).
Jan 23 19:00:11.320603 containerd[1579]: time="2026-01-23T19:00:11Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 23 19:00:11.323049 containerd[1579]: time="2026-01-23T19:00:11.322941214Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 23 19:00:11.433299 containerd[1579]: time="2026-01-23T19:00:11.431570313Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="187.179µs"
Jan 23 19:00:11.433299 containerd[1579]: time="2026-01-23T19:00:11.432617067Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 23 19:00:11.433299 containerd[1579]: time="2026-01-23T19:00:11.432776835Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 23 19:00:11.443606 containerd[1579]: time="2026-01-23T19:00:11.437552139Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 23 19:00:11.443606 containerd[1579]: time="2026-01-23T19:00:11.437725913Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 23 19:00:11.443606 containerd[1579]: time="2026-01-23T19:00:11.437772810Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 19:00:11.443606 containerd[1579]: time="2026-01-23T19:00:11.438348145Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 19:00:11.443606 containerd[1579]: time="2026-01-23T19:00:11.438488406Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 19:00:11.443606 containerd[1579]: time="2026-01-23T19:00:11.439156463Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 19:00:11.443606 containerd[1579]: time="2026-01-23T19:00:11.440290650Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 19:00:11.443606 containerd[1579]: time="2026-01-23T19:00:11.440310066Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 19:00:11.443606 containerd[1579]: time="2026-01-23T19:00:11.440323261Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 23 19:00:11.443606 containerd[1579]: time="2026-01-23T19:00:11.440688313Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 23 19:00:11.466716 containerd[1579]: time="2026-01-23T19:00:11.466547219Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 19:00:11.466849 containerd[1579]: time="2026-01-23T19:00:11.466732325Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 19:00:11.466849 containerd[1579]: time="2026-01-23T19:00:11.466752953Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 23 19:00:11.467362 containerd[1579]: time="2026-01-23T19:00:11.467127132Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 23 19:00:11.468588 containerd[1579]: time="2026-01-23T19:00:11.468329296Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 23 19:00:11.468758 containerd[1579]: time="2026-01-23T19:00:11.468676093Z" level=info msg="metadata content store policy set" policy=shared
Jan 23 19:00:11.479535 containerd[1579]: time="2026-01-23T19:00:11.479321262Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 23 19:00:11.479684 containerd[1579]: time="2026-01-23T19:00:11.479615512Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 23 19:00:11.479898 containerd[1579]: time="2026-01-23T19:00:11.479737369Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 23 19:00:11.479898 containerd[1579]: time="2026-01-23T19:00:11.479759621Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 23 19:00:11.479898 containerd[1579]: time="2026-01-23T19:00:11.479778977Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 23 19:00:11.479898 containerd[1579]: time="2026-01-23T19:00:11.479796840Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 23 19:00:11.479898 containerd[1579]: time="2026-01-23T19:00:11.479815405Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 23 19:00:11.479898 containerd[1579]: time="2026-01-23T19:00:11.479878332Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 23 19:00:11.479898 containerd[1579]: time="2026-01-23T19:00:11.479895614Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 23 19:00:11.480101 containerd[1579]: time="2026-01-23T19:00:11.479909520Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 23 19:00:11.480101 containerd[1579]: time="2026-01-23T19:00:11.479922535Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 23 19:00:11.480101 containerd[1579]: time="2026-01-23T19:00:11.479939656Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 23 19:00:11.480327 containerd[1579]: time="2026-01-23T19:00:11.480246490Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 23 19:00:11.480484 containerd[1579]: time="2026-01-23T19:00:11.480462222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 23 19:00:11.480592 containerd[1579]: time="2026-01-23T19:00:11.480488972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 23 19:00:11.480592 containerd[1579]: time="2026-01-23T19:00:11.480555295Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 23 19:00:11.480592 containerd[1579]: time="2026-01-23T19:00:11.480572167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 23 19:00:11.480592 containerd[1579]: time="2026-01-23T19:00:11.480585792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 23 19:00:11.480712 containerd[1579]: time="2026-01-23T19:00:11.480601652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 23 19:00:11.480712 containerd[1579]: time="2026-01-23T19:00:11.480617161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 23 19:00:11.480712 containerd[1579]: time="2026-01-23T19:00:11.480632540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 23 19:00:11.480712 containerd[1579]: time="2026-01-23T19:00:11.480645955Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 23 19:00:11.480904 containerd[1579]: time="2026-01-23T19:00:11.480859704Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 23 19:00:11.481363 containerd[1579]: time="2026-01-23T19:00:11.481280409Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 23 19:00:11.481363 containerd[1579]: time="2026-01-23T19:00:11.481347785Z" level=info msg="Start snapshots syncer"
Jan 23 19:00:11.482462 containerd[1579]: time="2026-01-23T19:00:11.481578044Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 23 19:00:11.482670 containerd[1579]: time="2026-01-23T19:00:11.482566720Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 23 19:00:11.484496 containerd[1579]: time="2026-01-23T19:00:11.483629183Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 23 19:00:11.484496 containerd[1579]: time="2026-01-23T19:00:11.483847129Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 23 19:00:11.484496 containerd[1579]: time="2026-01-23T19:00:11.484076668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 23 19:00:11.484496 containerd[1579]: time="2026-01-23T19:00:11.484235063Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 23 19:00:11.484496 containerd[1579]: time="2026-01-23T19:00:11.484261142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 23 19:00:11.484937 containerd[1579]: time="2026-01-23T19:00:11.484910955Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 23 19:00:11.485084 containerd[1579]: time="2026-01-23T19:00:11.485059202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 23 19:00:11.485256 containerd[1579]: time="2026-01-23T19:00:11.485228267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 23 19:00:11.485335 containerd[1579]: time="2026-01-23T19:00:11.485317825Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 23 19:00:11.485586 containerd[1579]: time="2026-01-23T19:00:11.485566329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 23 19:00:11.485661 containerd[1579]: time="2026-01-23T19:00:11.485646297Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 23 19:00:11.485797 containerd[1579]: time="2026-01-23T19:00:11.485777353Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 23 19:00:11.486500 containerd[1579]: time="2026-01-23T19:00:11.486475866Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 19:00:11.486598 containerd[1579]: time="2026-01-23T19:00:11.486578979Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 19:00:11.486655 containerd[1579]: time="2026-01-23T19:00:11.486641055Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 19:00:11.486710 containerd[1579]: time="2026-01-23T19:00:11.486695887Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 19:00:11.486759 containerd[1579]: time="2026-01-23T19:00:11.486745921Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 23 19:00:11.486815 containerd[1579]: time="2026-01-23T19:00:11.486802335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 23 19:00:11.486880 containerd[1579]: time="2026-01-23T19:00:11.486866094Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 23 19:00:11.487044 containerd[1579]: time="2026-01-23T19:00:11.487028237Z" level=info msg="runtime interface created"
Jan 23 19:00:11.487254 containerd[1579]: time="2026-01-23T19:00:11.487084733Z" level=info msg="created NRI interface"
Jan 23 19:00:11.487342 containerd[1579]: time="2026-01-23T19:00:11.487320262Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 23 19:00:11.487534 containerd[1579]: time="2026-01-23T19:00:11.487512842Z" level=info msg="Connect containerd service"
Jan 23 19:00:11.487621 containerd[1579]: time="2026-01-23T19:00:11.487604483Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 23 19:00:11.497738 containerd[1579]: time="2026-01-23T19:00:11.496220143Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 19:00:11.761452 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 43106 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:00:11.764764 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:00:11.862872 systemd-logind[1561]: New session 2 of user core.
Jan 23 19:00:11.869873 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 23 19:00:11.898727 tar[1575]: linux-amd64/README.md
Jan 23 19:00:11.960754 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 23 19:00:11.977988 sshd[1688]: Connection closed by 10.0.0.1 port 43106
Jan 23 19:00:11.981158 sshd-session[1677]: pam_unix(sshd:session): session closed for user core
Jan 23 19:00:12.060589 systemd[1]: sshd@1-10.0.0.46:22-10.0.0.1:43106.service: Deactivated successfully.
Jan 23 19:00:12.068917 systemd[1]: session-2.scope: Deactivated successfully.
Jan 23 19:00:12.078222 systemd-logind[1561]: Session 2 logged out. Waiting for processes to exit.
Jan 23 19:00:12.083953 systemd[1]: Started sshd@2-10.0.0.46:22-10.0.0.1:43112.service - OpenSSH per-connection server daemon (10.0.0.1:43112).
Jan 23 19:00:12.110509 systemd-logind[1561]: Removed session 2.
Jan 23 19:00:12.315722 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 43112 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:00:12.316166 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:00:12.327160 systemd-logind[1561]: New session 3 of user core.
Jan 23 19:00:12.335882 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 23 19:00:12.469100 containerd[1579]: time="2026-01-23T19:00:12.468493057Z" level=info msg="Start subscribing containerd event"
Jan 23 19:00:12.469100 containerd[1579]: time="2026-01-23T19:00:12.468874879Z" level=info msg="Start recovering state"
Jan 23 19:00:12.470676 containerd[1579]: time="2026-01-23T19:00:12.470588448Z" level=info msg="Start event monitor"
Jan 23 19:00:12.470873 containerd[1579]: time="2026-01-23T19:00:12.470791817Z" level=info msg="Start cni network conf syncer for default"
Jan 23 19:00:12.470873 containerd[1579]: time="2026-01-23T19:00:12.470845798Z" level=info msg="Start streaming server"
Jan 23 19:00:12.471021 containerd[1579]: time="2026-01-23T19:00:12.470957216Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 23 19:00:12.471081 containerd[1579]: time="2026-01-23T19:00:12.471062433Z" level=info msg="runtime interface starting up..."
Jan 23 19:00:12.471226 containerd[1579]: time="2026-01-23T19:00:12.471115251Z" level=info msg="starting plugins..."
Jan 23 19:00:12.471284 containerd[1579]: time="2026-01-23T19:00:12.471242298Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 23 19:00:12.472799 containerd[1579]: time="2026-01-23T19:00:12.472634597Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 23 19:00:12.472799 containerd[1579]: time="2026-01-23T19:00:12.472762937Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 23 19:00:12.472982 systemd[1]: Started containerd.service - containerd container runtime.
Jan 23 19:00:12.475747 containerd[1579]: time="2026-01-23T19:00:12.475349555Z" level=info msg="containerd successfully booted in 1.156763s"
Jan 23 19:00:12.522478 sshd[1707]: Connection closed by 10.0.0.1 port 43112
Jan 23 19:00:12.523742 sshd-session[1699]: pam_unix(sshd:session): session closed for user core
Jan 23 19:00:12.530852 systemd[1]: sshd@2-10.0.0.46:22-10.0.0.1:43112.service: Deactivated successfully.
Jan 23 19:00:12.535254 systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 19:00:12.540516 systemd-logind[1561]: Session 3 logged out. Waiting for processes to exit.
Jan 23 19:00:12.545517 systemd-logind[1561]: Removed session 3.
Jan 23 19:00:14.703837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:00:14.704906 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 23 19:00:14.706316 systemd[1]: Startup finished in 13.239s (kernel) + 38.739s (initrd) + 27.738s (userspace) = 1min 19.717s.
Jan 23 19:00:14.736550 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 19:00:15.900289 kubelet[1721]: E0123 19:00:15.899802 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 19:00:15.906300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 19:00:15.906715 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 19:00:15.907351 systemd[1]: kubelet.service: Consumed 3.997s CPU time, 266.8M memory peak.
Jan 23 19:00:22.596926 systemd[1]: Started sshd@3-10.0.0.46:22-10.0.0.1:55618.service - OpenSSH per-connection server daemon (10.0.0.1:55618).
Jan 23 19:00:22.755044 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 55618 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:00:22.758887 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:00:22.782335 systemd-logind[1561]: New session 4 of user core.
Jan 23 19:00:22.793894 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 23 19:00:22.951537 sshd[1733]: Connection closed by 10.0.0.1 port 55618
Jan 23 19:00:22.955180 sshd-session[1730]: pam_unix(sshd:session): session closed for user core
Jan 23 19:00:22.978898 systemd[1]: sshd@3-10.0.0.46:22-10.0.0.1:55618.service: Deactivated successfully.
Jan 23 19:00:22.992945 systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 19:00:23.006122 systemd-logind[1561]: Session 4 logged out. Waiting for processes to exit.
Jan 23 19:00:23.018956 systemd[1]: Started sshd@4-10.0.0.46:22-10.0.0.1:55620.service - OpenSSH per-connection server daemon (10.0.0.1:55620).
Jan 23 19:00:23.027914 systemd-logind[1561]: Removed session 4.
Jan 23 19:00:23.135886 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 55620 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:00:23.139534 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:00:23.245350 systemd-logind[1561]: New session 5 of user core.
Jan 23 19:00:23.259671 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 23 19:00:23.362938 sshd[1742]: Connection closed by 10.0.0.1 port 55620
Jan 23 19:00:23.363030 sshd-session[1739]: pam_unix(sshd:session): session closed for user core
Jan 23 19:00:23.393350 systemd[1]: sshd@4-10.0.0.46:22-10.0.0.1:55620.service: Deactivated successfully.
Jan 23 19:00:23.433616 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 19:00:23.456260 systemd-logind[1561]: Session 5 logged out. Waiting for processes to exit.
Jan 23 19:00:23.464861 systemd[1]: Started sshd@5-10.0.0.46:22-10.0.0.1:55630.service - OpenSSH per-connection server daemon (10.0.0.1:55630).
Jan 23 19:00:23.468094 systemd-logind[1561]: Removed session 5.
Jan 23 19:00:23.611502 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 55630 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:00:23.625515 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:00:23.685827 systemd-logind[1561]: New session 6 of user core.
Jan 23 19:00:23.701535 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 19:00:23.821851 sshd[1751]: Connection closed by 10.0.0.1 port 55630
Jan 23 19:00:23.823161 sshd-session[1748]: pam_unix(sshd:session): session closed for user core
Jan 23 19:00:23.857674 systemd[1]: sshd@5-10.0.0.46:22-10.0.0.1:55630.service: Deactivated successfully.
Jan 23 19:00:23.883765 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 19:00:23.892061 systemd-logind[1561]: Session 6 logged out. Waiting for processes to exit.
Jan 23 19:00:23.900756 systemd[1]: Started sshd@6-10.0.0.46:22-10.0.0.1:55646.service - OpenSSH per-connection server daemon (10.0.0.1:55646).
Jan 23 19:00:23.906042 systemd-logind[1561]: Removed session 6.
Jan 23 19:00:24.049073 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 55646 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:00:24.051544 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:00:24.072511 systemd-logind[1561]: New session 7 of user core.
Jan 23 19:00:24.082941 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 19:00:24.227533 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 23 19:00:24.232664 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 19:00:24.283694 sudo[1761]: pam_unix(sudo:session): session closed for user root
Jan 23 19:00:24.288362 sshd[1760]: Connection closed by 10.0.0.1 port 55646
Jan 23 19:00:24.289900 sshd-session[1757]: pam_unix(sshd:session): session closed for user core
Jan 23 19:00:24.326751 systemd[1]: sshd@6-10.0.0.46:22-10.0.0.1:55646.service: Deactivated successfully.
Jan 23 19:00:24.330786 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 19:00:24.334507 systemd-logind[1561]: Session 7 logged out. Waiting for processes to exit.
Jan 23 19:00:24.339498 systemd[1]: Started sshd@7-10.0.0.46:22-10.0.0.1:55660.service - OpenSSH per-connection server daemon (10.0.0.1:55660).
Jan 23 19:00:24.344653 systemd-logind[1561]: Removed session 7.
Jan 23 19:00:24.450929 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 55660 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:00:24.454198 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:00:24.477179 systemd-logind[1561]: New session 8 of user core.
Jan 23 19:00:24.494704 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 23 19:00:24.575888 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 23 19:00:24.576497 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 19:00:24.591714 sudo[1773]: pam_unix(sudo:session): session closed for user root
Jan 23 19:00:24.611840 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 23 19:00:24.612584 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 19:00:24.632962 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 19:00:24.770546 augenrules[1795]: No rules
Jan 23 19:00:24.773241 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 19:00:24.774126 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 19:00:24.776555 sudo[1772]: pam_unix(sudo:session): session closed for user root
Jan 23 19:00:24.781232 sshd[1771]: Connection closed by 10.0.0.1 port 55660
Jan 23 19:00:24.783261 sshd-session[1767]: pam_unix(sshd:session): session closed for user core
Jan 23 19:00:24.797054 systemd[1]: sshd@7-10.0.0.46:22-10.0.0.1:55660.service: Deactivated successfully.
Jan 23 19:00:24.801072 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 19:00:24.806582 systemd-logind[1561]: Session 8 logged out. Waiting for processes to exit.
Jan 23 19:00:24.813172 systemd[1]: Started sshd@8-10.0.0.46:22-10.0.0.1:41030.service - OpenSSH per-connection server daemon (10.0.0.1:41030).
Jan 23 19:00:24.815884 systemd-logind[1561]: Removed session 8.
Jan 23 19:00:24.926254 sshd[1804]: Accepted publickey for core from 10.0.0.1 port 41030 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE
Jan 23 19:00:24.932646 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:00:24.957103 systemd-logind[1561]: New session 9 of user core.
Jan 23 19:00:24.967919 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 23 19:00:25.043796 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 19:00:25.044509 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 19:00:26.141241 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 23 19:00:26.194095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:00:27.685568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:00:27.725748 (kubelet)[1836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 19:00:28.612585 kubelet[1836]: E0123 19:00:28.609882 1836 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 19:00:28.622223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 19:00:28.624848 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 19:00:28.625929 systemd[1]: kubelet.service: Consumed 1.721s CPU time, 113M memory peak.
Jan 23 19:00:29.738477 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 23 19:00:29.774989 (dockerd)[1846]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 23 19:00:32.852728 dockerd[1846]: time="2026-01-23T19:00:32.851884151Z" level=info msg="Starting up"
Jan 23 19:00:32.864887 dockerd[1846]: time="2026-01-23T19:00:32.864766997Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 23 19:00:33.080685 dockerd[1846]: time="2026-01-23T19:00:33.079584017Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 23 19:00:33.282209 systemd[1]: var-lib-docker-metacopy\x2dcheck2347305304-merged.mount: Deactivated successfully.
Jan 23 19:00:33.600793 dockerd[1846]: time="2026-01-23T19:00:33.597043583Z" level=info msg="Loading containers: start."
Jan 23 19:00:33.668898 kernel: Initializing XFRM netlink socket
Jan 23 19:00:35.969932 systemd-networkd[1472]: docker0: Link UP
Jan 23 19:00:36.005455 dockerd[1846]: time="2026-01-23T19:00:36.002729149Z" level=info msg="Loading containers: done."
Jan 23 19:00:36.118027 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4135004621-merged.mount: Deactivated successfully.
Jan 23 19:00:36.128912 dockerd[1846]: time="2026-01-23T19:00:36.128711842Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 23 19:00:36.129037 dockerd[1846]: time="2026-01-23T19:00:36.128931572Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 23 19:00:36.130061 dockerd[1846]: time="2026-01-23T19:00:36.129754584Z" level=info msg="Initializing buildkit"
Jan 23 19:00:36.435751 dockerd[1846]: time="2026-01-23T19:00:36.434122126Z" level=info msg="Completed buildkit initialization"
Jan 23 19:00:36.449618 dockerd[1846]: time="2026-01-23T19:00:36.449187249Z" level=info msg="Daemon has completed initialization"
Jan 23 19:00:36.450086 dockerd[1846]: time="2026-01-23T19:00:36.449926486Z" level=info msg="API listen on /run/docker.sock"
Jan 23 19:00:36.450656 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 19:00:38.636824 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 19:00:38.640196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:00:39.556281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:00:39.589751 (kubelet)[2069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 19:00:39.628632 containerd[1579]: time="2026-01-23T19:00:39.627651422Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 23 19:00:39.978090 kubelet[2069]: E0123 19:00:39.977842 2069 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 19:00:39.984788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 19:00:39.985148 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 19:00:39.987095 systemd[1]: kubelet.service: Consumed 964ms CPU time, 110.7M memory peak.
Jan 23 19:00:40.916005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3890895309.mount: Deactivated successfully.
Jan 23 19:00:47.032691 containerd[1579]: time="2026-01-23T19:00:47.032104655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:00:47.035070 containerd[1579]: time="2026-01-23T19:00:47.034078651Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647"
Jan 23 19:00:47.041084 containerd[1579]: time="2026-01-23T19:00:47.040785007Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:00:47.047522 containerd[1579]: time="2026-01-23T19:00:47.047274597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:00:47.049304 containerd[1579]: time="2026-01-23T19:00:47.049051461Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 7.421058862s"
Jan 23 19:00:47.049304 containerd[1579]: time="2026-01-23T19:00:47.049243637Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\""
Jan 23 19:00:47.056174 containerd[1579]: time="2026-01-23T19:00:47.055971822Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 23 19:00:50.137533 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 23 19:00:50.146118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:00:50.688901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:00:50.704991 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 19:00:51.172077 kubelet[2150]: E0123 19:00:51.171888 2150 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 19:00:51.181701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 19:00:51.182001 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 19:00:51.184248 systemd[1]: kubelet.service: Consumed 888ms CPU time, 112.1M memory peak.
Jan 23 19:00:52.997279 update_engine[1569]: I20260123 19:00:52.993100 1569 update_attempter.cc:509] Updating boot flags...
Jan 23 19:00:53.826030 containerd[1579]: time="2026-01-23T19:00:53.825558718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:00:53.831653 containerd[1579]: time="2026-01-23T19:00:53.831539042Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 23 19:00:53.834722 containerd[1579]: time="2026-01-23T19:00:53.834661971Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:00:53.842263 containerd[1579]: time="2026-01-23T19:00:53.842190626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:00:53.844596 containerd[1579]: time="2026-01-23T19:00:53.844235197Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 6.78815196s" Jan 23 19:00:53.844596 containerd[1579]: time="2026-01-23T19:00:53.844281451Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 23 19:00:53.848897 containerd[1579]: time="2026-01-23T19:00:53.848576585Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 19:01:00.299279 containerd[1579]: time="2026-01-23T19:01:00.298273185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:01:00.305846 containerd[1579]: time="2026-01-23T19:01:00.304867825Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 23 19:01:00.314033 containerd[1579]: time="2026-01-23T19:01:00.313820749Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:01:00.322164 containerd[1579]: time="2026-01-23T19:01:00.322123131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:01:00.323946 containerd[1579]: time="2026-01-23T19:01:00.323504307Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 6.474824412s" Jan 23 19:01:00.323946 containerd[1579]: time="2026-01-23T19:01:00.323705353Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 23 19:01:00.330743 
containerd[1579]: time="2026-01-23T19:01:00.330637333Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 19:01:01.408162 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 19:01:01.416697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:01:03.250676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:01:03.281887 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:01:03.778745 kubelet[2188]: E0123 19:01:03.777876 2188 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:01:03.794316 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:01:03.794935 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:01:03.799292 systemd[1]: kubelet.service: Consumed 1.395s CPU time, 112M memory peak. Jan 23 19:01:05.571322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount604129536.mount: Deactivated successfully. Jan 23 19:01:11.572867 containerd[1579]: time="2026-01-23T19:01:11.571249479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:01:11.579132 containerd[1579]: time="2026-01-23T19:01:11.578932182Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 23 19:01:11.582632 containerd[1579]: time="2026-01-23T19:01:11.581553208Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:01:11.594353 containerd[1579]: time="2026-01-23T19:01:11.594278764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:01:11.597574 containerd[1579]: time="2026-01-23T19:01:11.596318587Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 11.265624472s" Jan 23 19:01:11.597574 containerd[1579]: time="2026-01-23T19:01:11.596678087Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 23 19:01:11.603884 containerd[1579]: time="2026-01-23T19:01:11.603723047Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 19:01:13.736498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3231275101.mount: Deactivated successfully. Jan 23 19:01:13.884005 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 19:01:13.900790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 19:01:15.202364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:01:15.330178 (kubelet)[2221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:01:16.317894 kubelet[2221]: E0123 19:01:16.314652 2221 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:01:16.338589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:01:16.339265 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:01:16.349687 systemd[1]: kubelet.service: Consumed 947ms CPU time, 110.7M memory peak. Jan 23 19:01:24.679554 containerd[1579]: time="2026-01-23T19:01:24.677114257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:01:24.687185 containerd[1579]: time="2026-01-23T19:01:24.685552969Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 23 19:01:24.693623 containerd[1579]: time="2026-01-23T19:01:24.690924488Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:01:24.701777 containerd[1579]: time="2026-01-23T19:01:24.701519618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:01:24.707244 containerd[1579]: time="2026-01-23T19:01:24.707005434Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 13.103234205s" Jan 23 19:01:24.707244 containerd[1579]: time="2026-01-23T19:01:24.707145247Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 23 19:01:24.723996 containerd[1579]: time="2026-01-23T19:01:24.718073684Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 19:01:26.651239 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 23 19:01:26.656865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3156571139.mount: Deactivated successfully. Jan 23 19:01:26.686971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
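[Editor's note] Each "Pulled image ... in N s" record above carries both the image size in bytes and the wall-clock duration, enough to estimate effective per-image registry throughput (kube-apiserver: ~29 MB in 7.4 s; coredns: ~18.6 MB in 13.1 s). A sketch of a parser for exactly those fields as they appear, backslash-escaped, in this journal; the sample line is abbreviated from the log.

```python
import re

# Matches the escaped quoting used in the journal: Pulled image \"...\" ... size \"N\" in Xs
PULLED = re.compile(
    r'Pulled image \\"(?P<image>[^"\\]+)\\".*?size \\"(?P<size>\d+)\\" in (?P<secs>[\d.]+)s'
)

def pull_stats(journal_text: str):
    """Yield (image, size_bytes, seconds, MB/s) for each completed pull."""
    for m in PULLED.finditer(journal_text):
        size, secs = int(m["size"]), float(m["secs"])
        yield m["image"], size, secs, size / secs / 1e6

line = (r'Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id '
        r'\"sha256:7757...\", repo tag ..., size \"29067246\" in 7.421058862s')
for image, size, secs, rate in pull_stats(line):
    print(f"{image}: {size/1e6:.1f} MB in {secs:.1f}s ({rate:.1f} MB/s)")
# -> registry.k8s.io/kube-apiserver:v1.32.11: 29.1 MB in 7.4s (3.9 MB/s)
```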
Jan 23 19:01:26.801543 containerd[1579]: time="2026-01-23T19:01:26.801483428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:01:26.850796 containerd[1579]: time="2026-01-23T19:01:26.849263138Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 23 19:01:26.956940 containerd[1579]: time="2026-01-23T19:01:26.954571537Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:01:26.973336 containerd[1579]: time="2026-01-23T19:01:26.973050006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:01:26.974850 containerd[1579]: time="2026-01-23T19:01:26.974663406Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.256538034s" Jan 23 19:01:26.976569 containerd[1579]: time="2026-01-23T19:01:26.974852262Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 19:01:26.983095 containerd[1579]: time="2026-01-23T19:01:26.981962217Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 19:01:28.056197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:01:28.144336 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:01:28.803187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2739465281.mount: Deactivated successfully. Jan 23 19:01:28.929225 kubelet[2284]: E0123 19:01:28.927999 2284 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:01:28.956127 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:01:28.956724 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:01:28.962866 systemd[1]: kubelet.service: Consumed 941ms CPU time, 110.2M memory peak. Jan 23 19:01:39.578973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 23 19:01:39.587174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:01:40.971361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
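[Editor's note] Unlike the other images, pause:3.10 is created with an extra label, io.cri-containerd.pinned=pinned. Pinned images are exempt from image garbage collection, which matters here because every pod sandbox on the node references the pause image. A sketch that extracts the pinned flag from ImageCreate events in the escaped form this journal uses:

```python
import re

# name:\"IMAGE\" followed by the rest of the event record on the same line.
EVENT = re.compile(r'ImageCreate event name:\\"(?P<name>[^"\\]+)\\"(?P<rest>[^\n]*)')

def pinned_images(journal_text: str):
    """Yield (image, pinned?) from containerd ImageCreate events."""
    for m in EVENT.finditer(journal_text):
        yield m["name"], 'value:\\"pinned\\"' in m["rest"]

sample = (r'ImageCreate event name:\"registry.k8s.io/pause:3.10\" '
          r'labels:{key:\"io.cri-containerd.image\" value:\"managed\"} '
          r'labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}')
for name, pinned in pinned_images(sample):
    print(name, "pinned" if pinned else "unpinned")
# -> registry.k8s.io/pause:3.10 pinned
```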
Jan 23 19:01:41.042956 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:01:42.438155 kubelet[2353]: E0123 19:01:42.437714 2353 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:01:42.465993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:01:42.467116 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:01:42.469323 systemd[1]: kubelet.service: Consumed 1.282s CPU time, 110.1M memory peak. Jan 23 19:01:51.400579 containerd[1579]: time="2026-01-23T19:01:51.399207480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:01:51.414061 containerd[1579]: time="2026-01-23T19:01:51.404633817Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 23 19:01:51.435086 containerd[1579]: time="2026-01-23T19:01:51.427228583Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:01:51.458360 containerd[1579]: time="2026-01-23T19:01:51.457962172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:01:51.467622 containerd[1579]: time="2026-01-23T19:01:51.467324221Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 24.485304774s" Jan 23 19:01:51.467622 containerd[1579]: time="2026-01-23T19:01:51.467595788Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 23 19:01:52.736173 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 23 19:01:52.776725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:01:54.428924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:01:54.495764 (kubelet)[2395]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:01:55.122170 kubelet[2395]: E0123 19:01:55.121962 2395 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:01:55.138599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:01:55.140967 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 19:01:55.143563 systemd[1]: kubelet.service: Consumed 894ms CPU time, 110.8M memory peak. Jan 23 19:02:05.463951 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 23 19:02:05.502883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:02:06.817892 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 19:02:06.818045 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 19:02:06.818886 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:02:06.819334 systemd[1]: kubelet.service: Consumed 369ms CPU time, 76.1M memory peak. Jan 23 19:02:06.852779 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:02:07.057348 systemd[1]: Reload requested from client PID 2415 ('systemctl') (unit session-9.scope)... Jan 23 19:02:07.057616 systemd[1]: Reloading... Jan 23 19:02:07.735703 zram_generator::config[2458]: No configuration found. Jan 23 19:02:09.265821 systemd[1]: Reloading finished in 2206 ms. Jan 23 19:02:09.800048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:02:09.854587 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:02:09.857232 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 19:02:09.857929 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:02:09.858007 systemd[1]: kubelet.service: Consumed 428ms CPU time, 98.3M memory peak. Jan 23 19:02:09.876755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:02:11.053090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:02:11.129728 (kubelet)[2507]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 19:02:11.564756 kubelet[2507]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:02:11.568025 kubelet[2507]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 19:02:11.568025 kubelet[2507]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
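[Editor's note] The three deprecation warnings at 19:02:11 all point the same way: --container-runtime-endpoint and --volume-plugin-dir belong in the file passed to --config (while --pod-infra-container-image is going away entirely in favor of CRI-reported sandbox images). The kubelet config file may be YAML or JSON, and JSON is valid YAML, so a minimal sketch needs only the standard library. Field names below are KubeletConfiguration v1beta1; the cgroup driver and static pod path come from log lines nearby, while the runtime endpoint value is an assumption, not something this log quotes.

```python
import json

# Minimal KubeletConfiguration sketch folding the deprecated flags into
# config-file fields.
kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "cgroupDriver": "systemd",                    # logged: cgroupDriver="systemd"
    "staticPodPath": "/etc/kubernetes/manifests", # logged: "Adding static pod path"
    # Assumed containerd default; the actual endpoint is not quoted in this log.
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    # Path the kubelet recreates at probe.go:272 below.
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
}

print(json.dumps(kubelet_config, indent=2))
```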
Jan 23 19:02:11.568025 kubelet[2507]: I0123 19:02:11.565003 2507 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 19:02:13.330102 kubelet[2507]: I0123 19:02:13.329818 2507 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 19:02:13.330102 kubelet[2507]: I0123 19:02:13.330033 2507 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 19:02:13.331917 kubelet[2507]: I0123 19:02:13.330833 2507 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 19:02:13.521061 kubelet[2507]: I0123 19:02:13.518314 2507 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 19:02:13.521061 kubelet[2507]: E0123 19:02:13.518632 2507 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:02:13.558106 kubelet[2507]: I0123 19:02:13.557839 2507 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 19:02:13.627967 kubelet[2507]: I0123 19:02:13.617338 2507 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 19:02:13.627967 kubelet[2507]: I0123 19:02:13.626627 2507 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 19:02:13.627967 kubelet[2507]: I0123 19:02:13.626686 2507 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 19:02:13.627967 kubelet[2507]: I0123 19:02:13.627362 2507 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 19:02:13.632813 
kubelet[2507]: I0123 19:02:13.627475 2507 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 19:02:13.634420 kubelet[2507]: I0123 19:02:13.633531 2507 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:02:13.657711 kubelet[2507]: I0123 19:02:13.653997 2507 kubelet.go:446] "Attempting to sync node with API server" Jan 23 19:02:13.658861 kubelet[2507]: I0123 19:02:13.658361 2507 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 19:02:13.658861 kubelet[2507]: I0123 19:02:13.658492 2507 kubelet.go:352] "Adding apiserver pod source" Jan 23 19:02:13.658861 kubelet[2507]: I0123 19:02:13.658507 2507 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 19:02:13.686839 kubelet[2507]: I0123 19:02:13.686642 2507 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 19:02:13.692061 kubelet[2507]: I0123 19:02:13.690356 2507 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 19:02:13.700258 kubelet[2507]: W0123 19:02:13.700051 2507 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 19:02:13.712806 kubelet[2507]: W0123 19:02:13.712054 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 23 19:02:13.712806 kubelet[2507]: W0123 19:02:13.712224 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 23 19:02:13.712806 kubelet[2507]: E0123 19:02:13.712243 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:02:13.712806 kubelet[2507]: E0123 19:02:13.712275 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:02:13.749807 kubelet[2507]: I0123 19:02:13.748608 2507 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 19:02:13.749807 kubelet[2507]: I0123 19:02:13.748693 2507 server.go:1287] "Started kubelet" Jan 23 19:02:13.749807 kubelet[2507]: I0123 19:02:13.748814 2507 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 19:02:13.755966 kubelet[2507]: I0123 19:02:13.755868 2507 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 19:02:13.758016 kubelet[2507]: I0123 19:02:13.757992 2507 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 19:02:13.765826 kubelet[2507]: I0123 19:02:13.759906 2507 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 19:02:13.765826 
kubelet[2507]: I0123 19:02:13.762045 2507 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 19:02:13.765826 kubelet[2507]: I0123 19:02:13.762222 2507 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 19:02:13.765826 kubelet[2507]: I0123 19:02:13.762349 2507 reconciler.go:26] "Reconciler: start to sync state" Jan 23 19:02:13.765826 kubelet[2507]: W0123 19:02:13.763266 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 23 19:02:13.765826 kubelet[2507]: E0123 19:02:13.763332 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:02:13.777724 kubelet[2507]: E0123 19:02:13.772668 2507 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:02:13.777724 kubelet[2507]: I0123 19:02:13.774328 2507 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 19:02:13.778719 kubelet[2507]: I0123 19:02:13.778687 2507 server.go:479] "Adding debug handlers to kubelet server" Jan 23 19:02:13.785876 kubelet[2507]: I0123 19:02:13.785629 2507 factory.go:221] Registration of the systemd container factory successfully Jan 23 19:02:13.785876 kubelet[2507]: I0123 19:02:13.785828 2507 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 19:02:13.792941 kubelet[2507]: E0123 19:02:13.786592 2507 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="200ms" Jan 23 19:02:13.792941 kubelet[2507]: I0123 19:02:13.791471 2507 factory.go:221] Registration of the containerd container factory successfully Jan 23 19:02:13.797787 kubelet[2507]: E0123 19:02:13.797050 2507 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 19:02:13.798202 kubelet[2507]: E0123 19:02:13.783756 2507 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.46:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.46:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d716b2c825e73 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 19:02:13.748653683 +0000 UTC m=+2.577145983,LastTimestamp:2026-01-23 19:02:13.748653683 +0000 UTC m=+2.577145983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 19:02:13.873565 kubelet[2507]: E0123 19:02:13.873468 2507 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:02:13.995048 kubelet[2507]: E0123 19:02:13.993839 2507 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:02:14.000225 kubelet[2507]: E0123 19:02:13.999489 2507 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="400ms" Jan 23 19:02:14.061890 kubelet[2507]: I0123 19:02:14.051818 2507 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 19:02:14.128017 kubelet[2507]: E0123 19:02:14.124311 2507 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:02:14.139230 kubelet[2507]: I0123 19:02:14.139085 2507 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 19:02:14.139896 kubelet[2507]: I0123 19:02:14.139871 2507 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 19:02:14.145697 kubelet[2507]: I0123 19:02:14.144272 2507 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
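[Editor's note] The event the kubelet cannot POST is named localhost.188d716b2c825e73, and that suffix is not random: client-go derives event names as the involved object's name plus the hex of the event's FirstTimestamp in Unix nanoseconds. The hex here decodes exactly to the 19:02:13.748653683 timestamp embedded in the same record, as a quick check shows:

```python
from datetime import datetime, timezone

# FirstTimestamp from the undeliverable event record above.
first = datetime(2026, 1, 23, 19, 2, 13, tzinfo=timezone.utc)
unix_nanos = int(first.timestamp()) * 10**9 + 748_653_683  # .748653683s fraction

# client-go names events "<involved-object>.<UnixNano in hex>".
assert f"{unix_nanos:x}" == "188d716b2c825e73"
print(f"localhost.{unix_nanos:x}")  # matches the event name in the log
```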
Jan 23 19:02:14.145697 kubelet[2507]: I0123 19:02:14.144352 2507 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 19:02:14.145697 kubelet[2507]: E0123 19:02:14.145018 2507 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 19:02:14.152655 kubelet[2507]: W0123 19:02:14.151257 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 23 19:02:14.155709 kubelet[2507]: E0123 19:02:14.154588 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:02:14.159742 kubelet[2507]: I0123 19:02:14.157051 2507 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 19:02:14.159742 kubelet[2507]: I0123 19:02:14.157067 2507 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 19:02:14.159742 kubelet[2507]: I0123 19:02:14.157094 2507 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:02:14.180727 kubelet[2507]: I0123 19:02:14.178936 2507 policy_none.go:49] "None policy: Start" Jan 23 19:02:14.180727 kubelet[2507]: I0123 19:02:14.178980 2507 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 19:02:14.180727 kubelet[2507]: I0123 19:02:14.179001 2507 state_mem.go:35] "Initializing new in-memory state store" Jan 23 19:02:14.231659 kubelet[2507]: E0123 19:02:14.227803 2507 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:02:14.241125 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 19:02:14.248580 kubelet[2507]: E0123 19:02:14.245328 2507 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 19:02:14.298341 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 19:02:14.330703 kubelet[2507]: E0123 19:02:14.329157 2507 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:02:14.337251 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
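[Editor's note] The eviction manager that starts just below enforces the HardEvictionThresholds logged in the NodeConfig blob at 19:02:13: memory.available < 100Mi as an absolute quantity, and nodefs.available < 10%, imagefs.available < 15%, and both inodesFree signals < 5% as fractions of capacity. A sketch of that evaluation, reading the thresholds the way the blob states them:

```python
# Hard-eviction thresholds as logged in the container manager NodeConfig.
THRESHOLDS = {
    "memory.available": ("quantity", 100 * 1024**2),  # 100Mi
    "nodefs.available": ("percentage", 0.10),
    "nodefs.inodesFree": ("percentage", 0.05),
    "imagefs.available": ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def tripped(signal: str, available: float, capacity: float) -> bool:
    """True when a signal has fallen below its hard-eviction floor."""
    kind, value = THRESHOLDS[signal]
    floor = value if kind == "quantity" else value * capacity
    return available < floor

# Example: 2 GiB free of a 40 GiB nodefs is 5%, below the 10% floor.
print(tripped("nodefs.available", 2 * 1024**3, 40 * 1024**3))  # True
```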
Jan 23 19:02:14.437141 kubelet[2507]: E0123 19:02:14.435965 2507 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 19:02:14.440608 kubelet[2507]: E0123 19:02:14.440044 2507 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="800ms" Jan 23 19:02:14.451205 kubelet[2507]: E0123 19:02:14.449126 2507 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 19:02:14.492587 kubelet[2507]: I0123 19:02:14.491641 2507 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 19:02:14.492587 kubelet[2507]: I0123 19:02:14.492315 2507 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 19:02:14.492919 kubelet[2507]: I0123 19:02:14.492340 2507 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 19:02:14.493355 kubelet[2507]: I0123 19:02:14.493332 2507 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 19:02:14.509695 kubelet[2507]: E0123 19:02:14.509065 2507 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 19:02:14.510255 kubelet[2507]: E0123 19:02:14.509917 2507 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 19:02:14.624831 kubelet[2507]: I0123 19:02:14.623106 2507 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:02:14.624831 kubelet[2507]: E0123 19:02:14.623897 2507 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Jan 23 19:02:14.728638 kubelet[2507]: W0123 19:02:14.724957 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 23 19:02:14.728638 kubelet[2507]: E0123 19:02:14.725132 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:02:14.811894 kubelet[2507]: W0123 19:02:14.809031 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 23 19:02:14.811894 kubelet[2507]: E0123 19:02:14.809259 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:02:14.834206 
kubelet[2507]: I0123 19:02:14.834086 2507 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:02:14.875789 kubelet[2507]: E0123 19:02:14.841354 2507 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Jan 23 19:02:14.881992 kubelet[2507]: W0123 19:02:14.880511 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 23 19:02:14.881992 kubelet[2507]: E0123 19:02:14.880648 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:02:14.959747 kubelet[2507]: I0123 19:02:14.959359 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:02:14.959747 kubelet[2507]: I0123 19:02:14.959673 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:02:14.959747 kubelet[2507]: I0123 19:02:14.959711 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:02:14.961026 kubelet[2507]: I0123 19:02:14.959969 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:02:14.961026 kubelet[2507]: I0123 19:02:14.960021 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 19:02:14.973046 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. 
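[Editor's note] The VerifyControllerAttachedVolume records above enumerate the five hostPath volumes of the kube-controller-manager static pod: ca-certs, flexvolume-dir, k8s-certs, kubeconfig, and usr-share-ca-certificates. The host-side sources are not in this log; the mapping below is an assumption based on stock kubeadm manifest defaults, with flexvolume-dir matching the /opt/libexec path the kubelet recreated at probe.go:272.

```python
# Volume names from the reconciler lines above, mapped to the host paths a
# stock kubeadm manifest typically mounts them from. The paths are assumed:
# this log names only the volumes, never their sources.
CONTROLLER_MANAGER_VOLUMES = {
    "ca-certs": "/etc/ssl/certs",
    "flexvolume-dir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    "k8s-certs": "/etc/kubernetes/pki",
    "kubeconfig": "/etc/kubernetes/controller-manager.conf",
    "usr-share-ca-certificates": "/usr/share/ca-certificates",
}

for name, host_path in sorted(CONTROLLER_MANAGER_VOLUMES.items()):
    print(f"{name:28s} <- {host_path}")
```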
Jan 23 19:02:15.025105 kubelet[2507]: E0123 19:02:15.025059 2507 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:02:15.046089 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 23 19:02:15.055994 kubelet[2507]: E0123 19:02:15.055539 2507 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:02:15.062648 kubelet[2507]: I0123 19:02:15.061606 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b6aaad054ea210a84bb7e6acfd37586-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b6aaad054ea210a84bb7e6acfd37586\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:02:15.062648 kubelet[2507]: I0123 19:02:15.061669 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b6aaad054ea210a84bb7e6acfd37586-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8b6aaad054ea210a84bb7e6acfd37586\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:02:15.062648 kubelet[2507]: I0123 19:02:15.061747 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 23 19:02:15.062648 kubelet[2507]: I0123 19:02:15.061773 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b6aaad054ea210a84bb7e6acfd37586-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b6aaad054ea210a84bb7e6acfd37586\") " pod="kube-system/kube-apiserver-localhost" Jan 23 19:02:15.096338 systemd[1]: Created slice kubepods-burstable-pod8b6aaad054ea210a84bb7e6acfd37586.slice - libcontainer container kubepods-burstable-pod8b6aaad054ea210a84bb7e6acfd37586.slice. 
Jan 23 19:02:15.119603 kubelet[2507]: E0123 19:02:15.118140 2507 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 19:02:15.263265 kubelet[2507]: E0123 19:02:15.261205 2507 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="1.6s" Jan 23 19:02:15.334486 kubelet[2507]: W0123 19:02:15.261157 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 23 19:02:15.334486 kubelet[2507]: E0123 19:02:15.298238 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:02:15.338628 kubelet[2507]: E0123 19:02:15.337876 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:02:15.339164 kubelet[2507]: I0123 19:02:15.339006 2507 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:02:15.340936 kubelet[2507]: E0123 19:02:15.340737 2507 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Jan 23 19:02:15.341005 containerd[1579]: time="2026-01-23T19:02:15.340874099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 23 19:02:15.366193 kubelet[2507]: E0123 19:02:15.363283 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:02:15.369936 containerd[1579]: time="2026-01-23T19:02:15.369675713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 23 19:02:15.423023 kubelet[2507]: E0123 19:02:15.421742 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:02:15.440515 containerd[1579]: time="2026-01-23T19:02:15.439629020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8b6aaad054ea210a84bb7e6acfd37586,Namespace:kube-system,Attempt:0,}" Jan 23 19:02:15.735000 containerd[1579]: time="2026-01-23T19:02:15.733874470Z" level=info msg="connecting to shim 7a2d56124fd03e89c6b1861dcce43555d8c9e695d256a245f8987580f792cc8f" address="unix:///run/containerd/s/a60b369ab9bedf134c0b38fb155f96b2d9d6c3b4713bd9c711c6b0c56c0bc6b0" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:02:15.748723 kubelet[2507]: E0123 19:02:15.748594 2507 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while 
requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:02:15.823050 containerd[1579]: time="2026-01-23T19:02:15.822780806Z" level=info msg="connecting to shim 449374ba9aea84efb0197174e14879deb22ccfb5568298adba54842e2fae9da4" address="unix:///run/containerd/s/cecd73c6909ef4eb7fa75319e90fa9bccab883d569892469e80f9b8246f7812a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:02:15.933123 containerd[1579]: time="2026-01-23T19:02:15.932634754Z" level=info msg="connecting to shim 64d64eb3cb3273ceb0636d590db6956edd5e73d526cbb4ef7f3b6978201b582f" address="unix:///run/containerd/s/b112d6d03fa9a536a6047f0cb0d8a5967be7f7446f85d113fa006f58cf9185a7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:02:16.478681 kubelet[2507]: I0123 19:02:16.478637 2507 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:02:16.482708 kubelet[2507]: E0123 19:02:16.482367 2507 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Jan 23 19:02:16.569921 systemd[1]: Started cri-containerd-64d64eb3cb3273ceb0636d590db6956edd5e73d526cbb4ef7f3b6978201b582f.scope - libcontainer container 64d64eb3cb3273ceb0636d590db6956edd5e73d526cbb4ef7f3b6978201b582f. Jan 23 19:02:16.616130 systemd[1]: Started cri-containerd-7a2d56124fd03e89c6b1861dcce43555d8c9e695d256a245f8987580f792cc8f.scope - libcontainer container 7a2d56124fd03e89c6b1861dcce43555d8c9e695d256a245f8987580f792cc8f. Jan 23 19:02:16.960327 kubelet[2507]: E0123 19:02:16.959106 2507 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="3.2s" Jan 23 19:02:16.962322 kubelet[2507]: W0123 19:02:16.961244 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 23 19:02:16.962322 kubelet[2507]: E0123 19:02:16.961336 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:02:16.994720 systemd[1]: Started cri-containerd-449374ba9aea84efb0197174e14879deb22ccfb5568298adba54842e2fae9da4.scope - libcontainer container 449374ba9aea84efb0197174e14879deb22ccfb5568298adba54842e2fae9da4. 
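[Editor's note] Every reflector, the lease controller, and the certificate bootstrap keep failing with "dial tcp 10.0.0.46:6443: connect: connection refused". That is the normal bootstrap chicken-and-egg: the kubelet needs the API server, and the API server is one of the static pods the kubelet is still starting. A sketch of the kind of TCP readiness probe that distinguishes "refused" (nothing listening yet, keep retrying) from other network failures; the address is the one in the errors, everything else is illustrative.

```python
import socket
import time

APISERVER = ("10.0.0.46", 6443)  # endpoint from the connection-refused errors

def wait_for_apiserver(timeout_s: float = 120.0) -> bool:
    """Return True once something accepts TCP on the apiserver port."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection(APISERVER, timeout=2.0):
                return True          # port open; TLS and auth not checked here
        except ConnectionRefusedError:
            time.sleep(1.0)          # static pod not listening yet: retry soon
        except OSError:
            time.sleep(3.0)          # route/timeout trouble: back off harder
    return False

if __name__ == "__main__":
    print("apiserver reachable:", wait_for_apiserver())
```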
Jan 23 19:02:17.237046 kubelet[2507]: W0123 19:02:17.236477 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 23 19:02:17.239772 kubelet[2507]: E0123 19:02:17.239647 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:02:17.347773 containerd[1579]: time="2026-01-23T19:02:17.347709558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a2d56124fd03e89c6b1861dcce43555d8c9e695d256a245f8987580f792cc8f\"" Jan 23 19:02:17.555615 kubelet[2507]: E0123 19:02:17.543955 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:02:17.559936 containerd[1579]: time="2026-01-23T19:02:17.558582174Z" level=info msg="CreateContainer within sandbox \"7a2d56124fd03e89c6b1861dcce43555d8c9e695d256a245f8987580f792cc8f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 19:02:17.577452 containerd[1579]: time="2026-01-23T19:02:17.575684789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8b6aaad054ea210a84bb7e6acfd37586,Namespace:kube-system,Attempt:0,} returns sandbox id \"64d64eb3cb3273ceb0636d590db6956edd5e73d526cbb4ef7f3b6978201b582f\"" Jan 23 19:02:17.586219 kubelet[2507]: E0123 19:02:17.584175 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:02:17.594645 containerd[1579]: time="2026-01-23T19:02:17.594524036Z" level=info msg="CreateContainer within sandbox \"64d64eb3cb3273ceb0636d590db6956edd5e73d526cbb4ef7f3b6978201b582f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 19:02:17.614113 containerd[1579]: time="2026-01-23T19:02:17.609007599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"449374ba9aea84efb0197174e14879deb22ccfb5568298adba54842e2fae9da4\"" Jan 23 19:02:17.619497 kubelet[2507]: E0123 19:02:17.619294 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:02:17.637969 containerd[1579]: time="2026-01-23T19:02:17.635233636Z" level=info msg="CreateContainer within sandbox \"449374ba9aea84efb0197174e14879deb22ccfb5568298adba54842e2fae9da4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 19:02:17.695579 containerd[1579]: time="2026-01-23T19:02:17.695519735Z" level=info msg="Container 3756fa507e79c383c0b910f2fc67fa8318bda471c7774f91d622b74c0c99dd4a: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:02:17.708297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3127091334.mount: Deactivated successfully. 
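[Editor's note] The recurring dns.go:153 "Nameserver limits exceeded" warning is the kubelet enforcing the classic resolver limit of three nameservers: the node's resolv.conf lists more, and only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are propagated to pods. A sketch of the truncation rule; the fourth entry in the sample is hypothetical, since the log only says "some nameservers have been omitted".

```python
MAX_DNS_NAMESERVERS = 3  # classic glibc resolver limit, enforced by the kubelet

def applied_nameservers(resolv_conf: str) -> list[str]:
    """First MAX_DNS_NAMESERVERS 'nameserver' entries, as the kubelet applies them."""
    servers = [
        line.split()[1]
        for line in resolv_conf.splitlines()
        if line.strip().startswith("nameserver") and len(line.split()) >= 2
    ]
    return servers[:MAX_DNS_NAMESERVERS]

sample = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""
print(" ".join(applied_nameservers(sample)))  # -> 1.1.1.1 1.0.0.1 8.8.8.8
```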
Jan 23 19:02:17.722801 containerd[1579]: time="2026-01-23T19:02:17.720232636Z" level=info msg="Container d0218c174c800d579ad07b88ae2e73dea500a9ee32028fcb1e9adc2e813977bd: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:02:17.744158 kubelet[2507]: W0123 19:02:17.743647 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 23 19:02:17.744158 kubelet[2507]: E0123 19:02:17.743730 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:02:17.761626 containerd[1579]: time="2026-01-23T19:02:17.757148006Z" level=info msg="CreateContainer within sandbox \"64d64eb3cb3273ceb0636d590db6956edd5e73d526cbb4ef7f3b6978201b582f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d0218c174c800d579ad07b88ae2e73dea500a9ee32028fcb1e9adc2e813977bd\"" Jan 23 19:02:17.766207 containerd[1579]: time="2026-01-23T19:02:17.766157444Z" level=info msg="StartContainer for \"d0218c174c800d579ad07b88ae2e73dea500a9ee32028fcb1e9adc2e813977bd\"" Jan 23 19:02:17.819685 containerd[1579]: time="2026-01-23T19:02:17.776555242Z" level=info msg="connecting to shim d0218c174c800d579ad07b88ae2e73dea500a9ee32028fcb1e9adc2e813977bd" address="unix:///run/containerd/s/b112d6d03fa9a536a6047f0cb0d8a5967be7f7446f85d113fa006f58cf9185a7" protocol=ttrpc version=3 Jan 23 19:02:17.858873 containerd[1579]: time="2026-01-23T19:02:17.858695803Z" level=info msg="CreateContainer within sandbox \"7a2d56124fd03e89c6b1861dcce43555d8c9e695d256a245f8987580f792cc8f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3756fa507e79c383c0b910f2fc67fa8318bda471c7774f91d622b74c0c99dd4a\"" Jan 23 19:02:17.859535 containerd[1579]: time="2026-01-23T19:02:17.859498565Z" level=info msg="Container 08df4eea065b6f16f1deee503b0d4adaed4caaa6404ec2d4f32b44058a96a055: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:02:17.863473 containerd[1579]: time="2026-01-23T19:02:17.861535356Z" level=info msg="StartContainer for \"3756fa507e79c383c0b910f2fc67fa8318bda471c7774f91d622b74c0c99dd4a\"" Jan 23 19:02:17.889339 containerd[1579]: time="2026-01-23T19:02:17.889088832Z" level=info msg="connecting to shim 3756fa507e79c383c0b910f2fc67fa8318bda471c7774f91d622b74c0c99dd4a" address="unix:///run/containerd/s/a60b369ab9bedf134c0b38fb155f96b2d9d6c3b4713bd9c711c6b0c56c0bc6b0" protocol=ttrpc version=3 Jan 23 19:02:17.932984 containerd[1579]: time="2026-01-23T19:02:17.932608054Z" level=info msg="CreateContainer within sandbox \"449374ba9aea84efb0197174e14879deb22ccfb5568298adba54842e2fae9da4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"08df4eea065b6f16f1deee503b0d4adaed4caaa6404ec2d4f32b44058a96a055\"" Jan 23 19:02:17.937659 containerd[1579]: time="2026-01-23T19:02:17.937621669Z" level=info msg="StartContainer for \"08df4eea065b6f16f1deee503b0d4adaed4caaa6404ec2d4f32b44058a96a055\"" Jan 23 19:02:17.942579 kubelet[2507]: W0123 19:02:17.942095 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Jan 23 19:02:17.942579 kubelet[2507]: E0123 19:02:17.942155 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:02:17.948457 containerd[1579]: time="2026-01-23T19:02:17.948096362Z" level=info msg="connecting to shim 08df4eea065b6f16f1deee503b0d4adaed4caaa6404ec2d4f32b44058a96a055" address="unix:///run/containerd/s/cecd73c6909ef4eb7fa75319e90fa9bccab883d569892469e80f9b8246f7812a" protocol=ttrpc version=3 Jan 23 19:02:17.968920 systemd[1]: Started cri-containerd-d0218c174c800d579ad07b88ae2e73dea500a9ee32028fcb1e9adc2e813977bd.scope - libcontainer container d0218c174c800d579ad07b88ae2e73dea500a9ee32028fcb1e9adc2e813977bd. Jan 23 19:02:18.026762 systemd[1]: Started cri-containerd-3756fa507e79c383c0b910f2fc67fa8318bda471c7774f91d622b74c0c99dd4a.scope - libcontainer container 3756fa507e79c383c0b910f2fc67fa8318bda471c7774f91d622b74c0c99dd4a. Jan 23 19:02:18.213675 kubelet[2507]: I0123 19:02:18.211741 2507 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 19:02:18.214921 kubelet[2507]: E0123 19:02:18.214365 2507 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Jan 23 19:02:18.245137 systemd[1]: Started cri-containerd-08df4eea065b6f16f1deee503b0d4adaed4caaa6404ec2d4f32b44058a96a055.scope - libcontainer container 08df4eea065b6f16f1deee503b0d4adaed4caaa6404ec2d4f32b44058a96a055. 
Jan 23 19:02:23.749629 kubelet[2507]: E0123 19:02:23.734187 2507 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.46:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.46:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d716b2c825e73 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 19:02:13.748653683 +0000 UTC m=+2.577145983,LastTimestamp:2026-01-23 19:02:13.748653683 +0000 UTC m=+2.577145983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 23 19:02:23.749629 kubelet[2507]: W0123 19:02:23.735080 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused
Jan 23 19:02:23.749629 kubelet[2507]: E0123 19:02:23.735180 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError"
Jan 23 19:02:23.749629 kubelet[2507]: E0123 19:02:23.735297 2507 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError"
Jan 23 19:02:23.777132 kubelet[2507]: W0123 19:02:23.744636 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused
Jan 23 19:02:23.777132 kubelet[2507]: E0123 19:02:23.744821 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError"
Jan 23 19:02:23.777132 kubelet[2507]: W0123 19:02:23.744941 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused
Jan 23 19:02:23.777132 kubelet[2507]: E0123 19:02:23.744976 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError"
Jan 23 19:02:23.777132 kubelet[2507]: W0123 19:02:23.771017 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused
Jan 23 19:02:23.777132 kubelet[2507]: E0123 19:02:23.771080 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError"
Jan 23 19:02:23.779656 kubelet[2507]: E0123 19:02:23.779284 2507 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="6.4s"
Jan 23 19:02:23.787804 containerd[1579]: time="2026-01-23T19:02:23.787741681Z" level=error msg="get state for d0218c174c800d579ad07b88ae2e73dea500a9ee32028fcb1e9adc2e813977bd" error="context deadline exceeded"
Jan 23 19:02:23.794112 containerd[1579]: time="2026-01-23T19:02:23.792529885Z" level=warning msg="unknown status" status=0
Jan 23 19:02:23.794112 containerd[1579]: time="2026-01-23T19:02:23.788824196Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Jan 23 19:02:23.862628 kubelet[2507]: I0123 19:02:23.855191 2507 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 23 19:02:23.874855 kubelet[2507]: E0123 19:02:23.873824 2507 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost"
Jan 23 19:02:24.228148 containerd[1579]: time="2026-01-23T19:02:24.226578140Z" level=info msg="StartContainer for \"3756fa507e79c383c0b910f2fc67fa8318bda471c7774f91d622b74c0c99dd4a\" returns successfully"
Jan 23 19:02:24.246496 containerd[1579]: time="2026-01-23T19:02:24.245769342Z" level=info msg="StartContainer for \"d0218c174c800d579ad07b88ae2e73dea500a9ee32028fcb1e9adc2e813977bd\" returns successfully"
Jan 23 19:02:24.328537 containerd[1579]: time="2026-01-23T19:02:24.328151211Z" level=info msg="StartContainer for \"08df4eea065b6f16f1deee503b0d4adaed4caaa6404ec2d4f32b44058a96a055\" returns successfully"
Jan 23 19:02:24.530973 kubelet[2507]: E0123 19:02:24.518723 2507 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 23 19:02:25.210720 kubelet[2507]: E0123 19:02:25.209888 2507 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 23 19:02:25.218492 kubelet[2507]: E0123 19:02:25.217725 2507 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 23 19:02:25.220032 kubelet[2507]: E0123 19:02:25.219950 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:25.222571 kubelet[2507]: E0123 19:02:25.222490 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:25.239863 kubelet[2507]: E0123 19:02:25.239759 2507 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 23 19:02:25.242468 kubelet[2507]: E0123 19:02:25.240108 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:26.389683 kubelet[2507]: E0123 19:02:26.388826 2507 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 23 19:02:26.415552 kubelet[2507]: E0123 19:02:26.388355 2507 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 23 19:02:26.415552 kubelet[2507]: E0123 19:02:26.390365 2507 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 23 19:02:26.415552 kubelet[2507]: E0123 19:02:26.411493 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:26.415552 kubelet[2507]: E0123 19:02:26.415496 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:26.416848 kubelet[2507]: E0123 19:02:26.416824 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:27.715850 kubelet[2507]: E0123 19:02:27.714861 2507 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 23 19:02:27.724751 kubelet[2507]: E0123 19:02:27.724623 2507 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 23 19:02:27.726503 kubelet[2507]: E0123 19:02:27.724926 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:27.729269 kubelet[2507]: E0123 19:02:27.728581 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:30.397151 kubelet[2507]: I0123 19:02:30.396350 2507 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 23 19:02:33.753035 kubelet[2507]: E0123 19:02:33.752316 2507 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 23 19:02:33.776794 kubelet[2507]: E0123 19:02:33.755913 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:33.796015 kubelet[2507]: E0123 19:02:33.794735 2507 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 23 19:02:33.799913 kubelet[2507]: E0123 19:02:33.799287 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:34.503655 kubelet[2507]: E0123 19:02:34.497572 2507 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 23 19:02:34.505881 kubelet[2507]: E0123 19:02:34.505232 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:34.536114 kubelet[2507]: E0123 19:02:34.536059 2507 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 23 19:02:34.747158 kubelet[2507]: E0123 19:02:34.746107 2507 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 23 19:02:34.747158 kubelet[2507]: E0123 19:02:34.746918 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:40.439280 kubelet[2507]: E0123 19:02:40.346821 2507 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Jan 23 19:02:41.597076 kubelet[2507]: E0123 19:02:40.564145 2507 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Jan 23 19:02:42.747962 kubelet[2507]: W0123 19:02:42.726907 2507 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 23 19:02:42.753569 kubelet[2507]: E0123 19:02:42.753298 2507 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 23 19:02:42.761119 kubelet[2507]: E0123 19:02:42.760881 2507 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 23 19:02:43.474096 kubelet[2507]: E0123 19:02:43.465960 2507 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188d716b2c825e73 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 19:02:13.748653683 +0000 UTC m=+2.577145983,LastTimestamp:2026-01-23 19:02:13.748653683 +0000 UTC m=+2.577145983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 23 19:02:43.642734 kubelet[2507]: E0123 19:02:43.640522 2507 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188d716b2f6477e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 19:02:13.797025767 +0000 UTC m=+2.625518047,LastTimestamp:2026-01-23 19:02:13.797025767 +0000 UTC m=+2.625518047,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 23 19:02:43.756240 kubelet[2507]: I0123 19:02:43.753525 2507 apiserver.go:52] "Watching apiserver"
Jan 23 19:02:43.871052 kubelet[2507]: I0123 19:02:43.862955 2507 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 19:02:43.872141 kubelet[2507]: E0123 19:02:43.872096 2507 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 23 19:02:43.872773 kubelet[2507]: E0123 19:02:43.872747 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:44.225356 kubelet[2507]: E0123 19:02:44.224816 2507 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jan 23 19:02:44.540348 kubelet[2507]: E0123 19:02:44.539934 2507 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 23 19:02:44.853545 kubelet[2507]: E0123 19:02:44.849218 2507 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jan 23 19:02:45.958939 kubelet[2507]: E0123 19:02:45.955178 2507 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jan 23 19:02:47.210314 kubelet[2507]: E0123 19:02:47.206354 2507 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jan 23 19:02:47.558712 kubelet[2507]: E0123 19:02:47.555631 2507 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 23 19:02:47.658634 kubelet[2507]: I0123 19:02:47.656065 2507 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 23 19:02:47.857605 kubelet[2507]: I0123 19:02:47.840042 2507 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 23 19:02:47.857605 kubelet[2507]: E0123 19:02:47.841020 2507 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jan 23 19:02:47.895560 kubelet[2507]: I0123 19:02:47.893898 2507 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 23 19:02:48.040772 kubelet[2507]: I0123 19:02:48.039032 2507 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 23 19:02:48.052210 kubelet[2507]: E0123 19:02:48.050213 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:48.146048 kubelet[2507]: I0123 19:02:48.141877 2507 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 23 19:02:48.161598 kubelet[2507]: E0123 19:02:48.159259 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:48.192582 kubelet[2507]: E0123 19:02:48.190184 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:53.788491 kubelet[2507]: E0123 19:02:53.785090 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:54.052226 systemd[1]: Reload requested from client PID 2793 ('systemctl') (unit session-9.scope)...
Jan 23 19:02:54.052251 systemd[1]: Reloading...
Jan 23 19:02:54.952157 kubelet[2507]: I0123 19:02:54.951200 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.951178661 podStartE2EDuration="6.951178661s" podCreationTimestamp="2026-01-23 19:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:02:54.936159067 +0000 UTC m=+43.764651367" watchObservedRunningTime="2026-01-23 19:02:54.951178661 +0000 UTC m=+43.779670941"
Jan 23 19:02:54.981202 kubelet[2507]: E0123 19:02:54.978166 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:55.034838 kubelet[2507]: E0123 19:02:55.034308 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:02:56.691727 kubelet[2507]: I0123 19:02:56.691497 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=9.687352888 podStartE2EDuration="9.687352888s" podCreationTimestamp="2026-01-23 19:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:02:56.676728909 +0000 UTC m=+45.505221200" watchObservedRunningTime="2026-01-23 19:02:56.687352888 +0000 UTC m=+45.515845258"
Jan 23 19:02:56.779799 zram_generator::config[2836]: No configuration found.
Jan 23 19:02:56.900650 kubelet[2507]: I0123 19:02:56.900264 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=8.900224263 podStartE2EDuration="8.900224263s" podCreationTimestamp="2026-01-23 19:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:02:56.776639851 +0000 UTC m=+45.605132131" watchObservedRunningTime="2026-01-23 19:02:56.900224263 +0000 UTC m=+45.728716553"
Jan 23 19:03:00.228893 systemd[1]: Reloading finished in 6173 ms.
Jan 23 19:03:00.325658 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:03:00.353141 systemd[1]: kubelet.service: Deactivated successfully.
Jan 23 19:03:00.356159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:03:00.356866 systemd[1]: kubelet.service: Consumed 10.934s CPU time, 136.8M memory peak.
Jan 23 19:03:00.368982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:03:01.629803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:03:01.680162 (kubelet)[2882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 19:03:02.274516 kubelet[2882]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 19:03:02.274516 kubelet[2882]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 19:03:02.274516 kubelet[2882]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 19:03:02.274516 kubelet[2882]: I0123 19:03:02.269135 2882 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 19:03:02.368342 kubelet[2882]: I0123 19:03:02.367889 2882 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 23 19:03:02.368342 kubelet[2882]: I0123 19:03:02.367987 2882 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 19:03:02.374627 kubelet[2882]: I0123 19:03:02.374118 2882 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 23 19:03:02.379837 kubelet[2882]: I0123 19:03:02.378093 2882 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 23 19:03:02.391083 kubelet[2882]: I0123 19:03:02.390744 2882 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 19:03:02.431124 kubelet[2882]: I0123 19:03:02.431053 2882 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 19:03:02.457074 kubelet[2882]: I0123 19:03:02.456927 2882 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 19:03:02.461346 kubelet[2882]: I0123 19:03:02.460814 2882 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 19:03:02.461346 kubelet[2882]: I0123 19:03:02.460879 2882 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 19:03:02.461346 kubelet[2882]: I0123 19:03:02.461199 2882 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 19:03:02.461346 kubelet[2882]: I0123 19:03:02.461217 2882 container_manager_linux.go:304] "Creating device plugin manager"
Jan 23 19:03:02.462220 kubelet[2882]: I0123 19:03:02.461293 2882 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 19:03:02.463023 kubelet[2882]: I0123 19:03:02.462958 2882 kubelet.go:446] "Attempting to sync node with API server"
Jan 23 19:03:02.463023 kubelet[2882]: I0123 19:03:02.462993 2882 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 19:03:02.463726 kubelet[2882]: I0123 19:03:02.463547 2882 kubelet.go:352] "Adding apiserver pod source"
Jan 23 19:03:02.463726 kubelet[2882]: I0123 19:03:02.463646 2882 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 19:03:02.477337 kubelet[2882]: I0123 19:03:02.473806 2882 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 19:03:02.482536 kubelet[2882]: I0123 19:03:02.482345 2882 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 19:03:02.490499 kubelet[2882]: I0123 19:03:02.489798 2882 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 19:03:02.490499 kubelet[2882]: I0123 19:03:02.489851 2882 server.go:1287] "Started kubelet"
Jan 23 19:03:02.499825 kubelet[2882]: I0123 19:03:02.499651 2882 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 19:03:02.985304 kubelet[2882]: I0123 19:03:02.981163 2882 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 19:03:03.039120 kubelet[2882]: I0123 19:03:02.991898 2882 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 19:03:03.180753 kubelet[2882]: I0123 19:03:02.987648 2882 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 19:03:03.181359 kubelet[2882]: I0123 19:03:03.172133 2882 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 19:03:03.185214 kubelet[2882]: I0123 19:03:03.172206 2882 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 19:03:03.185214 kubelet[2882]: I0123 19:03:03.182281 2882 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 19:03:03.230746 kubelet[2882]: I0123 19:03:03.230587 2882 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 19:03:03.260764 kubelet[2882]: E0123 19:03:03.259883 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 23 19:03:03.275964 sudo[2900]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 23 19:03:03.278067 sudo[2900]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 23 19:03:03.287240 kubelet[2882]: I0123 19:03:03.284315 2882 factory.go:221] Registration of the systemd container factory successfully
Jan 23 19:03:03.295584 kubelet[2882]: I0123 19:03:03.293237 2882 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 19:03:03.396649 kubelet[2882]: I0123 19:03:03.381693 2882 factory.go:221] Registration of the containerd container factory successfully
Jan 23 19:03:03.397101 kubelet[2882]: I0123 19:03:03.396891 2882 server.go:479] "Adding debug handlers to kubelet server"
Jan 23 19:03:03.481493 kubelet[2882]: I0123 19:03:03.467130 2882 apiserver.go:52] "Watching apiserver"
Jan 23 19:03:03.858020 kubelet[2882]: I0123 19:03:03.856864 2882 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 23 19:03:04.032052 kubelet[2882]: I0123 19:03:04.022564 2882 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 23 19:03:04.032052 kubelet[2882]: I0123 19:03:04.022637 2882 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 23 19:03:04.032052 kubelet[2882]: I0123 19:03:04.022796 2882 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 19:03:04.032052 kubelet[2882]: I0123 19:03:04.022814 2882 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 23 19:03:04.032052 kubelet[2882]: E0123 19:03:04.023005 2882 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 19:03:04.138653 kubelet[2882]: E0123 19:03:04.125105 2882 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 23 19:03:04.329658 kubelet[2882]: E0123 19:03:04.328152 2882 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 23 19:03:04.666044 kubelet[2882]: I0123 19:03:04.665349 2882 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 19:03:04.666044 kubelet[2882]: I0123 19:03:04.665546 2882 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 19:03:04.666044 kubelet[2882]: I0123 19:03:04.665587 2882 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 19:03:04.669183 kubelet[2882]: I0123 19:03:04.666303 2882 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 23 19:03:04.669183 kubelet[2882]: I0123 19:03:04.666323 2882 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 23 19:03:04.669183 kubelet[2882]: I0123 19:03:04.666351 2882 policy_none.go:49] "None policy: Start"
Jan 23 19:03:04.669183 kubelet[2882]: I0123 19:03:04.667825 2882 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 19:03:04.669183 kubelet[2882]: I0123 19:03:04.667855 2882 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 19:03:04.669183 kubelet[2882]: I0123 19:03:04.668023 2882 state_mem.go:75] "Updated machine memory state"
Jan 23 19:03:04.869670 kubelet[2882]: E0123 19:03:04.823201 2882 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 23 19:03:04.989604 kubelet[2882]: I0123 19:03:04.986855 2882 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 23 19:03:04.989604 kubelet[2882]: I0123 19:03:04.987605 2882 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 19:03:04.989604 kubelet[2882]: I0123 19:03:04.987628 2882 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 19:03:04.990350 kubelet[2882]: I0123 19:03:04.990338 2882 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 19:03:05.022587 kubelet[2882]: E0123 19:03:05.020030 2882 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 19:03:05.409588 kubelet[2882]: I0123 19:03:05.408830 2882 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 23 19:03:05.543222 kubelet[2882]: I0123 19:03:05.543064 2882 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jan 23 19:03:05.543222 kubelet[2882]: I0123 19:03:05.543201 2882 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 23 19:03:05.543904 kubelet[2882]: I0123 19:03:05.543353 2882 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 23 19:03:05.562253 containerd[1579]: time="2026-01-23T19:03:05.561194793Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 19:03:05.564869 kubelet[2882]: I0123 19:03:05.563574 2882 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 23 19:03:05.747868 kubelet[2882]: I0123 19:03:05.746978 2882 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 23 19:03:05.794649 kubelet[2882]: I0123 19:03:05.791935 2882 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 19:03:05.877119 systemd[1]: Created slice kubepods-besteffort-poda94b0c7d_c2d2_44df_8148_63827db8968a.slice - libcontainer container kubepods-besteffort-poda94b0c7d_c2d2_44df_8148_63827db8968a.slice.
Jan 23 19:03:05.922715 kubelet[2882]: I0123 19:03:05.921638 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm6wb\" (UniqueName: \"kubernetes.io/projected/a94b0c7d-c2d2-44df-8148-63827db8968a-kube-api-access-hm6wb\") pod \"kube-proxy-5dx49\" (UID: \"a94b0c7d-c2d2-44df-8148-63827db8968a\") " pod="kube-system/kube-proxy-5dx49"
Jan 23 19:03:05.922715 kubelet[2882]: I0123 19:03:05.921887 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b6aaad054ea210a84bb7e6acfd37586-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b6aaad054ea210a84bb7e6acfd37586\") " pod="kube-system/kube-apiserver-localhost"
Jan 23 19:03:05.922715 kubelet[2882]: I0123 19:03:05.922188 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 19:03:05.922715 kubelet[2882]: I0123 19:03:05.922220 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a94b0c7d-c2d2-44df-8148-63827db8968a-kube-proxy\") pod \"kube-proxy-5dx49\" (UID: \"a94b0c7d-c2d2-44df-8148-63827db8968a\") " pod="kube-system/kube-proxy-5dx49"
Jan 23 19:03:05.922715 kubelet[2882]: I0123 19:03:05.922258 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a94b0c7d-c2d2-44df-8148-63827db8968a-xtables-lock\") pod \"kube-proxy-5dx49\" (UID: \"a94b0c7d-c2d2-44df-8148-63827db8968a\") " pod="kube-system/kube-proxy-5dx49"
Jan 23 19:03:05.929717 kubelet[2882]: I0123 19:03:05.922284 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a94b0c7d-c2d2-44df-8148-63827db8968a-lib-modules\") pod \"kube-proxy-5dx49\" (UID: \"a94b0c7d-c2d2-44df-8148-63827db8968a\") " pod="kube-system/kube-proxy-5dx49"
Jan 23 19:03:05.929717 kubelet[2882]: I0123 19:03:05.922642 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 19:03:05.929717 kubelet[2882]: I0123 19:03:05.922822 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost"
Jan 23 19:03:05.929717 kubelet[2882]: I0123 19:03:05.922850 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b6aaad054ea210a84bb7e6acfd37586-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b6aaad054ea210a84bb7e6acfd37586\") " pod="kube-system/kube-apiserver-localhost"
Jan 23 19:03:05.929717 kubelet[2882]: I0123 19:03:05.922877 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b6aaad054ea210a84bb7e6acfd37586-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8b6aaad054ea210a84bb7e6acfd37586\") " pod="kube-system/kube-apiserver-localhost"
Jan 23 19:03:05.930095 kubelet[2882]: I0123 19:03:05.922901 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 19:03:05.930095 kubelet[2882]: I0123 19:03:05.922928 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 19:03:05.930095 kubelet[2882]: I0123 19:03:05.922954 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 19:03:05.937612 kubelet[2882]: E0123 19:03:05.935901 2882 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jan 23 19:03:06.253874 kubelet[2882]: E0123 19:03:06.253692 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:06.345840 kubelet[2882]: E0123 19:03:06.345678 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:06.364136 kubelet[2882]: E0123 19:03:06.363923 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:06.576672 kubelet[2882]: E0123 19:03:06.559345 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:06.577218 containerd[1579]: time="2026-01-23T19:03:06.572048320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5dx49,Uid:a94b0c7d-c2d2-44df-8148-63827db8968a,Namespace:kube-system,Attempt:0,}"
Jan 23 19:03:06.652785 kubelet[2882]: E0123 19:03:06.651126 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:06.652785 kubelet[2882]: E0123 19:03:06.651828 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:06.681280 sudo[2900]: pam_unix(sudo:session): session closed for user root
Jan 23 19:03:08.626832 kubelet[2882]: E0123 19:03:08.620085 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:08.647247 kubelet[2882]: E0123 19:03:08.627959 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:08.657179 containerd[1579]: time="2026-01-23T19:03:08.630090501Z" level=info msg="connecting to shim cfa44d9972c65eed23ef3b15ab527839453a7154795416762540fdd43d8fbb80" address="unix:///run/containerd/s/d7954164a93abfa306c43b5021171e2a52142d01a0abfd4a0798aef69d6b1df6" namespace=k8s.io protocol=ttrpc version=3
Jan 23 19:03:08.885688 systemd[1]: Started cri-containerd-cfa44d9972c65eed23ef3b15ab527839453a7154795416762540fdd43d8fbb80.scope - libcontainer container cfa44d9972c65eed23ef3b15ab527839453a7154795416762540fdd43d8fbb80.
Jan 23 19:03:09.196628 kubelet[2882]: E0123 19:03:09.194989 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:09.216112 containerd[1579]: time="2026-01-23T19:03:09.216049003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5dx49,Uid:a94b0c7d-c2d2-44df-8148-63827db8968a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfa44d9972c65eed23ef3b15ab527839453a7154795416762540fdd43d8fbb80\""
Jan 23 19:03:09.220158 kubelet[2882]: E0123 19:03:09.220125 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:09.355483 containerd[1579]: time="2026-01-23T19:03:09.354045684Z" level=info msg="CreateContainer within sandbox \"cfa44d9972c65eed23ef3b15ab527839453a7154795416762540fdd43d8fbb80\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 19:03:09.457646 containerd[1579]: time="2026-01-23T19:03:09.448897658Z" level=info msg="Container 0aa891201c21d7081be15c6d21dc6d9a3bafd9fdc7011b86203f8ff02c81c31d: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:03:09.470945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount443782491.mount: Deactivated successfully.
Jan 23 19:03:09.507893 containerd[1579]: time="2026-01-23T19:03:09.507836709Z" level=info msg="CreateContainer within sandbox \"cfa44d9972c65eed23ef3b15ab527839453a7154795416762540fdd43d8fbb80\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0aa891201c21d7081be15c6d21dc6d9a3bafd9fdc7011b86203f8ff02c81c31d\""
Jan 23 19:03:09.509779 containerd[1579]: time="2026-01-23T19:03:09.509711426Z" level=info msg="StartContainer for \"0aa891201c21d7081be15c6d21dc6d9a3bafd9fdc7011b86203f8ff02c81c31d\""
Jan 23 19:03:09.517333 containerd[1579]: time="2026-01-23T19:03:09.514622948Z" level=info msg="connecting to shim 0aa891201c21d7081be15c6d21dc6d9a3bafd9fdc7011b86203f8ff02c81c31d" address="unix:///run/containerd/s/d7954164a93abfa306c43b5021171e2a52142d01a0abfd4a0798aef69d6b1df6" protocol=ttrpc version=3
Jan 23 19:03:09.732688 systemd[1]: Started cri-containerd-0aa891201c21d7081be15c6d21dc6d9a3bafd9fdc7011b86203f8ff02c81c31d.scope - libcontainer container 0aa891201c21d7081be15c6d21dc6d9a3bafd9fdc7011b86203f8ff02c81c31d.
Jan 23 19:03:10.312590 kubelet[2882]: E0123 19:03:10.312550 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:10.577859 kubelet[2882]: E0123 19:03:10.575477 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:10.924500 containerd[1579]: time="2026-01-23T19:03:10.923951874Z" level=info msg="StartContainer for \"0aa891201c21d7081be15c6d21dc6d9a3bafd9fdc7011b86203f8ff02c81c31d\" returns successfully"
Jan 23 19:03:11.609278 kubelet[2882]: E0123 19:03:11.609116 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:11.823910 kubelet[2882]: I0123 19:03:11.823034 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5dx49" podStartSLOduration=6.823010313 podStartE2EDuration="6.823010313s" podCreationTimestamp="2026-01-23 19:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:03:11.81156054 +0000 UTC m=+10.072571527" watchObservedRunningTime="2026-01-23 19:03:11.823010313 +0000 UTC m=+10.084021300"
Jan 23 19:03:11.956718 systemd[1]: Created slice kubepods-burstable-podea52bfa2_d943_454b_9545_cb748c071c83.slice - libcontainer container kubepods-burstable-podea52bfa2_d943_454b_9545_cb748c071c83.slice.
Jan 23 19:03:12.022969 kubelet[2882]: I0123 19:03:12.021868 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-host-proc-sys-kernel\") pod \"cilium-2g5ln\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " pod="kube-system/cilium-2g5ln"
Jan 23 19:03:12.022969 kubelet[2882]: I0123 19:03:12.021938 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-hostproc\") pod \"cilium-2g5ln\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " pod="kube-system/cilium-2g5ln"
Jan 23 19:03:12.022969 kubelet[2882]: I0123 19:03:12.021976 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-xtables-lock\") pod \"cilium-2g5ln\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " pod="kube-system/cilium-2g5ln"
Jan 23 19:03:12.022969 kubelet[2882]: I0123 19:03:12.022002 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea52bfa2-d943-454b-9545-cb748c071c83-hubble-tls\") pod \"cilium-2g5ln\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " pod="kube-system/cilium-2g5ln"
Jan 23 19:03:12.022969 kubelet[2882]: I0123 19:03:12.022023 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-host-proc-sys-net\") pod \"cilium-2g5ln\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " pod="kube-system/cilium-2g5ln"
Jan 23 19:03:12.022969 kubelet[2882]: I0123 19:03:12.022049 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-etc-cni-netd\") pod \"cilium-2g5ln\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " pod="kube-system/cilium-2g5ln"
Jan 23 19:03:12.050062 kubelet[2882]: I0123 19:03:12.022073 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-cni-path\") pod \"cilium-2g5ln\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " pod="kube-system/cilium-2g5ln"
Jan 23 19:03:12.050062 kubelet[2882]: I0123 19:03:12.022095 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-lib-modules\") pod \"cilium-2g5ln\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " pod="kube-system/cilium-2g5ln"
Jan 23 19:03:12.050062 kubelet[2882]: I0123 19:03:12.022309 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea52bfa2-d943-454b-9545-cb748c071c83-clustermesh-secrets\") pod \"cilium-2g5ln\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " pod="kube-system/cilium-2g5ln"
Jan 23 19:03:12.050062 kubelet[2882]: I0123 19:03:12.022348 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-cilium-run\") pod \"cilium-2g5ln\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " pod="kube-system/cilium-2g5ln"
Jan 23 19:03:12.050062 kubelet[2882]: I0123 19:03:12.028347 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-bpf-maps\") pod \"cilium-2g5ln\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " pod="kube-system/cilium-2g5ln"
Jan 23 19:03:12.050062 kubelet[2882]: I0123 19:03:12.034512 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-cilium-cgroup\") pod \"cilium-2g5ln\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " pod="kube-system/cilium-2g5ln"
Jan 23 19:03:12.168862 kubelet[2882]: I0123 19:03:12.038916 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea52bfa2-d943-454b-9545-cb748c071c83-cilium-config-path\") pod \"cilium-2g5ln\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " pod="kube-system/cilium-2g5ln"
Jan 23 19:03:12.168862 kubelet[2882]: I0123 19:03:12.039092 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhv6k\" (UniqueName: \"kubernetes.io/projected/ea52bfa2-d943-454b-9545-cb748c071c83-kube-api-access-zhv6k\") pod \"cilium-2g5ln\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " pod="kube-system/cilium-2g5ln"
Jan 23 19:03:12.168862 kubelet[2882]: I0123 19:03:12.167536 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwtlq\" (UniqueName: \"kubernetes.io/projected/e82389fa-abf8-4ec1-a948-7ebb9c7c3a00-kube-api-access-xwtlq\") pod \"cilium-operator-6c4d7847fc-xrdpz\" (UID: \"e82389fa-abf8-4ec1-a948-7ebb9c7c3a00\") " pod="kube-system/cilium-operator-6c4d7847fc-xrdpz"
Jan 23 19:03:12.060903 systemd[1]: Created slice kubepods-besteffort-pode82389fa_abf8_4ec1_a948_7ebb9c7c3a00.slice - libcontainer container kubepods-besteffort-pode82389fa_abf8_4ec1_a948_7ebb9c7c3a00.slice.
Jan 23 19:03:12.194274 kubelet[2882]: I0123 19:03:12.190365 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e82389fa-abf8-4ec1-a948-7ebb9c7c3a00-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xrdpz\" (UID: \"e82389fa-abf8-4ec1-a948-7ebb9c7c3a00\") " pod="kube-system/cilium-operator-6c4d7847fc-xrdpz"
Jan 23 19:03:12.625340 kubelet[2882]: E0123 19:03:12.623852 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:12.651667 kubelet[2882]: E0123 19:03:12.649849 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:12.666549 containerd[1579]: time="2026-01-23T19:03:12.663715861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2g5ln,Uid:ea52bfa2-d943-454b-9545-cb748c071c83,Namespace:kube-system,Attempt:0,}"
Jan 23 19:03:12.815331 kubelet[2882]: E0123 19:03:12.812902 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:12.820538 containerd[1579]: time="2026-01-23T19:03:12.819954567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xrdpz,Uid:e82389fa-abf8-4ec1-a948-7ebb9c7c3a00,Namespace:kube-system,Attempt:0,}"
Jan 23 19:03:13.015649 containerd[1579]: time="2026-01-23T19:03:13.015018350Z" level=info msg="connecting to shim 3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc" address="unix:///run/containerd/s/2b94258e37cfb2250d0574356897d885205dbee37bc0141adbbcf30b7d0333a7" namespace=k8s.io protocol=ttrpc version=3
Jan 23 19:03:13.085785 containerd[1579]: time="2026-01-23T19:03:13.085732462Z" level=info msg="connecting to shim 653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f" address="unix:///run/containerd/s/b91330fdbab1c72a9748fee387e55a16ddd104fd2975b2311dadc6318bdf9905" namespace=k8s.io protocol=ttrpc version=3
Jan 23 19:03:13.272267 systemd[1]: Started cri-containerd-653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f.scope - libcontainer container 653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f.
Jan 23 19:03:13.492846 systemd[1]: Started cri-containerd-3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc.scope - libcontainer container 3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc.
Jan 23 19:03:13.648810 containerd[1579]: time="2026-01-23T19:03:13.648751583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2g5ln,Uid:ea52bfa2-d943-454b-9545-cb748c071c83,Namespace:kube-system,Attempt:0,} returns sandbox id \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\""
Jan 23 19:03:13.657830 kubelet[2882]: E0123 19:03:13.657753 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:13.668554 containerd[1579]: time="2026-01-23T19:03:13.668503911Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 23 19:03:13.862805 containerd[1579]: time="2026-01-23T19:03:13.861510513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xrdpz,Uid:e82389fa-abf8-4ec1-a948-7ebb9c7c3a00,Namespace:kube-system,Attempt:0,} returns sandbox id \"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\""
Jan 23 19:03:13.876317 kubelet[2882]: E0123 19:03:13.870472 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:16.170763 kubelet[2882]: E0123 19:03:16.164838 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:16.688856 kubelet[2882]: E0123 19:03:16.685512 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:19.416139 kubelet[2882]: E0123 19:03:19.415728 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:03:44.930713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1474198429.mount: Deactivated successfully.
Jan 23 19:04:13.678931 containerd[1579]: time="2026-01-23T19:04:13.678028758Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:04:13.685502 containerd[1579]: time="2026-01-23T19:04:13.685292388Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 23 19:04:13.692174 containerd[1579]: time="2026-01-23T19:04:13.692008275Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:04:13.698889 containerd[1579]: time="2026-01-23T19:04:13.698708794Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 1m0.028946353s"
Jan 23 19:04:13.698889 containerd[1579]: time="2026-01-23T19:04:13.698773375Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 23 19:04:13.717552 containerd[1579]: time="2026-01-23T19:04:13.717498238Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 23 19:04:13.730564 containerd[1579]: time="2026-01-23T19:04:13.728833987Z" level=info msg="CreateContainer within sandbox \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 19:04:13.783004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount290450813.mount: Deactivated successfully.
Jan 23 19:04:13.810995 containerd[1579]: time="2026-01-23T19:04:13.810732501Z" level=info msg="Container 68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:04:13.814696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1972705694.mount: Deactivated successfully.
Jan 23 19:04:13.834318 containerd[1579]: time="2026-01-23T19:04:13.834127722Z" level=info msg="CreateContainer within sandbox \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85\""
Jan 23 19:04:13.835773 containerd[1579]: time="2026-01-23T19:04:13.835657032Z" level=info msg="StartContainer for \"68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85\""
Jan 23 19:04:13.842744 containerd[1579]: time="2026-01-23T19:04:13.837358126Z" level=info msg="connecting to shim 68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85" address="unix:///run/containerd/s/b91330fdbab1c72a9748fee387e55a16ddd104fd2975b2311dadc6318bdf9905" protocol=ttrpc version=3
Jan 23 19:04:13.942762 systemd[1]: Started cri-containerd-68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85.scope - libcontainer container 68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85.
Jan 23 19:04:14.135453 containerd[1579]: time="2026-01-23T19:04:14.132325640Z" level=info msg="StartContainer for \"68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85\" returns successfully"
Jan 23 19:04:14.151307 systemd[1]: cri-containerd-68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85.scope: Deactivated successfully.
Jan 23 19:04:14.165584 containerd[1579]: time="2026-01-23T19:04:14.164939223Z" level=info msg="received container exit event container_id:\"68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85\" id:\"68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85\" pid:3284 exited_at:{seconds:1769195054 nanos:163797016}"
Jan 23 19:04:14.608575 kubelet[2882]: E0123 19:04:14.606545 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:04:14.615104 containerd[1579]: time="2026-01-23T19:04:14.614994977Z" level=info msg="CreateContainer within sandbox \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 19:04:14.801725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85-rootfs.mount: Deactivated successfully.
Jan 23 19:04:14.886609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1117751159.mount: Deactivated successfully.
Jan 23 19:04:14.918968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2009759965.mount: Deactivated successfully.
Jan 23 19:04:14.933076 containerd[1579]: time="2026-01-23T19:04:14.932830507Z" level=info msg="Container c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:04:14.984364 containerd[1579]: time="2026-01-23T19:04:14.984107844Z" level=info msg="CreateContainer within sandbox \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68\"" Jan 23 19:04:14.987792 containerd[1579]: time="2026-01-23T19:04:14.987603723Z" level=info msg="StartContainer for \"c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68\"" Jan 23 19:04:14.990583 containerd[1579]: time="2026-01-23T19:04:14.990523645Z" level=info msg="connecting to shim c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68" address="unix:///run/containerd/s/b91330fdbab1c72a9748fee387e55a16ddd104fd2975b2311dadc6318bdf9905" protocol=ttrpc version=3 Jan 23 19:04:15.122310 systemd[1]: Started cri-containerd-c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68.scope - libcontainer container c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68. Jan 23 19:04:15.341936 containerd[1579]: time="2026-01-23T19:04:15.339826003Z" level=info msg="StartContainer for \"c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68\" returns successfully" Jan 23 19:04:15.388578 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 19:04:15.388930 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:04:15.394769 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 19:04:15.405878 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 19:04:15.426168 systemd[1]: cri-containerd-c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68.scope: Deactivated successfully. Jan 23 19:04:15.438363 containerd[1579]: time="2026-01-23T19:04:15.436086122Z" level=info msg="received container exit event container_id:\"c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68\" id:\"c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68\" pid:3334 exited_at:{seconds:1769195055 nanos:435184486}" Jan 23 19:04:15.481996 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 23 19:04:15.642447 kubelet[2882]: E0123 19:04:15.642062 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:16.667922 kubelet[2882]: E0123 19:04:16.667829 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:16.674783 containerd[1579]: time="2026-01-23T19:04:16.674671730Z" level=info msg="CreateContainer within sandbox \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 19:04:16.779944 containerd[1579]: time="2026-01-23T19:04:16.767893157Z" level=info msg="Container 2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:04:16.844654 containerd[1579]: time="2026-01-23T19:04:16.844543930Z" level=info msg="CreateContainer within sandbox \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a\"" Jan 23 19:04:16.858863 containerd[1579]: time="2026-01-23T19:04:16.857304726Z" level=info msg="StartContainer for \"2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a\"" Jan 23 19:04:16.886180 containerd[1579]: time="2026-01-23T19:04:16.886068971Z" level=info msg="connecting to shim 2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a" address="unix:///run/containerd/s/b91330fdbab1c72a9748fee387e55a16ddd104fd2975b2311dadc6318bdf9905" protocol=ttrpc version=3 Jan 23 19:04:16.984025 systemd[1]: Started cri-containerd-2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a.scope - libcontainer container 2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a. Jan 23 19:04:17.307716 systemd[1]: cri-containerd-2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a.scope: Deactivated successfully. Jan 23 19:04:17.310751 containerd[1579]: time="2026-01-23T19:04:17.310686814Z" level=info msg="received container exit event container_id:\"2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a\" id:\"2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a\" pid:3389 exited_at:{seconds:1769195057 nanos:309543096}" Jan 23 19:04:17.316107 containerd[1579]: time="2026-01-23T19:04:17.316061399Z" level=info msg="StartContainer for \"2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a\" returns successfully" Jan 23 19:04:17.591170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a-rootfs.mount: Deactivated successfully. 
Jan 23 19:04:17.688977 kubelet[2882]: E0123 19:04:17.688882 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:18.166238 containerd[1579]: time="2026-01-23T19:04:18.166075975Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:04:18.167568 containerd[1579]: time="2026-01-23T19:04:18.167452927Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 23 19:04:18.169601 containerd[1579]: time="2026-01-23T19:04:18.169546580Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:04:18.171953 containerd[1579]: time="2026-01-23T19:04:18.171878356Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.454146771s" Jan 23 19:04:18.171953 containerd[1579]: time="2026-01-23T19:04:18.171940572Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 23 19:04:18.177800 containerd[1579]: time="2026-01-23T19:04:18.177635793Z" level=info msg="CreateContainer within sandbox \"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 19:04:18.210115 containerd[1579]: time="2026-01-23T19:04:18.210014182Z" level=info msg="Container 55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:04:18.229947 containerd[1579]: time="2026-01-23T19:04:18.229851708Z" level=info msg="CreateContainer within sandbox \"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315\"" Jan 23 19:04:18.232354 containerd[1579]: time="2026-01-23T19:04:18.232144951Z" level=info msg="StartContainer for \"55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315\"" Jan 23 19:04:18.236495 containerd[1579]: time="2026-01-23T19:04:18.236050719Z" level=info msg="connecting to shim 55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315" address="unix:///run/containerd/s/2b94258e37cfb2250d0574356897d885205dbee37bc0141adbbcf30b7d0333a7" protocol=ttrpc version=3 Jan 23 19:04:18.279350 systemd[1]: Started cri-containerd-55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315.scope - libcontainer container 55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315. 
Jan 23 19:04:18.413117 containerd[1579]: time="2026-01-23T19:04:18.411793060Z" level=info msg="StartContainer for \"55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315\" returns successfully" Jan 23 19:04:18.708012 kubelet[2882]: E0123 19:04:18.705262 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:18.738928 kubelet[2882]: E0123 19:04:18.738824 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:18.769496 containerd[1579]: time="2026-01-23T19:04:18.763530428Z" level=info msg="CreateContainer within sandbox \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 19:04:18.821561 containerd[1579]: time="2026-01-23T19:04:18.819217869Z" level=info msg="Container 887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:04:18.860876 containerd[1579]: time="2026-01-23T19:04:18.860682146Z" level=info msg="CreateContainer within sandbox \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1\"" Jan 23 19:04:18.862363 containerd[1579]: time="2026-01-23T19:04:18.862178752Z" level=info msg="StartContainer for \"887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1\"" Jan 23 19:04:18.864974 kubelet[2882]: I0123 19:04:18.864624 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xrdpz" podStartSLOduration=3.571416583 podStartE2EDuration="1m7.86459852s" podCreationTimestamp="2026-01-23 19:03:11 +0000 UTC" firstStartedPulling="2026-01-23 19:03:13.879770177 +0000 UTC m=+12.140781154" lastFinishedPulling="2026-01-23 19:04:18.172952124 +0000 UTC m=+76.433963091" observedRunningTime="2026-01-23 19:04:18.860935038 +0000 UTC m=+77.121946016" watchObservedRunningTime="2026-01-23 19:04:18.86459852 +0000 UTC m=+77.125609487" Jan 23 19:04:18.867940 containerd[1579]: time="2026-01-23T19:04:18.867892488Z" level=info msg="connecting to shim 887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1" address="unix:///run/containerd/s/b91330fdbab1c72a9748fee387e55a16ddd104fd2975b2311dadc6318bdf9905" protocol=ttrpc version=3 Jan 23 19:04:18.943049 systemd[1]: Started cri-containerd-887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1.scope - libcontainer container 887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1. Jan 23 19:04:19.089748 systemd[1]: cri-containerd-887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1.scope: Deactivated successfully. 
Jan 23 19:04:19.100747 containerd[1579]: time="2026-01-23T19:04:19.100507858Z" level=info msg="received container exit event container_id:\"887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1\" id:\"887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1\" pid:3467 exited_at:{seconds:1769195059 nanos:97328064}" Jan 23 19:04:19.122988 containerd[1579]: time="2026-01-23T19:04:19.122908232Z" level=info msg="StartContainer for \"887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1\" returns successfully" Jan 23 19:04:19.782444 kubelet[2882]: E0123 19:04:19.782006 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:19.783005 kubelet[2882]: E0123 19:04:19.782737 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:19.793048 containerd[1579]: time="2026-01-23T19:04:19.792721742Z" level=info msg="CreateContainer within sandbox \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 19:04:19.902236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1489688051.mount: Deactivated successfully. Jan 23 19:04:19.936775 containerd[1579]: time="2026-01-23T19:04:19.936721425Z" level=info msg="Container 23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:04:19.943054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount705008869.mount: Deactivated successfully. Jan 23 19:04:19.968196 containerd[1579]: time="2026-01-23T19:04:19.967943551Z" level=info msg="CreateContainer within sandbox \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c\"" Jan 23 19:04:19.970821 containerd[1579]: time="2026-01-23T19:04:19.970787694Z" level=info msg="StartContainer for \"23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c\"" Jan 23 19:04:19.976453 containerd[1579]: time="2026-01-23T19:04:19.976199314Z" level=info msg="connecting to shim 23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c" address="unix:///run/containerd/s/b91330fdbab1c72a9748fee387e55a16ddd104fd2975b2311dadc6318bdf9905" protocol=ttrpc version=3 Jan 23 19:04:20.090250 systemd[1]: Started cri-containerd-23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c.scope - libcontainer container 23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c. Jan 23 19:04:20.249844 containerd[1579]: time="2026-01-23T19:04:20.249669987Z" level=info msg="StartContainer for \"23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c\" returns successfully" Jan 23 19:04:20.916599 kubelet[2882]: I0123 19:04:20.908137 2882 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 19:04:21.081189 systemd[1]: Created slice kubepods-burstable-pod3324f826_d792_4f21_89b7_23fbc6f9ae9a.slice - libcontainer container kubepods-burstable-pod3324f826_d792_4f21_89b7_23fbc6f9ae9a.slice. Jan 23 19:04:21.119150 systemd[1]: Created slice kubepods-burstable-podd196c0cf_6f07_4d86_8d46_7b13faebe524.slice - libcontainer container kubepods-burstable-podd196c0cf_6f07_4d86_8d46_7b13faebe524.slice. 
Jan 23 19:04:21.121703 kubelet[2882]: I0123 19:04:21.121666 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqppz\" (UniqueName: \"kubernetes.io/projected/3324f826-d792-4f21-89b7-23fbc6f9ae9a-kube-api-access-dqppz\") pod \"coredns-668d6bf9bc-dqkvb\" (UID: \"3324f826-d792-4f21-89b7-23fbc6f9ae9a\") " pod="kube-system/coredns-668d6bf9bc-dqkvb" Jan 23 19:04:21.121922 kubelet[2882]: I0123 19:04:21.121902 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm4rc\" (UniqueName: \"kubernetes.io/projected/d196c0cf-6f07-4d86-8d46-7b13faebe524-kube-api-access-nm4rc\") pod \"coredns-668d6bf9bc-cw4bs\" (UID: \"d196c0cf-6f07-4d86-8d46-7b13faebe524\") " pod="kube-system/coredns-668d6bf9bc-cw4bs" Jan 23 19:04:21.122022 kubelet[2882]: I0123 19:04:21.122007 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3324f826-d792-4f21-89b7-23fbc6f9ae9a-config-volume\") pod \"coredns-668d6bf9bc-dqkvb\" (UID: \"3324f826-d792-4f21-89b7-23fbc6f9ae9a\") " pod="kube-system/coredns-668d6bf9bc-dqkvb" Jan 23 19:04:21.122092 kubelet[2882]: I0123 19:04:21.122079 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d196c0cf-6f07-4d86-8d46-7b13faebe524-config-volume\") pod \"coredns-668d6bf9bc-cw4bs\" (UID: \"d196c0cf-6f07-4d86-8d46-7b13faebe524\") " pod="kube-system/coredns-668d6bf9bc-cw4bs" Jan 23 19:04:21.398478 kubelet[2882]: E0123 19:04:21.397168 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:21.399078 containerd[1579]: time="2026-01-23T19:04:21.398898416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dqkvb,Uid:3324f826-d792-4f21-89b7-23fbc6f9ae9a,Namespace:kube-system,Attempt:0,}" Jan 23 19:04:21.428862 kubelet[2882]: E0123 19:04:21.428348 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:21.431434 containerd[1579]: time="2026-01-23T19:04:21.430954822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cw4bs,Uid:d196c0cf-6f07-4d86-8d46-7b13faebe524,Namespace:kube-system,Attempt:0,}" Jan 23 19:04:21.828774 kubelet[2882]: E0123 19:04:21.822621 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:22.924719 kubelet[2882]: E0123 19:04:22.924568 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:24.085570 systemd-networkd[1472]: cilium_host: Link UP Jan 23 19:04:24.085900 systemd-networkd[1472]: cilium_net: Link UP Jan 23 19:04:24.086293 systemd-networkd[1472]: cilium_net: Gained carrier Jan 23 19:04:24.086728 systemd-networkd[1472]: cilium_host: Gained carrier Jan 23 19:04:24.252670 systemd-networkd[1472]: cilium_host: Gained IPv6LL Jan 23 19:04:24.449171 systemd-networkd[1472]: cilium_net: Gained IPv6LL Jan 23 19:04:24.461560 systemd-networkd[1472]: cilium_vxlan: Link UP Jan 23 
19:04:24.461626 systemd-networkd[1472]: cilium_vxlan: Gained carrier Jan 23 19:04:25.032509 kubelet[2882]: E0123 19:04:25.031725 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:25.390785 kernel: NET: Registered PF_ALG protocol family Jan 23 19:04:26.024305 kubelet[2882]: E0123 19:04:26.024133 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:26.448244 systemd-networkd[1472]: cilium_vxlan: Gained IPv6LL Jan 23 19:04:27.288462 kubelet[2882]: E0123 19:04:27.277921 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:28.074889 systemd-networkd[1472]: lxc_health: Link UP Jan 23 19:04:28.089965 systemd-networkd[1472]: lxc_health: Gained carrier Jan 23 19:04:28.423038 systemd-networkd[1472]: lxc24ce51cfb585: Link UP Jan 23 19:04:28.433458 kernel: eth0: renamed from tmpb91f9 Jan 23 19:04:28.437943 systemd-networkd[1472]: lxc24ce51cfb585: Gained carrier Jan 23 19:04:28.693744 kubelet[2882]: E0123 19:04:28.692811 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:28.742974 systemd-networkd[1472]: lxc1e729a952806: Link UP Jan 23 19:04:28.761470 kernel: eth0: renamed from tmp4e622 Jan 23 19:04:28.767622 systemd-networkd[1472]: lxc1e729a952806: Gained carrier Jan 23 19:04:28.844323 kubelet[2882]: I0123 19:04:28.844057 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2g5ln" podStartSLOduration=17.806212023 podStartE2EDuration="1m17.844029438s" podCreationTimestamp="2026-01-23 19:03:11 +0000 UTC" firstStartedPulling="2026-01-23 19:03:13.66704286 +0000 UTC m=+11.928053827" lastFinishedPulling="2026-01-23 19:04:13.704860275 +0000 UTC m=+71.965871242" observedRunningTime="2026-01-23 19:04:21.895682371 +0000 UTC m=+80.156693358" watchObservedRunningTime="2026-01-23 19:04:28.844029438 +0000 UTC m=+87.105040415" Jan 23 19:04:28.990608 kubelet[2882]: E0123 19:04:28.990190 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:29.295102 systemd-networkd[1472]: lxc_health: Gained IPv6LL Jan 23 19:04:29.647034 systemd-networkd[1472]: lxc24ce51cfb585: Gained IPv6LL Jan 23 19:04:29.840284 systemd-networkd[1472]: lxc1e729a952806: Gained IPv6LL Jan 23 19:04:42.830564 kubelet[2882]: E0123 19:04:42.828022 2882 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.462s" Jan 23 19:04:42.945859 kubelet[2882]: E0123 19:04:42.945755 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:47.851193 kubelet[2882]: E0123 19:04:47.847366 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:04:51.388579 kubelet[2882]: E0123 19:04:51.388184 2882 kubelet.go:2573] "Housekeeping took 
longer than expected" err="housekeeping took too long" expected="1s" actual="3.628s" Jan 23 19:04:56.926064 sudo[1808]: pam_unix(sudo:session): session closed for user root Jan 23 19:04:56.941912 sshd[1807]: Connection closed by 10.0.0.1 port 41030 Jan 23 19:04:56.943859 sshd-session[1804]: pam_unix(sshd:session): session closed for user core Jan 23 19:04:57.011649 systemd[1]: sshd@8-10.0.0.46:22-10.0.0.1:41030.service: Deactivated successfully. Jan 23 19:04:57.026206 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 19:04:57.034792 systemd[1]: session-9.scope: Consumed 22.668s CPU time, 228.3M memory peak. Jan 23 19:04:57.051955 systemd-logind[1561]: Session 9 logged out. Waiting for processes to exit. Jan 23 19:04:57.065007 systemd-logind[1561]: Removed session 9. Jan 23 19:04:59.834085 containerd[1579]: time="2026-01-23T19:04:59.832030646Z" level=info msg="connecting to shim b91f967f411104c927f9db43647ca137ebb94085d9247a15f5f23b77c49a0173" address="unix:///run/containerd/s/120ba0e6b09df6fb1349c7dd8053e57fab6bc25244d4dee266960451b2bd9ea0" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:04:59.972516 containerd[1579]: time="2026-01-23T19:04:59.971795581Z" level=info msg="connecting to shim 4e622c36184dbef2cd9a25a6142de1d327cff7cc18e99891c26c6d593c85c857" address="unix:///run/containerd/s/977fecd6b944d33f082b382e99a0b628a519ad5a5243ce6270fdf42fb36cf22e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:05:00.055839 systemd[1]: Started cri-containerd-b91f967f411104c927f9db43647ca137ebb94085d9247a15f5f23b77c49a0173.scope - libcontainer container b91f967f411104c927f9db43647ca137ebb94085d9247a15f5f23b77c49a0173. Jan 23 19:05:00.113789 systemd[1]: Started cri-containerd-4e622c36184dbef2cd9a25a6142de1d327cff7cc18e99891c26c6d593c85c857.scope - libcontainer container 4e622c36184dbef2cd9a25a6142de1d327cff7cc18e99891c26c6d593c85c857. 
Jan 23 19:05:00.148729 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 19:05:00.194728 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 19:05:00.327100 containerd[1579]: time="2026-01-23T19:05:00.326834902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dqkvb,Uid:3324f826-d792-4f21-89b7-23fbc6f9ae9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b91f967f411104c927f9db43647ca137ebb94085d9247a15f5f23b77c49a0173\"" Jan 23 19:05:00.330183 kubelet[2882]: E0123 19:05:00.330083 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:00.339894 containerd[1579]: time="2026-01-23T19:05:00.339740521Z" level=info msg="CreateContainer within sandbox \"b91f967f411104c927f9db43647ca137ebb94085d9247a15f5f23b77c49a0173\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 19:05:00.433063 containerd[1579]: time="2026-01-23T19:05:00.429816615Z" level=info msg="Container 01e3fb4cee10478b9426d492482663cc507ca64ced3014440ddcd91c3a6f4df9: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:05:00.466545 containerd[1579]: time="2026-01-23T19:05:00.465737439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cw4bs,Uid:d196c0cf-6f07-4d86-8d46-7b13faebe524,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e622c36184dbef2cd9a25a6142de1d327cff7cc18e99891c26c6d593c85c857\"" Jan 23 19:05:00.472058 containerd[1579]: time="2026-01-23T19:05:00.468344378Z" level=info msg="CreateContainer within sandbox \"b91f967f411104c927f9db43647ca137ebb94085d9247a15f5f23b77c49a0173\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"01e3fb4cee10478b9426d492482663cc507ca64ced3014440ddcd91c3a6f4df9\"" Jan 23 19:05:00.477052 kubelet[2882]: E0123 19:05:00.476917 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:00.498619 containerd[1579]: time="2026-01-23T19:05:00.496952147Z" level=info msg="StartContainer for \"01e3fb4cee10478b9426d492482663cc507ca64ced3014440ddcd91c3a6f4df9\"" Jan 23 19:05:00.517762 containerd[1579]: time="2026-01-23T19:05:00.515277718Z" level=info msg="CreateContainer within sandbox \"4e622c36184dbef2cd9a25a6142de1d327cff7cc18e99891c26c6d593c85c857\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 19:05:00.517762 containerd[1579]: time="2026-01-23T19:05:00.515318184Z" level=info msg="connecting to shim 01e3fb4cee10478b9426d492482663cc507ca64ced3014440ddcd91c3a6f4df9" address="unix:///run/containerd/s/120ba0e6b09df6fb1349c7dd8053e57fab6bc25244d4dee266960451b2bd9ea0" protocol=ttrpc version=3 Jan 23 19:05:00.609213 containerd[1579]: time="2026-01-23T19:05:00.605178948Z" level=info msg="Container 519f0b2437222c861b671f49866feeae1212eb10e1bcd228bf900010026d0c8d: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:05:00.694188 containerd[1579]: time="2026-01-23T19:05:00.685680798Z" level=info msg="CreateContainer within sandbox \"4e622c36184dbef2cd9a25a6142de1d327cff7cc18e99891c26c6d593c85c857\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"519f0b2437222c861b671f49866feeae1212eb10e1bcd228bf900010026d0c8d\"" Jan 23 19:05:00.694188 containerd[1579]: 
time="2026-01-23T19:05:00.687356462Z" level=info msg="StartContainer for \"519f0b2437222c861b671f49866feeae1212eb10e1bcd228bf900010026d0c8d\"" Jan 23 19:05:00.694188 containerd[1579]: time="2026-01-23T19:05:00.689243951Z" level=info msg="connecting to shim 519f0b2437222c861b671f49866feeae1212eb10e1bcd228bf900010026d0c8d" address="unix:///run/containerd/s/977fecd6b944d33f082b382e99a0b628a519ad5a5243ce6270fdf42fb36cf22e" protocol=ttrpc version=3 Jan 23 19:05:00.786295 systemd[1]: Started cri-containerd-01e3fb4cee10478b9426d492482663cc507ca64ced3014440ddcd91c3a6f4df9.scope - libcontainer container 01e3fb4cee10478b9426d492482663cc507ca64ced3014440ddcd91c3a6f4df9. Jan 23 19:05:00.875264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3058218653.mount: Deactivated successfully. Jan 23 19:05:01.156877 systemd[1]: Started cri-containerd-519f0b2437222c861b671f49866feeae1212eb10e1bcd228bf900010026d0c8d.scope - libcontainer container 519f0b2437222c861b671f49866feeae1212eb10e1bcd228bf900010026d0c8d. Jan 23 19:05:01.325837 containerd[1579]: time="2026-01-23T19:05:01.325781885Z" level=info msg="StartContainer for \"01e3fb4cee10478b9426d492482663cc507ca64ced3014440ddcd91c3a6f4df9\" returns successfully" Jan 23 19:05:01.439543 containerd[1579]: time="2026-01-23T19:05:01.439145850Z" level=info msg="StartContainer for \"519f0b2437222c861b671f49866feeae1212eb10e1bcd228bf900010026d0c8d\" returns successfully" Jan 23 19:05:02.092629 kubelet[2882]: E0123 19:05:02.091678 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:02.146056 kubelet[2882]: E0123 19:05:02.144707 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:02.254874 kubelet[2882]: I0123 19:05:02.250675 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cw4bs" podStartSLOduration=117.25065046 podStartE2EDuration="1m57.25065046s" podCreationTimestamp="2026-01-23 19:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:05:02.237561208 +0000 UTC m=+120.498572195" watchObservedRunningTime="2026-01-23 19:05:02.25065046 +0000 UTC m=+120.511661427" Jan 23 19:05:03.157344 kubelet[2882]: E0123 19:05:03.153749 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:03.164640 kubelet[2882]: E0123 19:05:03.157911 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:03.251799 kubelet[2882]: I0123 19:05:03.250943 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dqkvb" podStartSLOduration=118.250922033 podStartE2EDuration="1m58.250922033s" podCreationTimestamp="2026-01-23 19:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:05:02.312065686 +0000 UTC m=+120.573076653" watchObservedRunningTime="2026-01-23 19:05:03.250922033 +0000 UTC m=+121.511933000" Jan 23 19:05:04.167787 kubelet[2882]: E0123 
19:05:04.156997 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:04.175527 kubelet[2882]: E0123 19:05:04.175496 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:41.236769 kubelet[2882]: E0123 19:05:41.191718 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:41.677227 kubelet[2882]: E0123 19:05:41.675309 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:45.047541 kubelet[2882]: E0123 19:05:45.044328 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:51.151044 kubelet[2882]: E0123 19:05:51.116031 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:05:55.347242 update_engine[1569]: I20260123 19:05:55.180116 1569 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 23 19:05:56.453619 update_engine[1569]: I20260123 19:05:55.376884 1569 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 23 19:05:56.776627 update_engine[1569]: I20260123 19:05:56.489288 1569 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 23 19:05:56.836258 update_engine[1569]: I20260123 19:05:56.824155 1569 omaha_request_params.cc:62] Current group set to stable Jan 23 19:05:57.832782 update_engine[1569]: I20260123 19:05:57.092332 1569 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 23 19:05:57.832782 update_engine[1569]: I20260123 19:05:57.165819 1569 update_attempter.cc:643] Scheduling an action processor start. 
Jan 23 19:05:57.832782 update_engine[1569]: I20260123 19:05:57.166307 1569 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 19:05:59.690157 update_engine[1569]: I20260123 19:05:59.245909 1569 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 23 19:05:59.690157 update_engine[1569]: I20260123 19:05:59.628996 1569 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 19:05:59.690157 update_engine[1569]: I20260123 19:05:59.629683 1569 omaha_request_action.cc:272] Request: Jan 23 19:05:59.690157 update_engine[1569]: Jan 23 19:05:59.690157 update_engine[1569]: Jan 23 19:05:59.690157 update_engine[1569]: Jan 23 19:05:59.690157 update_engine[1569]: Jan 23 19:05:59.690157 update_engine[1569]: Jan 23 19:05:59.690157 update_engine[1569]: Jan 23 19:05:59.690157 update_engine[1569]: Jan 23 19:05:59.690157 update_engine[1569]: Jan 23 19:05:59.690157 update_engine[1569]: I20260123 19:05:59.630010 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:06:02.173785 update_engine[1569]: I20260123 19:06:01.879063 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:06:03.370121 locksmithd[1635]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 23 19:06:04.424490 update_engine[1569]: I20260123 19:06:03.963286 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 19:06:04.928928 update_engine[1569]: E20260123 19:06:04.788348 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:06:06.542276 update_engine[1569]: I20260123 19:06:06.380970 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 23 19:06:07.685020 systemd[1]: cri-containerd-55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315.scope: Deactivated successfully. Jan 23 19:06:07.692644 systemd[1]: cri-containerd-55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315.scope: Consumed 2.264s CPU time, 31.5M memory peak, 4K written to disk. Jan 23 19:06:07.758208 systemd[1]: cri-containerd-3756fa507e79c383c0b910f2fc67fa8318bda471c7774f91d622b74c0c99dd4a.scope: Deactivated successfully. Jan 23 19:06:07.759122 systemd[1]: cri-containerd-3756fa507e79c383c0b910f2fc67fa8318bda471c7774f91d622b74c0c99dd4a.scope: Consumed 21.562s CPU time, 54.5M memory peak, 3M read from disk. 
Jan 23 19:06:07.832873 containerd[1579]: time="2026-01-23T19:06:07.832496410Z" level=info msg="received container exit event container_id:\"3756fa507e79c383c0b910f2fc67fa8318bda471c7774f91d622b74c0c99dd4a\" id:\"3756fa507e79c383c0b910f2fc67fa8318bda471c7774f91d622b74c0c99dd4a\" pid:2731 exit_status:1 exited_at:{seconds:1769195167 nanos:827164535}" Jan 23 19:06:07.847489 containerd[1579]: time="2026-01-23T19:06:07.846622136Z" level=info msg="received container exit event container_id:\"55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315\" id:\"55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315\" pid:3432 exit_status:1 exited_at:{seconds:1769195167 nanos:838950312}" Jan 23 19:06:08.067551 kubelet[2882]: E0123 19:06:08.066795 2882 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.994s" Jan 23 19:06:08.072108 kubelet[2882]: E0123 19:06:08.072075 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:08.080112 kubelet[2882]: E0123 19:06:08.079167 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:08.229113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315-rootfs.mount: Deactivated successfully. Jan 23 19:06:08.243275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3756fa507e79c383c0b910f2fc67fa8318bda471c7774f91d622b74c0c99dd4a-rootfs.mount: Deactivated successfully. Jan 23 19:06:12.860805 containerd[1579]: time="2026-01-23T19:06:12.860000144Z" level=info msg="StopContainer for \"55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315\" with timeout 30 (s)" Jan 23 19:06:13.162926 containerd[1579]: time="2026-01-23T19:06:13.056069993Z" level=info msg="Container to stop \"55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:06:14.530540 containerd[1579]: time="2026-01-23T19:06:14.526774559Z" level=info msg="StopContainer for \"55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315\" returns successfully" Jan 23 19:06:14.747513 kubelet[2882]: E0123 19:06:14.747140 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:14.845685 containerd[1579]: time="2026-01-23T19:06:14.841244661Z" level=info msg="CreateContainer within sandbox \"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Jan 23 19:06:14.846029 kubelet[2882]: I0123 19:06:14.842296 2882 scope.go:117] "RemoveContainer" containerID="3756fa507e79c383c0b910f2fc67fa8318bda471c7774f91d622b74c0c99dd4a" Jan 23 19:06:14.846029 kubelet[2882]: E0123 19:06:14.842590 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:14.873946 containerd[1579]: time="2026-01-23T19:06:14.873832590Z" level=info msg="CreateContainer within sandbox \"7a2d56124fd03e89c6b1861dcce43555d8c9e695d256a245f8987580f792cc8f\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 19:06:14.998540 containerd[1579]: time="2026-01-23T19:06:14.991018502Z" level=info msg="Container 5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:06:15.024715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999448119.mount: Deactivated successfully. Jan 23 19:06:15.068568 containerd[1579]: time="2026-01-23T19:06:15.067286544Z" level=info msg="Container acc40ca6d3985bb41c4c0c9059e33cb9c2254aa5895fd41f83042ed200d62b04: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:06:15.079018 containerd[1579]: time="2026-01-23T19:06:15.076990237Z" level=info msg="CreateContainer within sandbox \"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8\"" Jan 23 19:06:15.079333 containerd[1579]: time="2026-01-23T19:06:15.079247268Z" level=info msg="StartContainer for \"5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8\"" Jan 23 19:06:15.085459 containerd[1579]: time="2026-01-23T19:06:15.085235775Z" level=info msg="connecting to shim 5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8" address="unix:///run/containerd/s/2b94258e37cfb2250d0574356897d885205dbee37bc0141adbbcf30b7d0333a7" protocol=ttrpc version=3 Jan 23 19:06:15.162787 containerd[1579]: time="2026-01-23T19:06:15.162575617Z" level=info msg="CreateContainer within sandbox \"7a2d56124fd03e89c6b1861dcce43555d8c9e695d256a245f8987580f792cc8f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"acc40ca6d3985bb41c4c0c9059e33cb9c2254aa5895fd41f83042ed200d62b04\"" Jan 23 19:06:15.185131 containerd[1579]: time="2026-01-23T19:06:15.185072060Z" level=info msg="StartContainer for \"acc40ca6d3985bb41c4c0c9059e33cb9c2254aa5895fd41f83042ed200d62b04\"" Jan 23 19:06:15.208481 containerd[1579]: time="2026-01-23T19:06:15.208326991Z" level=info msg="connecting to shim acc40ca6d3985bb41c4c0c9059e33cb9c2254aa5895fd41f83042ed200d62b04" address="unix:///run/containerd/s/a60b369ab9bedf134c0b38fb155f96b2d9d6c3b4713bd9c711c6b0c56c0bc6b0" protocol=ttrpc version=3 Jan 23 19:06:15.226944 systemd[1]: Started cri-containerd-5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8.scope - libcontainer container 5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8. Jan 23 19:06:16.477097 systemd[1]: Started cri-containerd-acc40ca6d3985bb41c4c0c9059e33cb9c2254aa5895fd41f83042ed200d62b04.scope - libcontainer container acc40ca6d3985bb41c4c0c9059e33cb9c2254aa5895fd41f83042ed200d62b04. Jan 23 19:06:16.683558 containerd[1579]: time="2026-01-23T19:06:16.683508115Z" level=info msg="StartContainer for \"5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8\" returns successfully" Jan 23 19:06:17.149046 update_engine[1569]: I20260123 19:06:17.074311 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:06:17.335038 update_engine[1569]: I20260123 19:06:17.334848 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:06:17.341014 update_engine[1569]: I20260123 19:06:17.340972 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 19:06:17.357538 update_engine[1569]: E20260123 19:06:17.357308 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:06:17.357954 update_engine[1569]: I20260123 19:06:17.357921 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 23 19:06:17.439250 containerd[1579]: time="2026-01-23T19:06:17.438741493Z" level=info msg="StartContainer for \"acc40ca6d3985bb41c4c0c9059e33cb9c2254aa5895fd41f83042ed200d62b04\" returns successfully" Jan 23 19:06:17.621807 kubelet[2882]: E0123 19:06:17.621165 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:17.650542 kubelet[2882]: E0123 19:06:17.649155 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:18.719814 kubelet[2882]: E0123 19:06:18.719169 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:19.158022 kubelet[2882]: E0123 19:06:19.157275 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:20.065362 kubelet[2882]: E0123 19:06:20.063192 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:22.028107 kubelet[2882]: E0123 19:06:22.024053 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:28.023006 update_engine[1569]: I20260123 19:06:28.022926 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:06:28.029863 update_engine[1569]: I20260123 19:06:28.029772 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:06:28.031307 update_engine[1569]: I20260123 19:06:28.031145 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 19:06:28.050235 update_engine[1569]: E20260123 19:06:28.049880 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:06:28.050235 update_engine[1569]: I20260123 19:06:28.050073 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 23 19:06:29.183667 kubelet[2882]: E0123 19:06:29.181921 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:06:37.999078 update_engine[1569]: I20260123 19:06:37.998996 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:06:38.004593 update_engine[1569]: I20260123 19:06:38.003022 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:06:38.027794 update_engine[1569]: I20260123 19:06:38.027668 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 19:06:38.050095 update_engine[1569]: E20260123 19:06:38.047801 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:06:38.050095 update_engine[1569]: I20260123 19:06:38.047952 1569 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 19:06:38.050095 update_engine[1569]: I20260123 19:06:38.047971 1569 omaha_request_action.cc:617] Omaha request response: Jan 23 19:06:38.050095 update_engine[1569]: E20260123 19:06:38.048115 1569 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 23 19:06:38.050095 update_engine[1569]: I20260123 19:06:38.048544 1569 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 23 19:06:38.050095 update_engine[1569]: I20260123 19:06:38.048561 1569 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 19:06:38.050095 update_engine[1569]: I20260123 19:06:38.048570 1569 update_attempter.cc:306] Processing Done. Jan 23 19:06:38.050095 update_engine[1569]: E20260123 19:06:38.048648 1569 update_attempter.cc:619] Update failed. Jan 23 19:06:38.050095 update_engine[1569]: I20260123 19:06:38.048663 1569 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 23 19:06:38.050095 update_engine[1569]: I20260123 19:06:38.048672 1569 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 23 19:06:38.050095 update_engine[1569]: I20260123 19:06:38.048682 1569 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 23 19:06:38.050095 update_engine[1569]: I20260123 19:06:38.048784 1569 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 19:06:38.050095 update_engine[1569]: I20260123 19:06:38.048913 1569 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 19:06:38.050095 update_engine[1569]: I20260123 19:06:38.048933 1569 omaha_request_action.cc:272] Request: Jan 23 19:06:38.050095 update_engine[1569]: Jan 23 19:06:38.050095 update_engine[1569]: Jan 23 19:06:38.050095 update_engine[1569]: Jan 23 19:06:38.051314 update_engine[1569]: Jan 23 19:06:38.051314 update_engine[1569]: Jan 23 19:06:38.051314 update_engine[1569]: Jan 23 19:06:38.051314 update_engine[1569]: I20260123 19:06:38.048947 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:06:38.051314 update_engine[1569]: I20260123 19:06:38.048985 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:06:38.051314 update_engine[1569]: I20260123 19:06:38.049829 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 19:06:38.056809 locksmithd[1635]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 23 19:06:38.071914 update_engine[1569]: E20260123 19:06:38.071712 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:06:38.071914 update_engine[1569]: I20260123 19:06:38.071885 1569 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 19:06:38.071914 update_engine[1569]: I20260123 19:06:38.071910 1569 omaha_request_action.cc:617] Omaha request response: Jan 23 19:06:38.072204 update_engine[1569]: I20260123 19:06:38.071925 1569 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 19:06:38.072204 update_engine[1569]: I20260123 19:06:38.071991 1569 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 19:06:38.072204 update_engine[1569]: I20260123 19:06:38.072009 1569 update_attempter.cc:306] Processing Done. Jan 23 19:06:38.072204 update_engine[1569]: I20260123 19:06:38.072024 1569 update_attempter.cc:310] Error event sent. Jan 23 19:06:38.072204 update_engine[1569]: I20260123 19:06:38.072042 1569 update_check_scheduler.cc:74] Next update check in 40m24s Jan 23 19:06:38.076307 locksmithd[1635]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 23 19:07:04.038621 kubelet[2882]: E0123 19:07:04.037664 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:07:12.028574 kubelet[2882]: E0123 19:07:12.027013 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:07:17.348183 containerd[1579]: time="2026-01-23T19:07:17.348041553Z" level=warning msg="container event discarded" container=7a2d56124fd03e89c6b1861dcce43555d8c9e695d256a245f8987580f792cc8f type=CONTAINER_CREATED_EVENT Jan 23 19:07:17.358823 containerd[1579]: time="2026-01-23T19:07:17.358749333Z" level=warning msg="container event discarded" container=7a2d56124fd03e89c6b1861dcce43555d8c9e695d256a245f8987580f792cc8f type=CONTAINER_STARTED_EVENT Jan 23 19:07:17.588794 containerd[1579]: time="2026-01-23T19:07:17.586735966Z" level=warning msg="container event discarded" container=64d64eb3cb3273ceb0636d590db6956edd5e73d526cbb4ef7f3b6978201b582f type=CONTAINER_CREATED_EVENT Jan 23 19:07:17.588794 containerd[1579]: time="2026-01-23T19:07:17.586862949Z" level=warning msg="container event discarded" container=64d64eb3cb3273ceb0636d590db6956edd5e73d526cbb4ef7f3b6978201b582f type=CONTAINER_STARTED_EVENT Jan 23 19:07:17.630059 containerd[1579]: time="2026-01-23T19:07:17.622346233Z" level=warning msg="container event discarded" container=449374ba9aea84efb0197174e14879deb22ccfb5568298adba54842e2fae9da4 type=CONTAINER_CREATED_EVENT Jan 23 19:07:17.630059 containerd[1579]: time="2026-01-23T19:07:17.622658898Z" level=warning msg="container event discarded" container=449374ba9aea84efb0197174e14879deb22ccfb5568298adba54842e2fae9da4 type=CONTAINER_STARTED_EVENT Jan 23 19:07:17.768548 containerd[1579]: time="2026-01-23T19:07:17.768020829Z" level=warning msg="container event discarded" container=d0218c174c800d579ad07b88ae2e73dea500a9ee32028fcb1e9adc2e813977bd type=CONTAINER_CREATED_EVENT Jan 
23 19:07:17.875927 containerd[1579]: time="2026-01-23T19:07:17.872643576Z" level=warning msg="container event discarded" container=3756fa507e79c383c0b910f2fc67fa8318bda471c7774f91d622b74c0c99dd4a type=CONTAINER_CREATED_EVENT Jan 23 19:07:17.944271 containerd[1579]: time="2026-01-23T19:07:17.944011030Z" level=warning msg="container event discarded" container=08df4eea065b6f16f1deee503b0d4adaed4caaa6404ec2d4f32b44058a96a055 type=CONTAINER_CREATED_EVENT Jan 23 19:07:24.201885 containerd[1579]: time="2026-01-23T19:07:24.201255958Z" level=warning msg="container event discarded" container=d0218c174c800d579ad07b88ae2e73dea500a9ee32028fcb1e9adc2e813977bd type=CONTAINER_STARTED_EVENT Jan 23 19:07:24.201885 containerd[1579]: time="2026-01-23T19:07:24.201337850Z" level=warning msg="container event discarded" container=3756fa507e79c383c0b910f2fc67fa8318bda471c7774f91d622b74c0c99dd4a type=CONTAINER_STARTED_EVENT Jan 23 19:07:24.337205 containerd[1579]: time="2026-01-23T19:07:24.336916722Z" level=warning msg="container event discarded" container=08df4eea065b6f16f1deee503b0d4adaed4caaa6404ec2d4f32b44058a96a055 type=CONTAINER_STARTED_EVENT Jan 23 19:07:25.026071 kubelet[2882]: E0123 19:07:25.025366 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:07:26.029827 kubelet[2882]: E0123 19:07:26.029213 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:07:30.053611 systemd[1]: Started sshd@9-10.0.0.46:22-10.0.0.1:55410.service - OpenSSH per-connection server daemon (10.0.0.1:55410). Jan 23 19:07:30.058715 kubelet[2882]: E0123 19:07:30.054238 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:07:30.821688 sshd[4481]: Accepted publickey for core from 10.0.0.1 port 55410 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:07:30.829818 sshd-session[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:07:30.877648 systemd-logind[1561]: New session 10 of user core. Jan 23 19:07:30.888820 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 19:07:31.801867 sshd[4484]: Connection closed by 10.0.0.1 port 55410 Jan 23 19:07:31.803796 sshd-session[4481]: pam_unix(sshd:session): session closed for user core Jan 23 19:07:31.819822 systemd[1]: sshd@9-10.0.0.46:22-10.0.0.1:55410.service: Deactivated successfully. Jan 23 19:07:31.827200 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 19:07:31.856683 systemd-logind[1561]: Session 10 logged out. Waiting for processes to exit. Jan 23 19:07:31.877085 systemd-logind[1561]: Removed session 10. 
Jan 23 19:07:32.030658 kubelet[2882]: E0123 19:07:32.027254 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:07:35.025519 kubelet[2882]: E0123 19:07:35.025006 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:07:36.890855 systemd[1]: Started sshd@10-10.0.0.46:22-10.0.0.1:56548.service - OpenSSH per-connection server daemon (10.0.0.1:56548). Jan 23 19:07:37.230775 sshd[4502]: Accepted publickey for core from 10.0.0.1 port 56548 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:07:37.255575 sshd-session[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:07:37.285290 systemd-logind[1561]: New session 11 of user core. Jan 23 19:07:37.316837 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 19:07:37.914791 sshd[4505]: Connection closed by 10.0.0.1 port 56548 Jan 23 19:07:37.917705 sshd-session[4502]: pam_unix(sshd:session): session closed for user core Jan 23 19:07:37.949517 systemd[1]: sshd@10-10.0.0.46:22-10.0.0.1:56548.service: Deactivated successfully. Jan 23 19:07:37.954737 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 19:07:37.958330 systemd-logind[1561]: Session 11 logged out. Waiting for processes to exit. Jan 23 19:07:37.964173 systemd-logind[1561]: Removed session 11. Jan 23 19:07:42.987903 systemd[1]: Started sshd@11-10.0.0.46:22-10.0.0.1:56558.service - OpenSSH per-connection server daemon (10.0.0.1:56558). Jan 23 19:07:43.250510 sshd[4519]: Accepted publickey for core from 10.0.0.1 port 56558 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:07:43.257907 sshd-session[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:07:43.298668 systemd-logind[1561]: New session 12 of user core. Jan 23 19:07:43.322034 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 19:07:44.043598 sshd[4524]: Connection closed by 10.0.0.1 port 56558 Jan 23 19:07:44.048065 sshd-session[4519]: pam_unix(sshd:session): session closed for user core Jan 23 19:07:44.071318 systemd[1]: sshd@11-10.0.0.46:22-10.0.0.1:56558.service: Deactivated successfully. Jan 23 19:07:44.077933 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 19:07:44.081360 systemd-logind[1561]: Session 12 logged out. Waiting for processes to exit. Jan 23 19:07:44.089661 systemd-logind[1561]: Removed session 12. Jan 23 19:07:48.031660 kubelet[2882]: E0123 19:07:48.029131 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:07:49.128848 systemd[1]: Started sshd@12-10.0.0.46:22-10.0.0.1:35560.service - OpenSSH per-connection server daemon (10.0.0.1:35560). Jan 23 19:07:49.598912 sshd[4540]: Accepted publickey for core from 10.0.0.1 port 35560 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:07:49.618845 sshd-session[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:07:49.658652 systemd-logind[1561]: New session 13 of user core. Jan 23 19:07:49.713757 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 23 19:07:50.424752 sshd[4543]: Connection closed by 10.0.0.1 port 35560 Jan 23 19:07:50.425742 sshd-session[4540]: pam_unix(sshd:session): session closed for user core Jan 23 19:07:50.446747 systemd[1]: sshd@12-10.0.0.46:22-10.0.0.1:35560.service: Deactivated successfully. Jan 23 19:07:50.464283 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 19:07:50.474294 systemd-logind[1561]: Session 13 logged out. Waiting for processes to exit. Jan 23 19:07:50.486643 systemd-logind[1561]: Removed session 13. Jan 23 19:07:55.495765 systemd[1]: Started sshd@13-10.0.0.46:22-10.0.0.1:42994.service - OpenSSH per-connection server daemon (10.0.0.1:42994). Jan 23 19:07:55.706756 sshd[4558]: Accepted publickey for core from 10.0.0.1 port 42994 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:07:55.712276 sshd-session[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:07:55.752352 systemd-logind[1561]: New session 14 of user core. Jan 23 19:07:55.778618 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 19:07:56.642312 sshd[4561]: Connection closed by 10.0.0.1 port 42994 Jan 23 19:07:56.641290 sshd-session[4558]: pam_unix(sshd:session): session closed for user core Jan 23 19:07:56.663538 systemd[1]: sshd@13-10.0.0.46:22-10.0.0.1:42994.service: Deactivated successfully. Jan 23 19:07:56.678269 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 19:07:56.718660 systemd-logind[1561]: Session 14 logged out. Waiting for processes to exit. Jan 23 19:07:56.727347 systemd-logind[1561]: Removed session 14. Jan 23 19:08:01.709956 systemd[1]: Started sshd@14-10.0.0.46:22-10.0.0.1:43006.service - OpenSSH per-connection server daemon (10.0.0.1:43006). Jan 23 19:08:02.053319 sshd[4575]: Accepted publickey for core from 10.0.0.1 port 43006 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:08:02.063166 sshd-session[4575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:08:02.115100 systemd-logind[1561]: New session 15 of user core. Jan 23 19:08:02.134342 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 19:08:02.913917 sshd[4578]: Connection closed by 10.0.0.1 port 43006 Jan 23 19:08:02.910922 sshd-session[4575]: pam_unix(sshd:session): session closed for user core Jan 23 19:08:02.964592 systemd[1]: sshd@14-10.0.0.46:22-10.0.0.1:43006.service: Deactivated successfully. Jan 23 19:08:02.996180 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 19:08:03.011188 systemd-logind[1561]: Session 15 logged out. Waiting for processes to exit. Jan 23 19:08:03.042600 systemd-logind[1561]: Removed session 15. Jan 23 19:08:08.735185 systemd[1]: Started sshd@15-10.0.0.46:22-10.0.0.1:36676.service - OpenSSH per-connection server daemon (10.0.0.1:36676). 
Jan 23 19:08:09.646574 containerd[1579]: time="2026-01-23T19:08:09.332218691Z" level=warning msg="container event discarded" container=cfa44d9972c65eed23ef3b15ab527839453a7154795416762540fdd43d8fbb80 type=CONTAINER_CREATED_EVENT Jan 23 19:08:09.646574 containerd[1579]: time="2026-01-23T19:08:09.644264155Z" level=warning msg="container event discarded" container=cfa44d9972c65eed23ef3b15ab527839453a7154795416762540fdd43d8fbb80 type=CONTAINER_STARTED_EVENT Jan 23 19:08:09.889171 containerd[1579]: time="2026-01-23T19:08:09.887088563Z" level=warning msg="container event discarded" container=0aa891201c21d7081be15c6d21dc6d9a3bafd9fdc7011b86203f8ff02c81c31d type=CONTAINER_CREATED_EVENT Jan 23 19:08:10.174958 kubelet[2882]: E0123 19:08:10.169550 2882 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.08s" Jan 23 19:08:10.960156 containerd[1579]: time="2026-01-23T19:08:10.951183397Z" level=warning msg="container event discarded" container=0aa891201c21d7081be15c6d21dc6d9a3bafd9fdc7011b86203f8ff02c81c31d type=CONTAINER_STARTED_EVENT Jan 23 19:08:11.319313 sshd[4595]: Accepted publickey for core from 10.0.0.1 port 36676 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:08:11.327004 sshd-session[4595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:08:11.535055 systemd-logind[1561]: New session 16 of user core. Jan 23 19:08:11.723834 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 19:08:12.884952 kubelet[2882]: E0123 19:08:12.880339 2882 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.488s" Jan 23 19:08:14.092179 containerd[1579]: time="2026-01-23T19:08:14.090671129Z" level=warning msg="container event discarded" container=653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f type=CONTAINER_CREATED_EVENT Jan 23 19:08:14.092179 containerd[1579]: time="2026-01-23T19:08:14.091681224Z" level=warning msg="container event discarded" container=653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f type=CONTAINER_STARTED_EVENT Jan 23 19:08:14.165892 kubelet[2882]: E0123 19:08:14.163032 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:08:14.339087 containerd[1579]: time="2026-01-23T19:08:14.335712787Z" level=warning msg="container event discarded" container=3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc type=CONTAINER_CREATED_EVENT Jan 23 19:08:14.339087 containerd[1579]: time="2026-01-23T19:08:14.335862377Z" level=warning msg="container event discarded" container=3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc type=CONTAINER_STARTED_EVENT Jan 23 19:08:14.666595 sshd[4598]: Connection closed by 10.0.0.1 port 36676 Jan 23 19:08:14.667286 sshd-session[4595]: pam_unix(sshd:session): session closed for user core Jan 23 19:08:14.692207 systemd[1]: sshd@15-10.0.0.46:22-10.0.0.1:36676.service: Deactivated successfully. Jan 23 19:08:14.714679 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 19:08:14.735502 systemd-logind[1561]: Session 16 logged out. Waiting for processes to exit. Jan 23 19:08:14.742649 systemd-logind[1561]: Removed session 16. Jan 23 19:08:19.727605 systemd[1]: Started sshd@16-10.0.0.46:22-10.0.0.1:49598.service - OpenSSH per-connection server daemon (10.0.0.1:49598). 
Jan 23 19:08:19.978985 sshd[4617]: Accepted publickey for core from 10.0.0.1 port 49598 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:08:19.983035 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:08:20.009523 systemd-logind[1561]: New session 17 of user core. Jan 23 19:08:20.034605 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 19:08:20.498558 sshd[4620]: Connection closed by 10.0.0.1 port 49598 Jan 23 19:08:20.503144 sshd-session[4617]: pam_unix(sshd:session): session closed for user core Jan 23 19:08:20.520251 systemd[1]: sshd@16-10.0.0.46:22-10.0.0.1:49598.service: Deactivated successfully. Jan 23 19:08:20.531310 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 19:08:20.541793 systemd-logind[1561]: Session 17 logged out. Waiting for processes to exit. Jan 23 19:08:20.549834 systemd-logind[1561]: Removed session 17. Jan 23 19:08:22.033208 kubelet[2882]: E0123 19:08:22.030554 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:08:25.577077 systemd[1]: Started sshd@17-10.0.0.46:22-10.0.0.1:53950.service - OpenSSH per-connection server daemon (10.0.0.1:53950). Jan 23 19:08:25.864576 sshd[4634]: Accepted publickey for core from 10.0.0.1 port 53950 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:08:25.868199 sshd-session[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:08:25.915583 systemd-logind[1561]: New session 18 of user core. Jan 23 19:08:25.933257 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 19:08:26.350314 sshd[4637]: Connection closed by 10.0.0.1 port 53950 Jan 23 19:08:26.348931 sshd-session[4634]: pam_unix(sshd:session): session closed for user core Jan 23 19:08:26.380927 systemd[1]: sshd@17-10.0.0.46:22-10.0.0.1:53950.service: Deactivated successfully. Jan 23 19:08:26.386874 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 19:08:26.397314 systemd-logind[1561]: Session 18 logged out. Waiting for processes to exit. Jan 23 19:08:26.414206 systemd-logind[1561]: Removed session 18. Jan 23 19:08:31.387066 systemd[1]: Started sshd@18-10.0.0.46:22-10.0.0.1:53966.service - OpenSSH per-connection server daemon (10.0.0.1:53966). Jan 23 19:08:31.614295 sshd[4652]: Accepted publickey for core from 10.0.0.1 port 53966 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:08:31.631195 sshd-session[4652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:08:31.668352 systemd-logind[1561]: New session 19 of user core. Jan 23 19:08:31.710048 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 19:08:32.146076 sshd[4655]: Connection closed by 10.0.0.1 port 53966 Jan 23 19:08:32.146700 sshd-session[4652]: pam_unix(sshd:session): session closed for user core Jan 23 19:08:32.163778 systemd[1]: sshd@18-10.0.0.46:22-10.0.0.1:53966.service: Deactivated successfully. Jan 23 19:08:32.168223 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 19:08:32.189663 systemd-logind[1561]: Session 19 logged out. Waiting for processes to exit. Jan 23 19:08:32.215582 systemd-logind[1561]: Removed session 19. 
Jan 23 19:08:36.040454 kubelet[2882]: E0123 19:08:36.027775 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:08:37.193071 systemd[1]: Started sshd@19-10.0.0.46:22-10.0.0.1:45046.service - OpenSSH per-connection server daemon (10.0.0.1:45046). Jan 23 19:08:37.363521 sshd[4669]: Accepted publickey for core from 10.0.0.1 port 45046 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:08:37.364269 sshd-session[4669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:08:37.375675 systemd-logind[1561]: New session 20 of user core. Jan 23 19:08:37.381754 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 19:08:37.714292 sshd[4672]: Connection closed by 10.0.0.1 port 45046 Jan 23 19:08:37.715855 sshd-session[4669]: pam_unix(sshd:session): session closed for user core Jan 23 19:08:37.737345 systemd[1]: sshd@19-10.0.0.46:22-10.0.0.1:45046.service: Deactivated successfully. Jan 23 19:08:37.741521 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 19:08:37.752280 systemd-logind[1561]: Session 20 logged out. Waiting for processes to exit. Jan 23 19:08:37.754852 systemd-logind[1561]: Removed session 20. Jan 23 19:08:38.037034 kubelet[2882]: E0123 19:08:38.035079 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:08:42.750982 systemd[1]: Started sshd@20-10.0.0.46:22-10.0.0.1:45048.service - OpenSSH per-connection server daemon (10.0.0.1:45048). Jan 23 19:08:42.946972 sshd[4686]: Accepted publickey for core from 10.0.0.1 port 45048 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:08:42.954087 sshd-session[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:08:42.980181 systemd-logind[1561]: New session 21 of user core. Jan 23 19:08:42.996655 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 19:08:43.422244 sshd[4690]: Connection closed by 10.0.0.1 port 45048 Jan 23 19:08:43.426589 sshd-session[4686]: pam_unix(sshd:session): session closed for user core Jan 23 19:08:43.436677 systemd[1]: sshd@20-10.0.0.46:22-10.0.0.1:45048.service: Deactivated successfully. Jan 23 19:08:43.443042 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 19:08:43.452357 systemd-logind[1561]: Session 21 logged out. Waiting for processes to exit. Jan 23 19:08:43.457782 systemd-logind[1561]: Removed session 21. Jan 23 19:08:47.024670 kubelet[2882]: E0123 19:08:47.024294 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:08:48.047510 kubelet[2882]: E0123 19:08:48.046788 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:08:48.473279 systemd[1]: Started sshd@21-10.0.0.46:22-10.0.0.1:38400.service - OpenSSH per-connection server daemon (10.0.0.1:38400). 
Jan 23 19:08:48.661743 sshd[4706]: Accepted publickey for core from 10.0.0.1 port 38400 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:08:48.665058 sshd-session[4706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:08:48.716597 systemd-logind[1561]: New session 22 of user core. Jan 23 19:08:48.737045 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 19:08:49.271156 sshd[4709]: Connection closed by 10.0.0.1 port 38400 Jan 23 19:08:49.272751 sshd-session[4706]: pam_unix(sshd:session): session closed for user core Jan 23 19:08:49.284027 systemd[1]: sshd@21-10.0.0.46:22-10.0.0.1:38400.service: Deactivated successfully. Jan 23 19:08:49.288237 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 19:08:49.292845 systemd-logind[1561]: Session 22 logged out. Waiting for processes to exit. Jan 23 19:08:49.313950 systemd-logind[1561]: Removed session 22. Jan 23 19:08:52.024514 kubelet[2882]: E0123 19:08:52.024032 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:08:54.350028 systemd[1]: Started sshd@22-10.0.0.46:22-10.0.0.1:38412.service - OpenSSH per-connection server daemon (10.0.0.1:38412). Jan 23 19:08:54.727692 sshd[4725]: Accepted publickey for core from 10.0.0.1 port 38412 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:08:54.733340 sshd-session[4725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:08:54.779333 systemd-logind[1561]: New session 23 of user core. Jan 23 19:08:54.831067 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 19:08:55.410051 sshd[4728]: Connection closed by 10.0.0.1 port 38412 Jan 23 19:08:55.406947 sshd-session[4725]: pam_unix(sshd:session): session closed for user core Jan 23 19:08:55.434686 systemd[1]: sshd@22-10.0.0.46:22-10.0.0.1:38412.service: Deactivated successfully. Jan 23 19:08:55.449162 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 19:08:55.451326 systemd-logind[1561]: Session 23 logged out. Waiting for processes to exit. Jan 23 19:08:55.479322 systemd-logind[1561]: Removed session 23. Jan 23 19:09:00.466928 systemd[1]: Started sshd@23-10.0.0.46:22-10.0.0.1:43190.service - OpenSSH per-connection server daemon (10.0.0.1:43190). Jan 23 19:09:00.791206 sshd[4742]: Accepted publickey for core from 10.0.0.1 port 43190 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:09:00.801068 sshd-session[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:09:00.838094 systemd-logind[1561]: New session 24 of user core. Jan 23 19:09:00.882059 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 19:09:01.426941 sshd[4745]: Connection closed by 10.0.0.1 port 43190 Jan 23 19:09:01.427913 sshd-session[4742]: pam_unix(sshd:session): session closed for user core Jan 23 19:09:01.447970 systemd[1]: sshd@23-10.0.0.46:22-10.0.0.1:43190.service: Deactivated successfully. Jan 23 19:09:01.454741 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 19:09:01.459152 systemd-logind[1561]: Session 24 logged out. Waiting for processes to exit. Jan 23 19:09:01.471079 systemd-logind[1561]: Removed session 24. Jan 23 19:09:06.471112 systemd[1]: Started sshd@24-10.0.0.46:22-10.0.0.1:49888.service - OpenSSH per-connection server daemon (10.0.0.1:49888). 
Jan 23 19:09:06.763754 sshd[4762]: Accepted publickey for core from 10.0.0.1 port 49888 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:09:06.778981 sshd-session[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:09:06.833583 systemd-logind[1561]: New session 25 of user core. Jan 23 19:09:06.878138 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 19:09:07.736202 sshd[4765]: Connection closed by 10.0.0.1 port 49888 Jan 23 19:09:07.743000 sshd-session[4762]: pam_unix(sshd:session): session closed for user core Jan 23 19:09:07.765952 systemd[1]: sshd@24-10.0.0.46:22-10.0.0.1:49888.service: Deactivated successfully. Jan 23 19:09:07.816632 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 19:09:07.828719 systemd-logind[1561]: Session 25 logged out. Waiting for processes to exit. Jan 23 19:09:07.848976 systemd-logind[1561]: Removed session 25. Jan 23 19:09:10.033119 kubelet[2882]: E0123 19:09:10.030279 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:09:12.813824 systemd[1]: Started sshd@25-10.0.0.46:22-10.0.0.1:49894.service - OpenSSH per-connection server daemon (10.0.0.1:49894). Jan 23 19:09:13.146706 sshd[4780]: Accepted publickey for core from 10.0.0.1 port 49894 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:09:13.164501 sshd-session[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:09:13.277974 systemd-logind[1561]: New session 26 of user core. Jan 23 19:09:13.304791 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 19:09:13.850796 containerd[1579]: time="2026-01-23T19:09:13.844168028Z" level=warning msg="container event discarded" container=68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85 type=CONTAINER_CREATED_EVENT Jan 23 19:09:14.142623 containerd[1579]: time="2026-01-23T19:09:14.142024340Z" level=warning msg="container event discarded" container=68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85 type=CONTAINER_STARTED_EVENT Jan 23 19:09:14.157320 sshd[4785]: Connection closed by 10.0.0.1 port 49894 Jan 23 19:09:14.158852 sshd-session[4780]: pam_unix(sshd:session): session closed for user core Jan 23 19:09:14.188811 systemd[1]: sshd@25-10.0.0.46:22-10.0.0.1:49894.service: Deactivated successfully. Jan 23 19:09:14.209287 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 19:09:14.226940 systemd-logind[1561]: Session 26 logged out. Waiting for processes to exit. Jan 23 19:09:14.244785 systemd-logind[1561]: Removed session 26. 
Jan 23 19:09:14.590104 containerd[1579]: time="2026-01-23T19:09:14.589530944Z" level=warning msg="container event discarded" container=68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85 type=CONTAINER_STOPPED_EVENT Jan 23 19:09:15.004029 containerd[1579]: time="2026-01-23T19:09:15.003929120Z" level=warning msg="container event discarded" container=c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68 type=CONTAINER_CREATED_EVENT Jan 23 19:09:15.339618 containerd[1579]: time="2026-01-23T19:09:15.335821210Z" level=warning msg="container event discarded" container=c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68 type=CONTAINER_STARTED_EVENT Jan 23 19:09:15.730923 containerd[1579]: time="2026-01-23T19:09:15.726533949Z" level=warning msg="container event discarded" container=c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68 type=CONTAINER_STOPPED_EVENT Jan 23 19:09:16.839831 containerd[1579]: time="2026-01-23T19:09:16.835713823Z" level=warning msg="container event discarded" container=2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a type=CONTAINER_CREATED_EVENT Jan 23 19:09:17.277995 containerd[1579]: time="2026-01-23T19:09:17.277856524Z" level=warning msg="container event discarded" container=2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a type=CONTAINER_STARTED_EVENT Jan 23 19:09:17.773362 containerd[1579]: time="2026-01-23T19:09:17.773267172Z" level=warning msg="container event discarded" container=2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a type=CONTAINER_STOPPED_EVENT Jan 23 19:09:18.237764 containerd[1579]: time="2026-01-23T19:09:18.237531234Z" level=warning msg="container event discarded" container=55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315 type=CONTAINER_CREATED_EVENT Jan 23 19:09:18.429889 containerd[1579]: time="2026-01-23T19:09:18.422907080Z" level=warning msg="container event discarded" container=55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315 type=CONTAINER_STARTED_EVENT Jan 23 19:09:18.865833 containerd[1579]: time="2026-01-23T19:09:18.864176958Z" level=warning msg="container event discarded" container=887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1 type=CONTAINER_CREATED_EVENT Jan 23 19:09:19.127194 containerd[1579]: time="2026-01-23T19:09:19.126553043Z" level=warning msg="container event discarded" container=887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1 type=CONTAINER_STARTED_EVENT Jan 23 19:09:19.273758 containerd[1579]: time="2026-01-23T19:09:19.272968661Z" level=warning msg="container event discarded" container=887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1 type=CONTAINER_STOPPED_EVENT Jan 23 19:09:19.299020 systemd[1]: Started sshd@26-10.0.0.46:22-10.0.0.1:58740.service - OpenSSH per-connection server daemon (10.0.0.1:58740). Jan 23 19:09:19.611648 sshd[4800]: Accepted publickey for core from 10.0.0.1 port 58740 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:09:19.629351 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:09:19.668747 systemd-logind[1561]: New session 27 of user core. Jan 23 19:09:19.688955 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 23 19:09:19.973981 containerd[1579]: time="2026-01-23T19:09:19.973893205Z" level=warning msg="container event discarded" container=23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c type=CONTAINER_CREATED_EVENT Jan 23 19:09:20.254680 containerd[1579]: time="2026-01-23T19:09:20.253766483Z" level=warning msg="container event discarded" container=23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c type=CONTAINER_STARTED_EVENT Jan 23 19:09:20.485340 sshd[4803]: Connection closed by 10.0.0.1 port 58740 Jan 23 19:09:20.487885 sshd-session[4800]: pam_unix(sshd:session): session closed for user core Jan 23 19:09:20.546086 systemd[1]: sshd@26-10.0.0.46:22-10.0.0.1:58740.service: Deactivated successfully. Jan 23 19:09:20.560530 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 19:09:20.583983 systemd-logind[1561]: Session 27 logged out. Waiting for processes to exit. Jan 23 19:09:20.596216 systemd-logind[1561]: Removed session 27. Jan 23 19:09:25.574591 systemd[1]: Started sshd@27-10.0.0.46:22-10.0.0.1:57718.service - OpenSSH per-connection server daemon (10.0.0.1:57718). Jan 23 19:09:25.921209 sshd[4818]: Accepted publickey for core from 10.0.0.1 port 57718 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:09:25.927597 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:09:25.971524 systemd-logind[1561]: New session 28 of user core. Jan 23 19:09:25.992748 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 23 19:09:26.743593 sshd[4821]: Connection closed by 10.0.0.1 port 57718 Jan 23 19:09:26.745076 sshd-session[4818]: pam_unix(sshd:session): session closed for user core Jan 23 19:09:26.770186 systemd[1]: sshd@27-10.0.0.46:22-10.0.0.1:57718.service: Deactivated successfully. Jan 23 19:09:26.814242 systemd[1]: session-28.scope: Deactivated successfully. Jan 23 19:09:26.836939 systemd-logind[1561]: Session 28 logged out. Waiting for processes to exit. Jan 23 19:09:26.847974 systemd-logind[1561]: Removed session 28. Jan 23 19:09:27.030255 kubelet[2882]: E0123 19:09:27.029908 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:09:31.810980 systemd[1]: Started sshd@28-10.0.0.46:22-10.0.0.1:57724.service - OpenSSH per-connection server daemon (10.0.0.1:57724). Jan 23 19:09:31.984043 sshd[4837]: Accepted publickey for core from 10.0.0.1 port 57724 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:09:31.987304 sshd-session[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:09:32.026761 systemd-logind[1561]: New session 29 of user core. Jan 23 19:09:32.030301 kubelet[2882]: E0123 19:09:32.029517 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:09:32.041064 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 23 19:09:32.579565 sshd[4840]: Connection closed by 10.0.0.1 port 57724 Jan 23 19:09:32.585991 sshd-session[4837]: pam_unix(sshd:session): session closed for user core Jan 23 19:09:32.601836 systemd[1]: sshd@28-10.0.0.46:22-10.0.0.1:57724.service: Deactivated successfully. Jan 23 19:09:32.607611 systemd[1]: session-29.scope: Deactivated successfully. Jan 23 19:09:32.619316 systemd-logind[1561]: Session 29 logged out. 
Waiting for processes to exit. Jan 23 19:09:32.638127 systemd-logind[1561]: Removed session 29. Jan 23 19:09:37.603208 systemd[1]: Started sshd@29-10.0.0.46:22-10.0.0.1:36760.service - OpenSSH per-connection server daemon (10.0.0.1:36760). Jan 23 19:09:37.739997 sshd[4856]: Accepted publickey for core from 10.0.0.1 port 36760 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:09:37.744795 sshd-session[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:09:37.769214 systemd-logind[1561]: New session 30 of user core. Jan 23 19:09:37.785923 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 23 19:09:38.241929 sshd[4859]: Connection closed by 10.0.0.1 port 36760 Jan 23 19:09:38.240642 sshd-session[4856]: pam_unix(sshd:session): session closed for user core Jan 23 19:09:38.257004 systemd[1]: sshd@29-10.0.0.46:22-10.0.0.1:36760.service: Deactivated successfully. Jan 23 19:09:38.267077 systemd[1]: session-30.scope: Deactivated successfully. Jan 23 19:09:38.280144 systemd-logind[1561]: Session 30 logged out. Waiting for processes to exit. Jan 23 19:09:38.304739 systemd-logind[1561]: Removed session 30. Jan 23 19:09:43.291356 systemd[1]: Started sshd@30-10.0.0.46:22-10.0.0.1:36762.service - OpenSSH per-connection server daemon (10.0.0.1:36762). Jan 23 19:09:43.456093 sshd[4877]: Accepted publickey for core from 10.0.0.1 port 36762 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:09:43.466745 sshd-session[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:09:43.484194 systemd-logind[1561]: New session 31 of user core. Jan 23 19:09:43.505749 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 23 19:09:43.809572 sshd[4880]: Connection closed by 10.0.0.1 port 36762 Jan 23 19:09:43.811580 sshd-session[4877]: pam_unix(sshd:session): session closed for user core Jan 23 19:09:43.822240 systemd[1]: sshd@30-10.0.0.46:22-10.0.0.1:36762.service: Deactivated successfully. Jan 23 19:09:43.832519 systemd[1]: session-31.scope: Deactivated successfully. Jan 23 19:09:43.843295 systemd-logind[1561]: Session 31 logged out. Waiting for processes to exit. Jan 23 19:09:43.848286 systemd-logind[1561]: Removed session 31. Jan 23 19:09:48.890209 systemd[1]: Started sshd@31-10.0.0.46:22-10.0.0.1:51186.service - OpenSSH per-connection server daemon (10.0.0.1:51186). Jan 23 19:09:49.030959 kubelet[2882]: E0123 19:09:49.028839 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:09:49.073650 sshd[4895]: Accepted publickey for core from 10.0.0.1 port 51186 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:09:49.083253 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:09:49.127500 systemd-logind[1561]: New session 32 of user core. Jan 23 19:09:49.138319 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 23 19:09:49.513033 sshd[4898]: Connection closed by 10.0.0.1 port 51186 Jan 23 19:09:49.513949 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Jan 23 19:09:49.541559 systemd[1]: sshd@31-10.0.0.46:22-10.0.0.1:51186.service: Deactivated successfully. Jan 23 19:09:49.553038 systemd[1]: session-32.scope: Deactivated successfully. Jan 23 19:09:49.556163 systemd-logind[1561]: Session 32 logged out. Waiting for processes to exit. 
Jan 23 19:09:49.570148 systemd[1]: Started sshd@32-10.0.0.46:22-10.0.0.1:51188.service - OpenSSH per-connection server daemon (10.0.0.1:51188). Jan 23 19:09:49.576658 systemd-logind[1561]: Removed session 32. Jan 23 19:09:49.767082 sshd[4913]: Accepted publickey for core from 10.0.0.1 port 51188 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:09:49.771604 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:09:49.836243 systemd-logind[1561]: New session 33 of user core. Jan 23 19:09:49.876928 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 23 19:09:50.756858 sshd[4916]: Connection closed by 10.0.0.1 port 51188 Jan 23 19:09:50.758173 sshd-session[4913]: pam_unix(sshd:session): session closed for user core Jan 23 19:09:50.794216 systemd[1]: sshd@32-10.0.0.46:22-10.0.0.1:51188.service: Deactivated successfully. Jan 23 19:09:50.809203 systemd[1]: session-33.scope: Deactivated successfully. Jan 23 19:09:50.821351 systemd-logind[1561]: Session 33 logged out. Waiting for processes to exit. Jan 23 19:09:50.828260 systemd[1]: Started sshd@33-10.0.0.46:22-10.0.0.1:51196.service - OpenSSH per-connection server daemon (10.0.0.1:51196). Jan 23 19:09:50.855247 systemd-logind[1561]: Removed session 33. Jan 23 19:09:51.141632 sshd[4927]: Accepted publickey for core from 10.0.0.1 port 51196 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:09:51.144030 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:09:51.211321 systemd-logind[1561]: New session 34 of user core. Jan 23 19:09:51.251493 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 23 19:09:51.858771 sshd[4930]: Connection closed by 10.0.0.1 port 51196 Jan 23 19:09:51.859288 sshd-session[4927]: pam_unix(sshd:session): session closed for user core Jan 23 19:09:51.870861 systemd[1]: sshd@33-10.0.0.46:22-10.0.0.1:51196.service: Deactivated successfully. Jan 23 19:09:51.878597 systemd[1]: session-34.scope: Deactivated successfully. Jan 23 19:09:51.887525 systemd-logind[1561]: Session 34 logged out. Waiting for processes to exit. Jan 23 19:09:51.910321 systemd-logind[1561]: Removed session 34. Jan 23 19:09:54.027942 kubelet[2882]: E0123 19:09:54.026294 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:09:56.918876 systemd[1]: Started sshd@34-10.0.0.46:22-10.0.0.1:49818.service - OpenSSH per-connection server daemon (10.0.0.1:49818). Jan 23 19:09:57.257285 sshd[4944]: Accepted publickey for core from 10.0.0.1 port 49818 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:09:57.267628 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:09:57.346253 systemd-logind[1561]: New session 35 of user core. Jan 23 19:09:57.387914 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 23 19:09:57.899106 sshd[4948]: Connection closed by 10.0.0.1 port 49818 Jan 23 19:09:57.899919 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Jan 23 19:09:57.914837 systemd[1]: sshd@34-10.0.0.46:22-10.0.0.1:49818.service: Deactivated successfully. Jan 23 19:09:57.920170 systemd[1]: session-35.scope: Deactivated successfully. Jan 23 19:09:57.937925 systemd-logind[1561]: Session 35 logged out. Waiting for processes to exit. 
Jan 23 19:09:57.958946 systemd-logind[1561]: Removed session 35. Jan 23 19:10:00.343921 containerd[1579]: time="2026-01-23T19:10:00.340917452Z" level=warning msg="container event discarded" container=b91f967f411104c927f9db43647ca137ebb94085d9247a15f5f23b77c49a0173 type=CONTAINER_CREATED_EVENT Jan 23 19:10:00.343921 containerd[1579]: time="2026-01-23T19:10:00.341590932Z" level=warning msg="container event discarded" container=b91f967f411104c927f9db43647ca137ebb94085d9247a15f5f23b77c49a0173 type=CONTAINER_STARTED_EVENT Jan 23 19:10:00.476656 containerd[1579]: time="2026-01-23T19:10:00.476223726Z" level=warning msg="container event discarded" container=01e3fb4cee10478b9426d492482663cc507ca64ced3014440ddcd91c3a6f4df9 type=CONTAINER_CREATED_EVENT Jan 23 19:10:00.476656 containerd[1579]: time="2026-01-23T19:10:00.476290871Z" level=warning msg="container event discarded" container=4e622c36184dbef2cd9a25a6142de1d327cff7cc18e99891c26c6d593c85c857 type=CONTAINER_CREATED_EVENT Jan 23 19:10:00.476656 containerd[1579]: time="2026-01-23T19:10:00.476305008Z" level=warning msg="container event discarded" container=4e622c36184dbef2cd9a25a6142de1d327cff7cc18e99891c26c6d593c85c857 type=CONTAINER_STARTED_EVENT Jan 23 19:10:00.693056 containerd[1579]: time="2026-01-23T19:10:00.692233687Z" level=warning msg="container event discarded" container=519f0b2437222c861b671f49866feeae1212eb10e1bcd228bf900010026d0c8d type=CONTAINER_CREATED_EVENT Jan 23 19:10:01.326964 containerd[1579]: time="2026-01-23T19:10:01.325044597Z" level=warning msg="container event discarded" container=01e3fb4cee10478b9426d492482663cc507ca64ced3014440ddcd91c3a6f4df9 type=CONTAINER_STARTED_EVENT Jan 23 19:10:01.446812 containerd[1579]: time="2026-01-23T19:10:01.446645182Z" level=warning msg="container event discarded" container=519f0b2437222c861b671f49866feeae1212eb10e1bcd228bf900010026d0c8d type=CONTAINER_STARTED_EVENT Jan 23 19:10:02.946069 systemd[1]: Started sshd@35-10.0.0.46:22-10.0.0.1:49834.service - OpenSSH per-connection server daemon (10.0.0.1:49834). Jan 23 19:10:03.123027 sshd[4963]: Accepted publickey for core from 10.0.0.1 port 49834 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:10:03.127318 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:10:03.170110 systemd-logind[1561]: New session 36 of user core. Jan 23 19:10:03.201132 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 23 19:10:03.677005 sshd[4966]: Connection closed by 10.0.0.1 port 49834 Jan 23 19:10:03.678168 sshd-session[4963]: pam_unix(sshd:session): session closed for user core Jan 23 19:10:03.698940 systemd[1]: sshd@35-10.0.0.46:22-10.0.0.1:49834.service: Deactivated successfully. Jan 23 19:10:03.704139 systemd[1]: session-36.scope: Deactivated successfully. Jan 23 19:10:03.708986 systemd-logind[1561]: Session 36 logged out. Waiting for processes to exit. Jan 23 19:10:03.718895 systemd-logind[1561]: Removed session 36. 
Jan 23 19:10:04.030610 kubelet[2882]: E0123 19:10:04.027986 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:10:08.041303 kubelet[2882]: E0123 19:10:08.036905 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:10:08.817262 systemd[1]: Started sshd@36-10.0.0.46:22-10.0.0.1:57808.service - OpenSSH per-connection server daemon (10.0.0.1:57808). Jan 23 19:10:09.161838 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 57808 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:10:09.170851 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:10:09.225257 systemd-logind[1561]: New session 37 of user core. Jan 23 19:10:09.240047 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 23 19:10:09.763063 sshd[4986]: Connection closed by 10.0.0.1 port 57808 Jan 23 19:10:09.765233 sshd-session[4982]: pam_unix(sshd:session): session closed for user core Jan 23 19:10:09.778578 systemd[1]: sshd@36-10.0.0.46:22-10.0.0.1:57808.service: Deactivated successfully. Jan 23 19:10:09.790031 systemd[1]: session-37.scope: Deactivated successfully. Jan 23 19:10:09.793272 systemd-logind[1561]: Session 37 logged out. Waiting for processes to exit. Jan 23 19:10:09.799966 systemd-logind[1561]: Removed session 37. Jan 23 19:10:14.783966 systemd[1]: Started sshd@37-10.0.0.46:22-10.0.0.1:56040.service - OpenSSH per-connection server daemon (10.0.0.1:56040). Jan 23 19:10:14.876936 sshd[5002]: Accepted publickey for core from 10.0.0.1 port 56040 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:10:14.879798 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:10:14.895354 systemd-logind[1561]: New session 38 of user core. Jan 23 19:10:14.909794 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 23 19:10:15.145336 sshd[5005]: Connection closed by 10.0.0.1 port 56040 Jan 23 19:10:15.145869 sshd-session[5002]: pam_unix(sshd:session): session closed for user core Jan 23 19:10:15.163805 systemd[1]: sshd@37-10.0.0.46:22-10.0.0.1:56040.service: Deactivated successfully. Jan 23 19:10:15.169988 systemd[1]: session-38.scope: Deactivated successfully. Jan 23 19:10:15.172173 systemd-logind[1561]: Session 38 logged out. Waiting for processes to exit. Jan 23 19:10:15.175159 systemd-logind[1561]: Removed session 38. Jan 23 19:10:16.028485 kubelet[2882]: E0123 19:10:16.028231 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:10:20.166055 systemd[1]: Started sshd@38-10.0.0.46:22-10.0.0.1:56056.service - OpenSSH per-connection server daemon (10.0.0.1:56056). Jan 23 19:10:20.248928 sshd[5018]: Accepted publickey for core from 10.0.0.1 port 56056 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:10:20.251437 sshd-session[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:10:20.259750 systemd-logind[1561]: New session 39 of user core. Jan 23 19:10:20.268734 systemd[1]: Started session-39.scope - Session 39 of User core. 
Jan 23 19:10:20.477233 sshd[5021]: Connection closed by 10.0.0.1 port 56056 Jan 23 19:10:20.479888 sshd-session[5018]: pam_unix(sshd:session): session closed for user core Jan 23 19:10:20.505724 systemd[1]: sshd@38-10.0.0.46:22-10.0.0.1:56056.service: Deactivated successfully. Jan 23 19:10:20.509545 systemd[1]: session-39.scope: Deactivated successfully. Jan 23 19:10:20.512864 systemd-logind[1561]: Session 39 logged out. Waiting for processes to exit. Jan 23 19:10:20.516506 systemd[1]: Started sshd@39-10.0.0.46:22-10.0.0.1:56068.service - OpenSSH per-connection server daemon (10.0.0.1:56068). Jan 23 19:10:20.518564 systemd-logind[1561]: Removed session 39. Jan 23 19:10:20.599315 sshd[5035]: Accepted publickey for core from 10.0.0.1 port 56068 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:10:20.602000 sshd-session[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:10:20.615153 systemd-logind[1561]: New session 40 of user core. Jan 23 19:10:20.622963 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 23 19:10:21.226485 sshd[5041]: Connection closed by 10.0.0.1 port 56068 Jan 23 19:10:21.227872 sshd-session[5035]: pam_unix(sshd:session): session closed for user core Jan 23 19:10:21.251858 systemd[1]: sshd@39-10.0.0.46:22-10.0.0.1:56068.service: Deactivated successfully. Jan 23 19:10:21.257718 systemd[1]: session-40.scope: Deactivated successfully. Jan 23 19:10:21.259857 systemd-logind[1561]: Session 40 logged out. Waiting for processes to exit. Jan 23 19:10:21.270889 systemd[1]: Started sshd@40-10.0.0.46:22-10.0.0.1:56074.service - OpenSSH per-connection server daemon (10.0.0.1:56074). Jan 23 19:10:21.273083 systemd-logind[1561]: Removed session 40. Jan 23 19:10:21.436760 sshd[5053]: Accepted publickey for core from 10.0.0.1 port 56074 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:10:21.437927 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:10:21.452959 systemd-logind[1561]: New session 41 of user core. Jan 23 19:10:21.459755 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 23 19:10:22.037761 kubelet[2882]: E0123 19:10:22.035955 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:10:22.601439 sshd[5056]: Connection closed by 10.0.0.1 port 56074 Jan 23 19:10:22.601815 sshd-session[5053]: pam_unix(sshd:session): session closed for user core Jan 23 19:10:22.628318 systemd[1]: sshd@40-10.0.0.46:22-10.0.0.1:56074.service: Deactivated successfully. Jan 23 19:10:22.633854 systemd[1]: session-41.scope: Deactivated successfully. Jan 23 19:10:22.640580 systemd-logind[1561]: Session 41 logged out. Waiting for processes to exit. Jan 23 19:10:22.646933 systemd[1]: Started sshd@41-10.0.0.46:22-10.0.0.1:56076.service - OpenSSH per-connection server daemon (10.0.0.1:56076). Jan 23 19:10:22.656905 systemd-logind[1561]: Removed session 41. Jan 23 19:10:22.762096 sshd[5075]: Accepted publickey for core from 10.0.0.1 port 56076 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:10:22.767169 sshd-session[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:10:22.783258 systemd-logind[1561]: New session 42 of user core. Jan 23 19:10:22.799630 systemd[1]: Started session-42.scope - Session 42 of User core. 
Jan 23 19:10:23.391341 sshd[5078]: Connection closed by 10.0.0.1 port 56076 Jan 23 19:10:23.392996 sshd-session[5075]: pam_unix(sshd:session): session closed for user core Jan 23 19:10:23.413067 systemd[1]: sshd@41-10.0.0.46:22-10.0.0.1:56076.service: Deactivated successfully. Jan 23 19:10:23.419549 systemd[1]: session-42.scope: Deactivated successfully. Jan 23 19:10:23.425337 systemd-logind[1561]: Session 42 logged out. Waiting for processes to exit. Jan 23 19:10:23.431503 systemd[1]: Started sshd@42-10.0.0.46:22-10.0.0.1:56092.service - OpenSSH per-connection server daemon (10.0.0.1:56092). Jan 23 19:10:23.434540 systemd-logind[1561]: Removed session 42. Jan 23 19:10:23.538510 sshd[5090]: Accepted publickey for core from 10.0.0.1 port 56092 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:10:23.541096 sshd-session[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:10:23.563625 systemd-logind[1561]: New session 43 of user core. Jan 23 19:10:23.576059 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 23 19:10:23.860626 sshd[5093]: Connection closed by 10.0.0.1 port 56092 Jan 23 19:10:23.861048 sshd-session[5090]: pam_unix(sshd:session): session closed for user core Jan 23 19:10:23.870343 systemd[1]: sshd@42-10.0.0.46:22-10.0.0.1:56092.service: Deactivated successfully. Jan 23 19:10:23.877105 systemd[1]: session-43.scope: Deactivated successfully. Jan 23 19:10:23.883084 systemd-logind[1561]: Session 43 logged out. Waiting for processes to exit. Jan 23 19:10:23.895502 systemd-logind[1561]: Removed session 43. Jan 23 19:10:28.900265 systemd[1]: Started sshd@43-10.0.0.46:22-10.0.0.1:54994.service - OpenSSH per-connection server daemon (10.0.0.1:54994). Jan 23 19:10:29.056298 sshd[5107]: Accepted publickey for core from 10.0.0.1 port 54994 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:10:29.054522 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:10:29.081825 systemd-logind[1561]: New session 44 of user core. Jan 23 19:10:29.110530 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 23 19:10:29.462196 sshd[5110]: Connection closed by 10.0.0.1 port 54994 Jan 23 19:10:29.462861 sshd-session[5107]: pam_unix(sshd:session): session closed for user core Jan 23 19:10:29.473730 systemd[1]: sshd@43-10.0.0.46:22-10.0.0.1:54994.service: Deactivated successfully. Jan 23 19:10:29.479927 systemd[1]: session-44.scope: Deactivated successfully. Jan 23 19:10:29.482964 systemd-logind[1561]: Session 44 logged out. Waiting for processes to exit. Jan 23 19:10:29.488517 systemd-logind[1561]: Removed session 44. Jan 23 19:10:33.028911 kubelet[2882]: E0123 19:10:33.026861 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:10:34.520724 systemd[1]: Started sshd@44-10.0.0.46:22-10.0.0.1:51670.service - OpenSSH per-connection server daemon (10.0.0.1:51670). Jan 23 19:10:34.631297 sshd[5124]: Accepted publickey for core from 10.0.0.1 port 51670 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:10:34.634298 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:10:34.646091 systemd-logind[1561]: New session 45 of user core. Jan 23 19:10:34.656146 systemd[1]: Started session-45.scope - Session 45 of User core. 
Jan 23 19:10:34.881038 sshd[5127]: Connection closed by 10.0.0.1 port 51670 Jan 23 19:10:34.882473 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Jan 23 19:10:34.892787 systemd[1]: sshd@44-10.0.0.46:22-10.0.0.1:51670.service: Deactivated successfully. Jan 23 19:10:34.906572 systemd[1]: session-45.scope: Deactivated successfully. Jan 23 19:10:34.910590 systemd-logind[1561]: Session 45 logged out. Waiting for processes to exit. Jan 23 19:10:34.919563 systemd-logind[1561]: Removed session 45. Jan 23 19:10:39.930502 systemd[1]: Started sshd@45-10.0.0.46:22-10.0.0.1:51684.service - OpenSSH per-connection server daemon (10.0.0.1:51684). Jan 23 19:10:40.127982 sshd[5142]: Accepted publickey for core from 10.0.0.1 port 51684 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:10:40.131741 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:10:40.161825 systemd-logind[1561]: New session 46 of user core. Jan 23 19:10:40.185217 systemd[1]: Started session-46.scope - Session 46 of User core. Jan 23 19:10:40.633846 sshd[5145]: Connection closed by 10.0.0.1 port 51684 Jan 23 19:10:40.635353 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Jan 23 19:10:40.648817 systemd[1]: sshd@45-10.0.0.46:22-10.0.0.1:51684.service: Deactivated successfully. Jan 23 19:10:40.653008 systemd[1]: session-46.scope: Deactivated successfully. Jan 23 19:10:40.656266 systemd-logind[1561]: Session 46 logged out. Waiting for processes to exit. Jan 23 19:10:40.668766 systemd-logind[1561]: Removed session 46. Jan 23 19:10:45.652993 systemd[1]: Started sshd@46-10.0.0.46:22-10.0.0.1:50950.service - OpenSSH per-connection server daemon (10.0.0.1:50950). Jan 23 19:10:45.737853 sshd[5163]: Accepted publickey for core from 10.0.0.1 port 50950 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:10:45.740022 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:10:45.749299 systemd-logind[1561]: New session 47 of user core. Jan 23 19:10:45.755937 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 23 19:10:45.961964 sshd[5166]: Connection closed by 10.0.0.1 port 50950 Jan 23 19:10:45.962783 sshd-session[5163]: pam_unix(sshd:session): session closed for user core Jan 23 19:10:45.977952 systemd[1]: sshd@46-10.0.0.46:22-10.0.0.1:50950.service: Deactivated successfully. Jan 23 19:10:45.981990 systemd[1]: session-47.scope: Deactivated successfully. Jan 23 19:10:45.985028 systemd-logind[1561]: Session 47 logged out. Waiting for processes to exit. Jan 23 19:10:45.989263 systemd-logind[1561]: Removed session 47. Jan 23 19:10:50.987972 systemd[1]: Started sshd@47-10.0.0.46:22-10.0.0.1:50966.service - OpenSSH per-connection server daemon (10.0.0.1:50966). Jan 23 19:10:51.107756 sshd[5179]: Accepted publickey for core from 10.0.0.1 port 50966 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:10:51.110358 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:10:51.123334 systemd-logind[1561]: New session 48 of user core. Jan 23 19:10:51.136964 systemd[1]: Started session-48.scope - Session 48 of User core. 
Jan 23 19:10:51.367230 sshd[5182]: Connection closed by 10.0.0.1 port 50966 Jan 23 19:10:51.367609 sshd-session[5179]: pam_unix(sshd:session): session closed for user core Jan 23 19:10:51.376016 systemd[1]: sshd@47-10.0.0.46:22-10.0.0.1:50966.service: Deactivated successfully. Jan 23 19:10:51.380356 systemd[1]: session-48.scope: Deactivated successfully. Jan 23 19:10:51.383046 systemd-logind[1561]: Session 48 logged out. Waiting for processes to exit. Jan 23 19:10:51.386197 systemd-logind[1561]: Removed session 48. Jan 23 19:10:55.024852 kubelet[2882]: E0123 19:10:55.024714 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:10:56.386913 systemd[1]: Started sshd@48-10.0.0.46:22-10.0.0.1:53252.service - OpenSSH per-connection server daemon (10.0.0.1:53252). Jan 23 19:10:56.470668 sshd[5196]: Accepted publickey for core from 10.0.0.1 port 53252 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:10:56.473221 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:10:56.484087 systemd-logind[1561]: New session 49 of user core. Jan 23 19:10:56.498168 systemd[1]: Started session-49.scope - Session 49 of User core. Jan 23 19:10:56.676135 sshd[5199]: Connection closed by 10.0.0.1 port 53252 Jan 23 19:10:56.676593 sshd-session[5196]: pam_unix(sshd:session): session closed for user core Jan 23 19:10:56.695725 systemd[1]: sshd@48-10.0.0.46:22-10.0.0.1:53252.service: Deactivated successfully. Jan 23 19:10:56.699355 systemd[1]: session-49.scope: Deactivated successfully. Jan 23 19:10:56.702574 systemd-logind[1561]: Session 49 logged out. Waiting for processes to exit. Jan 23 19:10:56.707911 systemd[1]: Started sshd@49-10.0.0.46:22-10.0.0.1:53268.service - OpenSSH per-connection server daemon (10.0.0.1:53268). Jan 23 19:10:56.711298 systemd-logind[1561]: Removed session 49. Jan 23 19:10:56.795558 sshd[5212]: Accepted publickey for core from 10.0.0.1 port 53268 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:10:56.797900 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:10:56.809994 systemd-logind[1561]: New session 50 of user core. Jan 23 19:10:56.818676 systemd[1]: Started session-50.scope - Session 50 of User core. Jan 23 19:10:59.672999 containerd[1579]: time="2026-01-23T19:10:59.672794763Z" level=info msg="StopContainer for \"5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8\" with timeout 30 (s)" Jan 23 19:10:59.676829 containerd[1579]: time="2026-01-23T19:10:59.676793125Z" level=info msg="Stop container \"5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8\" with signal terminated" Jan 23 19:10:59.808677 systemd[1]: cri-containerd-5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8.scope: Deactivated successfully. Jan 23 19:10:59.840591 systemd[1]: cri-containerd-5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8.scope: Consumed 2.532s CPU time, 25.4M memory peak, 4K written to disk. 
Jan 23 19:11:00.488157 containerd[1579]: time="2026-01-23T19:11:00.487664686Z" level=info msg="received container exit event container_id:\"5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8\" id:\"5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8\" pid:4428 exited_at:{seconds:1769195459 nanos:836928574}" Jan 23 19:11:00.721708 containerd[1579]: time="2026-01-23T19:11:00.721051620Z" level=info msg="StopContainer for \"23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c\" with timeout 2 (s)" Jan 23 19:11:00.874641 containerd[1579]: time="2026-01-23T19:11:00.854328291Z" level=info msg="Stop container \"23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c\" with signal terminated" Jan 23 19:11:01.457800 containerd[1579]: time="2026-01-23T19:11:01.456780833Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 19:11:01.760643 systemd-networkd[1472]: lxc_health: Link DOWN Jan 23 19:11:01.763664 systemd-networkd[1472]: lxc_health: Lost carrier Jan 23 19:11:01.960974 systemd[1]: cri-containerd-23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c.scope: Deactivated successfully. Jan 23 19:11:01.963150 systemd[1]: cri-containerd-23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c.scope: Consumed 27.844s CPU time, 128.2M memory peak, 284K read from disk, 13.3M written to disk. Jan 23 19:11:02.138632 containerd[1579]: time="2026-01-23T19:11:02.138095255Z" level=info msg="received container exit event container_id:\"23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c\" id:\"23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c\" pid:3506 exited_at:{seconds:1769195462 nanos:134215574}" Jan 23 19:11:03.246032 sshd[5215]: Connection closed by 10.0.0.1 port 53268 Jan 23 19:11:03.255729 sshd-session[5212]: pam_unix(sshd:session): session closed for user core Jan 23 19:11:03.270976 systemd[1]: Started sshd@50-10.0.0.46:22-10.0.0.1:53274.service - OpenSSH per-connection server daemon (10.0.0.1:53274). Jan 23 19:11:03.281575 systemd[1]: sshd@49-10.0.0.46:22-10.0.0.1:53268.service: Deactivated successfully. Jan 23 19:11:03.286885 systemd[1]: session-50.scope: Deactivated successfully. Jan 23 19:11:03.287220 systemd[1]: session-50.scope: Consumed 1.900s CPU time, 28.4M memory peak. Jan 23 19:11:03.307291 systemd-logind[1561]: Session 50 logged out. Waiting for processes to exit. Jan 23 19:11:03.313890 systemd-logind[1561]: Removed session 50. Jan 23 19:11:03.720729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8-rootfs.mount: Deactivated successfully. 
Jan 23 19:11:03.761668 containerd[1579]: time="2026-01-23T19:11:03.760024586Z" level=info msg="Kill container \"23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c\"" Jan 23 19:11:03.787311 containerd[1579]: time="2026-01-23T19:11:03.787150495Z" level=info msg="StopContainer for \"5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8\" returns successfully" Jan 23 19:11:03.857337 containerd[1579]: time="2026-01-23T19:11:03.856864381Z" level=info msg="StopPodSandbox for \"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\"" Jan 23 19:11:03.861507 containerd[1579]: time="2026-01-23T19:11:03.861078231Z" level=info msg="Container to stop \"55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:11:03.861981 containerd[1579]: time="2026-01-23T19:11:03.861846743Z" level=info msg="Container to stop \"5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:11:03.865810 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 53274 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:11:03.870916 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:11:03.985926 systemd-logind[1561]: New session 51 of user core. Jan 23 19:11:03.987914 systemd[1]: cri-containerd-3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc.scope: Deactivated successfully. Jan 23 19:11:04.113670 systemd[1]: Started session-51.scope - Session 51 of User core. Jan 23 19:11:04.146085 containerd[1579]: time="2026-01-23T19:11:04.145640549Z" level=info msg="received sandbox exit event container_id:\"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\" id:\"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\" exit_status:137 exited_at:{seconds:1769195464 nanos:113928803}" monitor_name=podsandbox Jan 23 19:11:04.251062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c-rootfs.mount: Deactivated successfully. 
Jan 23 19:11:04.370730 containerd[1579]: time="2026-01-23T19:11:04.370328897Z" level=info msg="StopContainer for \"23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c\" returns successfully" Jan 23 19:11:04.414718 containerd[1579]: time="2026-01-23T19:11:04.410110001Z" level=info msg="StopPodSandbox for \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\"" Jan 23 19:11:04.428732 containerd[1579]: time="2026-01-23T19:11:04.428557572Z" level=info msg="Container to stop \"2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:11:04.429213 containerd[1579]: time="2026-01-23T19:11:04.429185604Z" level=info msg="Container to stop \"23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:11:04.429326 containerd[1579]: time="2026-01-23T19:11:04.429305415Z" level=info msg="Container to stop \"c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:11:04.429599 containerd[1579]: time="2026-01-23T19:11:04.429572550Z" level=info msg="Container to stop \"887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:11:04.429894 containerd[1579]: time="2026-01-23T19:11:04.429871193Z" level=info msg="Container to stop \"68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:11:04.436756 kubelet[2882]: I0123 19:11:04.435826 2882 scope.go:117] "RemoveContainer" containerID="55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315" Jan 23 19:11:04.509943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc-rootfs.mount: Deactivated successfully. Jan 23 19:11:04.524917 containerd[1579]: time="2026-01-23T19:11:04.524731189Z" level=info msg="RemoveContainer for \"55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315\"" Jan 23 19:11:04.530047 systemd[1]: cri-containerd-653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f.scope: Deactivated successfully. 
Jan 23 19:11:04.533060 containerd[1579]: time="2026-01-23T19:11:04.532833314Z" level=info msg="shim disconnected" id=3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc namespace=k8s.io Jan 23 19:11:04.533060 containerd[1579]: time="2026-01-23T19:11:04.532917129Z" level=warning msg="cleaning up after shim disconnected" id=3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc namespace=k8s.io Jan 23 19:11:04.533060 containerd[1579]: time="2026-01-23T19:11:04.532932649Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 19:11:04.618106 containerd[1579]: time="2026-01-23T19:11:04.617766921Z" level=info msg="received sandbox exit event container_id:\"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" id:\"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" exit_status:137 exited_at:{seconds:1769195464 nanos:614932807}" monitor_name=podsandbox Jan 23 19:11:04.915126 containerd[1579]: time="2026-01-23T19:11:04.914975566Z" level=info msg="RemoveContainer for \"55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315\" returns successfully" Jan 23 19:11:05.018294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc-shm.mount: Deactivated successfully. Jan 23 19:11:05.025626 containerd[1579]: time="2026-01-23T19:11:05.024774317Z" level=info msg="received sandbox container exit event sandbox_id:\"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\" exit_status:137 exited_at:{seconds:1769195464 nanos:113928803}" monitor_name=criService Jan 23 19:11:05.027643 containerd[1579]: time="2026-01-23T19:11:05.027594981Z" level=info msg="TearDown network for sandbox \"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\" successfully" Jan 23 19:11:05.027810 containerd[1579]: time="2026-01-23T19:11:05.027787668Z" level=info msg="StopPodSandbox for \"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\" returns successfully" Jan 23 19:11:05.163868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f-rootfs.mount: Deactivated successfully. 
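Both sandbox exit events in this stretch report exit_status:137, the conventional encoding for termination by signal: statuses above 128 are 128 plus the signal number, so 137 means SIGKILL. A small Go sketch of that decoding:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	const exitStatus = 137 // as reported in the sandbox exit events above
	if exitStatus > 128 {
		sig := syscall.Signal(exitStatus - 128)
		fmt.Printf("terminated by signal %d (%v)\n", int(sig), sig) // signal 9 (killed)
	}
}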
Jan 23 19:11:05.185100 containerd[1579]: time="2026-01-23T19:11:05.184953971Z" level=info msg="shim disconnected" id=653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f namespace=k8s.io Jan 23 19:11:05.186615 containerd[1579]: time="2026-01-23T19:11:05.185920619Z" level=warning msg="cleaning up after shim disconnected" id=653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f namespace=k8s.io Jan 23 19:11:05.186615 containerd[1579]: time="2026-01-23T19:11:05.185957809Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 19:11:05.282286 kubelet[2882]: I0123 19:11:05.281915 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwtlq\" (UniqueName: \"kubernetes.io/projected/e82389fa-abf8-4ec1-a948-7ebb9c7c3a00-kube-api-access-xwtlq\") pod \"e82389fa-abf8-4ec1-a948-7ebb9c7c3a00\" (UID: \"e82389fa-abf8-4ec1-a948-7ebb9c7c3a00\") " Jan 23 19:11:05.282286 kubelet[2882]: I0123 19:11:05.282194 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e82389fa-abf8-4ec1-a948-7ebb9c7c3a00-cilium-config-path\") pod \"e82389fa-abf8-4ec1-a948-7ebb9c7c3a00\" (UID: \"e82389fa-abf8-4ec1-a948-7ebb9c7c3a00\") " Jan 23 19:11:05.926631 systemd[1]: var-lib-kubelet-pods-e82389fa\x2dabf8\x2d4ec1\x2da948\x2d7ebb9c7c3a00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxwtlq.mount: Deactivated successfully. Jan 23 19:11:06.719267 kubelet[2882]: I0123 19:11:06.718804 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e82389fa-abf8-4ec1-a948-7ebb9c7c3a00-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e82389fa-abf8-4ec1-a948-7ebb9c7c3a00" (UID: "e82389fa-abf8-4ec1-a948-7ebb9c7c3a00"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 19:11:06.726568 containerd[1579]: time="2026-01-23T19:11:06.724750626Z" level=info msg="received sandbox container exit event sandbox_id:\"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" exit_status:137 exited_at:{seconds:1769195464 nanos:614932807}" monitor_name=criService Jan 23 19:11:06.734794 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f-shm.mount: Deactivated successfully. Jan 23 19:11:06.736864 containerd[1579]: time="2026-01-23T19:11:06.736033980Z" level=info msg="TearDown network for sandbox \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" successfully" Jan 23 19:11:06.737028 containerd[1579]: time="2026-01-23T19:11:06.736997151Z" level=info msg="StopPodSandbox for \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" returns successfully" Jan 23 19:11:06.749818 kubelet[2882]: E0123 19:11:06.749716 2882 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 19:11:06.831318 kubelet[2882]: I0123 19:11:06.828061 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e82389fa-abf8-4ec1-a948-7ebb9c7c3a00-kube-api-access-xwtlq" (OuterVolumeSpecName: "kube-api-access-xwtlq") pod "e82389fa-abf8-4ec1-a948-7ebb9c7c3a00" (UID: "e82389fa-abf8-4ec1-a948-7ebb9c7c3a00"). InnerVolumeSpecName "kube-api-access-xwtlq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:11:06.831318 kubelet[2882]: I0123 19:11:06.847337 2882 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e82389fa-abf8-4ec1-a948-7ebb9c7c3a00-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 23 19:11:06.956336 kubelet[2882]: I0123 19:11:06.955987 2882 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xwtlq\" (UniqueName: \"kubernetes.io/projected/e82389fa-abf8-4ec1-a948-7ebb9c7c3a00-kube-api-access-xwtlq\") on node \"localhost\" DevicePath \"\"" Jan 23 19:11:07.537662 kubelet[2882]: I0123 19:11:07.536312 2882 scope.go:117] "RemoveContainer" containerID="5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8" Jan 23 19:11:07.582542 systemd[1]: Removed slice kubepods-besteffort-pode82389fa_abf8_4ec1_a948_7ebb9c7c3a00.slice - libcontainer container kubepods-besteffort-pode82389fa_abf8_4ec1_a948_7ebb9c7c3a00.slice. Jan 23 19:11:07.582810 systemd[1]: kubepods-besteffort-pode82389fa_abf8_4ec1_a948_7ebb9c7c3a00.slice: Consumed 4.873s CPU time, 31.7M memory peak, 8K written to disk. Jan 23 19:11:07.643172 containerd[1579]: time="2026-01-23T19:11:07.643044120Z" level=info msg="RemoveContainer for \"5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8\"" Jan 23 19:11:07.667296 kubelet[2882]: I0123 19:11:07.666180 2882 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T19:11:07Z","lastTransitionTime":"2026-01-23T19:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 23 19:11:07.777001 containerd[1579]: time="2026-01-23T19:11:07.774708117Z" level=info msg="RemoveContainer for \"5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8\" returns successfully" Jan 23 19:11:07.822777 kubelet[2882]: I0123 19:11:07.809148 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea52bfa2-d943-454b-9545-cb748c071c83-hubble-tls\") pod \"ea52bfa2-d943-454b-9545-cb748c071c83\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " Jan 23 19:11:07.827814 kubelet[2882]: I0123 19:11:07.823919 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea52bfa2-d943-454b-9545-cb748c071c83-clustermesh-secrets\") pod \"ea52bfa2-d943-454b-9545-cb748c071c83\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " Jan 23 19:11:07.827814 kubelet[2882]: I0123 19:11:07.824046 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-cilium-run\") pod \"ea52bfa2-d943-454b-9545-cb748c071c83\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " Jan 23 19:11:07.827814 kubelet[2882]: I0123 19:11:07.824074 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-bpf-maps\") pod \"ea52bfa2-d943-454b-9545-cb748c071c83\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " Jan 23 19:11:07.827814 kubelet[2882]: I0123 19:11:07.824103 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-lib-modules\") pod \"ea52bfa2-d943-454b-9545-cb748c071c83\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " Jan 23 19:11:07.827814 kubelet[2882]: I0123 19:11:07.824137 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhv6k\" (UniqueName: \"kubernetes.io/projected/ea52bfa2-d943-454b-9545-cb748c071c83-kube-api-access-zhv6k\") pod \"ea52bfa2-d943-454b-9545-cb748c071c83\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " Jan 23 19:11:07.827814 kubelet[2882]: I0123 19:11:07.824165 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-host-proc-sys-net\") pod \"ea52bfa2-d943-454b-9545-cb748c071c83\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " Jan 23 19:11:07.828148 kubelet[2882]: I0123 19:11:07.824189 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-hostproc\") pod \"ea52bfa2-d943-454b-9545-cb748c071c83\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " Jan 23 19:11:07.828148 kubelet[2882]: I0123 19:11:07.824222 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea52bfa2-d943-454b-9545-cb748c071c83-cilium-config-path\") pod \"ea52bfa2-d943-454b-9545-cb748c071c83\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " Jan 23 19:11:07.828148 kubelet[2882]: I0123 19:11:07.824333 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-host-proc-sys-kernel\") pod \"ea52bfa2-d943-454b-9545-cb748c071c83\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " Jan 23 19:11:07.839630 kubelet[2882]: I0123 19:11:07.824365 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-xtables-lock\") pod \"ea52bfa2-d943-454b-9545-cb748c071c83\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " Jan 23 19:11:07.854695 kubelet[2882]: I0123 19:11:07.851755 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-etc-cni-netd\") pod \"ea52bfa2-d943-454b-9545-cb748c071c83\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " Jan 23 19:11:07.854695 kubelet[2882]: I0123 19:11:07.851813 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-cni-path\") pod \"ea52bfa2-d943-454b-9545-cb748c071c83\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " Jan 23 19:11:07.854695 kubelet[2882]: I0123 19:11:07.851836 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-cilium-cgroup\") pod \"ea52bfa2-d943-454b-9545-cb748c071c83\" (UID: \"ea52bfa2-d943-454b-9545-cb748c071c83\") " Jan 23 19:11:07.854695 kubelet[2882]: I0123 19:11:07.851927 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ea52bfa2-d943-454b-9545-cb748c071c83" (UID: "ea52bfa2-d943-454b-9545-cb748c071c83"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:11:07.854695 kubelet[2882]: I0123 19:11:07.851972 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ea52bfa2-d943-454b-9545-cb748c071c83" (UID: "ea52bfa2-d943-454b-9545-cb748c071c83"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:11:07.869717 kubelet[2882]: I0123 19:11:07.851993 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-cni-path" (OuterVolumeSpecName: "cni-path") pod "ea52bfa2-d943-454b-9545-cb748c071c83" (UID: "ea52bfa2-d943-454b-9545-cb748c071c83"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:11:07.869717 kubelet[2882]: I0123 19:11:07.855137 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ea52bfa2-d943-454b-9545-cb748c071c83" (UID: "ea52bfa2-d943-454b-9545-cb748c071c83"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:11:07.869717 kubelet[2882]: I0123 19:11:07.867191 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ea52bfa2-d943-454b-9545-cb748c071c83" (UID: "ea52bfa2-d943-454b-9545-cb748c071c83"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:11:07.869717 kubelet[2882]: I0123 19:11:07.867265 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-hostproc" (OuterVolumeSpecName: "hostproc") pod "ea52bfa2-d943-454b-9545-cb748c071c83" (UID: "ea52bfa2-d943-454b-9545-cb748c071c83"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:11:07.880858 kubelet[2882]: I0123 19:11:07.880793 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ea52bfa2-d943-454b-9545-cb748c071c83" (UID: "ea52bfa2-d943-454b-9545-cb748c071c83"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:11:07.881165 kubelet[2882]: I0123 19:11:07.881135 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ea52bfa2-d943-454b-9545-cb748c071c83" (UID: "ea52bfa2-d943-454b-9545-cb748c071c83"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:11:07.881285 kubelet[2882]: I0123 19:11:07.881264 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ea52bfa2-d943-454b-9545-cb748c071c83" (UID: "ea52bfa2-d943-454b-9545-cb748c071c83"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:11:07.881582 kubelet[2882]: I0123 19:11:07.881354 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ea52bfa2-d943-454b-9545-cb748c071c83" (UID: "ea52bfa2-d943-454b-9545-cb748c071c83"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:11:08.441622 containerd[1579]: time="2026-01-23T19:11:08.378926533Z" level=warning msg="container event discarded" container=55725d8d205a17ed8ea7abd13122d88cfb561509cb0416e7ebb6d44517359315 type=CONTAINER_STOPPED_EVENT Jan 23 19:11:08.462907 containerd[1579]: time="2026-01-23T19:11:08.445787361Z" level=warning msg="container event discarded" container=3756fa507e79c383c0b910f2fc67fa8318bda471c7774f91d622b74c0c99dd4a type=CONTAINER_STOPPED_EVENT Jan 23 19:11:08.459680 systemd[1]: var-lib-kubelet-pods-ea52bfa2\x2dd943\x2d454b\x2d9545\x2dcb748c071c83-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 19:11:08.463235 kubelet[2882]: I0123 19:11:08.458011 2882 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 23 19:11:08.463235 kubelet[2882]: I0123 19:11:08.458067 2882 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 23 19:11:08.463235 kubelet[2882]: I0123 19:11:08.458081 2882 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 23 19:11:08.463235 kubelet[2882]: I0123 19:11:08.458092 2882 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 23 19:11:08.463235 kubelet[2882]: I0123 19:11:08.458105 2882 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 23 19:11:08.463235 kubelet[2882]: I0123 19:11:08.458118 2882 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 23 19:11:08.463235 kubelet[2882]: I0123 19:11:08.458135 2882 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 23 19:11:08.463235 kubelet[2882]: I0123 19:11:08.458145 2882 reconciler_common.go:299] "Volume detached for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 23 19:11:08.624240 kubelet[2882]: I0123 19:11:08.458158 2882 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 23 19:11:08.624240 kubelet[2882]: I0123 19:11:08.458168 2882 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea52bfa2-d943-454b-9545-cb748c071c83-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 23 19:11:08.624240 kubelet[2882]: I0123 19:11:08.458222 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea52bfa2-d943-454b-9545-cb748c071c83-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ea52bfa2-d943-454b-9545-cb748c071c83" (UID: "ea52bfa2-d943-454b-9545-cb748c071c83"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 19:11:08.648364 kubelet[2882]: I0123 19:11:08.647698 2882 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea52bfa2-d943-454b-9545-cb748c071c83-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 23 19:11:08.702076 kubelet[2882]: I0123 19:11:08.679261 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea52bfa2-d943-454b-9545-cb748c071c83-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ea52bfa2-d943-454b-9545-cb748c071c83" (UID: "ea52bfa2-d943-454b-9545-cb748c071c83"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 19:11:08.690612 systemd[1]: var-lib-kubelet-pods-ea52bfa2\x2dd943\x2d454b\x2d9545\x2dcb748c071c83-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzhv6k.mount: Deactivated successfully. Jan 23 19:11:08.729259 systemd[1]: var-lib-kubelet-pods-ea52bfa2\x2dd943\x2d454b\x2d9545\x2dcb748c071c83-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 19:11:08.731246 kubelet[2882]: I0123 19:11:08.729359 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea52bfa2-d943-454b-9545-cb748c071c83-kube-api-access-zhv6k" (OuterVolumeSpecName: "kube-api-access-zhv6k") pod "ea52bfa2-d943-454b-9545-cb748c071c83" (UID: "ea52bfa2-d943-454b-9545-cb748c071c83"). InnerVolumeSpecName "kube-api-access-zhv6k". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:11:08.879228 kubelet[2882]: I0123 19:11:08.748655 2882 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zhv6k\" (UniqueName: \"kubernetes.io/projected/ea52bfa2-d943-454b-9545-cb748c071c83-kube-api-access-zhv6k\") on node \"localhost\" DevicePath \"\"" Jan 23 19:11:08.879228 kubelet[2882]: I0123 19:11:08.748697 2882 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea52bfa2-d943-454b-9545-cb748c071c83-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 23 19:11:09.341653 kubelet[2882]: I0123 19:11:09.322562 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea52bfa2-d943-454b-9545-cb748c071c83-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ea52bfa2-d943-454b-9545-cb748c071c83" (UID: "ea52bfa2-d943-454b-9545-cb748c071c83"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:11:09.398709 kubelet[2882]: I0123 19:11:09.386232 2882 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea52bfa2-d943-454b-9545-cb748c071c83-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 23 19:11:09.404351 kubelet[2882]: E0123 19:11:09.401971 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-cw4bs" podUID="d196c0cf-6f07-4d86-8d46-7b13faebe524" Jan 23 19:11:09.467829 kubelet[2882]: E0123 19:11:09.466055 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-dqkvb" podUID="3324f826-d792-4f21-89b7-23fbc6f9ae9a" Jan 23 19:11:09.665178 systemd[1]: Removed slice kubepods-burstable-podea52bfa2_d943_454b_9545_cb748c071c83.slice - libcontainer container kubepods-burstable-podea52bfa2_d943_454b_9545_cb748c071c83.slice. Jan 23 19:11:09.665681 systemd[1]: kubepods-burstable-podea52bfa2_d943_454b_9545_cb748c071c83.slice: Consumed 28.145s CPU time, 128.5M memory peak, 296K read from disk, 13.3M written to disk. 
Jan 23 19:11:09.678699 kubelet[2882]: I0123 19:11:09.675782 2882 scope.go:117] "RemoveContainer" containerID="23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c" Jan 23 19:11:09.788931 containerd[1579]: time="2026-01-23T19:11:09.786117632Z" level=info msg="RemoveContainer for \"23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c\"" Jan 23 19:11:09.963072 containerd[1579]: time="2026-01-23T19:11:09.942016785Z" level=info msg="RemoveContainer for \"23b61d12b578c6aee29848f632ff5774e43817e80e2020ce182dd5274c7fd01c\" returns successfully" Jan 23 19:11:09.974042 kubelet[2882]: I0123 19:11:09.969628 2882 scope.go:117] "RemoveContainer" containerID="887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1" Jan 23 19:11:10.074892 kubelet[2882]: I0123 19:11:10.074728 2882 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e82389fa-abf8-4ec1-a948-7ebb9c7c3a00" path="/var/lib/kubelet/pods/e82389fa-abf8-4ec1-a948-7ebb9c7c3a00/volumes" Jan 23 19:11:10.076602 containerd[1579]: time="2026-01-23T19:11:10.076357693Z" level=info msg="RemoveContainer for \"887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1\"" Jan 23 19:11:10.131689 containerd[1579]: time="2026-01-23T19:11:10.117801196Z" level=info msg="RemoveContainer for \"887982c7ca1a7c24ec92297c718d22fd01bd85422962da8000f2638bcdb423c1\" returns successfully" Jan 23 19:11:10.286050 kubelet[2882]: I0123 19:11:10.234334 2882 scope.go:117] "RemoveContainer" containerID="2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a" Jan 23 19:11:10.299939 containerd[1579]: time="2026-01-23T19:11:10.299821346Z" level=info msg="RemoveContainer for \"2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a\"" Jan 23 19:11:10.547704 containerd[1579]: time="2026-01-23T19:11:10.541638558Z" level=info msg="RemoveContainer for \"2cb4708be8b6052eb63f8c8895dd9ef60e3ade2dc5e65b575913f5f1b80a712a\" returns successfully" Jan 23 19:11:10.549327 kubelet[2882]: I0123 19:11:10.545995 2882 scope.go:117] "RemoveContainer" containerID="c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68" Jan 23 19:11:10.556734 containerd[1579]: time="2026-01-23T19:11:10.556622648Z" level=info msg="RemoveContainer for \"c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68\"" Jan 23 19:11:10.574676 containerd[1579]: time="2026-01-23T19:11:10.572334850Z" level=info msg="RemoveContainer for \"c71d417d2b7f4e8aaa35322cae0559df14684052399c9681b59ec7e4679b8d68\" returns successfully" Jan 23 19:11:10.574967 kubelet[2882]: I0123 19:11:10.573801 2882 scope.go:117] "RemoveContainer" containerID="68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85" Jan 23 19:11:10.654693 containerd[1579]: time="2026-01-23T19:11:10.654167571Z" level=info msg="RemoveContainer for \"68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85\"" Jan 23 19:11:10.951900 containerd[1579]: time="2026-01-23T19:11:10.949912353Z" level=info msg="RemoveContainer for \"68c110b9b0619736c55c6c066f57a8b59056dc9e8d2a7d143e412a637b939b85\" returns successfully" Jan 23 19:11:11.235579 kubelet[2882]: E0123 19:11:11.226235 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-dqkvb" podUID="3324f826-d792-4f21-89b7-23fbc6f9ae9a" Jan 23 19:11:11.235579 kubelet[2882]: E0123 19:11:11.229830 2882 pod_workers.go:1301] 
"Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-cw4bs" podUID="d196c0cf-6f07-4d86-8d46-7b13faebe524" Jan 23 19:11:11.625367 containerd[1579]: time="2026-01-23T19:11:11.586204276Z" level=info msg="StopPodSandbox for \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\"" Jan 23 19:11:11.683618 containerd[1579]: time="2026-01-23T19:11:11.681206439Z" level=info msg="TearDown network for sandbox \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" successfully" Jan 23 19:11:11.685906 containerd[1579]: time="2026-01-23T19:11:11.685826368Z" level=info msg="StopPodSandbox for \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" returns successfully" Jan 23 19:11:11.717949 containerd[1579]: time="2026-01-23T19:11:11.715564353Z" level=info msg="RemovePodSandbox for \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\"" Jan 23 19:11:11.717949 containerd[1579]: time="2026-01-23T19:11:11.715770785Z" level=info msg="Forcibly stopping sandbox \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\"" Jan 23 19:11:11.717949 containerd[1579]: time="2026-01-23T19:11:11.716292891Z" level=info msg="TearDown network for sandbox \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" successfully" Jan 23 19:11:11.744554 containerd[1579]: time="2026-01-23T19:11:11.742848697Z" level=info msg="Ensure that sandbox 653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f in task-service has been cleanup successfully" Jan 23 19:11:11.758157 kubelet[2882]: E0123 19:11:11.758036 2882 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 19:11:11.770600 containerd[1579]: time="2026-01-23T19:11:11.769583605Z" level=info msg="RemovePodSandbox \"653bc392095b482ea5ac8f93bde4ca7da05a169fe54c67dbfc162d2335ff951f\" returns successfully" Jan 23 19:11:11.779231 containerd[1579]: time="2026-01-23T19:11:11.777886839Z" level=info msg="StopPodSandbox for \"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\"" Jan 23 19:11:11.779231 containerd[1579]: time="2026-01-23T19:11:11.778150757Z" level=info msg="TearDown network for sandbox \"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\" successfully" Jan 23 19:11:11.779231 containerd[1579]: time="2026-01-23T19:11:11.778178268Z" level=info msg="StopPodSandbox for \"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\" returns successfully" Jan 23 19:11:11.780950 containerd[1579]: time="2026-01-23T19:11:11.780769713Z" level=info msg="RemovePodSandbox for \"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\"" Jan 23 19:11:11.783549 containerd[1579]: time="2026-01-23T19:11:11.781358973Z" level=info msg="Forcibly stopping sandbox \"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\"" Jan 23 19:11:11.786638 containerd[1579]: time="2026-01-23T19:11:11.786306177Z" level=info msg="TearDown network for sandbox \"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\" successfully" Jan 23 19:11:11.821790 containerd[1579]: time="2026-01-23T19:11:11.817781546Z" level=info msg="Ensure that sandbox 3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc in task-service has been cleanup 
successfully" Jan 23 19:11:11.842065 containerd[1579]: time="2026-01-23T19:11:11.841954634Z" level=info msg="RemovePodSandbox \"3edb1bdb075792be8e10e758da0ac85ba1897909eacf4dcc30a0cf7c575817bc\" returns successfully" Jan 23 19:11:12.235192 kubelet[2882]: I0123 19:11:12.233300 2882 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea52bfa2-d943-454b-9545-cb748c071c83" path="/var/lib/kubelet/pods/ea52bfa2-d943-454b-9545-cb748c071c83/volumes" Jan 23 19:11:12.885013 sshd[5310]: Connection closed by 10.0.0.1 port 53274 Jan 23 19:11:12.891273 sshd-session[5280]: pam_unix(sshd:session): session closed for user core Jan 23 19:11:12.965703 systemd[1]: sshd@50-10.0.0.46:22-10.0.0.1:53274.service: Deactivated successfully. Jan 23 19:11:13.034200 systemd[1]: session-51.scope: Deactivated successfully. Jan 23 19:11:13.037008 systemd[1]: session-51.scope: Consumed 2.680s CPU time, 25M memory peak. Jan 23 19:11:13.051884 kubelet[2882]: E0123 19:11:13.048073 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-cw4bs" podUID="d196c0cf-6f07-4d86-8d46-7b13faebe524" Jan 23 19:11:13.051824 systemd-logind[1561]: Session 51 logged out. Waiting for processes to exit. Jan 23 19:11:13.054982 kubelet[2882]: E0123 19:11:13.049911 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-dqkvb" podUID="3324f826-d792-4f21-89b7-23fbc6f9ae9a" Jan 23 19:11:13.190331 systemd[1]: Started sshd@51-10.0.0.46:22-10.0.0.1:39662.service - OpenSSH per-connection server daemon (10.0.0.1:39662). Jan 23 19:11:13.244316 systemd-logind[1561]: Removed session 51. 
Jan 23 19:11:13.641179 sshd[5389]: Accepted publickey for core from 10.0.0.1 port 39662 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:11:13.674927 sshd-session[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:11:13.722311 kubelet[2882]: I0123 19:11:13.712890 2882 memory_manager.go:355] "RemoveStaleState removing state" podUID="ea52bfa2-d943-454b-9545-cb748c071c83" containerName="cilium-agent" Jan 23 19:11:13.722311 kubelet[2882]: I0123 19:11:13.712993 2882 memory_manager.go:355] "RemoveStaleState removing state" podUID="e82389fa-abf8-4ec1-a948-7ebb9c7c3a00" containerName="cilium-operator" Jan 23 19:11:13.722311 kubelet[2882]: I0123 19:11:13.713006 2882 memory_manager.go:355] "RemoveStaleState removing state" podUID="e82389fa-abf8-4ec1-a948-7ebb9c7c3a00" containerName="cilium-operator" Jan 23 19:11:13.734357 kubelet[2882]: I0123 19:11:13.727680 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8715cb7-29ca-495e-b965-366d17458c19-hostproc\") pod \"cilium-xh2fm\" (UID: \"a8715cb7-29ca-495e-b965-366d17458c19\") " pod="kube-system/cilium-xh2fm" Jan 23 19:11:13.734357 kubelet[2882]: I0123 19:11:13.727776 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8715cb7-29ca-495e-b965-366d17458c19-etc-cni-netd\") pod \"cilium-xh2fm\" (UID: \"a8715cb7-29ca-495e-b965-366d17458c19\") " pod="kube-system/cilium-xh2fm" Jan 23 19:11:13.734357 kubelet[2882]: I0123 19:11:13.727807 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8715cb7-29ca-495e-b965-366d17458c19-lib-modules\") pod \"cilium-xh2fm\" (UID: \"a8715cb7-29ca-495e-b965-366d17458c19\") " pod="kube-system/cilium-xh2fm" Jan 23 19:11:13.734357 kubelet[2882]: I0123 19:11:13.727826 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8715cb7-29ca-495e-b965-366d17458c19-hubble-tls\") pod \"cilium-xh2fm\" (UID: \"a8715cb7-29ca-495e-b965-366d17458c19\") " pod="kube-system/cilium-xh2fm" Jan 23 19:11:13.734357 kubelet[2882]: I0123 19:11:13.727854 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8715cb7-29ca-495e-b965-366d17458c19-cilium-run\") pod \"cilium-xh2fm\" (UID: \"a8715cb7-29ca-495e-b965-366d17458c19\") " pod="kube-system/cilium-xh2fm" Jan 23 19:11:13.734357 kubelet[2882]: I0123 19:11:13.727875 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8715cb7-29ca-495e-b965-366d17458c19-clustermesh-secrets\") pod \"cilium-xh2fm\" (UID: \"a8715cb7-29ca-495e-b965-366d17458c19\") " pod="kube-system/cilium-xh2fm" Jan 23 19:11:13.771302 kubelet[2882]: I0123 19:11:13.727900 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8715cb7-29ca-495e-b965-366d17458c19-cilium-cgroup\") pod \"cilium-xh2fm\" (UID: \"a8715cb7-29ca-495e-b965-366d17458c19\") " pod="kube-system/cilium-xh2fm" Jan 23 19:11:13.771302 kubelet[2882]: I0123 19:11:13.727921 2882 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8715cb7-29ca-495e-b965-366d17458c19-cni-path\") pod \"cilium-xh2fm\" (UID: \"a8715cb7-29ca-495e-b965-366d17458c19\") " pod="kube-system/cilium-xh2fm" Jan 23 19:11:13.771302 kubelet[2882]: I0123 19:11:13.727947 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8715cb7-29ca-495e-b965-366d17458c19-cilium-config-path\") pod \"cilium-xh2fm\" (UID: \"a8715cb7-29ca-495e-b965-366d17458c19\") " pod="kube-system/cilium-xh2fm" Jan 23 19:11:13.771302 kubelet[2882]: I0123 19:11:13.727970 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8715cb7-29ca-495e-b965-366d17458c19-host-proc-sys-kernel\") pod \"cilium-xh2fm\" (UID: \"a8715cb7-29ca-495e-b965-366d17458c19\") " pod="kube-system/cilium-xh2fm" Jan 23 19:11:13.771302 kubelet[2882]: I0123 19:11:13.727993 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a8715cb7-29ca-495e-b965-366d17458c19-cilium-ipsec-secrets\") pod \"cilium-xh2fm\" (UID: \"a8715cb7-29ca-495e-b965-366d17458c19\") " pod="kube-system/cilium-xh2fm" Jan 23 19:11:13.771934 kubelet[2882]: I0123 19:11:13.728016 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8715cb7-29ca-495e-b965-366d17458c19-host-proc-sys-net\") pod \"cilium-xh2fm\" (UID: \"a8715cb7-29ca-495e-b965-366d17458c19\") " pod="kube-system/cilium-xh2fm" Jan 23 19:11:13.771934 kubelet[2882]: I0123 19:11:13.728035 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8715cb7-29ca-495e-b965-366d17458c19-xtables-lock\") pod \"cilium-xh2fm\" (UID: \"a8715cb7-29ca-495e-b965-366d17458c19\") " pod="kube-system/cilium-xh2fm" Jan 23 19:11:13.771934 kubelet[2882]: I0123 19:11:13.728059 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8715cb7-29ca-495e-b965-366d17458c19-bpf-maps\") pod \"cilium-xh2fm\" (UID: \"a8715cb7-29ca-495e-b965-366d17458c19\") " pod="kube-system/cilium-xh2fm" Jan 23 19:11:13.771934 kubelet[2882]: I0123 19:11:13.728081 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x2nf\" (UniqueName: \"kubernetes.io/projected/a8715cb7-29ca-495e-b965-366d17458c19-kube-api-access-2x2nf\") pod \"cilium-xh2fm\" (UID: \"a8715cb7-29ca-495e-b965-366d17458c19\") " pod="kube-system/cilium-xh2fm" Jan 23 19:11:13.774297 systemd-logind[1561]: New session 52 of user core. Jan 23 19:11:13.814148 systemd[1]: Started session-52.scope - Session 52 of User core. Jan 23 19:11:13.843602 systemd[1]: Created slice kubepods-burstable-poda8715cb7_29ca_495e_b965_366d17458c19.slice - libcontainer container kubepods-burstable-poda8715cb7_29ca_495e_b965_366d17458c19.slice. 
Jan 23 19:11:14.009891 sshd[5395]: Connection closed by 10.0.0.1 port 39662 Jan 23 19:11:13.968928 sshd-session[5389]: pam_unix(sshd:session): session closed for user core Jan 23 19:11:14.069328 systemd[1]: sshd@51-10.0.0.46:22-10.0.0.1:39662.service: Deactivated successfully. Jan 23 19:11:14.075995 systemd[1]: session-52.scope: Deactivated successfully. Jan 23 19:11:14.217951 systemd-logind[1561]: Session 52 logged out. Waiting for processes to exit. Jan 23 19:11:14.341283 kubelet[2882]: E0123 19:11:14.226916 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:11:14.289189 systemd[1]: Started sshd@52-10.0.0.46:22-10.0.0.1:39678.service - OpenSSH per-connection server daemon (10.0.0.1:39678). Jan 23 19:11:14.349783 systemd-logind[1561]: Removed session 52. Jan 23 19:11:14.350833 containerd[1579]: time="2026-01-23T19:11:14.348625496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xh2fm,Uid:a8715cb7-29ca-495e-b965-366d17458c19,Namespace:kube-system,Attempt:0,}" Jan 23 19:11:14.470173 containerd[1579]: time="2026-01-23T19:11:14.470038630Z" level=info msg="connecting to shim b982c38e289ef8a79f45910ef75092aaa154f62f179b9344a8b35dc45d321d6d" address="unix:///run/containerd/s/db3c1f6f7212747b9f344811245aee2489905c0de9e4451b7a16a00251fa4bb4" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:11:14.525071 sshd[5404]: Accepted publickey for core from 10.0.0.1 port 39678 ssh2: RSA SHA256:1T3TiuV9+iHFNenbFHnrePL/ypOLKKwPcEsPHcu1ttE Jan 23 19:11:14.528750 sshd-session[5404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:11:14.697960 systemd-logind[1561]: New session 53 of user core. Jan 23 19:11:14.716774 systemd[1]: Started session-53.scope - Session 53 of User core. Jan 23 19:11:14.867548 systemd[1]: Started cri-containerd-b982c38e289ef8a79f45910ef75092aaa154f62f179b9344a8b35dc45d321d6d.scope - libcontainer container b982c38e289ef8a79f45910ef75092aaa154f62f179b9344a8b35dc45d321d6d. 
Jan 23 19:11:15.080009 kubelet[2882]: E0123 19:11:15.078546 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-cw4bs" podUID="d196c0cf-6f07-4d86-8d46-7b13faebe524" Jan 23 19:11:15.082923 kubelet[2882]: E0123 19:11:15.081555 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-dqkvb" podUID="3324f826-d792-4f21-89b7-23fbc6f9ae9a" Jan 23 19:11:15.085641 containerd[1579]: time="2026-01-23T19:11:15.085075802Z" level=warning msg="container event discarded" container=5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8 type=CONTAINER_CREATED_EVENT Jan 23 19:11:15.166717 containerd[1579]: time="2026-01-23T19:11:15.166589859Z" level=warning msg="container event discarded" container=acc40ca6d3985bb41c4c0c9059e33cb9c2254aa5895fd41f83042ed200d62b04 type=CONTAINER_CREATED_EVENT Jan 23 19:11:15.290061 containerd[1579]: time="2026-01-23T19:11:15.289754868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xh2fm,Uid:a8715cb7-29ca-495e-b965-366d17458c19,Namespace:kube-system,Attempt:0,} returns sandbox id \"b982c38e289ef8a79f45910ef75092aaa154f62f179b9344a8b35dc45d321d6d\"" Jan 23 19:11:15.317307 kubelet[2882]: E0123 19:11:15.302328 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:11:15.326060 containerd[1579]: time="2026-01-23T19:11:15.326012863Z" level=info msg="CreateContainer within sandbox \"b982c38e289ef8a79f45910ef75092aaa154f62f179b9344a8b35dc45d321d6d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 19:11:15.377131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount741783351.mount: Deactivated successfully. Jan 23 19:11:15.417579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2216456901.mount: Deactivated successfully. Jan 23 19:11:15.417874 containerd[1579]: time="2026-01-23T19:11:15.417701114Z" level=info msg="Container 52bfb000da4249fe358fdcee7b2c5785484852bd7bf3992a8577f8db9f1d55dc: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:11:15.444197 containerd[1579]: time="2026-01-23T19:11:15.444048378Z" level=info msg="CreateContainer within sandbox \"b982c38e289ef8a79f45910ef75092aaa154f62f179b9344a8b35dc45d321d6d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"52bfb000da4249fe358fdcee7b2c5785484852bd7bf3992a8577f8db9f1d55dc\"" Jan 23 19:11:15.446516 containerd[1579]: time="2026-01-23T19:11:15.446350637Z" level=info msg="StartContainer for \"52bfb000da4249fe358fdcee7b2c5785484852bd7bf3992a8577f8db9f1d55dc\"" Jan 23 19:11:15.465008 containerd[1579]: time="2026-01-23T19:11:15.464759114Z" level=info msg="connecting to shim 52bfb000da4249fe358fdcee7b2c5785484852bd7bf3992a8577f8db9f1d55dc" address="unix:///run/containerd/s/db3c1f6f7212747b9f344811245aee2489905c0de9e4451b7a16a00251fa4bb4" protocol=ttrpc version=3 Jan 23 19:11:15.557982 systemd[1]: Started cri-containerd-52bfb000da4249fe358fdcee7b2c5785484852bd7bf3992a8577f8db9f1d55dc.scope - libcontainer container 52bfb000da4249fe358fdcee7b2c5785484852bd7bf3992a8577f8db9f1d55dc. 
Jan 23 19:11:15.767615 containerd[1579]: time="2026-01-23T19:11:15.767305503Z" level=info msg="StartContainer for \"52bfb000da4249fe358fdcee7b2c5785484852bd7bf3992a8577f8db9f1d55dc\" returns successfully" Jan 23 19:11:15.842039 systemd[1]: cri-containerd-52bfb000da4249fe358fdcee7b2c5785484852bd7bf3992a8577f8db9f1d55dc.scope: Deactivated successfully. Jan 23 19:11:15.855526 containerd[1579]: time="2026-01-23T19:11:15.854357371Z" level=info msg="received container exit event container_id:\"52bfb000da4249fe358fdcee7b2c5785484852bd7bf3992a8577f8db9f1d55dc\" id:\"52bfb000da4249fe358fdcee7b2c5785484852bd7bf3992a8577f8db9f1d55dc\" pid:5472 exited_at:{seconds:1769195475 nanos:849767969}" Jan 23 19:11:15.990056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52bfb000da4249fe358fdcee7b2c5785484852bd7bf3992a8577f8db9f1d55dc-rootfs.mount: Deactivated successfully. Jan 23 19:11:16.429122 kubelet[2882]: E0123 19:11:16.429083 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:11:16.457249 containerd[1579]: time="2026-01-23T19:11:16.455021826Z" level=info msg="CreateContainer within sandbox \"b982c38e289ef8a79f45910ef75092aaa154f62f179b9344a8b35dc45d321d6d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 19:11:16.518502 containerd[1579]: time="2026-01-23T19:11:16.518000109Z" level=info msg="Container 25e7a413f25f5f786cffbbdd2d1b04b9b9613afc48c7c1f346acfed34da9b0a1: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:11:16.519127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579887997.mount: Deactivated successfully. Jan 23 19:11:16.546519 containerd[1579]: time="2026-01-23T19:11:16.546240460Z" level=info msg="CreateContainer within sandbox \"b982c38e289ef8a79f45910ef75092aaa154f62f179b9344a8b35dc45d321d6d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"25e7a413f25f5f786cffbbdd2d1b04b9b9613afc48c7c1f346acfed34da9b0a1\"" Jan 23 19:11:16.550145 containerd[1579]: time="2026-01-23T19:11:16.550109909Z" level=info msg="StartContainer for \"25e7a413f25f5f786cffbbdd2d1b04b9b9613afc48c7c1f346acfed34da9b0a1\"" Jan 23 19:11:16.555362 containerd[1579]: time="2026-01-23T19:11:16.555322722Z" level=info msg="connecting to shim 25e7a413f25f5f786cffbbdd2d1b04b9b9613afc48c7c1f346acfed34da9b0a1" address="unix:///run/containerd/s/db3c1f6f7212747b9f344811245aee2489905c0de9e4451b7a16a00251fa4bb4" protocol=ttrpc version=3 Jan 23 19:11:16.632253 systemd[1]: Started cri-containerd-25e7a413f25f5f786cffbbdd2d1b04b9b9613afc48c7c1f346acfed34da9b0a1.scope - libcontainer container 25e7a413f25f5f786cffbbdd2d1b04b9b9613afc48c7c1f346acfed34da9b0a1. 
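The repeating "Nameserver limits exceeded" warnings record kubelet clamping the pod resolv.conf: it keeps at most three nameservers and notes that the rest were omitted, which is why the applied line shows exactly 1.1.1.1, 1.0.0.1, and 8.8.8.8. A sketch of the clamp; the fourth entry below is a hypothetical stand-in for whatever extra server the node's resolv.conf actually carried:

package main

import "fmt"

func main() {
	// First three values are the applied line from the log; 192.0.2.1 is a
	// hypothetical extra entry (documentation address) for illustration.
	nameservers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.1"}
	const maxNameservers = 3 // kubelet's cap on pod resolv.conf nameservers
	if len(nameservers) > maxNameservers {
		fmt.Println("Nameserver limits were exceeded, some nameservers have been omitted")
		nameservers = nameservers[:maxNameservers]
	}
	fmt.Println("applied nameserver line is:", nameservers)
}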
Jan 23 19:11:16.691725 containerd[1579]: time="2026-01-23T19:11:16.691562185Z" level=warning msg="container event discarded" container=5a16e5977dc1b732c4fd2936a601c5d549cc19bbbec391a0fe6f4601673b9cd8 type=CONTAINER_STARTED_EVENT Jan 23 19:11:16.755976 containerd[1579]: time="2026-01-23T19:11:16.755926305Z" level=info msg="StartContainer for \"25e7a413f25f5f786cffbbdd2d1b04b9b9613afc48c7c1f346acfed34da9b0a1\" returns successfully" Jan 23 19:11:16.761705 kubelet[2882]: E0123 19:11:16.761358 2882 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 19:11:16.791104 systemd[1]: cri-containerd-25e7a413f25f5f786cffbbdd2d1b04b9b9613afc48c7c1f346acfed34da9b0a1.scope: Deactivated successfully. Jan 23 19:11:16.814052 containerd[1579]: time="2026-01-23T19:11:16.812605160Z" level=info msg="received container exit event container_id:\"25e7a413f25f5f786cffbbdd2d1b04b9b9613afc48c7c1f346acfed34da9b0a1\" id:\"25e7a413f25f5f786cffbbdd2d1b04b9b9613afc48c7c1f346acfed34da9b0a1\" pid:5520 exited_at:{seconds:1769195476 nanos:799021573}" Jan 23 19:11:16.952756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25e7a413f25f5f786cffbbdd2d1b04b9b9613afc48c7c1f346acfed34da9b0a1-rootfs.mount: Deactivated successfully. Jan 23 19:11:17.024532 kubelet[2882]: E0123 19:11:17.024217 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-cw4bs" podUID="d196c0cf-6f07-4d86-8d46-7b13faebe524" Jan 23 19:11:17.030516 kubelet[2882]: E0123 19:11:17.026346 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-dqkvb" podUID="3324f826-d792-4f21-89b7-23fbc6f9ae9a" Jan 23 19:11:17.405995 containerd[1579]: time="2026-01-23T19:11:17.397861400Z" level=warning msg="container event discarded" container=acc40ca6d3985bb41c4c0c9059e33cb9c2254aa5895fd41f83042ed200d62b04 type=CONTAINER_STARTED_EVENT Jan 23 19:11:17.454941 kubelet[2882]: E0123 19:11:17.453292 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:11:17.472117 containerd[1579]: time="2026-01-23T19:11:17.471771467Z" level=info msg="CreateContainer within sandbox \"b982c38e289ef8a79f45910ef75092aaa154f62f179b9344a8b35dc45d321d6d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 19:11:17.610190 containerd[1579]: time="2026-01-23T19:11:17.607107387Z" level=info msg="Container 1565316d90d94141d552260470488aa7e7bf26c1ddc44d6f49747d28a1b49043: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:11:17.634855 containerd[1579]: time="2026-01-23T19:11:17.634632731Z" level=info msg="CreateContainer within sandbox \"b982c38e289ef8a79f45910ef75092aaa154f62f179b9344a8b35dc45d321d6d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1565316d90d94141d552260470488aa7e7bf26c1ddc44d6f49747d28a1b49043\"" Jan 23 19:11:17.636656 containerd[1579]: time="2026-01-23T19:11:17.636620138Z" level=info msg="StartContainer for 
\"1565316d90d94141d552260470488aa7e7bf26c1ddc44d6f49747d28a1b49043\"" Jan 23 19:11:17.639028 containerd[1579]: time="2026-01-23T19:11:17.638990774Z" level=info msg="connecting to shim 1565316d90d94141d552260470488aa7e7bf26c1ddc44d6f49747d28a1b49043" address="unix:///run/containerd/s/db3c1f6f7212747b9f344811245aee2489905c0de9e4451b7a16a00251fa4bb4" protocol=ttrpc version=3 Jan 23 19:11:17.708156 systemd[1]: Started cri-containerd-1565316d90d94141d552260470488aa7e7bf26c1ddc44d6f49747d28a1b49043.scope - libcontainer container 1565316d90d94141d552260470488aa7e7bf26c1ddc44d6f49747d28a1b49043. Jan 23 19:11:17.960631 containerd[1579]: time="2026-01-23T19:11:17.960205868Z" level=info msg="StartContainer for \"1565316d90d94141d552260470488aa7e7bf26c1ddc44d6f49747d28a1b49043\" returns successfully" Jan 23 19:11:17.973843 systemd[1]: cri-containerd-1565316d90d94141d552260470488aa7e7bf26c1ddc44d6f49747d28a1b49043.scope: Deactivated successfully. Jan 23 19:11:17.984130 containerd[1579]: time="2026-01-23T19:11:17.980027187Z" level=info msg="received container exit event container_id:\"1565316d90d94141d552260470488aa7e7bf26c1ddc44d6f49747d28a1b49043\" id:\"1565316d90d94141d552260470488aa7e7bf26c1ddc44d6f49747d28a1b49043\" pid:5566 exited_at:{seconds:1769195477 nanos:979323735}" Jan 23 19:11:18.091075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1565316d90d94141d552260470488aa7e7bf26c1ddc44d6f49747d28a1b49043-rootfs.mount: Deactivated successfully. Jan 23 19:11:18.498693 kubelet[2882]: E0123 19:11:18.497763 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:11:18.521009 containerd[1579]: time="2026-01-23T19:11:18.520084778Z" level=info msg="CreateContainer within sandbox \"b982c38e289ef8a79f45910ef75092aaa154f62f179b9344a8b35dc45d321d6d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 19:11:18.634138 containerd[1579]: time="2026-01-23T19:11:18.633520141Z" level=info msg="Container 760649e54273e1a68193131450b08d0444c7573cace6af1218b69989ba97b764: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:11:18.719211 containerd[1579]: time="2026-01-23T19:11:18.719105213Z" level=info msg="CreateContainer within sandbox \"b982c38e289ef8a79f45910ef75092aaa154f62f179b9344a8b35dc45d321d6d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"760649e54273e1a68193131450b08d0444c7573cace6af1218b69989ba97b764\"" Jan 23 19:11:18.722501 containerd[1579]: time="2026-01-23T19:11:18.721328706Z" level=info msg="StartContainer for \"760649e54273e1a68193131450b08d0444c7573cace6af1218b69989ba97b764\"" Jan 23 19:11:18.723139 containerd[1579]: time="2026-01-23T19:11:18.722887269Z" level=info msg="connecting to shim 760649e54273e1a68193131450b08d0444c7573cace6af1218b69989ba97b764" address="unix:///run/containerd/s/db3c1f6f7212747b9f344811245aee2489905c0de9e4451b7a16a00251fa4bb4" protocol=ttrpc version=3 Jan 23 19:11:18.788969 systemd[1]: Started cri-containerd-760649e54273e1a68193131450b08d0444c7573cace6af1218b69989ba97b764.scope - libcontainer container 760649e54273e1a68193131450b08d0444c7573cace6af1218b69989ba97b764. Jan 23 19:11:19.022237 systemd[1]: cri-containerd-760649e54273e1a68193131450b08d0444c7573cace6af1218b69989ba97b764.scope: Deactivated successfully. 
Jan 23 19:11:19.025342 kubelet[2882]: E0123 19:11:19.025209 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:11:19.025342 kubelet[2882]: E0123 19:11:19.025571 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-dqkvb" podUID="3324f826-d792-4f21-89b7-23fbc6f9ae9a"
Jan 23 19:11:19.026321 containerd[1579]: time="2026-01-23T19:11:19.026210901Z" level=info msg="received container exit event container_id:\"760649e54273e1a68193131450b08d0444c7573cace6af1218b69989ba97b764\" id:\"760649e54273e1a68193131450b08d0444c7573cace6af1218b69989ba97b764\" pid:5604 exited_at:{seconds:1769195479 nanos:23867005}"
Jan 23 19:11:19.028560 kubelet[2882]: E0123 19:11:19.028524 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-cw4bs" podUID="d196c0cf-6f07-4d86-8d46-7b13faebe524"
Jan 23 19:11:19.030675 containerd[1579]: time="2026-01-23T19:11:19.030595372Z" level=info msg="StartContainer for \"760649e54273e1a68193131450b08d0444c7573cace6af1218b69989ba97b764\" returns successfully"
Jan 23 19:11:19.106077 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-760649e54273e1a68193131450b08d0444c7573cace6af1218b69989ba97b764-rootfs.mount: Deactivated successfully.
Jan 23 19:11:19.530203 kubelet[2882]: E0123 19:11:19.527263 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:11:19.555500 containerd[1579]: time="2026-01-23T19:11:19.546706170Z" level=info msg="CreateContainer within sandbox \"b982c38e289ef8a79f45910ef75092aaa154f62f179b9344a8b35dc45d321d6d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 19:11:19.588601 containerd[1579]: time="2026-01-23T19:11:19.586540410Z" level=info msg="Container 49fce4173df3bb6c8e833a11e048cb36b32c1985e32353d812d9bc1741be482e: CDI devices from CRI Config.CDIDevices: []"
Jan 23 19:11:19.617017 containerd[1579]: time="2026-01-23T19:11:19.616845575Z" level=info msg="CreateContainer within sandbox \"b982c38e289ef8a79f45910ef75092aaa154f62f179b9344a8b35dc45d321d6d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"49fce4173df3bb6c8e833a11e048cb36b32c1985e32353d812d9bc1741be482e\""
Jan 23 19:11:19.618804 containerd[1579]: time="2026-01-23T19:11:19.618623041Z" level=info msg="StartContainer for \"49fce4173df3bb6c8e833a11e048cb36b32c1985e32353d812d9bc1741be482e\""
Jan 23 19:11:19.620881 containerd[1579]: time="2026-01-23T19:11:19.620760635Z" level=info msg="connecting to shim 49fce4173df3bb6c8e833a11e048cb36b32c1985e32353d812d9bc1741be482e" address="unix:///run/containerd/s/db3c1f6f7212747b9f344811245aee2489905c0de9e4451b7a16a00251fa4bb4" protocol=ttrpc version=3
Jan 23 19:11:19.736479 systemd[1]: Started cri-containerd-49fce4173df3bb6c8e833a11e048cb36b32c1985e32353d812d9bc1741be482e.scope - libcontainer container 49fce4173df3bb6c8e833a11e048cb36b32c1985e32353d812d9bc1741be482e.
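Each "connecting to shim ... protocol=ttrpc version=3" entry records containerd attaching to the pod's shim process over a Unix socket, speaking ttrpc (a lightweight gRPC variant) rather than gRPC proper. Below is a bare-bones sketch of opening such a connection, using the socket address from the entries above; the task-service RPCs actually exchanged over it depend on the shim version and are omitted.

```go
package main

import (
	"log"
	"net"
	"time"

	"github.com/containerd/ttrpc"
)

func main() {
	// Shim socket address as reported in the "connecting to shim" entries.
	const addr = "/run/containerd/s/db3c1f6f7212747b9f344811245aee2489905c0de9e4451b7a16a00251fa4bb4"

	conn, err := net.DialTimeout("unix", addr, 5*time.Second)
	if err != nil {
		log.Fatal(err)
	}

	// ttrpc multiplexes request/response frames over this single connection;
	// generated task-service stubs (Start, State, Kill, ...) would wrap it.
	client := ttrpc.NewClient(conn)
	defer client.Close()
	log.Println("connected to shim over ttrpc")
}
```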
Jan 23 19:11:20.073639 containerd[1579]: time="2026-01-23T19:11:20.070105723Z" level=info msg="StartContainer for \"49fce4173df3bb6c8e833a11e048cb36b32c1985e32353d812d9bc1741be482e\" returns successfully"
Jan 23 19:11:21.024937 kubelet[2882]: E0123 19:11:21.024584 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-cw4bs" podUID="d196c0cf-6f07-4d86-8d46-7b13faebe524"
Jan 23 19:11:21.025879 kubelet[2882]: E0123 19:11:21.025175 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-dqkvb" podUID="3324f826-d792-4f21-89b7-23fbc6f9ae9a"
Jan 23 19:11:21.585541 kubelet[2882]: E0123 19:11:21.573189 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:11:22.092086 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jan 23 19:11:22.588802 kubelet[2882]: E0123 19:11:22.580783 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:11:23.025230 kubelet[2882]: E0123 19:11:23.025109 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:11:23.025749 kubelet[2882]: E0123 19:11:23.025675 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:11:28.050997 kubelet[2882]: E0123 19:11:28.050103 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:11:28.057172 kubelet[2882]: E0123 19:11:28.056903 2882 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41930->127.0.0.1:38229: write tcp 127.0.0.1:41930->127.0.0.1:38229: write: broken pipe
Jan 23 19:11:37.922665 kubelet[2882]: E0123 19:11:37.918293 2882 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.572s"
Jan 23 19:11:40.951889 kubelet[2882]: E0123 19:11:40.948902 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:11:41.589167 systemd-networkd[1472]: lxc_health: Link UP
Jan 23 19:11:41.592901 systemd-networkd[1472]: lxc_health: Gained carrier
Jan 23 19:11:42.231474 kubelet[2882]: E0123 19:11:42.228263 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:11:42.310862 kubelet[2882]: I0123 19:11:42.310692 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xh2fm" podStartSLOduration=29.310669241 podStartE2EDuration="29.310669241s" podCreationTimestamp="2026-01-23 19:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:11:21.715207356 +0000 UTC m=+499.976218353" watchObservedRunningTime="2026-01-23 19:11:42.310669241 +0000 UTC m=+520.571680207"
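The podStartSLOduration figure in the last entry is plain timestamp arithmetic: the watch-observed running time (19:11:42.310669241) minus the pod's creation timestamp (19:11:13) is exactly 29.310669241s, and the zero-valued pull timestamps indicate no image pull was recorded for this pod. Which pair of timestamps kubelet subtracts internally is inferred here from the numbers, not from kubelet source; the sketch below just reproduces the subtraction.

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-23 19:11:13 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	observed, err := time.Parse(layout, "2026-01-23 19:11:42.310669241 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	// Prints 29.310669241s, matching podStartSLOduration.
	fmt.Println(observed.Sub(created))
}
```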
Jan 23 19:11:43.023262 kubelet[2882]: E0123 19:11:43.021313 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:11:43.182713 systemd-networkd[1472]: lxc_health: Gained IPv6LL
Jan 23 19:11:44.154506 kubelet[2882]: E0123 19:11:44.152348 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 19:11:56.750991 sshd[5431]: Connection closed by 10.0.0.1 port 39678
Jan 23 19:11:56.752615 sshd-session[5404]: pam_unix(sshd:session): session closed for user core
Jan 23 19:11:56.825780 systemd[1]: sshd@52-10.0.0.46:22-10.0.0.1:39678.service: Deactivated successfully.
Jan 23 19:11:56.838686 systemd[1]: session-53.scope: Deactivated successfully.
Jan 23 19:11:56.840668 systemd[1]: session-53.scope: Consumed 1.567s CPU time, 26.6M memory peak.
Jan 23 19:11:56.849558 systemd-logind[1561]: Session 53 logged out. Waiting for processes to exit.
Jan 23 19:11:56.859571 systemd-logind[1561]: Removed session 53.
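The dns.go:153 "Nameserver limits exceeded" errors that recur through this whole span mean the node's resolv.conf lists more nameservers than the limit of three that kubelet enforces (mirroring the classic glibc MAXNS limit), so the surplus entries are dropped and only 1.1.1.1 1.0.0.1 8.8.8.8 are applied. A sketch of that truncation, assuming the standard /etc/resolv.conf location:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const maxNameservers = 3 // resolver limit kubelet enforces

	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Collect every "nameserver <addr>" line in file order.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	// Anything past the limit is omitted, which is what the log warns about.
	if len(servers) > maxNameservers {
		fmt.Printf("omitting %d nameserver(s)\n", len(servers)-maxNameservers)
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```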