Jan 20 02:14:01.719571 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:14:52 -00 2026
Jan 20 02:14:01.719857 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 02:14:01.723050 kernel: BIOS-provided physical RAM map:
Jan 20 02:14:01.723066 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 02:14:01.723074 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 02:14:01.723082 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 02:14:01.723091 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 02:14:01.723099 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 02:14:01.723107 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 02:14:01.723115 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 02:14:01.723124 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 20 02:14:01.723135 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 20 02:14:01.723148 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 20 02:14:01.723156 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 20 02:14:01.723166 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 20 02:14:01.723174 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 02:14:01.723183 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 20 02:14:01.723194 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 20 02:14:01.723203 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 20 02:14:01.723211 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 20 02:14:01.723219 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 20 02:14:01.723228 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 02:14:01.723239 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 02:14:01.723249 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 02:14:01.723257 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 02:14:01.723265 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 02:14:01.723274 kernel: NX (Execute Disable) protection: active
Jan 20 02:14:01.723282 kernel: APIC: Static calls initialized
Jan 20 02:14:01.723294 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 20 02:14:01.723303 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 20 02:14:01.723311 kernel: extended physical RAM map:
Jan 20 02:14:01.723320 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 02:14:01.723329 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 02:14:01.723340 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 02:14:01.723350 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 02:14:01.723359 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 02:14:01.723367 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 02:14:01.723376 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 02:14:01.723385 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 20 02:14:01.723397 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 20 02:14:01.723410 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 20 02:14:01.723419 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 20 02:14:01.723429 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 20 02:14:01.723441 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 20 02:14:01.723453 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 20 02:14:01.723462 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 20 02:14:01.723471 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 20 02:14:01.723480 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 02:14:01.723489 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 20 02:14:01.723497 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 20 02:14:01.723506 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 20 02:14:01.723515 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 20 02:14:01.723525 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 20 02:14:01.723536 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 02:14:01.723546 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 02:14:01.723558 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 02:14:01.723567 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 02:14:01.723576 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 02:14:01.723585 kernel: efi: EFI v2.7 by EDK II
Jan 20 02:14:01.723594 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 20 02:14:01.723602 kernel: random: crng init done
Jan 20 02:14:01.723611 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 20 02:14:01.723621 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 20 02:14:01.723633 kernel: secureboot: Secure boot disabled
Jan 20 02:14:01.723642 kernel: SMBIOS 2.8 present.
Jan 20 02:14:01.723651 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 20 02:14:01.723664 kernel: DMI: Memory slots populated: 1/1
Jan 20 02:14:01.723672 kernel: Hypervisor detected: KVM
Jan 20 02:14:01.723681 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 20 02:14:01.723690 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 02:14:01.723699 kernel: kvm-clock: using sched offset of 50105604764 cycles
Jan 20 02:14:01.723758 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 02:14:01.723858 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 02:14:01.723868 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 02:14:01.723878 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 02:14:01.723887 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 20 02:14:01.723896 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 20 02:14:01.723909 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 02:14:01.723919 kernel: Using GB pages for direct mapping
Jan 20 02:14:01.726571 kernel: ACPI: Early table checksum verification disabled
Jan 20 02:14:01.726582 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 20 02:14:01.726593 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 20 02:14:01.726603 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:14:01.726615 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:14:01.726625 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 20 02:14:01.726641 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:14:01.726650 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:14:01.726660 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:14:01.726669 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:14:01.726678 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 20 02:14:01.726688 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 20 02:14:01.726697 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 20 02:14:01.726707 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 20 02:14:01.727185 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 20 02:14:01.727199 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 20 02:14:01.727209 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 20 02:14:01.727219 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 20 02:14:01.727230 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 20 02:14:01.727242 kernel: No NUMA configuration found
Jan 20 02:14:01.727251 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 20 02:14:01.727261 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 20 02:14:01.727270 kernel: Zone ranges:
Jan 20 02:14:01.727280 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 02:14:01.727294 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 20 02:14:01.727303 kernel: Normal empty
Jan 20 02:14:01.727312 kernel: Device empty
Jan 20 02:14:01.727321 kernel: Movable zone start for each node
Jan 20 02:14:01.727330 kernel: Early memory node ranges
Jan 20 02:14:01.727339 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 20 02:14:01.727349 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 20 02:14:01.727359 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 20 02:14:01.727368 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 20 02:14:01.727384 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 20 02:14:01.727395 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 20 02:14:01.727404 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 20 02:14:01.727413 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 20 02:14:01.727423 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 20 02:14:01.727432 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 02:14:01.727452 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 20 02:14:01.727464 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 20 02:14:01.727474 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 02:14:01.727483 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 20 02:14:01.727494 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 20 02:14:01.727504 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 20 02:14:01.727520 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 20 02:14:01.727532 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 20 02:14:01.727541 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 02:14:01.727551 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 02:14:01.727560 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 02:14:01.727573 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 02:14:01.727583 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 02:14:01.727593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 02:14:01.727602 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 02:14:01.727612 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 02:14:01.727623 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 02:14:01.727635 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 02:14:01.727646 kernel: TSC deadline timer available
Jan 20 02:14:01.727655 kernel: CPU topo: Max. logical packages: 1
Jan 20 02:14:01.727668 kernel: CPU topo: Max. logical dies: 1
Jan 20 02:14:01.727678 kernel: CPU topo: Max. dies per package: 1
Jan 20 02:14:01.727687 kernel: CPU topo: Max. threads per core: 1
Jan 20 02:14:01.727697 kernel: CPU topo: Num. cores per package: 4
Jan 20 02:14:01.727706 kernel: CPU topo: Num. threads per package: 4
Jan 20 02:14:01.727715 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 02:14:01.727725 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 02:14:01.727734 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 02:14:01.727744 kernel: kvm-guest: setup PV sched yield
Jan 20 02:14:01.727758 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 20 02:14:01.733323 kernel: Booting paravirtualized kernel on KVM
Jan 20 02:14:01.733336 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 02:14:01.733346 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 02:14:01.733356 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 02:14:01.733365 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 02:14:01.733375 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 02:14:01.733385 kernel: kvm-guest: PV spinlocks enabled
Jan 20 02:14:01.733398 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 02:14:01.733416 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 02:14:01.733426 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 02:14:01.733436 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 02:14:01.733446 kernel: Fallback order for Node 0: 0
Jan 20 02:14:01.733456 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 20 02:14:01.733466 kernel: Policy zone: DMA32
Jan 20 02:14:01.733477 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 02:14:01.733488 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 02:14:01.733505 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 20 02:14:01.733515 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 02:14:01.733524 kernel: Dynamic Preempt: voluntary
Jan 20 02:14:01.733534 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 02:14:01.733545 kernel: rcu: RCU event tracing is enabled.
Jan 20 02:14:01.733555 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 02:14:01.733568 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 02:14:01.733581 kernel: Rude variant of Tasks RCU enabled.
Jan 20 02:14:01.733591 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 02:14:01.733600 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 02:14:01.733615 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 02:14:01.733624 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:14:01.733634 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:14:01.733646 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:14:01.733657 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 02:14:01.733669 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 02:14:01.733679 kernel: Console: colour dummy device 80x25
Jan 20 02:14:01.733688 kernel: printk: legacy console [ttyS0] enabled
Jan 20 02:14:01.733701 kernel: ACPI: Core revision 20240827
Jan 20 02:14:01.733711 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 02:14:01.733722 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 02:14:01.733732 kernel: x2apic enabled
Jan 20 02:14:01.733743 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 02:14:01.733755 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 02:14:01.733860 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 02:14:01.733875 kernel: kvm-guest: setup PV IPIs
Jan 20 02:14:01.733885 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 02:14:01.733900 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 02:14:01.733910 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 02:14:01.737703 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 02:14:01.738431 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 02:14:01.738445 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 02:14:01.738457 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 02:14:01.738467 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 02:14:01.738477 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 02:14:01.738486 kernel: Speculative Store Bypass: Vulnerable
Jan 20 02:14:01.738502 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 02:14:01.738513 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 02:14:01.738522 kernel: active return thunk: srso_alias_return_thunk
Jan 20 02:14:01.738532 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 02:14:01.738545 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 02:14:01.738556 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 02:14:01.738567 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 02:14:01.738577 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 02:14:01.738586 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 02:14:01.738600 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 02:14:01.738610 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 02:14:01.738619 kernel: Freeing SMP alternatives memory: 32K
Jan 20 02:14:01.738629 kernel: pid_max: default: 32768 minimum: 301
Jan 20 02:14:01.738639 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 02:14:01.738652 kernel: landlock: Up and running.
Jan 20 02:14:01.738662 kernel: SELinux: Initializing.
Jan 20 02:14:01.738671 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 02:14:01.738681 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 02:14:01.738695 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 02:14:01.738705 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 02:14:01.739320 kernel: signal: max sigframe size: 1776
Jan 20 02:14:01.739335 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 02:14:01.739348 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 02:14:01.739358 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 02:14:01.739368 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 02:14:01.739377 kernel: smp: Bringing up secondary CPUs ...
Jan 20 02:14:01.739391 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 02:14:01.739401 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 02:14:01.739410 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 02:14:01.739420 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 02:14:01.739432 kernel: Memory: 2414472K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46204K init, 2556K bss, 145388K reserved, 0K cma-reserved)
Jan 20 02:14:01.739444 kernel: devtmpfs: initialized
Jan 20 02:14:01.739454 kernel: x86/mm: Memory block size: 128MB
Jan 20 02:14:01.739464 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 20 02:14:01.739473 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 20 02:14:01.739486 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 20 02:14:01.739496 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 20 02:14:01.739506 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 20 02:14:01.739516 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 20 02:14:01.739525 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 02:14:01.739538 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 02:14:01.739550 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 02:14:01.739562 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 02:14:01.739572 kernel: audit: initializing netlink subsys (disabled)
Jan 20 02:14:01.739585 kernel: audit: type=2000 audit(1768875206.167:1): state=initialized audit_enabled=0 res=1
Jan 20 02:14:01.739595 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 02:14:01.739605 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 02:14:01.739615 kernel: cpuidle: using governor menu
Jan 20 02:14:01.739624 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 02:14:01.739634 kernel: dca service started, version 1.12.1
Jan 20 02:14:01.739645 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 20 02:14:01.739658 kernel: PCI: Using configuration type 1 for base access
Jan 20 02:14:01.739668 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 02:14:01.739681 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 02:14:01.739691 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 02:14:01.739701 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 02:14:01.739711 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 02:14:01.739720 kernel: ACPI: Added _OSI(Module Device)
Jan 20 02:14:01.739729 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 02:14:01.739739 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 02:14:01.739752 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 02:14:01.740593 kernel: ACPI: Interpreter enabled
Jan 20 02:14:01.740613 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 02:14:01.740626 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 02:14:01.740637 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 02:14:01.740647 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 02:14:01.740656 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 02:14:01.740666 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 02:14:01.747389 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 02:14:01.747604 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 02:14:01.747888 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 02:14:01.747906 kernel: PCI host bridge to bus 0000:00
Jan 20 02:14:01.751273 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 02:14:01.751436 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 02:14:01.753195 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 02:14:01.753377 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 20 02:14:01.753596 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 20 02:14:01.753760 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 20 02:14:01.754086 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 02:14:01.754673 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 02:14:01.755025 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 02:14:01.755193 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 20 02:14:01.755411 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 20 02:14:01.755582 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 20 02:14:01.755740 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 02:14:01.756070 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 16601 usecs
Jan 20 02:14:01.756317 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 02:14:01.756483 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 20 02:14:01.756652 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 20 02:14:01.757014 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 20 02:14:01.757198 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 02:14:01.757368 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 20 02:14:01.757539 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 20 02:14:01.757711 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 20 02:14:01.758062 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 02:14:01.758238 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 20 02:14:01.758412 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 20 02:14:01.758577 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 20 02:14:01.759504 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 20 02:14:01.759700 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 02:14:01.760025 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 02:14:01.760461 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 02:14:01.760632 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 20 02:14:01.760990 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 20 02:14:01.761229 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 02:14:01.761401 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 20 02:14:01.761417 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 02:14:01.761427 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 02:14:01.761437 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 02:14:01.761447 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 02:14:01.761456 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 02:14:01.761471 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 02:14:01.761484 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 02:14:01.761495 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 02:14:01.761505 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 02:14:01.761514 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 02:14:01.761524 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 02:14:01.761533 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 02:14:01.761543 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 02:14:01.761552 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 02:14:01.761566 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 02:14:01.761576 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 02:14:01.761588 kernel: iommu: Default domain type: Translated
Jan 20 02:14:01.761600 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 02:14:01.761609 kernel: efivars: Registered efivars operations
Jan 20 02:14:01.761620 kernel: PCI: Using ACPI for IRQ routing
Jan 20 02:14:01.761630 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 02:14:01.761640 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 20 02:14:01.761649 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 20 02:14:01.761663 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 20 02:14:01.761672 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 20 02:14:01.761683 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 20 02:14:01.761695 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 20 02:14:01.761705 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 20 02:14:01.761715 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 20 02:14:01.762193 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 02:14:01.762361 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 02:14:01.762525 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 02:14:01.762540 kernel: vgaarb: loaded
Jan 20 02:14:01.762551 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 02:14:01.762563 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 02:14:01.762575 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 02:14:01.762584 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 02:14:01.762594 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 02:14:01.762605 kernel: pnp: PnP ACPI init
Jan 20 02:14:01.762987 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 20 02:14:01.763015 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 02:14:01.763027 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 02:14:01.763038 kernel: NET: Registered PF_INET protocol family
Jan 20 02:14:01.763048 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 02:14:01.763057 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 02:14:01.763068 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 02:14:01.763102 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 02:14:01.763116 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 02:14:01.763129 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 02:14:01.763139 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 02:14:01.763149 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 02:14:01.763159 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 02:14:01.763173 kernel: NET: Registered PF_XDP protocol family
Jan 20 02:14:01.763345 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 20 02:14:01.763516 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 20 02:14:01.763685 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 02:14:01.767184 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 02:14:01.767348 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 02:14:01.767508 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 20 02:14:01.769294 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 20 02:14:01.769464 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 20 02:14:01.769485 kernel: PCI: CLS 0 bytes, default 64
Jan 20 02:14:01.769500 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 02:14:01.769513 kernel: Initialise system trusted keyrings
Jan 20 02:14:01.769526 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 02:14:01.769546 kernel: Key type asymmetric registered
Jan 20 02:14:01.769556 kernel: Asymmetric key parser 'x509' registered
Jan 20 02:14:01.769566 kernel: hrtimer: interrupt took 5227929 ns
Jan 20 02:14:01.769577 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 02:14:01.769587 kernel: io scheduler mq-deadline registered
Jan 20 02:14:01.769599 kernel: io scheduler kyber registered
Jan 20 02:14:01.769611 kernel: io scheduler bfq registered
Jan 20 02:14:01.769623 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 02:14:01.769635 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 02:14:01.769650 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 02:14:01.769663 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 02:14:01.769674 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 02:14:01.769691 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 02:14:01.769701 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 02:14:01.769714 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 02:14:01.769724 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 02:14:01.770170 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 02:14:01.770701 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 02:14:01.771094 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 02:14:01.771261 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T02:13:56 UTC (1768875236)
Jan 20 02:14:01.771414 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 20 02:14:01.771430 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 02:14:01.771448 kernel: efifb: probing for efifb
Jan 20 02:14:01.771462 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 20 02:14:01.771472 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 20 02:14:01.771482 kernel: efifb: scrolling: redraw
Jan 20 02:14:01.771492 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 20 02:14:01.771502 kernel: Console: switching to colour frame buffer device 160x50
Jan 20 02:14:01.771512 kernel: fb0: EFI VGA frame buffer device
Jan 20 02:14:01.771522 kernel: pstore: Using crash dump compression: deflate
Jan 20 02:14:01.771532 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 20 02:14:01.774122 kernel: NET: Registered PF_INET6 protocol family
Jan 20 02:14:01.774134 kernel: Segment Routing with IPv6
Jan 20 02:14:01.774145 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 02:14:01.774156 kernel: NET: Registered PF_PACKET protocol family
Jan 20 02:14:01.774169 kernel: Key type dns_resolver registered
Jan 20 02:14:01.774181 kernel: IPI shorthand broadcast: enabled
Jan 20 02:14:01.774192 kernel: sched_clock: Marking stable (23237159380, 5694779770)->(32793807156, -3861868006)
Jan 20 02:14:01.774202 kernel: registered taskstats version 1
Jan 20 02:14:01.774212 kernel: Loading compiled-in X.509 certificates
Jan 20 02:14:01.774222 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 5eaf2083485884e476a8ac33c4b07b82eff139e9'
Jan 20 02:14:01.774239 kernel: Demotion targets for Node 0: null
Jan 20 02:14:01.774251 kernel: Key type .fscrypt registered
Jan 20 02:14:01.774261 kernel: Key type fscrypt-provisioning registered
Jan 20 02:14:01.774271 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 02:14:01.775316 kernel: ima: Allocated hash algorithm: sha1
Jan 20 02:14:01.775330 kernel: ima: No architecture policies found
Jan 20 02:14:01.775340 kernel: clk: Disabling unused clocks
Jan 20 02:14:01.775352 kernel: Warning: unable to open an initial console.
Jan 20 02:14:01.775370 kernel: Freeing unused kernel image (initmem) memory: 46204K
Jan 20 02:14:01.775380 kernel: Write protecting the kernel read-only data: 40960k
Jan 20 02:14:01.775391 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 20 02:14:01.775401 kernel: Run /init as init process
Jan 20 02:14:01.775411 kernel: with arguments:
Jan 20 02:14:01.775422 kernel: /init
Jan 20 02:14:01.775435 kernel: with environment:
Jan 20 02:14:01.776115 kernel: HOME=/
Jan 20 02:14:01.776126 kernel: TERM=linux
Jan 20 02:14:01.776191 systemd[1]: Successfully made /usr/ read-only.
Jan 20 02:14:01.776209 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 02:14:01.776221 systemd[1]: Detected virtualization kvm.
Jan 20 02:14:01.776232 systemd[1]: Detected architecture x86-64.
Jan 20 02:14:01.776243 systemd[1]: Running in initrd.
Jan 20 02:14:01.776254 systemd[1]: No hostname configured, using default hostname.
Jan 20 02:14:01.776266 systemd[1]: Hostname set to .
Jan 20 02:14:01.776281 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 02:14:01.776293 systemd[1]: Queued start job for default target initrd.target.
Jan 20 02:14:01.776304 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 02:14:01.776316 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 02:14:01.776329 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 02:14:01.776341 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 02:14:01.776352 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 02:14:01.776368 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 02:14:01.776381 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 02:14:01.776393 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 02:14:01.776405 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 02:14:01.776416 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 02:14:01.776427 systemd[1]: Reached target paths.target - Path Units.
Jan 20 02:14:01.776439 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 02:14:01.776451 systemd[1]: Reached target swap.target - Swaps.
Jan 20 02:14:01.776465 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 02:14:01.776477 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 02:14:01.776488 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 02:14:01.776500 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 02:14:01.776511 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 20 02:14:01.776523 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 02:14:01.776534 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 02:14:01.776546 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 02:14:01.776557 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 02:14:01.776572 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 02:14:01.776586 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 02:14:01.776597 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 02:14:01.776608 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 20 02:14:01.776618 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 02:14:01.776630 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 02:14:01.776640 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 02:14:01.776652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:14:01.776670 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 02:14:01.776682 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 02:14:01.776692 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 02:14:01.776703 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 02:14:01.776844 systemd-journald[203]: Collecting audit messages is disabled.
Jan 20 02:14:01.776878 systemd-journald[203]: Journal started
Jan 20 02:14:01.776980 systemd-journald[203]: Runtime Journal (/run/log/journal/e2507be48a7a4a1b976d38b860abec85) is 6M, max 48.1M, 42.1M free.
Jan 20 02:14:01.758628 systemd-modules-load[205]: Inserted module 'overlay'
Jan 20 02:14:01.817009 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 02:14:01.859321 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:14:01.896636 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 02:14:01.960540 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 02:14:01.994181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 02:14:02.106149 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 02:14:02.356196 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 02:14:02.361391 systemd-tmpfiles[228]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 20 02:14:02.491177 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 02:14:02.641194 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 02:14:02.665294 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 02:14:02.915031 dracut-cmdline[241]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 02:14:03.245382 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 02:14:03.289475 kernel: Bridge firewalling registered
Jan 20 02:14:03.294378 systemd-modules-load[205]: Inserted module 'br_netfilter'
Jan 20 02:14:03.310345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 02:14:03.371368 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 02:14:03.496685 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 02:14:03.549257 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 02:14:03.799750 systemd-resolved[326]: Positive Trust Anchors:
Jan 20 02:14:03.816356 systemd-resolved[326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 02:14:03.816428 systemd-resolved[326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 02:14:03.841477 systemd-resolved[326]: Defaulting to hostname 'linux'.
Jan 20 02:14:03.862119 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 02:14:03.976482 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 02:14:04.126255 kernel: SCSI subsystem initialized
Jan 20 02:14:04.172440 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 02:14:04.296440 kernel: iscsi: registered transport (tcp)
Jan 20 02:14:04.463472 kernel: iscsi: registered transport (qla4xxx)
Jan 20 02:14:04.463887 kernel: QLogic iSCSI HBA Driver
Jan 20 02:14:04.758753 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 02:14:04.921058 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 02:14:04.987485 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 02:14:05.928400 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 02:14:05.979960 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 02:14:06.554120 kernel: raid6: avx2x4 gen() 4374 MB/s
Jan 20 02:14:06.584502 kernel: raid6: avx2x2 gen() 8733 MB/s
Jan 20 02:14:06.624224 kernel: raid6: avx2x1 gen() 4519 MB/s
Jan 20 02:14:06.624625 kernel: raid6: using algorithm avx2x2 gen() 8733 MB/s
Jan 20 02:14:06.664353 kernel: raid6: .... xor() 2711 MB/s, rmw enabled
Jan 20 02:14:06.664617 kernel: raid6: using avx2x2 recovery algorithm
Jan 20 02:14:06.910656 kernel: xor: automatically using best checksumming function avx
Jan 20 02:14:09.110312 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 02:14:09.209388 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 02:14:09.254610 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 02:14:09.495942 systemd-udevd[457]: Using default interface naming scheme 'v255'.
Jan 20 02:14:09.565492 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 02:14:09.626948 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 02:14:09.911405 dracut-pre-trigger[459]: rd.md=0: removing MD RAID activation
Jan 20 02:14:10.251117 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 02:14:10.307559 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 02:14:10.914756 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 02:14:11.052488 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 02:14:11.653940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 02:14:11.656689 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:14:11.723271 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:14:11.787573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:14:11.801909 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 20 02:14:11.850001 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 02:14:11.854414 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:14:11.939835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:14:12.260433 kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 02:14:12.260562 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 20 02:14:12.307967 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:14:12.359280 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 20 02:14:12.389345 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 02:14:12.389427 kernel: GPT:9289727 != 19775487
Jan 20 02:14:12.389448 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 02:14:12.395208 kernel: GPT:9289727 != 19775487
Jan 20 02:14:12.395272 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 02:14:12.407338 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 02:14:12.604327 kernel: libata version 3.00 loaded.
Jan 20 02:14:13.171285 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 20 02:14:13.248393 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 20 02:14:13.254626 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 20 02:14:13.366307 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 02:14:13.417456 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 20 02:14:13.570745 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 02:14:13.571271 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 02:14:13.571299 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 20 02:14:13.571532 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 20 02:14:13.571746 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 02:14:13.450377 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 20 02:14:13.710967 kernel: scsi host0: ahci
Jan 20 02:14:13.745530 kernel: scsi host1: ahci
Jan 20 02:14:13.747993 kernel: scsi host2: ahci
Jan 20 02:14:13.748265 kernel: scsi host3: ahci
Jan 20 02:14:13.752509 kernel: scsi host4: ahci
Jan 20 02:14:13.567475 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 20 02:14:13.778853 kernel: scsi host5: ahci
Jan 20 02:14:13.795656 disk-uuid[560]: Primary Header is updated.
Jan 20 02:14:13.795656 disk-uuid[560]: Secondary Entries is updated.
Jan 20 02:14:13.795656 disk-uuid[560]: Secondary Header is updated.
Jan 20 02:14:14.017620 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 lpm-pol 1
Jan 20 02:14:14.017722 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 lpm-pol 1
Jan 20 02:14:14.017739 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 lpm-pol 1
Jan 20 02:14:14.017753 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 lpm-pol 1
Jan 20 02:14:14.017843 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 lpm-pol 1
Jan 20 02:14:14.017863 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 lpm-pol 1
Jan 20 02:14:14.017877 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 02:14:14.017890 kernel: AES CTR mode by8 optimization enabled
Jan 20 02:14:14.017905 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 02:14:14.181862 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 20 02:14:14.182228 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 20 02:14:14.214657 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 20 02:14:14.264612 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 20 02:14:14.285855 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 20 02:14:14.317858 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 02:14:14.317925 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 02:14:14.317943 kernel: ata3.00: applying bridge limits
Jan 20 02:14:14.346993 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 20 02:14:14.364951 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 02:14:14.365123 kernel: ata3.00: configured for UDMA/100
Jan 20 02:14:14.400759 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 20 02:14:14.952259 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 02:14:14.952635 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 02:14:14.954230 disk-uuid[561]: Warning: The kernel is still using the old partition table.
Jan 20 02:14:14.954230 disk-uuid[561]: The new table will be used at the next reboot or after you
Jan 20 02:14:14.954230 disk-uuid[561]: run partprobe(8) or kpartx(8)
Jan 20 02:14:14.954230 disk-uuid[561]: The operation has completed successfully.
Jan 20 02:14:15.075264 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 20 02:14:16.239612 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 20 02:14:16.239908 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 20 02:14:16.298870 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 20 02:14:16.321267 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 02:14:16.336644 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 02:14:16.373343 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 02:14:16.400917 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 02:14:16.443305 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 02:14:16.513320 sh[650]: Success
Jan 20 02:14:16.680854 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 02:14:16.882062 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 02:14:16.882304 kernel: device-mapper: uevent: version 1.0.3
Jan 20 02:14:16.920296 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 20 02:14:17.170620 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 20 02:14:17.347344 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 02:14:17.368940 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 20 02:14:17.438353 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 20 02:14:17.502286 kernel: BTRFS: device fsid 1cad4abe-82cb-4052-9906-9dfb1f3e3340 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (669)
Jan 20 02:14:17.540637 kernel: BTRFS info (device dm-0): first mount of filesystem 1cad4abe-82cb-4052-9906-9dfb1f3e3340
Jan 20 02:14:17.541141 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:14:17.626390 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 02:14:17.626489 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 20 02:14:17.651744 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 20 02:14:17.680017 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 02:14:17.689317 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 20 02:14:17.694255 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 20 02:14:17.713014 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 20 02:14:18.002397 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (704)
Jan 20 02:14:18.051703 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 02:14:18.051831 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:14:18.129693 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 02:14:18.129833 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 02:14:18.165335 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 02:14:18.199570 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 20 02:14:18.221726 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 02:14:21.310493 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 02:14:21.436883 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 02:14:21.564217 ignition[762]: Ignition 2.22.0 Jan 20 02:14:21.564299 ignition[762]: Stage: fetch-offline Jan 20 02:14:21.564354 ignition[762]: no configs at "/usr/lib/ignition/base.d" Jan 20 02:14:21.564370 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:14:21.564646 ignition[762]: parsed url from cmdline: "" Jan 20 02:14:21.564653 ignition[762]: no config URL provided Jan 20 02:14:21.564706 ignition[762]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 02:14:21.564724 ignition[762]: no config at "/usr/lib/ignition/user.ign" Jan 20 02:14:21.564846 ignition[762]: op(1): [started] loading QEMU firmware config module Jan 20 02:14:21.564856 ignition[762]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 02:14:21.668746 ignition[762]: op(1): [finished] loading QEMU firmware config module Jan 20 02:14:22.206002 systemd-networkd[843]: lo: Link UP Jan 20 02:14:22.206044 systemd-networkd[843]: lo: Gained carrier Jan 20 02:14:22.265250 systemd-networkd[843]: Enumeration completed Jan 20 02:14:22.266240 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 02:14:22.290897 systemd[1]: Reached target network.target - Network. Jan 20 02:14:22.295330 systemd-networkd[843]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 02:14:22.295337 systemd-networkd[843]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 02:14:22.352956 systemd-networkd[843]: eth0: Link UP Jan 20 02:14:22.361717 systemd-networkd[843]: eth0: Gained carrier Jan 20 02:14:22.361744 systemd-networkd[843]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 02:14:22.690357 systemd-networkd[843]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 02:14:23.303002 ignition[762]: parsing config with SHA512: 2fd1c29c25e651e01a554c5b35c2fb814ff0cbc95d11fe405c84b9166b56df0e3b3f0534c02a96d9c5124e1ffff8108197f85a269079b43282f6d6d848d38f86 Jan 20 02:14:23.818071 systemd-networkd[843]: eth0: Gained IPv6LL Jan 20 02:14:23.918879 unknown[762]: fetched base config from "system" Jan 20 02:14:23.918895 unknown[762]: fetched user config from "qemu" Jan 20 02:14:23.951626 ignition[762]: fetch-offline: fetch-offline passed Jan 20 02:14:23.951835 ignition[762]: Ignition finished successfully Jan 20 02:14:23.990511 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 02:14:24.064903 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 02:14:24.111025 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
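
The fetch-offline trace above shows how Ignition locates its config on the qemu platform: with no config URL on the kernel command line, it loads the qemu_fw_cfg module and reads the config out of QEMU's firmware config interface (hence "fetched user config from 'qemu'"). That blob is handed to the VM with the documented fw_cfg key; config.ign here is a placeholder path:

    qemu-system-x86_64 ... \
        -fw_cfg name=opt/com.coreos/config,file=./config.ign
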
Jan 20 02:14:25.055682 ignition[851]: Ignition 2.22.0 Jan 20 02:14:25.055695 ignition[851]: Stage: kargs Jan 20 02:14:25.056028 ignition[851]: no configs at "/usr/lib/ignition/base.d" Jan 20 02:14:25.056049 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:14:26.304031 ignition[851]: kargs: kargs passed Jan 20 02:14:26.304304 ignition[851]: Ignition finished successfully Jan 20 02:14:26.354938 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 02:14:26.392283 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 02:14:27.005901 ignition[859]: Ignition 2.22.0 Jan 20 02:14:27.023865 ignition[859]: Stage: disks Jan 20 02:14:27.024172 ignition[859]: no configs at "/usr/lib/ignition/base.d" Jan 20 02:14:27.024189 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:14:27.116095 ignition[859]: disks: disks passed Jan 20 02:14:27.144575 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 02:14:27.116184 ignition[859]: Ignition finished successfully Jan 20 02:14:27.158422 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 02:14:27.158670 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 02:14:27.158731 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 02:14:27.158923 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 02:14:27.159013 systemd[1]: Reached target basic.target - Basic System. Jan 20 02:14:27.383875 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 02:14:27.600312 systemd-fsck[869]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 20 02:14:27.654005 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 02:14:27.791019 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 02:14:29.970857 kernel: EXT4-fs (vda9): mounted filesystem d87587c2-84ee-4a64-a55e-c6773c94f548 r/w with ordered data mode. Quota mode: none. Jan 20 02:14:29.983124 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 02:14:30.001221 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 02:14:30.060498 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 02:14:30.142950 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 02:14:30.159192 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 02:14:30.278300 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (878) Jan 20 02:14:30.278350 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 02:14:30.278368 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 02:14:30.159367 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 02:14:30.159410 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 02:14:30.346546 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 02:14:30.373917 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
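
Before /sysroot is mounted, systemd-fsck-root checks the ext4 root filesystem by label; "clean, 15/553520 files, 52789/553472 blocks" means the superblock was marked clean and no repair pass was needed. The same check can be repeated by hand from a rescue shell; -n opens the device read-only and answers "no" to every prompt:

    e2fsck -n /dev/disk/by-label/ROOT
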
Jan 20 02:14:30.409076 kernel: BTRFS info (device vda6): turning on async discard Jan 20 02:14:30.409156 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 02:14:30.454210 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 02:14:30.853949 initrd-setup-root[902]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 02:14:30.961211 initrd-setup-root[909]: cut: /sysroot/etc/group: No such file or directory Jan 20 02:14:31.047354 initrd-setup-root[916]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 02:14:31.109357 initrd-setup-root[923]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 02:14:32.278720 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 02:14:32.288589 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 02:14:32.351543 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 02:14:32.403929 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 02:14:32.438997 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 02:14:32.709633 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 02:14:32.982367 ignition[990]: INFO : Ignition 2.22.0 Jan 20 02:14:32.982367 ignition[990]: INFO : Stage: mount Jan 20 02:14:32.982367 ignition[990]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 02:14:32.982367 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:14:32.982367 ignition[990]: INFO : mount: mount passed Jan 20 02:14:32.982367 ignition[990]: INFO : Ignition finished successfully Jan 20 02:14:33.066589 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 02:14:33.115568 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 02:14:33.275245 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 02:14:33.488562 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1004) Jan 20 02:14:33.533946 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 02:14:33.534038 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 02:14:33.686970 kernel: BTRFS info (device vda6): turning on async discard Jan 20 02:14:33.689053 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 02:14:33.756522 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 02:14:34.122839 ignition[1020]: INFO : Ignition 2.22.0 Jan 20 02:14:34.122839 ignition[1020]: INFO : Stage: files Jan 20 02:14:34.165174 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 02:14:34.165174 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:14:34.216888 ignition[1020]: DEBUG : files: compiled without relabeling support, skipping Jan 20 02:14:34.286434 ignition[1020]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 02:14:34.286434 ignition[1020]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 02:14:34.365546 ignition[1020]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 02:14:34.365546 ignition[1020]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 02:14:34.365546 ignition[1020]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 02:14:34.365546 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 02:14:34.365546 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 20 02:14:34.331080 unknown[1020]: wrote ssh authorized keys file for user: core Jan 20 02:14:34.867218 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 02:14:36.545920 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 02:14:36.545920 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 20 02:14:36.545920 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 20 02:14:36.973229 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 20 02:14:39.350636 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 20 02:14:39.350636 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 20 02:14:39.397264 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 02:14:39.397264 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 02:14:39.397264 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 02:14:39.397264 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 02:14:39.397264 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 02:14:39.397264 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 02:14:39.397264 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 02:14:39.710673 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 02:14:39.710673 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 02:14:39.710673 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 02:14:39.710673 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 02:14:39.710673 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 02:14:39.710673 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 20 02:14:40.354388 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 20 02:14:44.220322 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 02:14:44.220322 ignition[1020]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 20 02:14:44.309066 ignition[1020]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 02:14:44.345613 ignition[1020]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 02:14:44.345613 ignition[1020]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 20 02:14:44.345613 ignition[1020]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 20 02:14:44.345613 ignition[1020]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 02:14:44.345613 ignition[1020]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 02:14:44.345613 ignition[1020]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 20 02:14:44.345613 ignition[1020]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 02:14:44.645018 ignition[1020]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 02:14:44.682203 ignition[1020]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 02:14:44.682203 ignition[1020]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 20 02:14:44.682203 ignition[1020]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 20 02:14:44.682203 ignition[1020]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 02:14:44.682203 ignition[1020]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 02:14:44.774674 ignition[1020]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 02:14:44.774674 ignition[1020]: INFO : files: files passed Jan 20 02:14:44.774674 ignition[1020]: INFO : Ignition finished successfully Jan 20 02:14:44.771084 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 02:14:44.900198 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 02:14:44.950169 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 02:14:45.048464 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 02:14:45.048673 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 02:14:45.125140 initrd-setup-root-after-ignition[1050]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 02:14:45.147525 initrd-setup-root-after-ignition[1052]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 02:14:45.147525 initrd-setup-root-after-ignition[1052]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 02:14:45.200990 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 02:14:45.250587 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 02:14:45.276010 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 02:14:45.289973 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 02:14:45.563978 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 02:14:45.567121 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 02:14:45.608465 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 02:14:45.616585 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 02:14:45.616739 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 02:14:45.626707 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 02:14:45.811638 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 02:14:45.842941 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 02:14:45.953151 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 02:14:45.968211 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 02:14:45.969689 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 02:14:46.010521 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 02:14:46.010880 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 02:14:46.057255 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 02:14:46.075860 systemd[1]: Stopped target basic.target - Basic System. Jan 20 02:14:46.078663 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 02:14:46.079967 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 02:14:46.083737 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 02:14:46.101130 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
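
The Ignition files stage above is a fairly typical provisioning run: create the core user, fetch artifacts over HTTPS (helm, cilium, the kubernetes sysext image), write units, and flip presets. As a rough illustration, a Butane config like the sketch below would produce the helm download seen in op(3); the variant/version pair and file mode are illustrative, not recovered from this machine's actual config:

    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
          mode: 0644
          contents:
            source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz

Butane renders this into the JSON that Ignition actually consumes, e.g. butane --pretty --strict config.bu > config.ign.
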
Jan 20 02:14:46.215439 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 02:14:46.258737 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 02:14:46.269004 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 02:14:46.269129 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 02:14:46.269246 systemd[1]: Stopped target swap.target - Swaps. Jan 20 02:14:46.274557 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 02:14:46.274914 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 02:14:46.368211 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 02:14:46.380635 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 02:14:46.388956 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 02:14:46.391364 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 02:14:46.447754 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 02:14:46.448152 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 02:14:46.468197 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 02:14:46.474621 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 02:14:46.526110 systemd[1]: Stopped target paths.target - Path Units. Jan 20 02:14:46.537614 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 02:14:46.565730 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 02:14:46.595089 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 02:14:46.613859 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 02:14:46.622106 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 02:14:46.622528 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 02:14:46.623717 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 02:14:46.624008 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 02:14:46.664308 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 02:14:46.664595 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 02:14:46.677168 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 02:14:46.677355 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 02:14:46.787937 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 02:14:46.800877 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 02:14:46.801134 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 02:14:46.891526 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 02:14:46.913465 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 02:14:46.913732 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 02:14:46.939932 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 02:14:46.940185 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 02:14:47.034314 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 02:14:47.034553 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 20 02:14:47.099144 ignition[1076]: INFO : Ignition 2.22.0 Jan 20 02:14:47.099144 ignition[1076]: INFO : Stage: umount Jan 20 02:14:47.099144 ignition[1076]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 02:14:47.099144 ignition[1076]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:14:47.099144 ignition[1076]: INFO : umount: umount passed Jan 20 02:14:47.099144 ignition[1076]: INFO : Ignition finished successfully Jan 20 02:14:47.171758 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 02:14:47.186689 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 02:14:47.186965 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 02:14:47.208903 systemd[1]: Stopped target network.target - Network. Jan 20 02:14:47.224679 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 02:14:47.224911 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 02:14:47.272679 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 02:14:47.273589 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 02:14:47.304328 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 02:14:47.304668 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 02:14:47.328369 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 02:14:47.328562 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 02:14:47.342924 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 02:14:47.355150 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 02:14:47.408183 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 02:14:47.411648 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 02:14:47.439104 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 02:14:47.439290 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 02:14:47.486088 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 20 02:14:47.488511 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 02:14:47.488846 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 02:14:47.534286 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 20 02:14:47.540609 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 02:14:47.547371 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 02:14:47.547524 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 02:14:47.617136 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 02:14:47.619457 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 02:14:47.665995 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 02:14:47.685739 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 02:14:47.685980 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 02:14:47.698587 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 02:14:47.698707 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 02:14:47.725635 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 02:14:47.725742 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 20 02:14:47.739908 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 02:14:47.740038 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 02:14:47.786122 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 02:14:47.921164 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 20 02:14:47.923480 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 20 02:14:47.924272 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 02:14:47.924619 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 02:14:47.971967 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 02:14:47.972977 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 02:14:47.983245 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 02:14:47.983385 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 02:14:48.038474 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 02:14:48.039632 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 02:14:48.068713 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 02:14:48.071313 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 02:14:48.089894 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 02:14:48.090025 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 02:14:48.093932 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 02:14:48.144710 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 02:14:48.144986 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 02:14:48.196210 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 02:14:48.196316 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 02:14:48.241947 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 02:14:48.243110 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:14:48.302678 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 20 02:14:48.302975 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 20 02:14:48.303059 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 20 02:14:48.304095 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 02:14:48.307616 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 02:14:48.338627 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 02:14:48.338946 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 02:14:48.368547 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 02:14:48.380945 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 02:14:48.517333 systemd[1]: Switching root. Jan 20 02:14:48.644977 systemd-journald[203]: Journal stopped Jan 20 02:14:54.935172 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). 
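
"Journal stopped" followed by journald receiving SIGTERM from PID 1 is the normal hand-off at switch-root, not a failure: the initramfs journald instance exits, and the real root's instance (started a few lines below) picks up the runtime journal. After boot, the whole sequence, initrd portion included, can be replayed with:

    journalctl -b -o short-precise
    journalctl -b -u ignition-files.service
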
Jan 20 02:14:54.935335 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 02:14:54.935361 kernel: SELinux: policy capability open_perms=1 Jan 20 02:14:54.935378 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 02:14:54.935406 kernel: SELinux: policy capability always_check_network=0 Jan 20 02:14:54.935424 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 02:14:54.935444 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 02:14:54.935521 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 02:14:54.935597 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 02:14:54.935619 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 02:14:54.935635 kernel: audit: type=1403 audit(1768875289.406:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 02:14:54.935655 systemd[1]: Successfully loaded SELinux policy in 250.863ms. Jan 20 02:14:54.935687 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 39.310ms. Jan 20 02:14:54.935709 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 02:14:54.935729 systemd[1]: Detected virtualization kvm. Jan 20 02:14:54.935746 systemd[1]: Detected architecture x86-64. Jan 20 02:14:54.935836 systemd[1]: Detected first boot. Jan 20 02:14:54.935896 systemd[1]: Initializing machine ID from VM UUID. Jan 20 02:14:54.935921 zram_generator::config[1121]: No configuration found. Jan 20 02:14:54.935945 kernel: Guest personality initialized and is inactive Jan 20 02:14:54.935961 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 02:14:54.935977 kernel: Initialized host personality Jan 20 02:14:54.935992 kernel: NET: Registered PF_VSOCK protocol family Jan 20 02:14:54.936008 systemd[1]: Populated /etc with preset unit settings. Jan 20 02:14:54.936026 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 20 02:14:54.936045 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 02:14:54.936104 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 02:14:54.936124 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 02:14:54.936142 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 02:14:54.936161 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 02:14:54.936180 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 02:14:54.936197 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 02:14:54.936212 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 02:14:54.936228 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 02:14:54.936299 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 02:14:54.936317 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 02:14:54.936332 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 20 02:14:54.936348 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 02:14:54.936366 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 02:14:54.936385 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 02:14:54.936401 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 02:14:54.943262 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 02:14:54.943292 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 02:14:54.943313 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 02:14:54.943331 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 02:14:54.943348 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 02:14:54.943366 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 02:14:54.943384 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 02:14:54.943403 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 02:14:54.943420 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 02:14:54.943438 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 02:14:54.945595 systemd[1]: Reached target slices.target - Slice Units. Jan 20 02:14:54.945669 systemd[1]: Reached target swap.target - Swaps. Jan 20 02:14:54.945691 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 02:14:54.945752 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 02:14:54.945854 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 02:14:54.945871 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 02:14:54.945890 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 02:14:54.945909 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 02:14:54.945926 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 02:14:54.945989 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 02:14:54.946011 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 02:14:54.946034 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 02:14:54.946049 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:14:54.946064 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 02:14:54.946081 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 02:14:54.946099 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 02:14:54.946115 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 02:14:54.946180 systemd[1]: Reached target machines.target - Containers. Jan 20 02:14:54.946200 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jan 20 02:14:54.946216 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 02:14:54.946232 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 02:14:54.946247 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 02:14:54.946265 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 02:14:54.946279 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 02:14:54.946294 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 02:14:54.946310 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 02:14:54.946368 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 02:14:54.946389 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 02:14:54.946408 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 02:14:54.946424 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 02:14:54.946439 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 02:14:54.946512 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 02:14:54.946531 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 02:14:54.946550 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 02:14:54.946643 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 02:14:54.946661 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 02:14:54.946677 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 02:14:54.946695 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 02:14:54.946711 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 02:14:54.946884 systemd-journald[1206]: Collecting audit messages is disabled. Jan 20 02:14:54.946920 systemd-journald[1206]: Journal started Jan 20 02:14:54.946949 systemd-journald[1206]: Runtime Journal (/run/log/journal/e2507be48a7a4a1b976d38b860abec85) is 6M, max 48.1M, 42.1M free. Jan 20 02:14:52.477169 systemd[1]: Queued start job for default target multi-user.target. Jan 20 02:14:52.508555 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 02:14:52.512182 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 02:14:52.513646 systemd[1]: systemd-journald.service: Consumed 2.496s CPU time. Jan 20 02:14:54.978163 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 02:14:54.978267 systemd[1]: Stopped verity-setup.service. Jan 20 02:14:54.978295 kernel: ACPI: bus type drm_connector registered Jan 20 02:14:55.031603 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:14:55.070959 systemd[1]: Started systemd-journald.service - Journal Service. 
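
The modprobe@configfs, modprobe@dm_mod, modprobe@loop, ... jobs being queued here are all instances of systemd's modprobe@.service template: the text after "@" is the instance name, which the unit passes to modprobe, so each module load gets its own short-lived service. For example:

    systemctl cat modprobe@.service        # show the template unit
    systemctl start modprobe@fuse.service  # load the 'fuse' module through it
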
Jan 20 02:14:55.071053 kernel: loop: module loaded Jan 20 02:14:55.095197 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 02:14:55.111060 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 02:14:55.129283 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 02:14:55.147093 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 02:14:55.163164 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 02:14:55.186907 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 02:14:55.208210 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 02:14:55.227389 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 02:14:55.269257 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 02:14:55.270295 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 02:14:55.300554 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 02:14:55.301441 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 02:14:55.325028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 02:14:55.325669 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 02:14:55.345694 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 02:14:55.368422 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 02:14:55.400417 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 02:14:55.428911 kernel: fuse: init (API version 7.41) Jan 20 02:14:55.429999 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 02:14:55.430305 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 02:14:55.455035 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 02:14:55.461936 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 02:14:55.488530 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 02:14:55.488902 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 02:14:55.511540 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 02:14:55.615280 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 02:14:55.646334 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 02:14:55.686122 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 02:14:55.722594 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 02:14:55.729063 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 02:14:55.747259 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 02:14:55.776640 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 02:14:55.793977 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 02:14:55.810727 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 02:14:55.838961 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 20 02:14:55.861584 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 02:14:55.872106 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 02:14:55.885565 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 02:14:55.897681 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 02:14:55.980311 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 02:14:55.998427 systemd-journald[1206]: Time spent on flushing to /var/log/journal/e2507be48a7a4a1b976d38b860abec85 is 114.903ms for 1070 entries. Jan 20 02:14:55.998427 systemd-journald[1206]: System Journal (/var/log/journal/e2507be48a7a4a1b976d38b860abec85) is 8M, max 195.6M, 187.6M free. Jan 20 02:14:56.365667 systemd-journald[1206]: Received client request to flush runtime journal. Jan 20 02:14:56.366165 kernel: loop0: detected capacity change from 0 to 110984 Jan 20 02:14:56.091660 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 02:14:56.162257 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 02:14:56.211186 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 02:14:56.251158 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 02:14:56.280400 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 02:14:56.356879 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 02:14:56.422933 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 02:14:56.450891 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 02:14:56.509146 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 02:14:56.667252 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 02:14:56.681429 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 02:14:56.741732 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 02:14:56.809356 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 02:14:56.832922 kernel: loop1: detected capacity change from 0 to 224512 Jan 20 02:14:56.840340 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 02:14:57.008638 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jan 20 02:14:57.008695 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jan 20 02:14:57.031760 kernel: loop2: detected capacity change from 0 to 128560 Jan 20 02:14:57.033620 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 02:14:57.149146 kernel: loop3: detected capacity change from 0 to 110984 Jan 20 02:14:57.299417 kernel: loop4: detected capacity change from 0 to 224512 Jan 20 02:14:57.395387 kernel: loop5: detected capacity change from 0 to 128560 Jan 20 02:14:57.486394 (sd-merge)[1265]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 20 02:14:57.487416 (sd-merge)[1265]: Merged extensions into '/usr'. Jan 20 02:14:57.512107 systemd[1]: Reload requested from client PID 1240 ('systemd-sysext') (unit systemd-sysext.service)... 
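
The (sd-merge) lines are systemd-sysext at work: the containerd-flatcar, docker-flatcar and kubernetes extension images are overlaid onto /usr, which is also what the paired loop device capacity-change messages above appear to reflect. The kubernetes image is the .raw file Ignition placed under /opt/extensions earlier, linked from /etc/extensions, one of the directories sysext scans. Useful follow-ups on the running system:

    systemd-sysext status    # which extensions are merged, and where
    ls -l /etc/extensions    # kubernetes.raw -> /opt/extensions/kubernetes/...
    systemd-sysext refresh   # unmerge and remerge after changing images
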
Jan 20 02:14:57.512159 systemd[1]: Reloading... Jan 20 02:14:59.313921 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2107603837 wd_nsec: 2107602878 Jan 20 02:14:59.530403 zram_generator::config[1287]: No configuration found. Jan 20 02:15:00.493566 ldconfig[1235]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 02:15:00.642005 systemd[1]: Reloading finished in 3127 ms. Jan 20 02:15:00.734388 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 02:15:00.751593 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 02:15:00.763508 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 02:15:00.832674 systemd[1]: Starting ensure-sysext.service... Jan 20 02:15:00.860213 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 02:15:00.926162 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 02:15:01.001180 systemd[1]: Reload requested from client PID 1329 ('systemctl') (unit ensure-sysext.service)... Jan 20 02:15:01.001200 systemd[1]: Reloading... Jan 20 02:15:01.049966 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 02:15:01.050061 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 02:15:01.054579 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 02:15:01.060423 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 02:15:01.073348 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 02:15:01.075289 systemd-tmpfiles[1330]: ACLs are not supported, ignoring. Jan 20 02:15:01.079662 systemd-tmpfiles[1330]: ACLs are not supported, ignoring. Jan 20 02:15:01.112446 systemd-tmpfiles[1330]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 02:15:01.112465 systemd-tmpfiles[1330]: Skipping /boot Jan 20 02:15:01.164396 systemd-tmpfiles[1330]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 02:15:01.164414 systemd-tmpfiles[1330]: Skipping /boot Jan 20 02:15:01.222411 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Jan 20 02:15:01.504000 zram_generator::config[1361]: No configuration found. Jan 20 02:15:05.673907 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 20 02:15:05.776021 kernel: ACPI: button: Power Button [PWRF] Jan 20 02:15:05.961899 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 02:15:06.161836 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 02:15:06.191029 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 02:15:06.191388 systemd[1]: Reloading finished in 5187 ms. Jan 20 02:15:06.305095 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 02:15:06.406924 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
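
The "Duplicate line for path ..., ignoring" warnings from systemd-tmpfiles below are benign: two tmpfiles.d fragments declare the same path, and the first line parsed wins while the rest are skipped. The merged configuration, annotated with the fragment each line comes from, can be dumped with:

    systemd-tmpfiles --cat-config
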
Jan 20 02:15:06.674862 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 20 02:15:06.715888 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 02:15:06.716269 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 02:15:06.770073 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:15:06.939552 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 02:15:09.425549 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 02:15:09.460025 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 02:15:09.480920 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 02:15:09.505267 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 02:15:09.612938 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 02:15:09.655213 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 02:15:09.670363 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 02:15:09.682256 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 02:15:09.713142 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 02:15:09.751109 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 02:15:09.903858 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 02:15:10.012027 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 02:15:10.080073 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 02:15:10.101400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 02:15:10.120738 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:15:10.146179 augenrules[1480]: No rules Jan 20 02:15:10.147424 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 02:15:10.151520 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 02:15:10.242880 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 02:15:10.245105 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 02:15:10.286897 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 02:15:10.287927 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 02:15:10.320121 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 02:15:10.376595 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 02:15:10.422061 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 02:15:10.459365 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 02:15:10.570057 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 20 02:15:10.790624 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 02:15:10.838153 systemd[1]: Finished ensure-sysext.service. Jan 20 02:15:10.924173 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 02:15:10.924491 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 02:15:10.958559 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 02:15:10.985545 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 02:15:11.111141 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 02:15:11.140603 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 02:15:11.282505 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 02:15:11.422537 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 02:15:11.449171 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 02:15:11.845702 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:15:12.245486 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 02:15:13.603358 systemd-networkd[1474]: lo: Link UP Jan 20 02:15:13.603412 systemd-networkd[1474]: lo: Gained carrier Jan 20 02:15:13.618585 systemd-networkd[1474]: Enumeration completed Jan 20 02:15:13.623410 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 02:15:13.632632 systemd-networkd[1474]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 02:15:13.632642 systemd-networkd[1474]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 02:15:13.648869 systemd-networkd[1474]: eth0: Link UP Jan 20 02:15:13.654557 systemd-networkd[1474]: eth0: Gained carrier Jan 20 02:15:13.654845 systemd-networkd[1474]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 02:15:13.661033 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 02:15:13.683555 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 02:15:13.765219 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 02:15:13.794471 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 02:15:13.836929 systemd-networkd[1474]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 02:15:13.839145 systemd-timesyncd[1495]: Network configuration changed, trying to establish connection. Jan 20 02:15:14.283608 systemd-timesyncd[1495]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 02:15:14.283697 systemd-timesyncd[1495]: Initial clock synchronization to Tue 2026-01-20 02:15:14.283475 UTC. Jan 20 02:15:14.286116 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
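
Both here and in the initrd, eth0 only matched the catch-all /usr/lib/systemd/network/zz-default.network, hence the "potentially unpredictable interface name" warning. Matching on a stable attribute such as the MAC address avoids that; the unit below is a hypothetical override and the address a placeholder:

    # /etc/systemd/network/10-eth0.network
    [Match]
    MACAddress=52:54:00:00:00:01

    [Network]
    DHCP=yes
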
Jan 20 02:15:14.425399 systemd-resolved[1476]: Positive Trust Anchors: Jan 20 02:15:14.426739 systemd-resolved[1476]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 02:15:14.430299 systemd-resolved[1476]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 02:15:14.487962 systemd-resolved[1476]: Defaulting to hostname 'linux'. Jan 20 02:15:14.504171 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 02:15:14.526745 systemd[1]: Reached target network.target - Network. Jan 20 02:15:14.555685 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 02:15:14.579733 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 02:15:14.593180 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 02:15:14.617556 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 02:15:14.639520 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 02:15:14.661337 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 02:15:14.694999 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 02:15:14.716851 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 02:15:14.734675 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 02:15:14.734791 systemd[1]: Reached target paths.target - Path Units. Jan 20 02:15:14.755145 systemd[1]: Reached target timers.target - Timer Units. Jan 20 02:15:14.772656 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 02:15:14.796503 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 02:15:14.820378 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 02:15:14.844230 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 02:15:14.874222 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 02:15:14.978522 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 02:15:15.010871 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 02:15:15.056262 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 02:15:15.081901 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 02:15:15.113845 systemd[1]: Reached target basic.target - Basic System. Jan 20 02:15:15.135395 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 02:15:15.137195 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 20 02:15:15.145302 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 02:15:15.170269 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 02:15:15.317373 systemd-networkd[1474]: eth0: Gained IPv6LL Jan 20 02:15:15.328346 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 02:15:15.378215 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 02:15:15.409305 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 02:15:15.417992 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 02:15:15.427687 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 02:15:15.492248 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 02:15:15.522190 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 02:15:15.531091 jq[1522]: false Jan 20 02:15:15.557357 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 02:15:15.596445 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Refreshing passwd entry cache Jan 20 02:15:15.597121 oslogin_cache_refresh[1524]: Refreshing passwd entry cache Jan 20 02:15:15.609803 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 02:15:15.708119 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Failure getting users, quitting Jan 20 02:15:15.708119 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 02:15:15.708119 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Refreshing group entry cache Jan 20 02:15:15.706791 oslogin_cache_refresh[1524]: Failure getting users, quitting Jan 20 02:15:15.706828 oslogin_cache_refresh[1524]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 02:15:15.707135 oslogin_cache_refresh[1524]: Refreshing group entry cache Jan 20 02:15:15.726970 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Failure getting groups, quitting Jan 20 02:15:15.726970 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 02:15:15.724751 oslogin_cache_refresh[1524]: Failure getting groups, quitting Jan 20 02:15:15.724775 oslogin_cache_refresh[1524]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 02:15:15.730380 extend-filesystems[1523]: Found /dev/vda6 Jan 20 02:15:15.764531 extend-filesystems[1523]: Found /dev/vda9 Jan 20 02:15:15.774409 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 02:15:15.808866 extend-filesystems[1523]: Checking size of /dev/vda9 Jan 20 02:15:15.821394 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 02:15:15.824667 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 02:15:15.833387 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 02:15:15.882268 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 20 02:15:15.992241 extend-filesystems[1523]: Resized partition /dev/vda9 Jan 20 02:15:16.026660 extend-filesystems[1549]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 02:15:16.059465 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 02:15:16.126508 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 02:15:16.138165 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 02:15:16.209719 jq[1541]: true Jan 20 02:15:16.224481 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 02:15:16.229240 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 02:15:16.244572 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 02:15:16.267214 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 02:15:16.288522 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 02:15:16.315862 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 02:15:16.395610 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 02:15:16.402723 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 02:15:16.592402 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 02:15:16.619467 update_engine[1538]: I20260120 02:15:16.611382 1538 main.cc:92] Flatcar Update Engine starting Jan 20 02:15:16.641485 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 02:15:16.692816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:15:16.768138 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 02:15:16.794495 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 02:15:16.827214 jq[1552]: true Jan 20 02:15:16.895109 extend-filesystems[1549]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 02:15:16.895109 extend-filesystems[1549]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 02:15:16.895109 extend-filesystems[1549]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 20 02:15:17.111421 extend-filesystems[1523]: Resized filesystem in /dev/vda9 Jan 20 02:15:17.144241 sshd_keygen[1540]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 02:15:16.916894 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 02:15:16.918558 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 02:15:17.217191 (ntainerd)[1567]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 02:15:17.328239 tar[1551]: linux-amd64/LICENSE Jan 20 02:15:17.328239 tar[1551]: linux-amd64/helm Jan 20 02:15:17.641258 dbus-daemon[1520]: [system] SELinux support is enabled Jan 20 02:15:17.814772 update_engine[1538]: I20260120 02:15:17.698096 1538 update_check_scheduler.cc:74] Next update check in 3m15s Jan 20 02:15:17.646877 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 02:15:17.721521 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 02:15:17.731456 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 02:15:17.840757 systemd[1]: Started update-engine.service - Update Engine. Jan 20 02:15:17.914594 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
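extend-filesystems.service above grew the mounted root from 553472 to 1864699 4k blocks without unmounting it. A sketch of the equivalent manual step, assuming the root filesystem really is /dev/vda9 as logged:

# resize2fs performs an on-line resize when the ext4 filesystem is mounted;
# with no size argument it grows to fill the already-enlarged partition.
sudo resize2fs /dev/vda9
# Verify the result matches the kernel's "EXT4-fs ... resized" message.
df -h /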
Jan 20 02:15:18.165856 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 02:15:18.166589 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 02:15:18.166671 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 02:15:18.211747 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 02:15:18.211847 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 02:15:18.285227 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 02:15:18.309170 bash[1601]: Updated "/home/core/.ssh/authorized_keys" Jan 20 02:15:18.361625 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 02:15:18.461694 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 02:15:18.513427 systemd-logind[1533]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 02:15:18.513461 systemd-logind[1533]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 02:15:18.514777 systemd-logind[1533]: New seat seat0. Jan 20 02:15:18.624282 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 02:15:18.683382 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 02:15:19.338331 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 02:15:19.931314 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 02:15:19.931912 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 02:15:20.013511 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 02:15:20.091107 locksmithd[1608]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 02:15:20.665570 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 02:15:20.731809 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 02:15:22.297753 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 02:15:22.310126 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 02:15:26.400192 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 02:15:26.487418 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:41472.service - OpenSSH per-connection server daemon (10.0.0.1:41472). 
Jan 20 02:15:26.740162 containerd[1567]: time="2026-01-20T02:15:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 02:15:26.742408 containerd[1567]: time="2026-01-20T02:15:26.742366427Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 20 02:15:26.767170 containerd[1567]: time="2026-01-20T02:15:26.766544587Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="76.423µs" Jan 20 02:15:26.767170 containerd[1567]: time="2026-01-20T02:15:26.766596644Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 02:15:26.767170 containerd[1567]: time="2026-01-20T02:15:26.766653560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 02:15:26.767170 containerd[1567]: time="2026-01-20T02:15:26.766941728Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 02:15:26.767170 containerd[1567]: time="2026-01-20T02:15:26.766964631Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 02:15:26.769580 containerd[1567]: time="2026-01-20T02:15:26.769538886Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 02:15:26.769888 containerd[1567]: time="2026-01-20T02:15:26.769861208Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 02:15:26.769974 containerd[1567]: time="2026-01-20T02:15:26.769956074Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 02:15:26.770617 containerd[1567]: time="2026-01-20T02:15:26.770582213Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 02:15:26.770707 containerd[1567]: time="2026-01-20T02:15:26.770688151Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 02:15:26.770796 containerd[1567]: time="2026-01-20T02:15:26.770772989Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 02:15:26.770863 containerd[1567]: time="2026-01-20T02:15:26.770844983Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 02:15:26.771198 containerd[1567]: time="2026-01-20T02:15:26.771170591Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 02:15:26.771871 containerd[1567]: time="2026-01-20T02:15:26.771845471Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 02:15:26.771988 containerd[1567]: time="2026-01-20T02:15:26.771966568Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 02:15:26.773172 containerd[1567]: time="2026-01-20T02:15:26.772160108Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 02:15:26.773172 containerd[1567]: time="2026-01-20T02:15:26.772282968Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 02:15:26.773172 containerd[1567]: time="2026-01-20T02:15:26.772913164Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 02:15:26.773700 containerd[1567]: time="2026-01-20T02:15:26.773581281Z" level=info msg="metadata content store policy set" policy=shared Jan 20 02:15:26.809293 containerd[1567]: time="2026-01-20T02:15:26.809234679Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 02:15:26.809681 containerd[1567]: time="2026-01-20T02:15:26.809653731Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 02:15:26.810560 containerd[1567]: time="2026-01-20T02:15:26.810533354Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 02:15:26.810728 containerd[1567]: time="2026-01-20T02:15:26.810706557Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 02:15:26.810809 containerd[1567]: time="2026-01-20T02:15:26.810793319Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 02:15:26.810874 containerd[1567]: time="2026-01-20T02:15:26.810859872Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 02:15:26.810940 containerd[1567]: time="2026-01-20T02:15:26.810926958Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 02:15:26.812730 containerd[1567]: time="2026-01-20T02:15:26.812701310Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 02:15:26.812833 containerd[1567]: time="2026-01-20T02:15:26.812811395Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 02:15:26.812914 containerd[1567]: time="2026-01-20T02:15:26.812897516Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 02:15:26.812987 containerd[1567]: time="2026-01-20T02:15:26.812968959Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 02:15:26.813201 containerd[1567]: time="2026-01-20T02:15:26.813167209Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 02:15:26.813472 containerd[1567]: time="2026-01-20T02:15:26.813450568Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 02:15:26.816203 containerd[1567]: time="2026-01-20T02:15:26.813642897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 02:15:26.816203 containerd[1567]: time="2026-01-20T02:15:26.813839825Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 02:15:26.816203 containerd[1567]: time="2026-01-20T02:15:26.813871123Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 02:15:26.816203 containerd[1567]: time="2026-01-20T02:15:26.813887304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 02:15:26.816203 containerd[1567]: time="2026-01-20T02:15:26.813900519Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 02:15:26.816203 containerd[1567]: time="2026-01-20T02:15:26.813915095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 02:15:26.818156 containerd[1567]: time="2026-01-20T02:15:26.813999423Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 02:15:26.818156 containerd[1567]: time="2026-01-20T02:15:26.817824743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 02:15:26.818156 containerd[1567]: time="2026-01-20T02:15:26.817847616Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 02:15:26.818156 containerd[1567]: time="2026-01-20T02:15:26.817903460Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 02:15:26.818615 containerd[1567]: time="2026-01-20T02:15:26.818590442Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 02:15:26.818741 containerd[1567]: time="2026-01-20T02:15:26.818722719Z" level=info msg="Start snapshots syncer" Jan 20 02:15:26.822202 containerd[1567]: time="2026-01-20T02:15:26.821382944Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 02:15:26.822497 containerd[1567]: time="2026-01-20T02:15:26.822450998Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 02:15:26.823384 containerd[1567]: time="2026-01-20T02:15:26.823358722Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 02:15:26.823805 containerd[1567]: time="2026-01-20T02:15:26.823778917Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 02:15:26.825243 containerd[1567]: time="2026-01-20T02:15:26.825218484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 02:15:26.825527 containerd[1567]: time="2026-01-20T02:15:26.825503696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 02:15:26.826163 containerd[1567]: time="2026-01-20T02:15:26.825603181Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 02:15:26.826163 containerd[1567]: time="2026-01-20T02:15:26.825625724Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 02:15:26.826163 containerd[1567]: time="2026-01-20T02:15:26.825690154Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 02:15:26.826163 containerd[1567]: time="2026-01-20T02:15:26.825706354Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 02:15:26.826163 containerd[1567]: time="2026-01-20T02:15:26.825749395Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 02:15:26.826163 containerd[1567]: time="2026-01-20T02:15:26.825783167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 02:15:26.826163 containerd[1567]: 
time="2026-01-20T02:15:26.825797344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 02:15:26.826163 containerd[1567]: time="2026-01-20T02:15:26.825811049Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 02:15:26.826919 containerd[1567]: time="2026-01-20T02:15:26.826894482Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 02:15:26.828742 containerd[1567]: time="2026-01-20T02:15:26.826993687Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 02:15:26.828840 containerd[1567]: time="2026-01-20T02:15:26.828819064Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 02:15:26.828929 containerd[1567]: time="2026-01-20T02:15:26.828907449Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 02:15:26.828999 containerd[1567]: time="2026-01-20T02:15:26.828981527Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 02:15:26.829251 containerd[1567]: time="2026-01-20T02:15:26.829229050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 02:15:26.829351 containerd[1567]: time="2026-01-20T02:15:26.829331470Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 02:15:26.829856 containerd[1567]: time="2026-01-20T02:15:26.829832626Z" level=info msg="runtime interface created" Jan 20 02:15:26.831170 containerd[1567]: time="2026-01-20T02:15:26.829921222Z" level=info msg="created NRI interface" Jan 20 02:15:26.831507 containerd[1567]: time="2026-01-20T02:15:26.831243109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 02:15:26.831507 containerd[1567]: time="2026-01-20T02:15:26.831274818Z" level=info msg="Connect containerd service" Jan 20 02:15:26.831507 containerd[1567]: time="2026-01-20T02:15:26.831358634Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 02:15:26.838238 containerd[1567]: time="2026-01-20T02:15:26.838199273Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 02:15:27.077265 kernel: kvm_amd: TSC scaling supported Jan 20 02:15:27.077493 kernel: kvm_amd: Nested Virtualization enabled Jan 20 02:15:27.077560 kernel: kvm_amd: Nested Paging enabled Jan 20 02:15:27.105347 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 02:15:27.105479 kernel: kvm_amd: PMU virtualization is disabled Jan 20 02:15:27.366558 tar[1551]: linux-amd64/README.md Jan 20 02:15:27.388240 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 41472 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:15:27.410728 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:15:27.433643 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 20 02:15:27.456478 containerd[1567]: time="2026-01-20T02:15:27.456371052Z" level=info msg="Start subscribing containerd event" Jan 20 02:15:27.478422 containerd[1567]: time="2026-01-20T02:15:27.456565264Z" level=info msg="Start recovering state" Jan 20 02:15:27.478422 containerd[1567]: time="2026-01-20T02:15:27.456937590Z" level=info msg="Start event monitor" Jan 20 02:15:27.478422 containerd[1567]: time="2026-01-20T02:15:27.456961274Z" level=info msg="Start cni network conf syncer for default" Jan 20 02:15:27.478422 containerd[1567]: time="2026-01-20T02:15:27.456975560Z" level=info msg="Start streaming server" Jan 20 02:15:27.478422 containerd[1567]: time="2026-01-20T02:15:27.457171816Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 02:15:27.478422 containerd[1567]: time="2026-01-20T02:15:27.457189269Z" level=info msg="runtime interface starting up..." Jan 20 02:15:27.478422 containerd[1567]: time="2026-01-20T02:15:27.457199338Z" level=info msg="starting plugins..." Jan 20 02:15:27.478422 containerd[1567]: time="2026-01-20T02:15:27.457225226Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 02:15:27.478422 containerd[1567]: time="2026-01-20T02:15:27.470678417Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 02:15:27.478422 containerd[1567]: time="2026-01-20T02:15:27.470793221Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 02:15:27.468240 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 02:15:27.484523 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 02:15:27.496964 containerd[1567]: time="2026-01-20T02:15:27.490585835Z" level=info msg="containerd successfully booted in 0.752697s" Jan 20 02:15:27.517686 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 02:15:27.610382 systemd-logind[1533]: New session 1 of user core. Jan 20 02:15:27.645949 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 02:15:27.664315 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 02:15:28.019910 (systemd)[1660]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 02:15:28.193992 systemd-logind[1533]: New session c1 of user core. Jan 20 02:15:30.316329 systemd[1660]: Queued start job for default target default.target. Jan 20 02:15:30.363746 systemd[1660]: Created slice app.slice - User Application Slice. Jan 20 02:15:30.363788 systemd[1660]: Reached target paths.target - Paths. Jan 20 02:15:30.363852 systemd[1660]: Reached target timers.target - Timers. Jan 20 02:15:30.368725 systemd[1660]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 02:15:30.528514 systemd[1660]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 02:15:30.528698 systemd[1660]: Reached target sockets.target - Sockets. Jan 20 02:15:30.528784 systemd[1660]: Reached target basic.target - Basic System. Jan 20 02:15:30.528908 systemd[1660]: Reached target default.target - Main User Target. Jan 20 02:15:30.528973 systemd[1660]: Startup finished in 2.284s. Jan 20 02:15:30.535147 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 02:15:30.827760 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 02:15:31.214982 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:41608.service - OpenSSH per-connection server daemon (10.0.0.1:41608). 
Jan 20 02:15:31.793496 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 41608 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:15:31.802325 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:15:31.831754 systemd-logind[1533]: New session 2 of user core. Jan 20 02:15:31.955868 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 02:15:32.164277 sshd[1674]: Connection closed by 10.0.0.1 port 41608 Jan 20 02:15:32.165483 sshd-session[1671]: pam_unix(sshd:session): session closed for user core Jan 20 02:15:32.244198 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:41608.service: Deactivated successfully. Jan 20 02:15:32.269289 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 02:15:32.280447 systemd-logind[1533]: Session 2 logged out. Waiting for processes to exit. Jan 20 02:15:32.355239 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:41610.service - OpenSSH per-connection server daemon (10.0.0.1:41610). Jan 20 02:15:32.364818 systemd-logind[1533]: Removed session 2. Jan 20 02:15:33.804698 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 41610 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:15:33.824919 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:15:34.506982 systemd-logind[1533]: New session 3 of user core. Jan 20 02:15:34.533387 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 02:15:34.595783 kernel: EDAC MC: Ver: 3.0.0 Jan 20 02:15:34.751630 sshd[1683]: Connection closed by 10.0.0.1 port 41610 Jan 20 02:15:34.753858 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Jan 20 02:15:34.798082 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:41610.service: Deactivated successfully. Jan 20 02:15:34.808540 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 02:15:34.822599 systemd-logind[1533]: Session 3 logged out. Waiting for processes to exit. Jan 20 02:15:34.835460 systemd-logind[1533]: Removed session 3. Jan 20 02:15:35.834541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:15:35.835523 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 02:15:35.843569 systemd[1]: Startup finished in 23.903s (kernel) + 51.363s (initrd) + 46.255s (userspace) = 2min 1.522s. Jan 20 02:15:35.887913 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:15:37.831994 kubelet[1694]: E0120 02:15:37.830621 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:15:37.839681 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:15:37.839932 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:15:37.840583 systemd[1]: kubelet.service: Consumed 5.495s CPU time, 267.6M memory peak. Jan 20 02:15:45.069942 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:58270.service - OpenSSH per-connection server daemon (10.0.0.1:58270). 
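The kubelet failure above (and its repeats below) is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is written by kubeadm during cluster bootstrap, so kubelet.service crash-loops and gets rescheduled until that happens. A sketch, with an illustrative pod CIDR:

# On the control plane: generates /var/lib/kubelet/config.yaml and the
# bootstrap kubeconfig, after which kubelet.service starts cleanly.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# On a worker node, 'kubeadm join' with the token and CA-cert hash printed
# by init has the same effect on the kubelet config.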
Jan 20 02:15:45.567292 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 58270 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:15:45.597552 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:15:45.799178 systemd-logind[1533]: New session 4 of user core. Jan 20 02:15:45.822933 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 02:15:46.035462 sshd[1706]: Connection closed by 10.0.0.1 port 58270 Jan 20 02:15:46.041520 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Jan 20 02:15:46.164722 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:58274.service - OpenSSH per-connection server daemon (10.0.0.1:58274). Jan 20 02:15:46.172473 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:58270.service: Deactivated successfully. Jan 20 02:15:46.187816 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 02:15:46.206920 systemd-logind[1533]: Session 4 logged out. Waiting for processes to exit. Jan 20 02:15:46.238187 systemd-logind[1533]: Removed session 4. Jan 20 02:15:46.424655 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 58274 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:15:46.434899 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:15:46.523484 systemd-logind[1533]: New session 5 of user core. Jan 20 02:15:46.554989 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 02:15:46.720684 sshd[1715]: Connection closed by 10.0.0.1 port 58274 Jan 20 02:15:46.723622 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Jan 20 02:15:46.764997 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:58274.service: Deactivated successfully. Jan 20 02:15:46.780878 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 02:15:46.792277 systemd-logind[1533]: Session 5 logged out. Waiting for processes to exit. Jan 20 02:15:46.816487 systemd[1]: Started sshd@5-10.0.0.89:22-10.0.0.1:58294.service - OpenSSH per-connection server daemon (10.0.0.1:58294). Jan 20 02:15:46.821184 systemd-logind[1533]: Removed session 5. Jan 20 02:15:47.074519 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 58294 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:15:47.077601 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:15:47.123526 systemd-logind[1533]: New session 6 of user core. Jan 20 02:15:47.143642 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 02:15:47.362588 sshd[1724]: Connection closed by 10.0.0.1 port 58294 Jan 20 02:15:47.366827 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Jan 20 02:15:47.406945 systemd[1]: sshd@5-10.0.0.89:22-10.0.0.1:58294.service: Deactivated successfully. Jan 20 02:15:47.424705 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 02:15:47.440304 systemd-logind[1533]: Session 6 logged out. Waiting for processes to exit. Jan 20 02:15:47.474544 systemd[1]: Started sshd@6-10.0.0.89:22-10.0.0.1:58310.service - OpenSSH per-connection server daemon (10.0.0.1:58310). Jan 20 02:15:47.481323 systemd-logind[1533]: Removed session 6. 
Jan 20 02:15:47.740140 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 58310 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:15:47.744678 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:15:47.829243 systemd-logind[1533]: New session 7 of user core. Jan 20 02:15:47.853263 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 02:15:47.867417 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 02:15:47.879795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:15:48.090780 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 02:15:48.092637 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 02:15:48.174794 sudo[1735]: pam_unix(sudo:session): session closed for user root Jan 20 02:15:48.198212 sshd[1734]: Connection closed by 10.0.0.1 port 58310 Jan 20 02:15:48.200523 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Jan 20 02:15:48.258728 systemd[1]: Started sshd@7-10.0.0.89:22-10.0.0.1:58312.service - OpenSSH per-connection server daemon (10.0.0.1:58312). Jan 20 02:15:48.259723 systemd[1]: sshd@6-10.0.0.89:22-10.0.0.1:58310.service: Deactivated successfully. Jan 20 02:15:48.287169 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 02:15:48.313390 systemd-logind[1533]: Session 7 logged out. Waiting for processes to exit. Jan 20 02:15:48.337705 systemd-logind[1533]: Removed session 7. Jan 20 02:15:48.655476 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 58312 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:15:48.661670 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:15:48.699522 systemd-logind[1533]: New session 8 of user core. Jan 20 02:15:48.720449 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 02:15:49.003948 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 02:15:49.014946 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 02:15:49.111762 sudo[1748]: pam_unix(sudo:session): session closed for user root Jan 20 02:15:49.199197 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 02:15:49.203396 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 02:15:49.360469 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 02:15:49.801420 augenrules[1770]: No rules Jan 20 02:15:49.822110 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 02:15:49.822790 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 02:15:49.843469 sudo[1747]: pam_unix(sudo:session): session closed for user root Jan 20 02:15:49.888327 sshd[1746]: Connection closed by 10.0.0.1 port 58312 Jan 20 02:15:49.884725 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Jan 20 02:15:49.937468 systemd[1]: sshd@7-10.0.0.89:22-10.0.0.1:58312.service: Deactivated successfully. Jan 20 02:15:49.952695 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 02:15:49.978213 systemd-logind[1533]: Session 8 logged out. Waiting for processes to exit. 
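In the session above, the default rule files under /etc/audit/rules.d were removed and audit-rules restarted, which is why augenrules reports "No rules". A sketch of populating that directory again (the watched path and key are illustrative):

# augenrules compiles every *.rules file under /etc/audit/rules.d;
# "No rules" in the log simply means the directory was empty.
echo '-w /etc/ssh/sshd_config -p wa -k sshd_config' | sudo tee /etc/audit/rules.d/90-ssh.rules
# Reload via the same unit the log shows being restarted.
sudo systemctl restart audit-rules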
Jan 20 02:15:50.008957 systemd[1]: Started sshd@8-10.0.0.89:22-10.0.0.1:58328.service - OpenSSH per-connection server daemon (10.0.0.1:58328). Jan 20 02:15:50.044817 systemd-logind[1533]: Removed session 8. Jan 20 02:15:50.216843 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 58328 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:15:50.221603 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:15:50.276804 systemd-logind[1533]: New session 9 of user core. Jan 20 02:15:50.304789 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 02:15:50.445819 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 02:15:50.451245 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 02:15:51.435198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:15:51.841273 (kubelet)[1799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:15:55.888811 kubelet[1799]: E0120 02:15:55.888156 1799 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:15:55.921854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:15:55.922281 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:15:55.930292 systemd[1]: kubelet.service: Consumed 3.381s CPU time, 110.8M memory peak. Jan 20 02:16:01.181829 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 02:16:01.228629 (dockerd)[1819]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 02:16:03.342979 update_engine[1538]: I20260120 02:16:03.332701 1538 update_attempter.cc:509] Updating boot flags... Jan 20 02:16:04.443594 dockerd[1819]: time="2026-01-20T02:16:04.441273007Z" level=info msg="Starting up" Jan 20 02:16:04.461969 dockerd[1819]: time="2026-01-20T02:16:04.458224603Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 02:16:05.296223 dockerd[1819]: time="2026-01-20T02:16:05.295899301Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 02:16:05.901935 dockerd[1819]: time="2026-01-20T02:16:05.887212106Z" level=info msg="Loading containers: start." Jan 20 02:16:06.145130 kernel: Initializing XFRM netlink socket Jan 20 02:16:06.169405 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 02:16:06.243977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:16:09.258679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 02:16:09.318522 (kubelet)[1910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:16:10.664227 kubelet[1910]: E0120 02:16:10.656358 1910 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:16:10.676542 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:16:10.676840 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:16:10.681295 systemd[1]: kubelet.service: Consumed 1.461s CPU time, 110M memory peak. Jan 20 02:16:12.435195 systemd-networkd[1474]: docker0: Link UP Jan 20 02:16:12.568739 dockerd[1819]: time="2026-01-20T02:16:12.542303545Z" level=info msg="Loading containers: done." Jan 20 02:16:12.798900 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1732737317-merged.mount: Deactivated successfully. Jan 20 02:16:12.808101 dockerd[1819]: time="2026-01-20T02:16:12.807963521Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 02:16:12.815811 dockerd[1819]: time="2026-01-20T02:16:12.814111401Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 02:16:12.815811 dockerd[1819]: time="2026-01-20T02:16:12.814836442Z" level=info msg="Initializing buildkit" Jan 20 02:16:13.383261 dockerd[1819]: time="2026-01-20T02:16:13.378842102Z" level=info msg="Completed buildkit initialization" Jan 20 02:16:13.471578 dockerd[1819]: time="2026-01-20T02:16:13.448690458Z" level=info msg="Daemon has completed initialization" Jan 20 02:16:13.471578 dockerd[1819]: time="2026-01-20T02:16:13.451792346Z" level=info msg="API listen on /run/docker.sock" Jan 20 02:16:13.480178 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 02:16:16.923945 containerd[1567]: time="2026-01-20T02:16:16.922783748Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 02:16:18.796697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2405071400.mount: Deactivated successfully. Jan 20 02:16:21.090999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 02:16:21.158001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:16:24.188079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:16:24.236834 (kubelet)[2105]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:16:27.584946 kubelet[2105]: E0120 02:16:27.578715 2105 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:16:27.606877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:16:27.612772 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 20 02:16:27.619464 systemd[1]: kubelet.service: Consumed 2.589s CPU time, 112.2M memory peak. Jan 20 02:16:36.680897 containerd[1567]: time="2026-01-20T02:16:36.678127458Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 20 02:16:36.680897 containerd[1567]: time="2026-01-20T02:16:36.680246336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:16:36.698378 containerd[1567]: time="2026-01-20T02:16:36.698315177Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:16:36.724732 containerd[1567]: time="2026-01-20T02:16:36.722558740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:16:36.732703 containerd[1567]: time="2026-01-20T02:16:36.732645811Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 19.809400477s" Jan 20 02:16:36.736347 containerd[1567]: time="2026-01-20T02:16:36.732976317Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 02:16:36.767779 containerd[1567]: time="2026-01-20T02:16:36.766261434Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 02:16:37.682097 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 02:16:37.706831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:16:41.974492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:16:42.021716 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:16:44.495204 kubelet[2149]: E0120 02:16:44.488745 2149 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:16:44.739580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:16:44.833647 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:16:44.891148 systemd[1]: kubelet.service: Consumed 1.694s CPU time, 108.6M memory peak. Jan 20 02:16:55.095318 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 20 02:16:55.171239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:17:01.883501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
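The kube-apiserver pull above took roughly 19.8s and the later pulls stretch into the minutes while kubelet keeps restarting; the images can be fetched ahead of time so bootstrap finds them cached. A sketch, pinned to the tag seen in the log:

# Pre-pull everything kubeadm will need for this release in one go...
sudo kubeadm config images pull --kubernetes-version v1.32.11
# ...or pull a single image through the CRI, mirroring the PullImage calls logged here.
sudo crictl pull registry.k8s.io/kube-apiserver:v1.32.11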
Jan 20 02:17:01.984228 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:17:02.636782 kubelet[2171]: E0120 02:17:02.636222 2171 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:17:02.645386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:17:02.645825 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:17:02.658202 systemd[1]: kubelet.service: Consumed 1.874s CPU time, 110.3M memory peak. Jan 20 02:17:04.474340 containerd[1567]: time="2026-01-20T02:17:04.444746862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:17:04.487891 containerd[1567]: time="2026-01-20T02:17:04.487484281Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 20 02:17:04.493441 containerd[1567]: time="2026-01-20T02:17:04.491469931Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:17:04.509613 containerd[1567]: time="2026-01-20T02:17:04.507398819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:17:04.509613 containerd[1567]: time="2026-01-20T02:17:04.508530920Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 27.742215847s" Jan 20 02:17:04.509613 containerd[1567]: time="2026-01-20T02:17:04.508572427Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 02:17:04.520091 containerd[1567]: time="2026-01-20T02:17:04.519564848Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 02:17:12.981514 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 20 02:17:13.026372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 02:17:18.159314 containerd[1567]: time="2026-01-20T02:17:18.135582432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:17:18.236533 containerd[1567]: time="2026-01-20T02:17:18.235832723Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 20 02:17:18.271249 containerd[1567]: time="2026-01-20T02:17:18.269926693Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:17:18.298500 containerd[1567]: time="2026-01-20T02:17:18.297232635Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 13.777612154s" Jan 20 02:17:18.298500 containerd[1567]: time="2026-01-20T02:17:18.297295802Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 02:17:18.300368 containerd[1567]: time="2026-01-20T02:17:18.298926823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:17:18.315375 containerd[1567]: time="2026-01-20T02:17:18.315328722Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 02:17:19.679502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:17:19.742967 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:17:23.725363 kubelet[2191]: E0120 02:17:23.670414 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:17:23.963319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:17:23.971899 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:17:23.995844 systemd[1]: kubelet.service: Consumed 4.439s CPU time, 110.6M memory peak. Jan 20 02:17:31.146999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2669631625.mount: Deactivated successfully. Jan 20 02:17:34.318770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 20 02:17:34.367144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:17:41.426910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 02:17:42.299794 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:17:44.142426 kubelet[2217]: E0120 02:17:44.141622 2217 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:17:44.168719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:17:44.176542 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:17:44.178368 systemd[1]: kubelet.service: Consumed 3.458s CPU time, 108.1M memory peak. Jan 20 02:17:48.330294 containerd[1567]: time="2026-01-20T02:17:48.329487146Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 20 02:17:48.366557 containerd[1567]: time="2026-01-20T02:17:48.332234752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:17:48.435783 containerd[1567]: time="2026-01-20T02:17:48.427515307Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:17:48.993825 containerd[1567]: time="2026-01-20T02:17:48.992750553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:17:49.293509 containerd[1567]: time="2026-01-20T02:17:49.222229784Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 30.906662209s" Jan 20 02:17:49.293509 containerd[1567]: time="2026-01-20T02:17:49.222418414Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 02:17:49.674173 containerd[1567]: time="2026-01-20T02:17:49.673911095Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 02:17:50.917794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702470210.mount: Deactivated successfully. Jan 20 02:17:54.413207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 20 02:17:54.427911 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:17:56.127784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
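Each "Pulled image" line pairs a byte count with an elapsed time, so effective registry throughput can be read straight off the log; for kube-proxy that is 31161899 bytes in roughly 30.9s, about 1 MB/s, which is why the image downloads stretch across several minutes of this boot. A quick Go sketch of the arithmetic, with the values copied from the log above:

```go
// Back-computes pull throughput from the two figures containerd logs per
// image: "bytes read" and the duration in the "Pulled image ... in" line.
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 31161899 // registry.k8s.io/kube-proxy:v1.32.11, from the log
	elapsed, err := time.ParseDuration("30.906662209s")
	if err != nil {
		panic(err)
	}
	mbPerSec := float64(bytesRead) / elapsed.Seconds() / 1e6
	fmt.Printf("%d bytes in %s is about %.2f MB/s\n", bytesRead, elapsed, mbPerSec)
	// Prints: 31161899 bytes in 30.906662209s is about 1.01 MB/s
}
```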
Jan 20 02:17:56.186532 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:17:56.641921 kubelet[2285]: E0120 02:17:56.641202 2285 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:17:56.681597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:17:56.681872 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:17:56.689938 systemd[1]: kubelet.service: Consumed 500ms CPU time, 112.5M memory peak. Jan 20 02:17:59.693330 containerd[1567]: time="2026-01-20T02:17:59.692869077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:17:59.705702 containerd[1567]: time="2026-01-20T02:17:59.705451622Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 20 02:17:59.711798 containerd[1567]: time="2026-01-20T02:17:59.711659567Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:17:59.723899 containerd[1567]: time="2026-01-20T02:17:59.723437465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:17:59.728793 containerd[1567]: time="2026-01-20T02:17:59.728651425Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 10.044815837s" Jan 20 02:17:59.728793 containerd[1567]: time="2026-01-20T02:17:59.728755519Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 02:17:59.802634 containerd[1567]: time="2026-01-20T02:17:59.793487754Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 02:18:01.062979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4282581070.mount: Deactivated successfully. 
Jan 20 02:18:01.132630 containerd[1567]: time="2026-01-20T02:18:01.129669835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:18:01.166673 containerd[1567]: time="2026-01-20T02:18:01.166158711Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 20 02:18:01.175477 containerd[1567]: time="2026-01-20T02:18:01.172963843Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:18:01.198945 containerd[1567]: time="2026-01-20T02:18:01.192790354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 02:18:01.198945 containerd[1567]: time="2026-01-20T02:18:01.198588402Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.404813152s" Jan 20 02:18:01.202274 containerd[1567]: time="2026-01-20T02:18:01.200297462Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 02:18:01.222531 containerd[1567]: time="2026-01-20T02:18:01.219917811Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 02:18:03.176835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4266664466.mount: Deactivated successfully. Jan 20 02:18:06.930794 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 20 02:18:06.957917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:18:07.986474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:18:08.091648 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:18:09.311483 kubelet[2359]: E0120 02:18:09.307835 2359 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:18:09.355659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:18:09.355919 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:18:09.363857 systemd[1]: kubelet.service: Consumed 622ms CPU time, 110.2M memory peak. Jan 20 02:18:19.423944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 20 02:18:19.883897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:18:25.031824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
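Each pull resolves a floating tag to a content-addressed digest: the image ID and repo digest in these lines are the string "sha256:" followed by the hex SHA-256 of the referenced blob, and pause:3.10 additionally carries the io.cri-containerd.pinned=pinned label so the image garbage collector leaves it alone. A small Go sketch of the digest format follows; the blob here is made up, while a real repo digest hashes the image manifest bytes:

```go
// The "sha256:<hex>" strings in the log are content addresses: the hex
// SHA-256 of the blob being named (an image manifest or config). This
// sketch only demonstrates the format over a hypothetical blob.
package main

import (
	"crypto/sha256"
	"fmt"
)

func digestOf(blob []byte) string {
	sum := sha256.Sum256(blob)
	return fmt.Sprintf("sha256:%x", sum)
}

func main() {
	fmt.Println(digestOf([]byte("hypothetical manifest bytes")))
}
```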
Jan 20 02:18:25.180822 (kubelet)[2376]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:18:25.937513 kubelet[2376]: E0120 02:18:25.931449 2376 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:18:25.957765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:18:25.966478 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:18:25.973718 systemd[1]: kubelet.service: Consumed 1.721s CPU time, 110.8M memory peak. Jan 20 02:18:28.325716 containerd[1567]: time="2026-01-20T02:18:28.314314581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:18:28.392944 containerd[1567]: time="2026-01-20T02:18:28.376644976Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 20 02:18:28.416460 containerd[1567]: time="2026-01-20T02:18:28.409691247Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:18:28.643573 containerd[1567]: time="2026-01-20T02:18:28.528449344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:18:28.881461 containerd[1567]: time="2026-01-20T02:18:28.880486867Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 27.660519695s" Jan 20 02:18:28.881461 containerd[1567]: time="2026-01-20T02:18:28.880625364Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 02:18:33.411365 update_engine[1538]: I20260120 02:18:33.385449 1538 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 20 02:18:33.411365 update_engine[1538]: I20260120 02:18:33.385525 1538 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 20 02:18:33.411365 update_engine[1538]: I20260120 02:18:33.385900 1538 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 20 02:18:33.411365 update_engine[1538]: I20260120 02:18:33.395941 1538 omaha_request_params.cc:62] Current group set to stable Jan 20 02:18:33.449513 update_engine[1538]: I20260120 02:18:33.437949 1538 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 20 02:18:33.449513 update_engine[1538]: I20260120 02:18:33.437995 1538 update_attempter.cc:643] Scheduling an action processor start. 
Jan 20 02:18:33.449513 update_engine[1538]: I20260120 02:18:33.438102 1538 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 02:18:33.449513 update_engine[1538]: I20260120 02:18:33.438274 1538 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 20 02:18:33.449513 update_engine[1538]: I20260120 02:18:33.438452 1538 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 02:18:33.449513 update_engine[1538]: I20260120 02:18:33.438472 1538 omaha_request_action.cc:272] Request: Jan 20 02:18:33.449513 update_engine[1538]: [Omaha request XML body not captured] Jan 20 02:18:33.449513 update_engine[1538]: I20260120 02:18:33.438484 1538 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:18:33.463921 update_engine[1538]: I20260120 02:18:33.463863 1538 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:18:33.478933 update_engine[1538]: I20260120 02:18:33.478312 1538 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:18:33.486122 locksmithd[1608]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 20 02:18:33.542338 update_engine[1538]: E20260120 02:18:33.535576 1538 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:18:33.542338 update_engine[1538]: I20260120 02:18:33.539534 1538 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 20 02:18:36.074304 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 20 02:18:36.159756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:18:38.472733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:18:38.531872 (kubelet)[2419]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:18:39.815922 kubelet[2419]: E0120 02:18:39.815323 2419 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:18:39.839564 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:18:39.839885 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:18:39.850259 systemd[1]: kubelet.service: Consumed 847ms CPU time, 110.2M memory peak. Jan 20 02:18:43.343603 update_engine[1538]: I20260120 02:18:43.342764 1538 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:18:43.371197 update_engine[1538]: I20260120 02:18:43.351486 1538 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:18:43.371197 update_engine[1538]: I20260120 02:18:43.367431 1538 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
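update_engine is posting its Omaha check to the literal host name "disabled", which suggests the update server was configured to that placeholder value to switch off automatic updates; the request then fails in DNS before any HTTP status exists, hence "Unable to get http response code", and the fetcher retries on a 1-second timeout source. A Go sketch of that failure mode follows; the URL path and the request body are hypothetical stand-ins, not the elided XML from the log:

```go
// Reproduces the failure mode above: a POST to an unresolvable host fails
// in DNS before any HTTP status code exists, which is why update_engine
// logs "Unable to get http response code".
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// "disabled" is the literal server name from the log, not a real host;
	// the body is a hypothetical minimal Omaha envelope.
	resp, err := http.Post("http://disabled/v1/update/", "text/xml",
		strings.NewReader(`<request protocol="3.0"></request>`))
	if err != nil {
		// Typically a *url.Error wrapping "no such host".
		fmt.Println("omaha check failed:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("unexpected success:", resp.Status)
}
```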
Jan 20 02:18:43.394799 update_engine[1538]: E20260120 02:18:43.391222 1538 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:18:43.394799 update_engine[1538]: I20260120 02:18:43.391407 1538 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 20 02:18:47.624744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:18:47.625092 systemd[1]: kubelet.service: Consumed 847ms CPU time, 110.2M memory peak. Jan 20 02:18:47.653818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:18:48.087131 systemd[1]: Reload requested from client PID 2436 ('systemctl') (unit session-9.scope)... Jan 20 02:18:48.087153 systemd[1]: Reloading... Jan 20 02:18:49.080470 zram_generator::config[2479]: No configuration found. Jan 20 02:18:50.103454 systemd[1]: Reloading finished in 1994 ms. Jan 20 02:18:50.484902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:18:50.501780 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:18:50.513350 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 02:18:50.517771 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:18:50.517879 systemd[1]: kubelet.service: Consumed 424ms CPU time, 98.2M memory peak. Jan 20 02:18:50.534742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:18:51.968617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:18:52.004136 (kubelet)[2528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 02:18:52.444698 kubelet[2528]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 02:18:52.460206 kubelet[2528]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 02:18:52.460206 kubelet[2528]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 02:18:52.460206 kubelet[2528]: I0120 02:18:52.449861 2528 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 02:18:53.341360 update_engine[1538]: I20260120 02:18:53.335932 1538 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:18:53.341360 update_engine[1538]: I20260120 02:18:53.340543 1538 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:18:53.353961 update_engine[1538]: I20260120 02:18:53.350582 1538 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:18:53.367687 update_engine[1538]: E20260120 02:18:53.367204 1538 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:18:53.367687 update_engine[1538]: I20260120 02:18:53.367406 1538 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 20 02:18:55.067732 kubelet[2528]: I0120 02:18:55.066891 2528 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 02:18:55.067732 kubelet[2528]: I0120 02:18:55.066947 2528 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 02:18:55.089483 kubelet[2528]: I0120 02:18:55.085538 2528 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 02:18:55.276558 kubelet[2528]: E0120 02:18:55.276235 2528 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:18:55.288939 kubelet[2528]: I0120 02:18:55.281358 2528 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 02:18:55.452536 kubelet[2528]: I0120 02:18:55.451874 2528 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 02:18:55.534741 kubelet[2528]: I0120 02:18:55.532171 2528 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 02:18:55.534741 kubelet[2528]: I0120 02:18:55.532697 2528 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 02:18:55.534741 kubelet[2528]: I0120 02:18:55.532741 2528 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 02:18:55.534741 kubelet[2528]: I0120 02:18:55.533176 2528 
topology_manager.go:138] "Creating topology manager with none policy" Jan 20 02:18:55.535735 kubelet[2528]: I0120 02:18:55.533192 2528 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 02:18:55.535735 kubelet[2528]: I0120 02:18:55.533463 2528 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:18:55.563909 kubelet[2528]: I0120 02:18:55.559952 2528 kubelet.go:446] "Attempting to sync node with API server" Jan 20 02:18:55.563909 kubelet[2528]: I0120 02:18:55.560173 2528 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 02:18:55.563909 kubelet[2528]: I0120 02:18:55.560216 2528 kubelet.go:352] "Adding apiserver pod source" Jan 20 02:18:55.563909 kubelet[2528]: I0120 02:18:55.560235 2528 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 02:18:55.602791 kubelet[2528]: I0120 02:18:55.597494 2528 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 02:18:55.604533 kubelet[2528]: I0120 02:18:55.604095 2528 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 02:18:55.623452 kubelet[2528]: W0120 02:18:55.621095 2528 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 02:18:55.631853 kubelet[2528]: W0120 02:18:55.627590 2528 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 20 02:18:55.631853 kubelet[2528]: E0120 02:18:55.627678 2528 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:18:55.638202 kubelet[2528]: W0120 02:18:55.637083 2528 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 20 02:18:55.638202 kubelet[2528]: E0120 02:18:55.637144 2528 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:18:55.675508 kubelet[2528]: I0120 02:18:55.662082 2528 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 02:18:55.675508 kubelet[2528]: I0120 02:18:55.665761 2528 server.go:1287] "Started kubelet" Jan 20 02:18:55.696909 kubelet[2528]: I0120 02:18:55.694111 2528 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 02:18:55.696909 kubelet[2528]: I0120 02:18:55.695796 2528 server.go:479] "Adding debug handlers to kubelet server" Jan 20 02:18:55.711112 kubelet[2528]: I0120 02:18:55.709166 2528 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 02:18:55.711258 kubelet[2528]: I0120 02:18:55.711236 2528 server.go:243] "Starting to serve the podresources 
API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 02:18:55.789633 kubelet[2528]: I0120 02:18:55.778369 2528 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 02:18:55.789633 kubelet[2528]: E0120 02:18:55.727583 2528 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4eed833be238 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:18:55.665717816 +0000 UTC m=+3.628156817,LastTimestamp:2026-01-20 02:18:55.665717816 +0000 UTC m=+3.628156817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:18:55.789633 kubelet[2528]: I0120 02:18:55.778667 2528 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 02:18:55.789633 kubelet[2528]: I0120 02:18:55.782147 2528 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 02:18:55.821836 kubelet[2528]: I0120 02:18:55.812844 2528 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 02:18:55.822990 kubelet[2528]: I0120 02:18:55.822873 2528 reconciler.go:26] "Reconciler: start to sync state" Jan 20 02:18:55.822990 kubelet[2528]: E0120 02:18:55.799380 2528 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:18:55.825171 kubelet[2528]: E0120 02:18:55.823152 2528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="200ms" Jan 20 02:18:55.829088 kubelet[2528]: I0120 02:18:55.825916 2528 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 02:18:55.829088 kubelet[2528]: E0120 02:18:55.827605 2528 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 02:18:55.829088 kubelet[2528]: W0120 02:18:55.827997 2528 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 20 02:18:55.829088 kubelet[2528]: E0120 02:18:55.828127 2528 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:18:55.941206 kubelet[2528]: E0120 02:18:55.939836 2528 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:18:56.020101 kubelet[2528]: I0120 02:18:56.018662 2528 factory.go:221] Registration of the containerd container factory successfully Jan 20 02:18:56.020101 kubelet[2528]: I0120 02:18:56.018703 2528 factory.go:221] Registration of the systemd container factory successfully Jan 20 02:18:56.040119 kubelet[2528]: E0120 02:18:56.027404 2528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="400ms" Jan 20 02:18:56.056591 kubelet[2528]: E0120 02:18:56.056551 2528 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:18:56.204860 kubelet[2528]: E0120 02:18:56.204819 2528 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:18:56.325330 kubelet[2528]: E0120 02:18:56.315641 2528 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:18:56.400571 kubelet[2528]: I0120 02:18:56.399822 2528 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 02:18:56.400571 kubelet[2528]: I0120 02:18:56.399848 2528 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 02:18:56.400571 kubelet[2528]: I0120 02:18:56.399878 2528 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:18:56.421801 kubelet[2528]: E0120 02:18:56.421753 2528 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:18:56.423559 kubelet[2528]: I0120 02:18:56.422462 2528 policy_none.go:49] "None policy: Start" Jan 20 02:18:56.423559 kubelet[2528]: I0120 02:18:56.422491 2528 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 02:18:56.423559 kubelet[2528]: I0120 02:18:56.422516 2528 state_mem.go:35] "Initializing new in-memory state store" Jan 20 02:18:56.486158 kubelet[2528]: E0120 02:18:56.485951 2528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="800ms" Jan 20 02:18:56.498745 kubelet[2528]: I0120 02:18:56.497236 2528 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 02:18:56.509108 kubelet[2528]: I0120 02:18:56.507841 2528 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 20 02:18:56.509108 kubelet[2528]: I0120 02:18:56.507877 2528 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 02:18:56.512969 kubelet[2528]: W0120 02:18:56.511600 2528 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 20 02:18:56.512969 kubelet[2528]: E0120 02:18:56.511686 2528 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:18:56.517700 kubelet[2528]: I0120 02:18:56.516487 2528 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 02:18:56.517700 kubelet[2528]: I0120 02:18:56.516510 2528 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 02:18:56.517700 kubelet[2528]: E0120 02:18:56.516630 2528 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 02:18:56.523652 kubelet[2528]: E0120 02:18:56.522112 2528 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:18:56.535844 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 02:18:56.619811 kubelet[2528]: E0120 02:18:56.617198 2528 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:18:56.618807 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 02:18:56.637525 kubelet[2528]: E0120 02:18:56.626732 2528 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:18:56.671959 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 02:18:56.715968 kubelet[2528]: I0120 02:18:56.706747 2528 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 02:18:56.715968 kubelet[2528]: I0120 02:18:56.707155 2528 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 02:18:56.715968 kubelet[2528]: I0120 02:18:56.707182 2528 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 02:18:56.715968 kubelet[2528]: I0120 02:18:56.712001 2528 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 02:18:56.728496 kubelet[2528]: E0120 02:18:56.725641 2528 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 02:18:56.728496 kubelet[2528]: E0120 02:18:56.725700 2528 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:18:56.838568 kubelet[2528]: I0120 02:18:56.838528 2528 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:18:56.841722 kubelet[2528]: E0120 02:18:56.841612 2528 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Jan 20 02:18:56.907582 kubelet[2528]: W0120 02:18:56.906946 2528 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 20 02:18:56.907582 kubelet[2528]: E0120 02:18:56.907142 2528 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:18:56.924088 kubelet[2528]: W0120 02:18:56.915251 2528 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 20 02:18:56.924231 kubelet[2528]: E0120 02:18:56.923985 2528 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:18:56.991770 systemd[1]: Created slice kubepods-burstable-pod8ec71aed03b5afb6ab77367272a36fb8.slice - libcontainer container kubepods-burstable-pod8ec71aed03b5afb6ab77367272a36fb8.slice. 
Jan 20 02:18:57.002455 kubelet[2528]: I0120 02:18:56.993689 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ec71aed03b5afb6ab77367272a36fb8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8ec71aed03b5afb6ab77367272a36fb8\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:18:57.002455 kubelet[2528]: I0120 02:18:56.993734 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:18:57.002455 kubelet[2528]: I0120 02:18:56.993763 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:18:57.002455 kubelet[2528]: I0120 02:18:56.993783 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:18:57.002455 kubelet[2528]: I0120 02:18:56.993856 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:18:57.002715 kubelet[2528]: I0120 02:18:56.993884 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:18:57.002715 kubelet[2528]: I0120 02:18:56.993907 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 02:18:57.002715 kubelet[2528]: I0120 02:18:56.993928 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ec71aed03b5afb6ab77367272a36fb8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8ec71aed03b5afb6ab77367272a36fb8\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:18:57.002715 kubelet[2528]: I0120 02:18:56.993955 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ec71aed03b5afb6ab77367272a36fb8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8ec71aed03b5afb6ab77367272a36fb8\") " 
pod="kube-system/kube-apiserver-localhost" Jan 20 02:18:57.052471 kubelet[2528]: I0120 02:18:57.052237 2528 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:18:57.053097 kubelet[2528]: E0120 02:18:57.052702 2528 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Jan 20 02:18:57.064643 kubelet[2528]: E0120 02:18:57.063584 2528 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:18:57.113399 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 20 02:18:57.137093 kubelet[2528]: E0120 02:18:57.135348 2528 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:18:57.154097 containerd[1567]: time="2026-01-20T02:18:57.153189641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 20 02:18:57.176965 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 20 02:18:57.197720 kubelet[2528]: E0120 02:18:57.195988 2528 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:18:57.209350 containerd[1567]: time="2026-01-20T02:18:57.209242184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 20 02:18:57.315229 kubelet[2528]: E0120 02:18:57.314700 2528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="1.6s" Jan 20 02:18:57.315229 kubelet[2528]: W0120 02:18:57.316800 2528 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 20 02:18:57.337235 kubelet[2528]: E0120 02:18:57.329938 2528 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:18:57.343741 kubelet[2528]: E0120 02:18:57.339519 2528 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:18:57.376207 containerd[1567]: time="2026-01-20T02:18:57.375585182Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8ec71aed03b5afb6ab77367272a36fb8,Namespace:kube-system,Attempt:0,}" Jan 20 02:18:57.484548 kubelet[2528]: I0120 02:18:57.482597 2528 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:18:57.509386 kubelet[2528]: E0120 02:18:57.508742 2528 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Jan 20 02:18:57.982094 kubelet[2528]: W0120 02:18:57.908724 2528 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 20 02:18:57.996984 kubelet[2528]: E0120 02:18:57.996864 2528 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:18:58.138449 containerd[1567]: time="2026-01-20T02:18:58.138327809Z" level=info msg="connecting to shim 308476605bc5cbdb6072e4e1890cf6c16c6de68199ba23e1e9b3ff18b4c581fb" address="unix:///run/containerd/s/4973bf79c744dd432a25d44bfecb8ffd67d64d09c87f99955775b8d41c9f5fdf" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:18:58.261320 containerd[1567]: time="2026-01-20T02:18:58.240963945Z" level=info msg="connecting to shim d2500a690cf32f0006ea37b089ce2a1659ac97e8dddc52cf0b915267505f8197" address="unix:///run/containerd/s/bfc830ed85e646159b79b602c78d1658de3de61febd97314da0781e92db66e56" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:18:58.384390 kubelet[2528]: I0120 02:18:58.383533 2528 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:18:58.391722 kubelet[2528]: E0120 02:18:58.391586 2528 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Jan 20 02:18:58.423120 containerd[1567]: time="2026-01-20T02:18:58.420958550Z" level=info msg="connecting to shim 6f43d06c67b0c7ae4cf918bbd9eb7434f1ab80470b16ef7b77353bd4fc8ce7c5" address="unix:///run/containerd/s/b094779e3b9e76541082e1817eeeb7450946d35ab6eadd0bb422d290a771efab" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:18:58.714651 kubelet[2528]: W0120 02:18:58.699623 2528 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 20 02:18:58.714651 kubelet[2528]: E0120 02:18:58.699781 2528 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:18:58.930939 kubelet[2528]: E0120 02:18:58.930849 2528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.89:6443: connect: connection refused" interval="3.2s" Jan 20 02:18:58.958740 systemd[1]: Started cri-containerd-308476605bc5cbdb6072e4e1890cf6c16c6de68199ba23e1e9b3ff18b4c581fb.scope - libcontainer container 308476605bc5cbdb6072e4e1890cf6c16c6de68199ba23e1e9b3ff18b4c581fb. Jan 20 02:18:59.170338 systemd[1]: Started cri-containerd-d2500a690cf32f0006ea37b089ce2a1659ac97e8dddc52cf0b915267505f8197.scope - libcontainer container d2500a690cf32f0006ea37b089ce2a1659ac97e8dddc52cf0b915267505f8197. Jan 20 02:18:59.225159 kubelet[2528]: W0120 02:18:59.224711 2528 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 20 02:18:59.225159 kubelet[2528]: E0120 02:18:59.224774 2528 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:18:59.692752 systemd[1]: Started cri-containerd-6f43d06c67b0c7ae4cf918bbd9eb7434f1ab80470b16ef7b77353bd4fc8ce7c5.scope - libcontainer container 6f43d06c67b0c7ae4cf918bbd9eb7434f1ab80470b16ef7b77353bd4fc8ce7c5. Jan 20 02:19:00.153566 kubelet[2528]: E0120 02:19:00.136816 2528 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4eed833be238 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:18:55.665717816 +0000 UTC m=+3.628156817,LastTimestamp:2026-01-20 02:18:55.665717816 +0000 UTC m=+3.628156817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:19:00.238061 kubelet[2528]: I0120 02:19:00.230800 2528 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:19:00.256600 kubelet[2528]: W0120 02:19:00.241340 2528 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 20 02:19:00.256600 kubelet[2528]: E0120 02:19:00.241514 2528 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:19:00.260000 kubelet[2528]: E0120 02:19:00.259848 2528 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Jan 20 02:19:00.797613 kubelet[2528]: W0120 02:19:00.730274 2528 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 20 02:19:00.797613 kubelet[2528]: E0120 02:19:00.734690 2528 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:19:00.981867 containerd[1567]: time="2026-01-20T02:19:00.981809068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"308476605bc5cbdb6072e4e1890cf6c16c6de68199ba23e1e9b3ff18b4c581fb\"" Jan 20 02:19:01.371581 kubelet[2528]: E0120 02:19:01.365933 2528 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:19:01.375672 containerd[1567]: time="2026-01-20T02:19:01.375520733Z" level=info msg="CreateContainer within sandbox \"308476605bc5cbdb6072e4e1890cf6c16c6de68199ba23e1e9b3ff18b4c581fb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 02:19:01.421982 containerd[1567]: time="2026-01-20T02:19:01.421887004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2500a690cf32f0006ea37b089ce2a1659ac97e8dddc52cf0b915267505f8197\"" Jan 20 02:19:01.470176 containerd[1567]: time="2026-01-20T02:19:01.468865709Z" level=info msg="CreateContainer within sandbox \"d2500a690cf32f0006ea37b089ce2a1659ac97e8dddc52cf0b915267505f8197\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 02:19:01.795071 containerd[1567]: time="2026-01-20T02:19:01.782611636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8ec71aed03b5afb6ab77367272a36fb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f43d06c67b0c7ae4cf918bbd9eb7434f1ab80470b16ef7b77353bd4fc8ce7c5\"" Jan 20 02:19:01.866492 containerd[1567]: time="2026-01-20T02:19:01.845971253Z" level=info msg="CreateContainer within sandbox \"6f43d06c67b0c7ae4cf918bbd9eb7434f1ab80470b16ef7b77353bd4fc8ce7c5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 02:19:01.912223 containerd[1567]: time="2026-01-20T02:19:01.912096114Z" level=info msg="Container fa31471b5a284f84c3025ee484d18b1714f347cc6433937f34fbc736aae21821: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:19:01.930950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2810112760.mount: Deactivated successfully. 
Jan 20 02:19:01.975361 containerd[1567]: time="2026-01-20T02:19:01.972827874Z" level=info msg="Container 1bf8ff131a2570de6580d97c23b2851c1991b177785c8dd9aa2cdbeefdcc01c7: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:19:02.001765 containerd[1567]: time="2026-01-20T02:19:01.998255061Z" level=info msg="Container 15f9f7c0d0634ba5502060beae6ba99967ddaac8e3d6d45ddd5cd8ca253c2baa: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:19:02.160701 kubelet[2528]: E0120 02:19:02.137771 2528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="6.4s" Jan 20 02:19:02.163434 containerd[1567]: time="2026-01-20T02:19:02.139768970Z" level=info msg="CreateContainer within sandbox \"d2500a690cf32f0006ea37b089ce2a1659ac97e8dddc52cf0b915267505f8197\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fa31471b5a284f84c3025ee484d18b1714f347cc6433937f34fbc736aae21821\"" Jan 20 02:19:02.163434 containerd[1567]: time="2026-01-20T02:19:02.142376992Z" level=info msg="CreateContainer within sandbox \"308476605bc5cbdb6072e4e1890cf6c16c6de68199ba23e1e9b3ff18b4c581fb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1bf8ff131a2570de6580d97c23b2851c1991b177785c8dd9aa2cdbeefdcc01c7\"" Jan 20 02:19:02.163434 containerd[1567]: time="2026-01-20T02:19:02.154251016Z" level=info msg="StartContainer for \"1bf8ff131a2570de6580d97c23b2851c1991b177785c8dd9aa2cdbeefdcc01c7\"" Jan 20 02:19:02.163434 containerd[1567]: time="2026-01-20T02:19:02.155623085Z" level=info msg="StartContainer for \"fa31471b5a284f84c3025ee484d18b1714f347cc6433937f34fbc736aae21821\"" Jan 20 02:19:02.163434 containerd[1567]: time="2026-01-20T02:19:02.157675752Z" level=info msg="connecting to shim fa31471b5a284f84c3025ee484d18b1714f347cc6433937f34fbc736aae21821" address="unix:///run/containerd/s/bfc830ed85e646159b79b602c78d1658de3de61febd97314da0781e92db66e56" protocol=ttrpc version=3 Jan 20 02:19:02.168192 containerd[1567]: time="2026-01-20T02:19:02.168111322Z" level=info msg="connecting to shim 1bf8ff131a2570de6580d97c23b2851c1991b177785c8dd9aa2cdbeefdcc01c7" address="unix:///run/containerd/s/4973bf79c744dd432a25d44bfecb8ffd67d64d09c87f99955775b8d41c9f5fdf" protocol=ttrpc version=3 Jan 20 02:19:02.201285 containerd[1567]: time="2026-01-20T02:19:02.195754533Z" level=info msg="CreateContainer within sandbox \"6f43d06c67b0c7ae4cf918bbd9eb7434f1ab80470b16ef7b77353bd4fc8ce7c5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"15f9f7c0d0634ba5502060beae6ba99967ddaac8e3d6d45ddd5cd8ca253c2baa\"" Jan 20 02:19:02.201285 containerd[1567]: time="2026-01-20T02:19:02.199124375Z" level=info msg="StartContainer for \"15f9f7c0d0634ba5502060beae6ba99967ddaac8e3d6d45ddd5cd8ca253c2baa\"" Jan 20 02:19:02.205477 containerd[1567]: time="2026-01-20T02:19:02.204901184Z" level=info msg="connecting to shim 15f9f7c0d0634ba5502060beae6ba99967ddaac8e3d6d45ddd5cd8ca253c2baa" address="unix:///run/containerd/s/b094779e3b9e76541082e1817eeeb7450946d35ab6eadd0bb422d290a771efab" protocol=ttrpc version=3 Jan 20 02:19:02.313929 systemd[1]: Started cri-containerd-fa31471b5a284f84c3025ee484d18b1714f347cc6433937f34fbc736aae21821.scope - libcontainer container fa31471b5a284f84c3025ee484d18b1714f347cc6433937f34fbc736aae21821. 
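The "Failed to ensure lease exists, will retry" interval doubles on every failure: 200ms, 400ms, 800ms, 1.6s, 3.2s, and now 6.4s, a plain exponential backoff while the API server at 10.0.0.89:6443 is still coming up. A generic Go sketch of that schedule; the 7s ceiling is an assumption, not a value taken from the log:

```go
// Generic doubling backoff matching the retry intervals above: 200ms,
// 400ms, 800ms, 1.6s, 3.2s, 6.4s. The ceiling is a hypothetical cap.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // hypothetical ceiling
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("retry %d scheduled after %s\n", attempt, interval)
		if next := 2 * interval; next <= maxInterval {
			interval = next
		}
	}
}
```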
Jan 20 02:19:02.375670 systemd[1]: Started cri-containerd-1bf8ff131a2570de6580d97c23b2851c1991b177785c8dd9aa2cdbeefdcc01c7.scope - libcontainer container 1bf8ff131a2570de6580d97c23b2851c1991b177785c8dd9aa2cdbeefdcc01c7. Jan 20 02:19:02.391736 systemd[1]: Started cri-containerd-15f9f7c0d0634ba5502060beae6ba99967ddaac8e3d6d45ddd5cd8ca253c2baa.scope - libcontainer container 15f9f7c0d0634ba5502060beae6ba99967ddaac8e3d6d45ddd5cd8ca253c2baa. Jan 20 02:19:02.926968 kubelet[2528]: W0120 02:19:02.926816 2528 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 20 02:19:02.926968 kubelet[2528]: E0120 02:19:02.926910 2528 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:19:03.399573 update_engine[1538]: I20260120 02:19:03.391963 1538 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:19:03.414236 kubelet[2528]: W0120 02:19:03.399829 2528 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Jan 20 02:19:03.414236 kubelet[2528]: E0120 02:19:03.399930 2528 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Jan 20 02:19:03.414794 update_engine[1538]: I20260120 02:19:03.414750 1538 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:19:03.425587 update_engine[1538]: I20260120 02:19:03.425535 1538 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:19:03.461250 update_engine[1538]: E20260120 02:19:03.460498 1538 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:19:03.543786 update_engine[1538]: I20260120 02:19:03.515573 1538 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:19:03.543786 update_engine[1538]: I20260120 02:19:03.515892 1538 omaha_request_action.cc:617] Omaha request response: Jan 20 02:19:03.570681 update_engine[1538]: E20260120 02:19:03.562225 1538 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 20 02:19:03.570681 update_engine[1538]: I20260120 02:19:03.567631 1538 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 20 02:19:03.570681 update_engine[1538]: I20260120 02:19:03.567652 1538 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:19:03.570681 update_engine[1538]: I20260120 02:19:03.567662 1538 update_attempter.cc:306] Processing Done. Jan 20 02:19:03.570681 update_engine[1538]: E20260120 02:19:03.567759 1538 update_attempter.cc:619] Update failed. 
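[Editor's note: the reflector warnings interleaved above ("failed to list *v1.Service", "failed to list *v1.Node") come from client-go informers, which list a resource and then watch it, retrying the list with backoff on every failure. A sketch of the underlying list call the Node reflector issues; the kubeconfig path is an assumption (the kubeadm convention), and the field selector and limit are copied from the log:]

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The same list the kubelet's Node reflector issues, retried on failure.
	for {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
			FieldSelector: "metadata.name=localhost",
			Limit:         500,
		})
		if err == nil {
			fmt.Printf("listed %d node(s); the reflector would now start watching\n", len(nodes.Items))
			return
		}
		log.Printf("list nodes: %v (reflector retries with backoff)", err)
		time.Sleep(2 * time.Second)
	}
}
```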
Jan 20 02:19:03.570681 update_engine[1538]: I20260120 02:19:03.567818 1538 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 20 02:19:03.570681 update_engine[1538]: I20260120 02:19:03.567832 1538 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 20 02:19:03.570681 update_engine[1538]: I20260120 02:19:03.567842 1538 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 20 02:19:03.570681 update_engine[1538]: I20260120 02:19:03.568106 1538 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 02:19:03.570681 update_engine[1538]: I20260120 02:19:03.568215 1538 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 02:19:03.570681 update_engine[1538]: I20260120 02:19:03.568228 1538 omaha_request_action.cc:272] Request: Jan 20 02:19:03.570681 update_engine[1538]: [Omaha request XML body stripped from this capture] Jan 20 02:19:03.570681 update_engine[1538]: I20260120 02:19:03.568238 1538 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:19:03.603868 update_engine[1538]: I20260120 02:19:03.569365 1538 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:19:03.603868 update_engine[1538]: I20260120 02:19:03.570569 1538 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:19:03.611538 update_engine[1538]: E20260120 02:19:03.611471 1538 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:19:03.611768 update_engine[1538]: I20260120 02:19:03.611733 1538 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:19:03.611849 update_engine[1538]: I20260120 02:19:03.611825 1538 omaha_request_action.cc:617] Omaha request response: Jan 20 02:19:03.612140 update_engine[1538]: I20260120 02:19:03.611892 1538 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:19:03.612140 update_engine[1538]: I20260120 02:19:03.611907 1538 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:19:03.612140 update_engine[1538]: I20260120 02:19:03.611917 1538 update_attempter.cc:306] Processing Done. Jan 20 02:19:03.612140 update_engine[1538]: I20260120 02:19:03.611931 1538 update_attempter.cc:310] Error event sent.
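[Editor's note: the failed transfer above is expected on this host; the Omaha server URL is literally the string "disabled", so curl reports "Could not resolve host: disabled" and update_engine reschedules. The irregular interval in the next entry ("43m43s" rather than a round number) is characteristic of a base period plus random jitter. A toy Go version of that scheduling; the base and fuzz constants are illustrative, not Flatcar's actual values:]

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// Picks the next update-check time as a base period plus/minus random
// jitter, which is why logged intervals are never round numbers.
func main() {
	const base = 45 * time.Minute // illustrative, not Flatcar's constant
	const fuzz = 10 * time.Minute
	next := base - fuzz/2 + time.Duration(rand.Int63n(int64(fuzz)))
	fmt.Printf("Next update check in %s\n", next.Round(time.Second))
}
```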
Jan 20 02:19:03.612140 update_engine[1538]: I20260120 02:19:03.611945 1538 update_check_scheduler.cc:74] Next update check in 43m43s Jan 20 02:19:03.864855 locksmithd[1608]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 20 02:19:03.868591 locksmithd[1608]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 20 02:19:03.887180 kubelet[2528]: I0120 02:19:03.887140 2528 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:19:03.888634 kubelet[2528]: E0120 02:19:03.888508 2528 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Jan 20 02:19:04.361258 containerd[1567]: time="2026-01-20T02:19:04.361199499Z" level=info msg="StartContainer for \"15f9f7c0d0634ba5502060beae6ba99967ddaac8e3d6d45ddd5cd8ca253c2baa\" returns successfully" Jan 20 02:19:04.643901 containerd[1567]: time="2026-01-20T02:19:04.637001318Z" level=info msg="StartContainer for \"fa31471b5a284f84c3025ee484d18b1714f347cc6433937f34fbc736aae21821\" returns successfully" Jan 20 02:19:05.409580 containerd[1567]: time="2026-01-20T02:19:05.400987822Z" level=info msg="StartContainer for \"1bf8ff131a2570de6580d97c23b2851c1991b177785c8dd9aa2cdbeefdcc01c7\" returns successfully" Jan 20 02:19:05.851124 kubelet[2528]: E0120 02:19:05.826668 2528 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:19:05.878385 kubelet[2528]: E0120 02:19:05.867499 2528 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:19:05.918539 kubelet[2528]: E0120 02:19:05.907894 2528 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:19:06.731855 kubelet[2528]: E0120 02:19:06.727995 2528 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:19:06.988797 kubelet[2528]: E0120 02:19:06.982487 2528 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:19:07.023277 kubelet[2528]: E0120 02:19:07.014186 2528 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:19:07.045265 kubelet[2528]: E0120 02:19:07.044803 2528 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:19:08.064826 kubelet[2528]: E0120 02:19:08.044234 2528 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:19:08.099936 kubelet[2528]: E0120 02:19:08.084913 2528 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:19:10.305112 kubelet[2528]: I0120 02:19:10.304628 2528 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:19:10.798529 kubelet[2528]: E0120 02:19:10.791106 2528 
kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:19:12.794620 kubelet[2528]: E0120 02:19:12.793873 2528 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:19:15.780910 kubelet[2528]: W0120 02:19:15.778153 2528 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 02:19:15.780910 kubelet[2528]: E0120 02:19:15.778243 2528 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 02:19:16.729564 kubelet[2528]: E0120 02:19:16.729504 2528 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:19:17.223104 kubelet[2528]: E0120 02:19:17.222635 2528 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 02:19:17.428555 kubelet[2528]: E0120 02:19:17.423351 2528 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4eed833be238 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:18:55.665717816 +0000 UTC m=+3.628156817,LastTimestamp:2026-01-20 02:18:55.665717816 +0000 UTC m=+3.628156817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:19:17.578979 kubelet[2528]: I0120 02:19:17.577928 2528 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 02:19:17.578979 kubelet[2528]: E0120 02:19:17.577984 2528 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 02:19:17.605472 kubelet[2528]: I0120 02:19:17.602849 2528 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 02:19:17.737694 kubelet[2528]: E0120 02:19:17.735474 2528 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4eed8ce1dae3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:18:55.827589859 +0000 UTC m=+3.790028820,LastTimestamp:2026-01-20 02:18:55.827589859 +0000 UTC m=+3.790028820,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:19:17.912755 kubelet[2528]: 
I0120 02:19:17.908913 2528 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 02:19:17.932182 kubelet[2528]: I0120 02:19:17.932157 2528 apiserver.go:52] "Watching apiserver" Jan 20 02:19:17.989973 kubelet[2528]: I0120 02:19:17.989495 2528 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 02:19:18.030333 kubelet[2528]: I0120 02:19:18.028473 2528 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 02:19:21.342125 kubelet[2528]: I0120 02:19:21.341705 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.341684094 podStartE2EDuration="4.341684094s" podCreationTimestamp="2026-01-20 02:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:19:21.17969459 +0000 UTC m=+29.142133550" watchObservedRunningTime="2026-01-20 02:19:21.341684094 +0000 UTC m=+29.304123054" Jan 20 02:19:21.492339 kubelet[2528]: I0120 02:19:21.491242 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.491218274 podStartE2EDuration="4.491218274s" podCreationTimestamp="2026-01-20 02:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:19:21.343140739 +0000 UTC m=+29.305579699" watchObservedRunningTime="2026-01-20 02:19:21.491218274 +0000 UTC m=+29.453657234" Jan 20 02:19:23.056866 kubelet[2528]: I0120 02:19:23.056510 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.056486701 podStartE2EDuration="6.056486701s" podCreationTimestamp="2026-01-20 02:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:19:21.505353149 +0000 UTC m=+29.467792109" watchObservedRunningTime="2026-01-20 02:19:23.056486701 +0000 UTC m=+31.018925662" Jan 20 02:19:36.020142 kubelet[2528]: E0120 02:19:35.996422 2528 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.46s" Jan 20 02:19:37.621818 kubelet[2528]: E0120 02:19:37.593978 2528 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.044s" Jan 20 02:19:40.254967 kubelet[2528]: E0120 02:19:40.244162 2528 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.34s" Jan 20 02:19:43.567407 systemd[1]: Reload requested from client PID 2812 ('systemctl') (unit session-9.scope)... Jan 20 02:19:43.568240 systemd[1]: Reloading... Jan 20 02:19:46.178983 zram_generator::config[2855]: No configuration found. Jan 20 02:19:50.623999 systemd[1]: Reloading finished in 7041 ms. Jan 20 02:19:50.873326 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:19:51.043588 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 02:19:51.044345 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:19:51.044517 systemd[1]: kubelet.service: Consumed 7.351s CPU time, 138.3M memory peak. 
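[Editor's note: the reload above (PID 2812, a systemctl client) re-reads unit files and then stops kubelet.service, with systemd accounting the unit's consumed CPU time and peak memory on deactivation. A sketch of driving the same sequence programmatically over systemd's D-Bus API with the go-systemd library, instead of shelling out to systemctl:]

```go
package main

import (
	"context"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

// Equivalent of `systemctl daemon-reload && systemctl restart kubelet`:
// reload systemd's unit files, then restart kubelet.service and wait
// for the job result string ("done" on success).
func main() {
	ctx := context.Background()
	conn, err := dbus.NewWithContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	if err := conn.ReloadContext(ctx); err != nil { // daemon-reload
		log.Fatal(err)
	}
	done := make(chan string, 1)
	if _, err := conn.RestartUnitContext(ctx, "kubelet.service", "replace", done); err != nil {
		log.Fatal(err)
	}
	log.Printf("kubelet.service restart: %s", <-done)
}
```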
Jan 20 02:19:51.362793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:19:54.328318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:19:54.528707 (kubelet)[2900]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 02:19:55.159178 kubelet[2900]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 02:19:55.159178 kubelet[2900]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 02:19:55.159178 kubelet[2900]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 02:19:55.159178 kubelet[2900]: I0120 02:19:55.155310 2900 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 02:19:55.244332 kubelet[2900]: I0120 02:19:55.242180 2900 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 02:19:55.244332 kubelet[2900]: I0120 02:19:55.242227 2900 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 02:19:55.268780 kubelet[2900]: I0120 02:19:55.247454 2900 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 02:19:55.280669 kubelet[2900]: I0120 02:19:55.277873 2900 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 02:19:55.357173 kubelet[2900]: I0120 02:19:55.354510 2900 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 02:19:55.522960 kubelet[2900]: I0120 02:19:55.508248 2900 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 02:19:55.667120 kubelet[2900]: I0120 02:19:55.664206 2900 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 02:19:55.667120 kubelet[2900]: I0120 02:19:55.664850 2900 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 02:19:55.667120 kubelet[2900]: I0120 02:19:55.665128 2900 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 02:19:55.667120 kubelet[2900]: I0120 02:19:55.665468 2900 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 02:19:55.718915 kubelet[2900]: I0120 02:19:55.718823 2900 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 02:19:55.719149 kubelet[2900]: I0120 02:19:55.718999 2900 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:19:55.729708 kubelet[2900]: I0120 02:19:55.719424 2900 kubelet.go:446] "Attempting to sync node with API server" Jan 20 02:19:55.729708 kubelet[2900]: I0120 02:19:55.719514 2900 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 02:19:55.747193 kubelet[2900]: I0120 02:19:55.745173 2900 kubelet.go:352] "Adding apiserver pod source" Jan 20 02:19:55.747193 kubelet[2900]: I0120 02:19:55.745333 2900 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 02:19:56.129783 kubelet[2900]: I0120 02:19:56.121806 2900 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 02:19:56.140763 kubelet[2900]: I0120 02:19:56.135154 2900 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 02:19:56.181169 kubelet[2900]: I0120 02:19:56.177806 2900 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 02:19:56.181169 kubelet[2900]: I0120 02:19:56.177868 2900 server.go:1287] "Started kubelet" Jan 20 02:19:56.215746 kubelet[2900]: I0120 02:19:56.209301 2900 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 02:19:56.247993 kubelet[2900]: I0120 02:19:56.236166 2900 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Jan 20 02:19:56.264886 kubelet[2900]: I0120 02:19:56.264832 2900 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 02:19:56.265498 kubelet[2900]: I0120 02:19:56.265479 2900 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 02:19:56.265968 kubelet[2900]: I0120 02:19:56.265946 2900 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 02:19:56.331785 kubelet[2900]: I0120 02:19:56.320993 2900 factory.go:221] Registration of the systemd container factory successfully Jan 20 02:19:56.331785 kubelet[2900]: I0120 02:19:56.321244 2900 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 02:19:56.486226 kubelet[2900]: I0120 02:19:56.485943 2900 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 02:19:56.488284 kubelet[2900]: I0120 02:19:56.487523 2900 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 02:19:56.523308 kubelet[2900]: I0120 02:19:56.520498 2900 reconciler.go:26] "Reconciler: start to sync state" Jan 20 02:19:56.603812 kubelet[2900]: I0120 02:19:56.601450 2900 factory.go:221] Registration of the containerd container factory successfully Jan 20 02:19:56.762731 kubelet[2900]: I0120 02:19:56.760527 2900 server.go:479] "Adding debug handlers to kubelet server" Jan 20 02:19:57.182894 kubelet[2900]: I0120 02:19:56.994176 2900 apiserver.go:52] "Watching apiserver" Jan 20 02:19:58.035778 kubelet[2900]: I0120 02:19:58.024509 2900 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 02:19:58.093990 kubelet[2900]: I0120 02:19:58.093472 2900 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 02:19:58.093990 kubelet[2900]: I0120 02:19:58.093516 2900 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 02:19:58.093990 kubelet[2900]: I0120 02:19:58.093546 2900 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 02:19:58.109713 kubelet[2900]: I0120 02:19:58.093557 2900 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 02:19:58.109713 kubelet[2900]: E0120 02:19:58.109674 2900 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 02:19:58.210453 kubelet[2900]: E0120 02:19:58.210266 2900 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:19:58.417474 kubelet[2900]: E0120 02:19:58.417435 2900 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:19:58.981394 kubelet[2900]: E0120 02:19:58.978117 2900 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:19:59.283352 sudo[2936]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 20 02:19:59.405916 sudo[2936]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 20 02:19:59.974554 kubelet[2900]: E0120 02:19:59.959737 2900 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:20:00.387501 kubelet[2900]: I0120 02:20:00.382303 2900 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 02:20:00.387501 kubelet[2900]: I0120 02:20:00.382334 2900 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 02:20:00.387501 kubelet[2900]: I0120 02:20:00.382365 2900 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:20:00.418201 kubelet[2900]: I0120 02:20:00.390308 2900 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 02:20:00.418201 kubelet[2900]: I0120 02:20:00.390336 2900 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 02:20:00.418201 kubelet[2900]: I0120 02:20:00.390371 2900 policy_none.go:49] "None policy: Start" Jan 20 02:20:00.418201 kubelet[2900]: I0120 02:20:00.390387 2900 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 02:20:00.418201 kubelet[2900]: I0120 02:20:00.390405 2900 state_mem.go:35] "Initializing new in-memory state store" Jan 20 02:20:00.418201 kubelet[2900]: I0120 02:20:00.390684 2900 state_mem.go:75] "Updated machine memory state" Jan 20 02:20:00.713658 kubelet[2900]: I0120 02:20:00.616421 2900 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 02:20:00.781749 kubelet[2900]: I0120 02:20:00.780389 2900 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 02:20:00.828150 kubelet[2900]: I0120 02:20:00.826728 2900 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 02:20:00.851932 kubelet[2900]: I0120 02:20:00.834488 2900 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 02:20:00.893752 kubelet[2900]: E0120 02:20:00.890416 2900 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 02:20:01.130790 kubelet[2900]: I0120 02:20:01.126482 2900 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 02:20:01.161529 containerd[1567]: time="2026-01-20T02:20:01.161368148Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 02:20:01.162395 kubelet[2900]: I0120 02:20:01.161815 2900 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 02:20:01.331102 kubelet[2900]: I0120 02:20:01.323509 2900 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:20:01.776238 kubelet[2900]: I0120 02:20:01.762445 2900 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 02:20:01.791259 kubelet[2900]: I0120 02:20:01.782566 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsxlj\" (UniqueName: \"kubernetes.io/projected/c1d7fc33-a220-41a8-a595-d0a23aa9c359-kube-api-access-nsxlj\") pod \"kube-proxy-nxm9l\" (UID: \"c1d7fc33-a220-41a8-a595-d0a23aa9c359\") " pod="kube-system/kube-proxy-nxm9l" Jan 20 02:20:01.791259 kubelet[2900]: I0120 02:20:01.787860 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ec71aed03b5afb6ab77367272a36fb8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8ec71aed03b5afb6ab77367272a36fb8\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:20:01.791259 kubelet[2900]: I0120 02:20:01.788163 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ec71aed03b5afb6ab77367272a36fb8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8ec71aed03b5afb6ab77367272a36fb8\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:20:01.791259 kubelet[2900]: I0120 02:20:01.788656 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ec71aed03b5afb6ab77367272a36fb8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8ec71aed03b5afb6ab77367272a36fb8\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:20:01.791259 kubelet[2900]: I0120 02:20:01.789129 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:20:01.791681 kubelet[2900]: I0120 02:20:01.789357 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:20:01.807392 kubelet[2900]: I0120 02:20:01.798241 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 02:20:01.807392 kubelet[2900]: I0120 02:20:01.798348 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1d7fc33-a220-41a8-a595-d0a23aa9c359-xtables-lock\") pod \"kube-proxy-nxm9l\" (UID: \"c1d7fc33-a220-41a8-a595-d0a23aa9c359\") " pod="kube-system/kube-proxy-nxm9l" Jan 20 02:20:01.807392 kubelet[2900]: I0120 02:20:01.798445 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1d7fc33-a220-41a8-a595-d0a23aa9c359-lib-modules\") pod \"kube-proxy-nxm9l\" (UID: \"c1d7fc33-a220-41a8-a595-d0a23aa9c359\") " pod="kube-system/kube-proxy-nxm9l" Jan 20 02:20:01.807392 kubelet[2900]: I0120 02:20:01.798474 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:20:01.807392 kubelet[2900]: I0120 02:20:01.798498 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:20:01.807789 kubelet[2900]: I0120 02:20:01.798649 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:20:01.807789 kubelet[2900]: I0120 02:20:01.798680 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c1d7fc33-a220-41a8-a595-d0a23aa9c359-kube-proxy\") pod \"kube-proxy-nxm9l\" (UID: \"c1d7fc33-a220-41a8-a595-d0a23aa9c359\") " pod="kube-system/kube-proxy-nxm9l" Jan 20 02:20:01.807789 kubelet[2900]: I0120 02:20:01.799493 2900 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 02:20:01.807789 kubelet[2900]: I0120 02:20:01.799811 2900 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 02:20:02.262814 systemd[1]: Created slice kubepods-besteffort-podc1d7fc33_a220_41a8_a595_d0a23aa9c359.slice - libcontainer container kubepods-besteffort-podc1d7fc33_a220_41a8_a595_d0a23aa9c359.slice. 
Jan 20 02:20:03.467404 containerd[1567]: time="2026-01-20T02:20:03.433190282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nxm9l,Uid:c1d7fc33-a220-41a8-a595-d0a23aa9c359,Namespace:kube-system,Attempt:0,}" Jan 20 02:20:05.071436 containerd[1567]: time="2026-01-20T02:20:05.070522351Z" level=info msg="connecting to shim 2c60c9535454c5515fc64a63437c201881edc8a334d19d5c3148adcfd91a919f" address="unix:///run/containerd/s/be6c51364851acca8d80c1b052a93051805730d999df87d7cf6fc84b2c587263" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:20:06.661991 systemd[1]: Started cri-containerd-2c60c9535454c5515fc64a63437c201881edc8a334d19d5c3148adcfd91a919f.scope - libcontainer container 2c60c9535454c5515fc64a63437c201881edc8a334d19d5c3148adcfd91a919f. Jan 20 02:20:08.312119 sudo[2936]: pam_unix(sudo:session): session closed for user root Jan 20 02:20:08.751175 containerd[1567]: time="2026-01-20T02:20:08.750129151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nxm9l,Uid:c1d7fc33-a220-41a8-a595-d0a23aa9c359,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c60c9535454c5515fc64a63437c201881edc8a334d19d5c3148adcfd91a919f\"" Jan 20 02:20:08.822386 containerd[1567]: time="2026-01-20T02:20:08.817257544Z" level=info msg="CreateContainer within sandbox \"2c60c9535454c5515fc64a63437c201881edc8a334d19d5c3148adcfd91a919f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 02:20:08.988373 containerd[1567]: time="2026-01-20T02:20:08.988320413Z" level=info msg="Container 966cf59bb6a104050674b27c411bc50fdee2ccdb0b4f13fbe7e9d3b5adf82d19: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:20:08.996226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2804281878.mount: Deactivated successfully. Jan 20 02:20:09.118463 containerd[1567]: time="2026-01-20T02:20:09.117977665Z" level=info msg="CreateContainer within sandbox \"2c60c9535454c5515fc64a63437c201881edc8a334d19d5c3148adcfd91a919f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"966cf59bb6a104050674b27c411bc50fdee2ccdb0b4f13fbe7e9d3b5adf82d19\"" Jan 20 02:20:09.141950 containerd[1567]: time="2026-01-20T02:20:09.141854151Z" level=info msg="StartContainer for \"966cf59bb6a104050674b27c411bc50fdee2ccdb0b4f13fbe7e9d3b5adf82d19\"" Jan 20 02:20:09.151140 containerd[1567]: time="2026-01-20T02:20:09.150970440Z" level=info msg="connecting to shim 966cf59bb6a104050674b27c411bc50fdee2ccdb0b4f13fbe7e9d3b5adf82d19" address="unix:///run/containerd/s/be6c51364851acca8d80c1b052a93051805730d999df87d7cf6fc84b2c587263" protocol=ttrpc version=3 Jan 20 02:20:09.461685 systemd[1]: Started cri-containerd-966cf59bb6a104050674b27c411bc50fdee2ccdb0b4f13fbe7e9d3b5adf82d19.scope - libcontainer container 966cf59bb6a104050674b27c411bc50fdee2ccdb0b4f13fbe7e9d3b5adf82d19. 
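[Editor's note: the pod_startup_latency_tracker entry that follows reports podStartE2EDuration as the watch-observed running time minus the pod creation timestamp. A quick arithmetic check in Go, with both timestamps copied verbatim from that entry; it reproduces the logged 17.077216184s exactly:]

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-20 02:19:58 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2026-01-20 02:20:15.077216184 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	// Prints 17.077216184s, the reported podStartE2EDuration.
	fmt.Println(running.Sub(created))
}
```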
Jan 20 02:20:11.533508 containerd[1567]: time="2026-01-20T02:20:11.520556326Z" level=info msg="StartContainer for \"966cf59bb6a104050674b27c411bc50fdee2ccdb0b4f13fbe7e9d3b5adf82d19\" returns successfully" Jan 20 02:20:15.077393 kubelet[2900]: I0120 02:20:15.077239 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nxm9l" podStartSLOduration=17.077216184 podStartE2EDuration="17.077216184s" podCreationTimestamp="2026-01-20 02:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:20:12.265396704 +0000 UTC m=+17.667729930" watchObservedRunningTime="2026-01-20 02:20:15.077216184 +0000 UTC m=+20.479549381" Jan 20 02:20:15.126146 systemd[1]: Created slice kubepods-burstable-podf005b0f2_4c88_40c6_a2d4_a180bd513b5f.slice - libcontainer container kubepods-burstable-podf005b0f2_4c88_40c6_a2d4_a180bd513b5f.slice. Jan 20 02:20:15.287128 kubelet[2900]: I0120 02:20:15.286971 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-cilium-cgroup\") pod \"cilium-wgmcb\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " pod="kube-system/cilium-wgmcb" Jan 20 02:20:15.287389 kubelet[2900]: I0120 02:20:15.287146 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-lib-modules\") pod \"cilium-wgmcb\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " pod="kube-system/cilium-wgmcb" Jan 20 02:20:15.287389 kubelet[2900]: I0120 02:20:15.287193 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-cilium-config-path\") pod \"cilium-wgmcb\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " pod="kube-system/cilium-wgmcb" Jan 20 02:20:15.287389 kubelet[2900]: I0120 02:20:15.287252 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-bpf-maps\") pod \"cilium-wgmcb\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " pod="kube-system/cilium-wgmcb" Jan 20 02:20:15.287389 kubelet[2900]: I0120 02:20:15.287284 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-cilium-run\") pod \"cilium-wgmcb\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " pod="kube-system/cilium-wgmcb" Jan 20 02:20:15.287389 kubelet[2900]: I0120 02:20:15.287304 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-xtables-lock\") pod \"cilium-wgmcb\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " pod="kube-system/cilium-wgmcb" Jan 20 02:20:15.287389 kubelet[2900]: I0120 02:20:15.287326 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5fhs\" (UniqueName: \"kubernetes.io/projected/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-kube-api-access-w5fhs\") pod \"cilium-wgmcb\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " 
pod="kube-system/cilium-wgmcb" Jan 20 02:20:15.287868 kubelet[2900]: I0120 02:20:15.287356 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-host-proc-sys-kernel\") pod \"cilium-wgmcb\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " pod="kube-system/cilium-wgmcb" Jan 20 02:20:15.287868 kubelet[2900]: I0120 02:20:15.287376 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-hubble-tls\") pod \"cilium-wgmcb\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " pod="kube-system/cilium-wgmcb" Jan 20 02:20:15.287868 kubelet[2900]: I0120 02:20:15.287399 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-hostproc\") pod \"cilium-wgmcb\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " pod="kube-system/cilium-wgmcb" Jan 20 02:20:15.287868 kubelet[2900]: I0120 02:20:15.287424 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-host-proc-sys-net\") pod \"cilium-wgmcb\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " pod="kube-system/cilium-wgmcb" Jan 20 02:20:15.287868 kubelet[2900]: I0120 02:20:15.287446 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-cni-path\") pod \"cilium-wgmcb\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " pod="kube-system/cilium-wgmcb" Jan 20 02:20:15.287868 kubelet[2900]: I0120 02:20:15.287468 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-etc-cni-netd\") pod \"cilium-wgmcb\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " pod="kube-system/cilium-wgmcb" Jan 20 02:20:15.288246 kubelet[2900]: I0120 02:20:15.287500 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-clustermesh-secrets\") pod \"cilium-wgmcb\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " pod="kube-system/cilium-wgmcb" Jan 20 02:20:15.340518 systemd[1]: Created slice kubepods-besteffort-pod8e769453_5f0d_4e1d_8910_c192acbf2294.slice - libcontainer container kubepods-besteffort-pod8e769453_5f0d_4e1d_8910_c192acbf2294.slice. 
Jan 20 02:20:15.491814 kubelet[2900]: I0120 02:20:15.472403 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4bv9\" (UniqueName: \"kubernetes.io/projected/8e769453-5f0d-4e1d-8910-c192acbf2294-kube-api-access-l4bv9\") pod \"cilium-operator-6c4d7847fc-k7xz8\" (UID: \"8e769453-5f0d-4e1d-8910-c192acbf2294\") " pod="kube-system/cilium-operator-6c4d7847fc-k7xz8" Jan 20 02:20:15.491814 kubelet[2900]: I0120 02:20:15.472530 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e769453-5f0d-4e1d-8910-c192acbf2294-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-k7xz8\" (UID: \"8e769453-5f0d-4e1d-8910-c192acbf2294\") " pod="kube-system/cilium-operator-6c4d7847fc-k7xz8" Jan 20 02:20:15.907089 containerd[1567]: time="2026-01-20T02:20:15.906487246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wgmcb,Uid:f005b0f2-4c88-40c6-a2d4-a180bd513b5f,Namespace:kube-system,Attempt:0,}" Jan 20 02:20:16.100076 containerd[1567]: time="2026-01-20T02:20:16.089352728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-k7xz8,Uid:8e769453-5f0d-4e1d-8910-c192acbf2294,Namespace:kube-system,Attempt:0,}" Jan 20 02:20:16.319426 containerd[1567]: time="2026-01-20T02:20:16.311204545Z" level=info msg="connecting to shim 36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b" address="unix:///run/containerd/s/a2c5429780b4801be57d181460a37fd3d82492d213c379d73abe186ca57fecb3" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:20:16.458292 containerd[1567]: time="2026-01-20T02:20:16.430322832Z" level=info msg="connecting to shim a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25" address="unix:///run/containerd/s/af328ab87b396e02de74595607a38fe4daec85d05ba8c90a02821c3d46334704" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:20:16.978523 systemd[1]: Started cri-containerd-36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b.scope - libcontainer container 36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b. Jan 20 02:20:17.007440 systemd[1]: Started cri-containerd-a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25.scope - libcontainer container a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25. Jan 20 02:20:17.643328 containerd[1567]: time="2026-01-20T02:20:17.642991712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wgmcb,Uid:f005b0f2-4c88-40c6-a2d4-a180bd513b5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\"" Jan 20 02:20:17.779894 containerd[1567]: time="2026-01-20T02:20:17.773471313Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 20 02:20:18.018837 containerd[1567]: time="2026-01-20T02:20:18.018781241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-k7xz8,Uid:8e769453-5f0d-4e1d-8910-c192acbf2294,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25\"" Jan 20 02:20:58.290436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1940149702.mount: Deactivated successfully. 
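[Editor's note: the PullImage line above uses a tag-plus-digest reference, so the tag is informational and the sha256 digest pins the exact image content. A sketch of issuing the same pull through the CRI image service; auth and sandbox config are omitted:]

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	// Reference copied verbatim from the PullImage log entry; the digest
	// wins regardless of where the v1.12.5 tag later points.
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{
			Image: "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("image ref:", resp.ImageRef)
}
```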
Jan 20 02:21:28.996846 containerd[1567]: time="2026-01-20T02:21:28.995860039Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:21:28.996846 containerd[1567]: time="2026-01-20T02:21:28.990587351Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 20 02:21:29.025543 containerd[1567]: time="2026-01-20T02:21:29.025495576Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:21:29.121635 containerd[1567]: time="2026-01-20T02:21:29.118489576Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 1m11.339140778s" Jan 20 02:21:29.121635 containerd[1567]: time="2026-01-20T02:21:29.118584363Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 20 02:21:29.145688 containerd[1567]: time="2026-01-20T02:21:29.145446191Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 20 02:21:29.214234 containerd[1567]: time="2026-01-20T02:21:29.209224353Z" level=info msg="CreateContainer within sandbox \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 20 02:21:29.495829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4109415185.mount: Deactivated successfully. Jan 20 02:21:29.743216 containerd[1567]: time="2026-01-20T02:21:29.741568264Z" level=info msg="Container 8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:21:29.866420 containerd[1567]: time="2026-01-20T02:21:29.864680820Z" level=info msg="CreateContainer within sandbox \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00\"" Jan 20 02:21:29.868365 containerd[1567]: time="2026-01-20T02:21:29.868162724Z" level=info msg="StartContainer for \"8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00\"" Jan 20 02:21:29.877596 containerd[1567]: time="2026-01-20T02:21:29.877550959Z" level=info msg="connecting to shim 8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00" address="unix:///run/containerd/s/a2c5429780b4801be57d181460a37fd3d82492d213c379d73abe186ca57fecb3" protocol=ttrpc version=3 Jan 20 02:21:30.294709 systemd[1]: Started cri-containerd-8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00.scope - libcontainer container 8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00. 
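[Editor's note: the "Pulled image" entry above reports an empty repo tag because containerd drops the tag once a digest pins the reference. A small, dependency-free Go sketch of splitting such a reference into repository, tag, and digest; real parsers live in the distribution/reference library, this is only the string logic:]

```go
package main

import (
	"fmt"
	"strings"
)

// Splits a "repo:tag@digest" reference into its components.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	// The tag separator is the last ":" after the final "/", so a
	// registry port (host:5000/img) is not mistaken for a tag.
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitRef(
		"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
	fmt.Println(repo)   // quay.io/cilium/cilium
	fmt.Println(tag)    // v1.12.5
	fmt.Println(digest) // sha256:06ce2b0a...
}
```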
Jan 20 02:21:30.669721 containerd[1567]: time="2026-01-20T02:21:30.669107171Z" level=info msg="StartContainer for \"8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00\" returns successfully" Jan 20 02:21:30.738565 systemd[1]: cri-containerd-8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00.scope: Deactivated successfully. Jan 20 02:21:30.811442 containerd[1567]: time="2026-01-20T02:21:30.811283736Z" level=info msg="received container exit event container_id:\"8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00\" id:\"8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00\" pid:3305 exited_at:{seconds:1768875690 nanos:793605730}" Jan 20 02:21:31.197878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00-rootfs.mount: Deactivated successfully. Jan 20 02:21:32.347923 containerd[1567]: time="2026-01-20T02:21:32.344386441Z" level=info msg="CreateContainer within sandbox \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 20 02:21:33.135847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3148384295.mount: Deactivated successfully. Jan 20 02:21:33.278539 containerd[1567]: time="2026-01-20T02:21:33.278241562Z" level=info msg="Container bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:21:33.317067 containerd[1567]: time="2026-01-20T02:21:33.315901973Z" level=info msg="CreateContainer within sandbox \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23\"" Jan 20 02:21:33.328623 containerd[1567]: time="2026-01-20T02:21:33.328577084Z" level=info msg="StartContainer for \"bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23\"" Jan 20 02:21:33.362110 containerd[1567]: time="2026-01-20T02:21:33.361937662Z" level=info msg="connecting to shim bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23" address="unix:///run/containerd/s/a2c5429780b4801be57d181460a37fd3d82492d213c379d73abe186ca57fecb3" protocol=ttrpc version=3 Jan 20 02:21:33.508874 systemd[1]: Started cri-containerd-bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23.scope - libcontainer container bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23. Jan 20 02:21:33.818887 containerd[1567]: time="2026-01-20T02:21:33.818466517Z" level=info msg="StartContainer for \"bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23\" returns successfully" Jan 20 02:21:33.885354 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 02:21:33.885666 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 02:21:33.906327 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 20 02:21:33.924577 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 02:21:33.925330 systemd[1]: cri-containerd-bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23.scope: Deactivated successfully. 
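[Editor's note: the scope deactivation and "received container exit event ... exited_at" entries above are a Cilium init container (mount-cgroup) running to completion; each init container must exit 0 before the next one starts. A sketch of watching for that exit through CRI by polling ContainerStatus; the container ID is truncated from the log:]

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// Polls CRI until the container has exited, then reports its exit
// code and finish time, mirroring the exit events in the log.
func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	const id = "8f82e0a00ba4..." // container id from the log, truncated
	for {
		resp, err := rt.ContainerStatus(context.Background(),
			&runtimeapi.ContainerStatusRequest{ContainerId: id})
		if err != nil {
			log.Fatal(err)
		}
		if resp.Status.State == runtimeapi.ContainerState_CONTAINER_EXITED {
			// FinishedAt is nanoseconds since the Unix epoch.
			fmt.Printf("exit code %d at %s\n", resp.Status.ExitCode,
				time.Unix(0, resp.Status.FinishedAt))
			return
		}
		time.Sleep(time.Second)
	}
}
```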
Jan 20 02:21:33.958435 containerd[1567]: time="2026-01-20T02:21:33.950863343Z" level=info msg="received container exit event container_id:\"bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23\" id:\"bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23\" pid:3358 exited_at:{seconds:1768875693 nanos:945757648}" Jan 20 02:21:34.033615 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 20 02:21:34.281147 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 02:21:34.389308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23-rootfs.mount: Deactivated successfully. Jan 20 02:21:35.550715 containerd[1567]: time="2026-01-20T02:21:35.546798153Z" level=info msg="CreateContainer within sandbox \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 20 02:21:35.824796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2070171278.mount: Deactivated successfully. Jan 20 02:21:35.890128 containerd[1567]: time="2026-01-20T02:21:35.889408127Z" level=info msg="Container d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:21:36.012122 containerd[1567]: time="2026-01-20T02:21:36.009373527Z" level=info msg="CreateContainer within sandbox \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51\"" Jan 20 02:21:36.025902 containerd[1567]: time="2026-01-20T02:21:36.022633487Z" level=info msg="StartContainer for \"d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51\"" Jan 20 02:21:36.096830 containerd[1567]: time="2026-01-20T02:21:36.092312175Z" level=info msg="connecting to shim d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51" address="unix:///run/containerd/s/a2c5429780b4801be57d181460a37fd3d82492d213c379d73abe186ca57fecb3" protocol=ttrpc version=3 Jan 20 02:21:36.426880 systemd[1]: Started cri-containerd-d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51.scope - libcontainer container d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51. Jan 20 02:21:37.490605 containerd[1567]: time="2026-01-20T02:21:37.486090665Z" level=info msg="StartContainer for \"d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51\" returns successfully" Jan 20 02:21:37.499597 systemd[1]: cri-containerd-d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51.scope: Deactivated successfully. Jan 20 02:21:37.524213 containerd[1567]: time="2026-01-20T02:21:37.521230500Z" level=info msg="received container exit event container_id:\"d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51\" id:\"d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51\" pid:3408 exited_at:{seconds:1768875697 nanos:517947832}" Jan 20 02:21:38.021424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51-rootfs.mount: Deactivated successfully. 
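[Editor's note: the mount-bpf-fs init container created above exists to ensure a BPF filesystem is mounted at /sys/fs/bpf so pinned maps survive agent restarts. A minimal, Linux-only Go equivalent of that step under those assumptions; the real init container is a shell script in the Cilium image, and this simplified check needs CAP_SYS_ADMIN:]

```go
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

// Ensures a bpffs instance is mounted at /sys/fs/bpf, skipping the
// mount if the directory already carries the bpffs magic number.
func main() {
	const target = "/sys/fs/bpf"
	if err := os.MkdirAll(target, 0o755); err != nil {
		log.Fatal(err)
	}
	var st unix.Statfs_t
	if err := unix.Statfs(target, &st); err != nil {
		log.Fatal(err)
	}
	if st.Type == unix.BPF_FS_MAGIC { // already mounted
		return
	}
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		log.Fatal(err)
	}
}
```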
Jan 20 02:21:38.782215 containerd[1567]: time="2026-01-20T02:21:38.770479221Z" level=info msg="CreateContainer within sandbox \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 20 02:21:39.030797 containerd[1567]: time="2026-01-20T02:21:39.029098447Z" level=info msg="Container 1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:21:39.134367 containerd[1567]: time="2026-01-20T02:21:39.132677359Z" level=info msg="CreateContainer within sandbox \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e\"" Jan 20 02:21:39.134367 containerd[1567]: time="2026-01-20T02:21:39.133759585Z" level=info msg="StartContainer for \"1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e\"" Jan 20 02:21:39.160446 containerd[1567]: time="2026-01-20T02:21:39.159464126Z" level=info msg="connecting to shim 1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e" address="unix:///run/containerd/s/a2c5429780b4801be57d181460a37fd3d82492d213c379d73abe186ca57fecb3" protocol=ttrpc version=3 Jan 20 02:21:39.318732 systemd[1]: Started cri-containerd-1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e.scope - libcontainer container 1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e. Jan 20 02:21:39.734596 systemd[1]: cri-containerd-1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e.scope: Deactivated successfully. Jan 20 02:21:39.745530 containerd[1567]: time="2026-01-20T02:21:39.744539602Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf005b0f2_4c88_40c6_a2d4_a180bd513b5f.slice/cri-containerd-1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e.scope/memory.events\": no such file or directory" Jan 20 02:21:39.806804 containerd[1567]: time="2026-01-20T02:21:39.805870000Z" level=info msg="received container exit event container_id:\"1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e\" id:\"1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e\" pid:3452 exited_at:{seconds:1768875699 nanos:776712804}" Jan 20 02:21:39.872414 containerd[1567]: time="2026-01-20T02:21:39.872263369Z" level=info msg="StartContainer for \"1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e\" returns successfully" Jan 20 02:21:40.122363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e-rootfs.mount: Deactivated successfully. 
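The *cgroupsv2.Manager.EventChan warning above is a benign race: the clean-cilium-state init container exits so quickly that its cgroup directory is removed before the OOM-event watcher can add an inotify watch on memory.events. A minimal sketch of the same tolerant handling, with the path copied from the warning:

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        // Path from the warning above; on a cgroup v2 host it exists only while the scope does.
        path := "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/" +
            "kubepods-burstable-podf005b0f2_4c88_40c6_a2d4_a180bd513b5f.slice/" +
            "cri-containerd-1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e.scope/memory.events"
        if _, err := os.ReadFile(path); errors.Is(err, fs.ErrNotExist) {
            fmt.Println("cgroup already gone: short-lived container exited first (benign)")
        }
    }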
Jan 20 02:21:40.950691 containerd[1567]: time="2026-01-20T02:21:40.945742658Z" level=info msg="CreateContainer within sandbox \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 20 02:21:41.242303 containerd[1567]: time="2026-01-20T02:21:41.241138576Z" level=info msg="Container f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:21:41.294558 containerd[1567]: time="2026-01-20T02:21:41.293371403Z" level=info msg="CreateContainer within sandbox \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e\"" Jan 20 02:21:41.308944 containerd[1567]: time="2026-01-20T02:21:41.302685681Z" level=info msg="StartContainer for \"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e\"" Jan 20 02:21:41.355979 containerd[1567]: time="2026-01-20T02:21:41.335451981Z" level=info msg="connecting to shim f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e" address="unix:///run/containerd/s/a2c5429780b4801be57d181460a37fd3d82492d213c379d73abe186ca57fecb3" protocol=ttrpc version=3 Jan 20 02:21:41.630501 systemd[1]: Started cri-containerd-f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e.scope - libcontainer container f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e. Jan 20 02:21:42.131150 containerd[1567]: time="2026-01-20T02:21:42.126996356Z" level=info msg="StartContainer for \"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e\" returns successfully" Jan 20 02:21:42.140276 containerd[1567]: time="2026-01-20T02:21:42.139416371Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:21:42.162438 containerd[1567]: time="2026-01-20T02:21:42.161557197Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 20 02:21:42.179533 containerd[1567]: time="2026-01-20T02:21:42.178925283Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:21:42.195782 containerd[1567]: time="2026-01-20T02:21:42.190143780Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 13.044534906s" Jan 20 02:21:42.195782 containerd[1567]: time="2026-01-20T02:21:42.190224089Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 20 02:21:42.224175 containerd[1567]: time="2026-01-20T02:21:42.222933124Z" level=info msg="CreateContainer within sandbox \"a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 20 02:21:42.391495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2458063441.mount: Deactivated successfully. Jan 20 02:21:42.401385 containerd[1567]: time="2026-01-20T02:21:42.392599304Z" level=info msg="Container 72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:21:42.588450 containerd[1567]: time="2026-01-20T02:21:42.587568772Z" level=info msg="CreateContainer within sandbox \"a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e\"" Jan 20 02:21:42.631445 containerd[1567]: time="2026-01-20T02:21:42.627572740Z" level=info msg="StartContainer for \"72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e\"" Jan 20 02:21:42.687688 containerd[1567]: time="2026-01-20T02:21:42.677681513Z" level=info msg="connecting to shim 72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e" address="unix:///run/containerd/s/af328ab87b396e02de74595607a38fe4daec85d05ba8c90a02821c3d46334704" protocol=ttrpc version=3 Jan 20 02:21:42.952262 systemd[1]: Started cri-containerd-72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e.scope - libcontainer container 72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e. Jan 20 02:21:43.209582 kubelet[2900]: I0120 02:21:43.207518 2900 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 02:21:43.534520 containerd[1567]: time="2026-01-20T02:21:43.531706307Z" level=info msg="StartContainer for \"72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e\" returns successfully" Jan 20 02:21:43.590202 kubelet[2900]: W0120 02:21:43.588475 2900 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 20 02:21:43.590202 kubelet[2900]: E0120 02:21:43.588540 2900 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 20 02:21:43.596322 kubelet[2900]: I0120 02:21:43.593157 2900 status_manager.go:890] "Failed to get status for pod" podUID="70822b44-fdad-4ab3-a09a-888003a4ded6" pod="kube-system/coredns-668d6bf9bc-n7pbk" err="pods \"coredns-668d6bf9bc-n7pbk\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Jan 20 02:21:43.633638 systemd[1]: Created slice kubepods-burstable-pod70822b44_fdad_4ab3_a09a_888003a4ded6.slice - libcontainer container kubepods-burstable-pod70822b44_fdad_4ab3_a09a_888003a4ded6.slice. 
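The PullImage entries above for the cilium operator image contain enough to estimate transfer rate: 18,904,197 bytes read over 13.044534906s. A quick check in Go:

    package main

    import "fmt"

    func main() {
        const bytesRead = 18904197   // "bytes read" from the stop-pulling entry above
        const seconds = 13.044534906 // duration from the PullImage entry above
        fmt.Printf("~%.2f MiB/s\n", bytesRead/seconds/(1<<20)) // ≈1.38 MiB/s
    }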
Jan 20 02:21:43.708728 kubelet[2900]: I0120 02:21:43.708679 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll6lt\" (UniqueName: \"kubernetes.io/projected/70822b44-fdad-4ab3-a09a-888003a4ded6-kube-api-access-ll6lt\") pod \"coredns-668d6bf9bc-n7pbk\" (UID: \"70822b44-fdad-4ab3-a09a-888003a4ded6\") " pod="kube-system/coredns-668d6bf9bc-n7pbk" Jan 20 02:21:43.712647 kubelet[2900]: I0120 02:21:43.712603 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03a5d512-f8bc-4887-a449-4983961e6308-config-volume\") pod \"coredns-668d6bf9bc-dnkt2\" (UID: \"03a5d512-f8bc-4887-a449-4983961e6308\") " pod="kube-system/coredns-668d6bf9bc-dnkt2" Jan 20 02:21:43.712943 kubelet[2900]: I0120 02:21:43.712916 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r7w6\" (UniqueName: \"kubernetes.io/projected/03a5d512-f8bc-4887-a449-4983961e6308-kube-api-access-4r7w6\") pod \"coredns-668d6bf9bc-dnkt2\" (UID: \"03a5d512-f8bc-4887-a449-4983961e6308\") " pod="kube-system/coredns-668d6bf9bc-dnkt2" Jan 20 02:21:43.716264 kubelet[2900]: I0120 02:21:43.716237 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70822b44-fdad-4ab3-a09a-888003a4ded6-config-volume\") pod \"coredns-668d6bf9bc-n7pbk\" (UID: \"70822b44-fdad-4ab3-a09a-888003a4ded6\") " pod="kube-system/coredns-668d6bf9bc-n7pbk" Jan 20 02:21:43.754868 systemd[1]: Created slice kubepods-burstable-pod03a5d512_f8bc_4887_a449_4983961e6308.slice - libcontainer container kubepods-burstable-pod03a5d512_f8bc_4887_a449_4983961e6308.slice. Jan 20 02:21:44.906636 kubelet[2900]: E0120 02:21:44.906491 2900 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 20 02:21:44.912739 kubelet[2900]: E0120 02:21:44.912710 2900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70822b44-fdad-4ab3-a09a-888003a4ded6-config-volume podName:70822b44-fdad-4ab3-a09a-888003a4ded6 nodeName:}" failed. No retries permitted until 2026-01-20 02:21:45.412558973 +0000 UTC m=+110.814892170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/70822b44-fdad-4ab3-a09a-888003a4ded6-config-volume") pod "coredns-668d6bf9bc-n7pbk" (UID: "70822b44-fdad-4ab3-a09a-888003a4ded6") : failed to sync configmap cache: timed out waiting for the condition Jan 20 02:21:44.944583 kubelet[2900]: E0120 02:21:44.935589 2900 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 20 02:21:44.944583 kubelet[2900]: E0120 02:21:44.935756 2900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/03a5d512-f8bc-4887-a449-4983961e6308-config-volume podName:03a5d512-f8bc-4887-a449-4983961e6308 nodeName:}" failed. No retries permitted until 2026-01-20 02:21:45.435716789 +0000 UTC m=+110.838049996 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/03a5d512-f8bc-4887-a449-4983961e6308-config-volume") pod "coredns-668d6bf9bc-dnkt2" (UID: "03a5d512-f8bc-4887-a449-4983961e6308") : failed to sync configmap cache: timed out waiting for the condition Jan 20 02:21:45.006167 kubelet[2900]: I0120 02:21:45.005990 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wgmcb" podStartSLOduration=19.552882253 podStartE2EDuration="1m31.005964557s" podCreationTimestamp="2026-01-20 02:20:14 +0000 UTC" firstStartedPulling="2026-01-20 02:20:17.680846121 +0000 UTC m=+23.083179328" lastFinishedPulling="2026-01-20 02:21:29.133928435 +0000 UTC m=+94.536261632" observedRunningTime="2026-01-20 02:21:44.822908237 +0000 UTC m=+110.225241434" watchObservedRunningTime="2026-01-20 02:21:45.005964557 +0000 UTC m=+110.408297764" Jan 20 02:21:45.518230 containerd[1567]: time="2026-01-20T02:21:45.516675506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n7pbk,Uid:70822b44-fdad-4ab3-a09a-888003a4ded6,Namespace:kube-system,Attempt:0,}" Jan 20 02:21:45.663212 containerd[1567]: time="2026-01-20T02:21:45.661630432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dnkt2,Uid:03a5d512-f8bc-4887-a449-4983961e6308,Namespace:kube-system,Attempt:0,}" Jan 20 02:21:57.798282 systemd-networkd[1474]: cilium_host: Link UP Jan 20 02:21:57.801961 systemd-networkd[1474]: cilium_net: Link UP Jan 20 02:21:57.805982 systemd-networkd[1474]: cilium_host: Gained carrier Jan 20 02:21:57.829857 systemd-networkd[1474]: cilium_net: Gained carrier Jan 20 02:21:57.840844 systemd-networkd[1474]: cilium_host: Gained IPv6LL Jan 20 02:21:57.841346 systemd-networkd[1474]: cilium_net: Gained IPv6LL Jan 20 02:21:59.159612 systemd-networkd[1474]: cilium_vxlan: Link UP Jan 20 02:21:59.159624 systemd-networkd[1474]: cilium_vxlan: Gained carrier Jan 20 02:22:00.946426 kernel: NET: Registered PF_ALG protocol family Jan 20 02:22:01.022692 systemd-networkd[1474]: cilium_vxlan: Gained IPv6LL Jan 20 02:22:07.739877 systemd-networkd[1474]: lxc_health: Link UP Jan 20 02:22:07.854975 systemd-networkd[1474]: lxc_health: Gained carrier Jan 20 02:22:08.325689 kubelet[2900]: I0120 02:22:08.323701 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-k7xz8" podStartSLOduration=30.150431872 podStartE2EDuration="1m54.323676599s" podCreationTimestamp="2026-01-20 02:20:14 +0000 UTC" firstStartedPulling="2026-01-20 02:20:18.028526931 +0000 UTC m=+23.430860128" lastFinishedPulling="2026-01-20 02:21:42.201771658 +0000 UTC m=+107.604104855" observedRunningTime="2026-01-20 02:21:45.205577541 +0000 UTC m=+110.607910738" watchObservedRunningTime="2026-01-20 02:22:08.323676599 +0000 UTC m=+133.726009826" Jan 20 02:22:09.480587 systemd-networkd[1474]: lxcf1634e18d3d4: Link UP Jan 20 02:22:09.787175 kernel: eth0: renamed from tmp4b553 Jan 20 02:22:09.906523 kernel: eth0: renamed from tmp8ba02 Jan 20 02:22:09.884847 systemd-networkd[1474]: lxceb2b8d4a0f45: Link UP Jan 20 02:22:09.890765 systemd-networkd[1474]: lxc_health: Gained IPv6LL Jan 20 02:22:09.957393 systemd-networkd[1474]: lxceb2b8d4a0f45: Gained carrier Jan 20 02:22:10.087901 systemd-networkd[1474]: lxcf1634e18d3d4: Gained carrier Jan 20 02:22:11.455727 systemd-networkd[1474]: lxcf1634e18d3d4: Gained IPv6LL Jan 20 02:22:11.654761 systemd-networkd[1474]: lxceb2b8d4a0f45: Gained IPv6LL Jan 20 02:22:24.534602 
sudo[1783]: pam_unix(sudo:session): session closed for user root Jan 20 02:22:24.565608 sshd[1782]: Connection closed by 10.0.0.1 port 58328 Jan 20 02:22:24.578835 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Jan 20 02:22:24.610665 systemd[1]: sshd@8-10.0.0.89:22-10.0.0.1:58328.service: Deactivated successfully. Jan 20 02:22:24.626129 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 02:22:24.681863 systemd[1]: session-9.scope: Consumed 19.448s CPU time, 230.8M memory peak. Jan 20 02:22:24.725957 systemd-logind[1533]: Session 9 logged out. Waiting for processes to exit. Jan 20 02:22:24.739390 systemd-logind[1533]: Removed session 9. Jan 20 02:22:41.088381 containerd[1567]: time="2026-01-20T02:22:41.087753159Z" level=info msg="connecting to shim 4b553c836c199fe6d05d64b4ef9ee07644c38a6d1d9dbca728ff95205f6ad490" address="unix:///run/containerd/s/438f8420958a1ef4692c05e169a1e969a9337ca2bebbb5fa04136b94148116f7" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:22:41.462137 systemd[1]: Started cri-containerd-4b553c836c199fe6d05d64b4ef9ee07644c38a6d1d9dbca728ff95205f6ad490.scope - libcontainer container 4b553c836c199fe6d05d64b4ef9ee07644c38a6d1d9dbca728ff95205f6ad490. Jan 20 02:22:41.735304 containerd[1567]: time="2026-01-20T02:22:41.732291528Z" level=info msg="connecting to shim 8ba02d9ab11da233d6e39cc4683484215bf532a2fcdac56d2a0b321d9c1a74bc" address="unix:///run/containerd/s/641af5800ce7f9fb9638e733978439ad29cb5670a37e7c1e21363642fea19001" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:22:41.801978 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:22:42.086667 systemd[1]: Started cri-containerd-8ba02d9ab11da233d6e39cc4683484215bf532a2fcdac56d2a0b321d9c1a74bc.scope - libcontainer container 8ba02d9ab11da233d6e39cc4683484215bf532a2fcdac56d2a0b321d9c1a74bc. Jan 20 02:22:42.196190 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:22:42.264793 containerd[1567]: time="2026-01-20T02:22:42.264736157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n7pbk,Uid:70822b44-fdad-4ab3-a09a-888003a4ded6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b553c836c199fe6d05d64b4ef9ee07644c38a6d1d9dbca728ff95205f6ad490\"" Jan 20 02:22:42.330400 containerd[1567]: time="2026-01-20T02:22:42.328613020Z" level=info msg="CreateContainer within sandbox \"4b553c836c199fe6d05d64b4ef9ee07644c38a6d1d9dbca728ff95205f6ad490\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 02:22:42.558122 containerd[1567]: time="2026-01-20T02:22:42.555526883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dnkt2,Uid:03a5d512-f8bc-4887-a449-4983961e6308,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ba02d9ab11da233d6e39cc4683484215bf532a2fcdac56d2a0b321d9c1a74bc\"" Jan 20 02:22:42.591473 containerd[1567]: time="2026-01-20T02:22:42.588860843Z" level=info msg="CreateContainer within sandbox \"8ba02d9ab11da233d6e39cc4683484215bf532a2fcdac56d2a0b321d9c1a74bc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 02:22:42.708559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount399621043.mount: Deactivated successfully. 
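The pod_startup_latency_tracker entry for cilium-wgmcb a little earlier decomposes cleanly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window (lastFinishedPulling − firstStartedPulling, taken from the monotonic m=+ offsets). Reproducing the arithmetic with the values from that entry:

    package main

    import "fmt"

    func main() {
        const e2e = 91.005964557                 // podStartE2EDuration (1m31.005964557s)
        const pull = 94.536261632 - 23.083179328 // lastFinishedPulling - firstStartedPulling (m=+ offsets)
        fmt.Printf("%.9f\n", e2e-pull)           // 19.552882253 == podStartSLOduration
    }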
Jan 20 02:22:42.714800 containerd[1567]: time="2026-01-20T02:22:42.710664539Z" level=info msg="Container 74ef783035dc82c91235f4e4166d5b331bfa62b91bc8faca194e3b26fe87ed3c: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:22:42.731806 containerd[1567]: time="2026-01-20T02:22:42.729619461Z" level=info msg="Container b3a594d2f5cb09bc843070abd7c2da3aec98dbccf77f9efb6feb0f25f8a10dfb: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:22:42.769924 containerd[1567]: time="2026-01-20T02:22:42.768620389Z" level=info msg="CreateContainer within sandbox \"4b553c836c199fe6d05d64b4ef9ee07644c38a6d1d9dbca728ff95205f6ad490\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"74ef783035dc82c91235f4e4166d5b331bfa62b91bc8faca194e3b26fe87ed3c\"" Jan 20 02:22:42.781991 containerd[1567]: time="2026-01-20T02:22:42.780964022Z" level=info msg="StartContainer for \"74ef783035dc82c91235f4e4166d5b331bfa62b91bc8faca194e3b26fe87ed3c\"" Jan 20 02:22:42.824962 containerd[1567]: time="2026-01-20T02:22:42.800766885Z" level=info msg="connecting to shim 74ef783035dc82c91235f4e4166d5b331bfa62b91bc8faca194e3b26fe87ed3c" address="unix:///run/containerd/s/438f8420958a1ef4692c05e169a1e969a9337ca2bebbb5fa04136b94148116f7" protocol=ttrpc version=3 Jan 20 02:22:42.886081 containerd[1567]: time="2026-01-20T02:22:42.883159723Z" level=info msg="CreateContainer within sandbox \"8ba02d9ab11da233d6e39cc4683484215bf532a2fcdac56d2a0b321d9c1a74bc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b3a594d2f5cb09bc843070abd7c2da3aec98dbccf77f9efb6feb0f25f8a10dfb\"" Jan 20 02:22:42.892490 containerd[1567]: time="2026-01-20T02:22:42.891562826Z" level=info msg="StartContainer for \"b3a594d2f5cb09bc843070abd7c2da3aec98dbccf77f9efb6feb0f25f8a10dfb\"" Jan 20 02:22:42.946077 containerd[1567]: time="2026-01-20T02:22:42.941445012Z" level=info msg="connecting to shim b3a594d2f5cb09bc843070abd7c2da3aec98dbccf77f9efb6feb0f25f8a10dfb" address="unix:///run/containerd/s/641af5800ce7f9fb9638e733978439ad29cb5670a37e7c1e21363642fea19001" protocol=ttrpc version=3 Jan 20 02:22:43.014868 systemd[1]: Started cri-containerd-74ef783035dc82c91235f4e4166d5b331bfa62b91bc8faca194e3b26fe87ed3c.scope - libcontainer container 74ef783035dc82c91235f4e4166d5b331bfa62b91bc8faca194e3b26fe87ed3c. Jan 20 02:22:43.217544 systemd[1]: Started cri-containerd-b3a594d2f5cb09bc843070abd7c2da3aec98dbccf77f9efb6feb0f25f8a10dfb.scope - libcontainer container b3a594d2f5cb09bc843070abd7c2da3aec98dbccf77f9efb6feb0f25f8a10dfb. 
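Unit names like var-lib-containerd-tmpmounts-containerd\x2dmount399621043.mount in the entries above are systemd-escaped mount paths: '/' becomes '-', and a literal '-' inside a path component becomes \x2d. A simplified sketch of the forward mapping (the real rules, implemented by systemd-escape, also cover dots, leading characters, and arbitrary bytes):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeUnit is a simplified stand-in for `systemd-escape --path --suffix=mount`;
    // it handles only '-' and '/', which suffices for the tmpmounts units above.
    func escapeUnit(path string) string {
        p := strings.Trim(path, "/")
        p = strings.ReplaceAll(p, "-", `\x2d`) // escape literal dashes first
        p = strings.ReplaceAll(p, "/", "-")    // then map path separators
        return p + ".mount"
    }

    func main() {
        fmt.Println(escapeUnit("/var/lib/containerd/tmpmounts/containerd-mount399621043"))
        // var-lib-containerd-tmpmounts-containerd\x2dmount399621043.mount
    }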
Jan 20 02:22:43.429188 containerd[1567]: time="2026-01-20T02:22:43.428837616Z" level=info msg="StartContainer for \"74ef783035dc82c91235f4e4166d5b331bfa62b91bc8faca194e3b26fe87ed3c\" returns successfully" Jan 20 02:22:43.521718 containerd[1567]: time="2026-01-20T02:22:43.512100977Z" level=info msg="StartContainer for \"b3a594d2f5cb09bc843070abd7c2da3aec98dbccf77f9efb6feb0f25f8a10dfb\" returns successfully" Jan 20 02:22:43.861117 kubelet[2900]: I0120 02:22:43.855951 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dnkt2" podStartSLOduration=167.855864315 podStartE2EDuration="2m47.855864315s" podCreationTimestamp="2026-01-20 02:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:22:43.833500939 +0000 UTC m=+169.235834176" watchObservedRunningTime="2026-01-20 02:22:43.855864315 +0000 UTC m=+169.258197531" Jan 20 02:22:44.161344 kubelet[2900]: I0120 02:22:44.145961 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-n7pbk" podStartSLOduration=167.145935585 podStartE2EDuration="2m47.145935585s" podCreationTimestamp="2026-01-20 02:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:22:44.14389559 +0000 UTC m=+169.546228797" watchObservedRunningTime="2026-01-20 02:22:44.145935585 +0000 UTC m=+169.548268781" Jan 20 02:23:28.141136 kubelet[2900]: E0120 02:23:28.140681 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:23:37.113694 kubelet[2900]: E0120 02:23:37.112911 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:23:46.116663 kubelet[2900]: E0120 02:23:46.115651 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:23:49.718252 kubelet[2900]: E0120 02:23:49.700863 2900 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.561s" Jan 20 02:23:51.594340 kubelet[2900]: E0120 02:23:51.594087 2900 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.39s" Jan 20 02:23:57.217649 kubelet[2900]: E0120 02:23:57.203394 2900 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.035s" Jan 20 02:24:01.066277 containerd[1567]: time="2026-01-20T02:24:00.984224143Z" level=warning msg="container event discarded" container=308476605bc5cbdb6072e4e1890cf6c16c6de68199ba23e1e9b3ff18b4c581fb type=CONTAINER_CREATED_EVENT Jan 20 02:24:01.066277 containerd[1567]: time="2026-01-20T02:24:01.065258584Z" level=warning msg="container event discarded" container=308476605bc5cbdb6072e4e1890cf6c16c6de68199ba23e1e9b3ff18b4c581fb type=CONTAINER_STARTED_EVENT Jan 20 02:24:01.433396 containerd[1567]: time="2026-01-20T02:24:01.433183737Z" level=warning msg="container event discarded" container=d2500a690cf32f0006ea37b089ce2a1659ac97e8dddc52cf0b915267505f8197 type=CONTAINER_CREATED_EVENT Jan 20 02:24:01.433396 containerd[1567]: 
time="2026-01-20T02:24:01.433312948Z" level=warning msg="container event discarded" container=d2500a690cf32f0006ea37b089ce2a1659ac97e8dddc52cf0b915267505f8197 type=CONTAINER_STARTED_EVENT Jan 20 02:24:01.789330 containerd[1567]: time="2026-01-20T02:24:01.786545026Z" level=warning msg="container event discarded" container=6f43d06c67b0c7ae4cf918bbd9eb7434f1ab80470b16ef7b77353bd4fc8ce7c5 type=CONTAINER_CREATED_EVENT Jan 20 02:24:01.789330 containerd[1567]: time="2026-01-20T02:24:01.786674004Z" level=warning msg="container event discarded" container=6f43d06c67b0c7ae4cf918bbd9eb7434f1ab80470b16ef7b77353bd4fc8ce7c5 type=CONTAINER_STARTED_EVENT Jan 20 02:24:02.071675 containerd[1567]: time="2026-01-20T02:24:02.071215663Z" level=warning msg="container event discarded" container=fa31471b5a284f84c3025ee484d18b1714f347cc6433937f34fbc736aae21821 type=CONTAINER_CREATED_EVENT Jan 20 02:24:02.142662 containerd[1567]: time="2026-01-20T02:24:02.142350251Z" level=warning msg="container event discarded" container=1bf8ff131a2570de6580d97c23b2851c1991b177785c8dd9aa2cdbeefdcc01c7 type=CONTAINER_CREATED_EVENT Jan 20 02:24:02.200372 containerd[1567]: time="2026-01-20T02:24:02.200156839Z" level=warning msg="container event discarded" container=15f9f7c0d0634ba5502060beae6ba99967ddaac8e3d6d45ddd5cd8ca253c2baa type=CONTAINER_CREATED_EVENT Jan 20 02:24:04.347798 containerd[1567]: time="2026-01-20T02:24:04.347431428Z" level=warning msg="container event discarded" container=15f9f7c0d0634ba5502060beae6ba99967ddaac8e3d6d45ddd5cd8ca253c2baa type=CONTAINER_STARTED_EVENT Jan 20 02:24:04.676917 containerd[1567]: time="2026-01-20T02:24:04.641548166Z" level=warning msg="container event discarded" container=fa31471b5a284f84c3025ee484d18b1714f347cc6433937f34fbc736aae21821 type=CONTAINER_STARTED_EVENT Jan 20 02:24:05.350845 containerd[1567]: time="2026-01-20T02:24:05.350211510Z" level=warning msg="container event discarded" container=1bf8ff131a2570de6580d97c23b2851c1991b177785c8dd9aa2cdbeefdcc01c7 type=CONTAINER_STARTED_EVENT Jan 20 02:24:07.120966 kubelet[2900]: E0120 02:24:07.120922 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:24:07.131790 kubelet[2900]: E0120 02:24:07.122328 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:24:10.122590 kubelet[2900]: E0120 02:24:10.117948 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:24:14.122547 kubelet[2900]: E0120 02:24:14.122416 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:24:16.279488 kubelet[2900]: E0120 02:24:16.278294 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:24:57.115615 kubelet[2900]: E0120 02:24:57.113450 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:25:05.117097 kubelet[2900]: E0120 02:25:05.112582 2900 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:25:08.781746 containerd[1567]: time="2026-01-20T02:25:08.779261015Z" level=warning msg="container event discarded" container=2c60c9535454c5515fc64a63437c201881edc8a334d19d5c3148adcfd91a919f type=CONTAINER_CREATED_EVENT Jan 20 02:25:08.781746 containerd[1567]: time="2026-01-20T02:25:08.781348343Z" level=warning msg="container event discarded" container=2c60c9535454c5515fc64a63437c201881edc8a334d19d5c3148adcfd91a919f type=CONTAINER_STARTED_EVENT Jan 20 02:25:09.363278 containerd[1567]: time="2026-01-20T02:25:09.186964394Z" level=warning msg="container event discarded" container=966cf59bb6a104050674b27c411bc50fdee2ccdb0b4f13fbe7e9d3b5adf82d19 type=CONTAINER_CREATED_EVENT Jan 20 02:25:09.388677 kubelet[2900]: E0120 02:25:09.139623 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:25:11.437517 containerd[1567]: time="2026-01-20T02:25:11.437412333Z" level=warning msg="container event discarded" container=966cf59bb6a104050674b27c411bc50fdee2ccdb0b4f13fbe7e9d3b5adf82d19 type=CONTAINER_STARTED_EVENT Jan 20 02:25:15.122856 kubelet[2900]: E0120 02:25:15.111212 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:25:17.655531 containerd[1567]: time="2026-01-20T02:25:17.655370892Z" level=warning msg="container event discarded" container=36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b type=CONTAINER_CREATED_EVENT Jan 20 02:25:17.655531 containerd[1567]: time="2026-01-20T02:25:17.655485675Z" level=warning msg="container event discarded" container=36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b type=CONTAINER_STARTED_EVENT Jan 20 02:25:18.033868 containerd[1567]: time="2026-01-20T02:25:18.029547710Z" level=warning msg="container event discarded" container=a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25 type=CONTAINER_CREATED_EVENT Jan 20 02:25:18.033868 containerd[1567]: time="2026-01-20T02:25:18.029636176Z" level=warning msg="container event discarded" container=a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25 type=CONTAINER_STARTED_EVENT Jan 20 02:25:27.120263 kubelet[2900]: E0120 02:25:27.118107 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:25:32.494985 systemd[1]: Started sshd@9-10.0.0.89:22-10.0.0.1:36352.service - OpenSSH per-connection server daemon (10.0.0.1:36352). Jan 20 02:25:33.289178 sshd[4508]: Accepted publickey for core from 10.0.0.1 port 36352 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:25:33.289230 sshd-session[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:25:33.320284 systemd-logind[1533]: New session 10 of user core. Jan 20 02:25:33.343487 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 02:25:34.726182 sshd[4511]: Connection closed by 10.0.0.1 port 36352 Jan 20 02:25:34.730416 sshd-session[4508]: pam_unix(sshd:session): session closed for user core Jan 20 02:25:34.798653 systemd[1]: sshd@9-10.0.0.89:22-10.0.0.1:36352.service: Deactivated successfully. 
Jan 20 02:25:34.818272 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 02:25:34.836256 systemd-logind[1533]: Session 10 logged out. Waiting for processes to exit. Jan 20 02:25:34.845556 systemd-logind[1533]: Removed session 10. Jan 20 02:25:36.123909 kubelet[2900]: E0120 02:25:36.123280 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:25:38.114439 kubelet[2900]: E0120 02:25:38.112142 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:25:38.114439 kubelet[2900]: E0120 02:25:38.113647 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:25:39.799646 systemd[1]: Started sshd@10-10.0.0.89:22-10.0.0.1:41544.service - OpenSSH per-connection server daemon (10.0.0.1:41544). Jan 20 02:25:40.334807 sshd[4531]: Accepted publickey for core from 10.0.0.1 port 41544 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:25:40.347667 sshd-session[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:25:40.419875 systemd-logind[1533]: New session 11 of user core. Jan 20 02:25:40.465163 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 02:25:41.282838 sshd[4534]: Connection closed by 10.0.0.1 port 41544 Jan 20 02:25:41.285673 sshd-session[4531]: pam_unix(sshd:session): session closed for user core Jan 20 02:25:41.309939 systemd[1]: sshd@10-10.0.0.89:22-10.0.0.1:41544.service: Deactivated successfully. Jan 20 02:25:41.317512 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 02:25:41.328388 systemd-logind[1533]: Session 11 logged out. Waiting for processes to exit. Jan 20 02:25:41.330668 systemd-logind[1533]: Removed session 11. Jan 20 02:25:46.439951 systemd[1]: Started sshd@11-10.0.0.89:22-10.0.0.1:54586.service - OpenSSH per-connection server daemon (10.0.0.1:54586). Jan 20 02:25:47.000507 sshd[4550]: Accepted publickey for core from 10.0.0.1 port 54586 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:25:47.003791 sshd-session[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:25:47.083900 systemd-logind[1533]: New session 12 of user core. Jan 20 02:25:47.107348 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 02:25:48.288669 sshd[4553]: Connection closed by 10.0.0.1 port 54586 Jan 20 02:25:48.282574 sshd-session[4550]: pam_unix(sshd:session): session closed for user core Jan 20 02:25:48.302204 systemd[1]: sshd@11-10.0.0.89:22-10.0.0.1:54586.service: Deactivated successfully. Jan 20 02:25:48.356831 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 02:25:48.370923 systemd-logind[1533]: Session 12 logged out. Waiting for processes to exit. Jan 20 02:25:48.382283 systemd-logind[1533]: Removed session 12. Jan 20 02:25:53.387301 systemd[1]: Started sshd@12-10.0.0.89:22-10.0.0.1:54606.service - OpenSSH per-connection server daemon (10.0.0.1:54606). 
Jan 20 02:25:53.867974 sshd[4567]: Accepted publickey for core from 10.0.0.1 port 54606 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:25:53.919226 sshd-session[4567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:25:53.971973 systemd-logind[1533]: New session 13 of user core. Jan 20 02:25:54.007581 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 02:25:55.214898 sshd[4570]: Connection closed by 10.0.0.1 port 54606 Jan 20 02:25:55.218314 sshd-session[4567]: pam_unix(sshd:session): session closed for user core Jan 20 02:25:55.252663 systemd[1]: sshd@12-10.0.0.89:22-10.0.0.1:54606.service: Deactivated successfully. Jan 20 02:25:55.293206 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 02:25:55.338911 systemd-logind[1533]: Session 13 logged out. Waiting for processes to exit. Jan 20 02:25:55.368577 systemd-logind[1533]: Removed session 13. Jan 20 02:26:00.338530 systemd[1]: Started sshd@13-10.0.0.89:22-10.0.0.1:44848.service - OpenSSH per-connection server daemon (10.0.0.1:44848). Jan 20 02:26:00.904867 sshd[4586]: Accepted publickey for core from 10.0.0.1 port 44848 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:26:00.919955 sshd-session[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:00.975296 systemd-logind[1533]: New session 14 of user core. Jan 20 02:26:00.997486 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 02:26:02.016909 sshd[4589]: Connection closed by 10.0.0.1 port 44848 Jan 20 02:26:02.031942 sshd-session[4586]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:02.080931 systemd[1]: sshd@13-10.0.0.89:22-10.0.0.1:44848.service: Deactivated successfully. Jan 20 02:26:02.099435 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 02:26:02.160175 systemd-logind[1533]: Session 14 logged out. Waiting for processes to exit. Jan 20 02:26:02.180752 systemd-logind[1533]: Removed session 14. Jan 20 02:26:07.152421 systemd[1]: Started sshd@14-10.0.0.89:22-10.0.0.1:47388.service - OpenSSH per-connection server daemon (10.0.0.1:47388). Jan 20 02:26:07.564776 sshd[4603]: Accepted publickey for core from 10.0.0.1 port 47388 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:26:07.577624 sshd-session[4603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:07.660931 systemd-logind[1533]: New session 15 of user core. Jan 20 02:26:07.685223 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 02:26:09.233114 sshd[4606]: Connection closed by 10.0.0.1 port 47388 Jan 20 02:26:09.263536 sshd-session[4603]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:09.334135 systemd[1]: sshd@14-10.0.0.89:22-10.0.0.1:47388.service: Deactivated successfully. Jan 20 02:26:09.382338 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 02:26:09.434170 systemd-logind[1533]: Session 15 logged out. Waiting for processes to exit. Jan 20 02:26:09.492926 systemd-logind[1533]: Removed session 15. Jan 20 02:26:14.289852 systemd[1]: Started sshd@15-10.0.0.89:22-10.0.0.1:47414.service - OpenSSH per-connection server daemon (10.0.0.1:47414). 
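Each SSH connection above follows the same unit lifecycle: an sshd@N-….service per-connection unit starts, a session-N.scope opens, and both deactivate on disconnect. Pairing the scope's start and the pam close line gives per-session wall time; a rough sketch using the session-13 timestamps above (the journal format carries no year, so both parse into year 0, but the difference is still valid):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "Jan 2 15:04:05.000000"
        open, _ := time.Parse(layout, "Jan 20 02:25:54.007581")   // Started session-13.scope
        closed, _ := time.Parse(layout, "Jan 20 02:25:55.218314") // session closed for user core
        fmt.Println("session 13 lasted", closed.Sub(open))        // 1.210733s
    }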
Jan 20 02:26:14.857417 sshd[4620]: Accepted publickey for core from 10.0.0.1 port 47414 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:26:14.874972 sshd-session[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:14.946238 systemd-logind[1533]: New session 16 of user core. Jan 20 02:26:15.025540 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 02:26:16.189445 kubelet[2900]: E0120 02:26:16.180741 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:16.330307 sshd[4623]: Connection closed by 10.0.0.1 port 47414 Jan 20 02:26:16.331452 sshd-session[4620]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:16.382537 systemd[1]: sshd@15-10.0.0.89:22-10.0.0.1:47414.service: Deactivated successfully. Jan 20 02:26:16.408874 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 02:26:16.432461 systemd-logind[1533]: Session 16 logged out. Waiting for processes to exit. Jan 20 02:26:16.465549 systemd-logind[1533]: Removed session 16. Jan 20 02:26:19.114923 kubelet[2900]: E0120 02:26:19.112423 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:21.439383 systemd[1]: Started sshd@16-10.0.0.89:22-10.0.0.1:39798.service - OpenSSH per-connection server daemon (10.0.0.1:39798). Jan 20 02:26:22.044269 sshd[4639]: Accepted publickey for core from 10.0.0.1 port 39798 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:26:22.083072 sshd-session[4639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:22.190441 systemd-logind[1533]: New session 17 of user core. Jan 20 02:26:22.234511 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 02:26:23.117966 kubelet[2900]: E0120 02:26:23.116443 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:23.316364 sshd[4642]: Connection closed by 10.0.0.1 port 39798 Jan 20 02:26:23.317456 sshd-session[4639]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:23.364363 systemd[1]: sshd@16-10.0.0.89:22-10.0.0.1:39798.service: Deactivated successfully. Jan 20 02:26:23.405540 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 02:26:23.441275 systemd-logind[1533]: Session 17 logged out. Waiting for processes to exit. Jan 20 02:26:23.443958 systemd-logind[1533]: Removed session 17. Jan 20 02:26:28.693116 systemd[1]: Started sshd@17-10.0.0.89:22-10.0.0.1:40698.service - OpenSSH per-connection server daemon (10.0.0.1:40698). Jan 20 02:26:28.861755 sshd[4657]: Accepted publickey for core from 10.0.0.1 port 40698 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:26:28.872139 sshd-session[4657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:28.936473 systemd-logind[1533]: New session 18 of user core. Jan 20 02:26:28.960330 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 20 02:26:29.879793 sshd[4660]: Connection closed by 10.0.0.1 port 40698 Jan 20 02:26:29.881801 containerd[1567]: time="2026-01-20T02:26:29.878138578Z" level=warning msg="container event discarded" container=8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00 type=CONTAINER_CREATED_EVENT Jan 20 02:26:29.883935 sshd-session[4657]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:29.967246 systemd[1]: sshd@17-10.0.0.89:22-10.0.0.1:40698.service: Deactivated successfully. Jan 20 02:26:30.012553 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 02:26:30.113399 systemd-logind[1533]: Session 18 logged out. Waiting for processes to exit. Jan 20 02:26:30.157368 systemd-logind[1533]: Removed session 18. Jan 20 02:26:30.658938 containerd[1567]: time="2026-01-20T02:26:30.658799506Z" level=warning msg="container event discarded" container=8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00 type=CONTAINER_STARTED_EVENT Jan 20 02:26:32.111118 containerd[1567]: time="2026-01-20T02:26:32.110164299Z" level=warning msg="container event discarded" container=8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00 type=CONTAINER_STOPPED_EVENT Jan 20 02:26:33.319392 containerd[1567]: time="2026-01-20T02:26:33.319301890Z" level=warning msg="container event discarded" container=bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23 type=CONTAINER_CREATED_EVENT Jan 20 02:26:33.833342 containerd[1567]: time="2026-01-20T02:26:33.826915738Z" level=warning msg="container event discarded" container=bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23 type=CONTAINER_STARTED_EVENT Jan 20 02:26:34.586469 containerd[1567]: time="2026-01-20T02:26:34.586389107Z" level=warning msg="container event discarded" container=bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23 type=CONTAINER_STOPPED_EVENT Jan 20 02:26:34.995804 systemd[1]: Started sshd@18-10.0.0.89:22-10.0.0.1:54674.service - OpenSSH per-connection server daemon (10.0.0.1:54674). Jan 20 02:26:35.342255 sshd[4674]: Accepted publickey for core from 10.0.0.1 port 54674 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:26:35.370329 sshd-session[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:35.433863 systemd-logind[1533]: New session 19 of user core. Jan 20 02:26:35.503829 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 02:26:36.030937 containerd[1567]: time="2026-01-20T02:26:36.030424434Z" level=warning msg="container event discarded" container=d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51 type=CONTAINER_CREATED_EVENT Jan 20 02:26:36.793094 sshd[4677]: Connection closed by 10.0.0.1 port 54674 Jan 20 02:26:36.797338 sshd-session[4674]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:36.851685 systemd[1]: sshd@18-10.0.0.89:22-10.0.0.1:54674.service: Deactivated successfully. Jan 20 02:26:36.893343 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 02:26:36.923123 systemd-logind[1533]: Session 19 logged out. Waiting for processes to exit. Jan 20 02:26:36.958345 systemd-logind[1533]: Removed session 19. 
Jan 20 02:26:37.115328 kubelet[2900]: E0120 02:26:37.111559 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:37.495763 containerd[1567]: time="2026-01-20T02:26:37.486150651Z" level=warning msg="container event discarded" container=d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51 type=CONTAINER_STARTED_EVENT Jan 20 02:26:38.303382 containerd[1567]: time="2026-01-20T02:26:38.303295144Z" level=warning msg="container event discarded" container=d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51 type=CONTAINER_STOPPED_EVENT Jan 20 02:26:39.138103 containerd[1567]: time="2026-01-20T02:26:39.137835496Z" level=warning msg="container event discarded" container=1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e type=CONTAINER_CREATED_EVENT Jan 20 02:26:39.832322 containerd[1567]: time="2026-01-20T02:26:39.823806481Z" level=warning msg="container event discarded" container=1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e type=CONTAINER_STARTED_EVENT Jan 20 02:26:40.310147 containerd[1567]: time="2026-01-20T02:26:40.309975753Z" level=warning msg="container event discarded" container=1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e type=CONTAINER_STOPPED_EVENT Jan 20 02:26:41.302577 containerd[1567]: time="2026-01-20T02:26:41.301735180Z" level=warning msg="container event discarded" container=f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e type=CONTAINER_CREATED_EVENT Jan 20 02:26:41.941985 systemd[1]: Started sshd@19-10.0.0.89:22-10.0.0.1:54716.service - OpenSSH per-connection server daemon (10.0.0.1:54716). Jan 20 02:26:42.135179 containerd[1567]: time="2026-01-20T02:26:42.130977808Z" level=warning msg="container event discarded" container=f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e type=CONTAINER_STARTED_EVENT Jan 20 02:26:42.434998 sshd[4691]: Accepted publickey for core from 10.0.0.1 port 54716 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:26:42.465986 sshd-session[4691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:42.550852 systemd-logind[1533]: New session 20 of user core. Jan 20 02:26:42.571854 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 02:26:42.590287 containerd[1567]: time="2026-01-20T02:26:42.590200610Z" level=warning msg="container event discarded" container=72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e type=CONTAINER_CREATED_EVENT Jan 20 02:26:43.471973 sshd[4694]: Connection closed by 10.0.0.1 port 54716 Jan 20 02:26:43.472905 sshd-session[4691]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:43.492355 systemd[1]: sshd@19-10.0.0.89:22-10.0.0.1:54716.service: Deactivated successfully. Jan 20 02:26:43.516732 containerd[1567]: time="2026-01-20T02:26:43.516601208Z" level=warning msg="container event discarded" container=72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e type=CONTAINER_STARTED_EVENT Jan 20 02:26:43.526434 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 02:26:43.558293 systemd-logind[1533]: Session 20 logged out. Waiting for processes to exit. Jan 20 02:26:43.588449 systemd-logind[1533]: Removed session 20. 
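The "container event discarded" warnings above replay the 02:21 container lifecycle roughly five minutes later: containerd buffers CRI container events, and when no subscriber drains the backlog in time, entries are dropped with this warning rather than blocking the runtime. A generic sketch of that drop-on-full pattern (illustrative only, not containerd's actual implementation):

    package main

    import "fmt"

    type event struct{ container, typ string }

    func main() {
        // Bounded queue standing in for the CRI event backlog.
        backlog := make(chan event, 2)
        publish := func(e event) {
            select {
            case backlog <- e: // subscriber keeping up
            default: // queue full: drop instead of blocking the runtime
                fmt.Printf("container event discarded container=%s type=%s\n", e.container, e.typ)
            }
        }
        for _, t := range []string{"CREATED", "STARTED", "STOPPED"} {
            publish(event{"8f82e0a00ba4", t}) // id truncated from the log above
        }
    }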
Jan 20 02:26:44.122129 kubelet[2900]: E0120 02:26:44.113772 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:46.117999 kubelet[2900]: E0120 02:26:46.117479 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:48.580494 systemd[1]: Started sshd@20-10.0.0.89:22-10.0.0.1:41234.service - OpenSSH per-connection server daemon (10.0.0.1:41234). Jan 20 02:26:49.016421 sshd[4710]: Accepted publickey for core from 10.0.0.1 port 41234 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:26:49.035247 sshd-session[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:49.162956 systemd-logind[1533]: New session 21 of user core. Jan 20 02:26:49.197897 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 02:26:50.046002 sshd[4713]: Connection closed by 10.0.0.1 port 41234 Jan 20 02:26:50.049378 sshd-session[4710]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:50.088854 systemd[1]: sshd@20-10.0.0.89:22-10.0.0.1:41234.service: Deactivated successfully. Jan 20 02:26:50.101128 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 02:26:50.115565 systemd-logind[1533]: Session 21 logged out. Waiting for processes to exit. Jan 20 02:26:50.135547 systemd-logind[1533]: Removed session 21. Jan 20 02:26:55.130336 kubelet[2900]: E0120 02:26:55.124557 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:55.163925 systemd[1]: Started sshd@21-10.0.0.89:22-10.0.0.1:52388.service - OpenSSH per-connection server daemon (10.0.0.1:52388). Jan 20 02:26:55.619794 sshd[4728]: Accepted publickey for core from 10.0.0.1 port 52388 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:26:55.623385 sshd-session[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:26:55.691150 systemd-logind[1533]: New session 22 of user core. Jan 20 02:26:55.710343 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 02:26:56.580137 sshd[4731]: Connection closed by 10.0.0.1 port 52388 Jan 20 02:26:56.576882 sshd-session[4728]: pam_unix(sshd:session): session closed for user core Jan 20 02:26:56.613879 systemd-logind[1533]: Session 22 logged out. Waiting for processes to exit. Jan 20 02:26:56.615218 systemd[1]: sshd@21-10.0.0.89:22-10.0.0.1:52388.service: Deactivated successfully. Jan 20 02:26:56.638416 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 02:26:56.667430 systemd-logind[1533]: Removed session 22. Jan 20 02:27:01.657852 systemd[1]: Started sshd@22-10.0.0.89:22-10.0.0.1:52408.service - OpenSSH per-connection server daemon (10.0.0.1:52408). Jan 20 02:27:01.955102 sshd[4748]: Accepted publickey for core from 10.0.0.1 port 52408 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:01.966959 sshd-session[4748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:02.034135 systemd-logind[1533]: New session 23 of user core. Jan 20 02:27:02.083517 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 20 02:27:03.268837 sshd[4751]: Connection closed by 10.0.0.1 port 52408 Jan 20 02:27:03.272388 sshd-session[4748]: pam_unix(sshd:session): session closed for user core Jan 20 02:27:03.326205 systemd[1]: sshd@22-10.0.0.89:22-10.0.0.1:52408.service: Deactivated successfully. Jan 20 02:27:03.363483 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 02:27:03.379750 systemd-logind[1533]: Session 23 logged out. Waiting for processes to exit. Jan 20 02:27:03.389588 systemd-logind[1533]: Removed session 23. Jan 20 02:27:07.125890 kubelet[2900]: E0120 02:27:07.119256 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:27:08.355142 systemd[1]: Started sshd@23-10.0.0.89:22-10.0.0.1:49804.service - OpenSSH per-connection server daemon (10.0.0.1:49804). Jan 20 02:27:08.858569 sshd[4767]: Accepted publickey for core from 10.0.0.1 port 49804 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:08.871442 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:08.930699 systemd-logind[1533]: New session 24 of user core. Jan 20 02:27:08.997412 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 02:27:09.844916 sshd[4770]: Connection closed by 10.0.0.1 port 49804 Jan 20 02:27:09.846326 sshd-session[4767]: pam_unix(sshd:session): session closed for user core Jan 20 02:27:09.882565 systemd[1]: sshd@23-10.0.0.89:22-10.0.0.1:49804.service: Deactivated successfully. Jan 20 02:27:09.907314 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 02:27:09.932938 systemd-logind[1533]: Session 24 logged out. Waiting for processes to exit. Jan 20 02:27:09.942244 systemd-logind[1533]: Removed session 24. Jan 20 02:27:14.923414 systemd[1]: Started sshd@24-10.0.0.89:22-10.0.0.1:35472.service - OpenSSH per-connection server daemon (10.0.0.1:35472). Jan 20 02:27:15.299680 sshd[4784]: Accepted publickey for core from 10.0.0.1 port 35472 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:15.310772 sshd-session[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:15.364773 systemd-logind[1533]: New session 25 of user core. Jan 20 02:27:15.378563 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 02:27:16.527928 sshd[4787]: Connection closed by 10.0.0.1 port 35472 Jan 20 02:27:16.524337 sshd-session[4784]: pam_unix(sshd:session): session closed for user core Jan 20 02:27:16.564945 systemd[1]: sshd@24-10.0.0.89:22-10.0.0.1:35472.service: Deactivated successfully. Jan 20 02:27:16.596517 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 02:27:16.642143 systemd-logind[1533]: Session 25 logged out. Waiting for processes to exit. Jan 20 02:27:16.669193 systemd-logind[1533]: Removed session 25. Jan 20 02:27:20.118747 kubelet[2900]: E0120 02:27:20.113262 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:27:21.656827 systemd[1]: Started sshd@25-10.0.0.89:22-10.0.0.1:35552.service - OpenSSH per-connection server daemon (10.0.0.1:35552). 
Jan 20 02:27:22.140118 sshd[4805]: Accepted publickey for core from 10.0.0.1 port 35552 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:22.145524 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:22.234258 systemd-logind[1533]: New session 26 of user core. Jan 20 02:27:22.284586 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 02:27:22.930764 sshd[4808]: Connection closed by 10.0.0.1 port 35552 Jan 20 02:27:22.931805 sshd-session[4805]: pam_unix(sshd:session): session closed for user core Jan 20 02:27:22.957265 systemd[1]: sshd@25-10.0.0.89:22-10.0.0.1:35552.service: Deactivated successfully. Jan 20 02:27:22.978191 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 02:27:22.995667 systemd-logind[1533]: Session 26 logged out. Waiting for processes to exit. Jan 20 02:27:23.008688 systemd-logind[1533]: Removed session 26. Jan 20 02:27:23.116115 kubelet[2900]: E0120 02:27:23.111148 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:27:28.014269 systemd[1]: Started sshd@26-10.0.0.89:22-10.0.0.1:58930.service - OpenSSH per-connection server daemon (10.0.0.1:58930). Jan 20 02:27:28.573483 sshd[4823]: Accepted publickey for core from 10.0.0.1 port 58930 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:28.589551 sshd-session[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:28.688174 systemd-logind[1533]: New session 27 of user core. Jan 20 02:27:28.756555 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 02:27:29.670978 sshd[4826]: Connection closed by 10.0.0.1 port 58930 Jan 20 02:27:29.666871 sshd-session[4823]: pam_unix(sshd:session): session closed for user core Jan 20 02:27:29.701282 systemd[1]: sshd@26-10.0.0.89:22-10.0.0.1:58930.service: Deactivated successfully. Jan 20 02:27:29.723169 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 02:27:29.749428 systemd-logind[1533]: Session 27 logged out. Waiting for processes to exit. Jan 20 02:27:29.778556 systemd-logind[1533]: Removed session 27. Jan 20 02:27:34.831484 systemd[1]: Started sshd@27-10.0.0.89:22-10.0.0.1:36180.service - OpenSSH per-connection server daemon (10.0.0.1:36180). Jan 20 02:27:35.166590 sshd[4841]: Accepted publickey for core from 10.0.0.1 port 36180 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:35.172329 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:35.226379 systemd-logind[1533]: New session 28 of user core. Jan 20 02:27:35.252474 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 02:27:36.282568 sshd[4844]: Connection closed by 10.0.0.1 port 36180 Jan 20 02:27:36.286319 sshd-session[4841]: pam_unix(sshd:session): session closed for user core Jan 20 02:27:36.342876 systemd[1]: sshd@27-10.0.0.89:22-10.0.0.1:36180.service: Deactivated successfully. Jan 20 02:27:36.398303 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 02:27:36.410501 systemd-logind[1533]: Session 28 logged out. Waiting for processes to exit. Jan 20 02:27:36.431798 systemd-logind[1533]: Removed session 28. Jan 20 02:27:41.384321 systemd[1]: Started sshd@28-10.0.0.89:22-10.0.0.1:36208.service - OpenSSH per-connection server daemon (10.0.0.1:36208). 
Jan 20 02:27:41.973519 sshd[4859]: Accepted publickey for core from 10.0.0.1 port 36208 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:41.982419 sshd-session[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:42.109160 systemd-logind[1533]: New session 29 of user core. Jan 20 02:27:42.160283 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 20 02:27:42.282599 containerd[1567]: time="2026-01-20T02:27:42.282265420Z" level=warning msg="container event discarded" container=4b553c836c199fe6d05d64b4ef9ee07644c38a6d1d9dbca728ff95205f6ad490 type=CONTAINER_CREATED_EVENT Jan 20 02:27:42.282599 containerd[1567]: time="2026-01-20T02:27:42.282364804Z" level=warning msg="container event discarded" container=4b553c836c199fe6d05d64b4ef9ee07644c38a6d1d9dbca728ff95205f6ad490 type=CONTAINER_STARTED_EVENT Jan 20 02:27:42.576276 containerd[1567]: time="2026-01-20T02:27:42.576106825Z" level=warning msg="container event discarded" container=8ba02d9ab11da233d6e39cc4683484215bf532a2fcdac56d2a0b321d9c1a74bc type=CONTAINER_CREATED_EVENT Jan 20 02:27:42.576685 containerd[1567]: time="2026-01-20T02:27:42.576590725Z" level=warning msg="container event discarded" container=8ba02d9ab11da233d6e39cc4683484215bf532a2fcdac56d2a0b321d9c1a74bc type=CONTAINER_STARTED_EVENT Jan 20 02:27:42.774928 containerd[1567]: time="2026-01-20T02:27:42.774565501Z" level=warning msg="container event discarded" container=74ef783035dc82c91235f4e4166d5b331bfa62b91bc8faca194e3b26fe87ed3c type=CONTAINER_CREATED_EVENT Jan 20 02:27:42.876566 containerd[1567]: time="2026-01-20T02:27:42.873216380Z" level=warning msg="container event discarded" container=b3a594d2f5cb09bc843070abd7c2da3aec98dbccf77f9efb6feb0f25f8a10dfb type=CONTAINER_CREATED_EVENT Jan 20 02:27:43.399239 sshd[4862]: Connection closed by 10.0.0.1 port 36208 Jan 20 02:27:43.400999 sshd-session[4859]: pam_unix(sshd:session): session closed for user core Jan 20 02:27:43.421491 systemd[1]: sshd@28-10.0.0.89:22-10.0.0.1:36208.service: Deactivated successfully. Jan 20 02:27:43.435469 containerd[1567]: time="2026-01-20T02:27:43.435317244Z" level=warning msg="container event discarded" container=74ef783035dc82c91235f4e4166d5b331bfa62b91bc8faca194e3b26fe87ed3c type=CONTAINER_STARTED_EVENT Jan 20 02:27:43.447098 systemd[1]: session-29.scope: Deactivated successfully. Jan 20 02:27:43.496589 systemd-logind[1533]: Session 29 logged out. Waiting for processes to exit. Jan 20 02:27:43.516214 containerd[1567]: time="2026-01-20T02:27:43.507414143Z" level=warning msg="container event discarded" container=b3a594d2f5cb09bc843070abd7c2da3aec98dbccf77f9efb6feb0f25f8a10dfb type=CONTAINER_STARTED_EVENT Jan 20 02:27:43.530217 systemd-logind[1533]: Removed session 29. Jan 20 02:27:46.133757 kubelet[2900]: E0120 02:27:46.133526 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:27:48.504505 systemd[1]: Started sshd@29-10.0.0.89:22-10.0.0.1:46568.service - OpenSSH per-connection server daemon (10.0.0.1:46568). Jan 20 02:27:49.039370 sshd[4879]: Accepted publickey for core from 10.0.0.1 port 46568 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:49.066813 sshd-session[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:49.142799 systemd-logind[1533]: New session 30 of user core. 
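The "container event discarded" warnings show containerd dropping container lifecycle events (CONTAINER_CREATED_EVENT/CONTAINER_STARTED_EVENT) that no longer have a consumer; the precise discard reason is not visible in this log. For watching the same event stream directly, here is a hedged Go sketch against the containerd 1.x client API (import paths differ under containerd 2.x), assuming the default socket path and the k8s.io namespace that CRI-managed containers use.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default socket; adjust if containerd is configured differently.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Subscribe streams every event envelope; filter arguments could
	// narrow this to container create/start topics.
	envelopes, errs := client.Subscribe(ctx)
	for {
		select {
		case e := <-envelopes:
			fmt.Printf("%s %s\n", e.Timestamp, e.Topic)
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```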
Jan 20 02:27:49.198798 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 20 02:27:50.018900 sshd[4882]: Connection closed by 10.0.0.1 port 46568 Jan 20 02:27:50.021535 sshd-session[4879]: pam_unix(sshd:session): session closed for user core Jan 20 02:27:50.056803 systemd[1]: sshd@29-10.0.0.89:22-10.0.0.1:46568.service: Deactivated successfully. Jan 20 02:27:50.081941 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 02:27:50.092185 systemd-logind[1533]: Session 30 logged out. Waiting for processes to exit. Jan 20 02:27:50.127778 systemd[1]: Started sshd@30-10.0.0.89:22-10.0.0.1:46578.service - OpenSSH per-connection server daemon (10.0.0.1:46578). Jan 20 02:27:50.142889 systemd-logind[1533]: Removed session 30. Jan 20 02:27:50.414145 sshd[4896]: Accepted publickey for core from 10.0.0.1 port 46578 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:50.440823 sshd-session[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:50.550907 systemd-logind[1533]: New session 31 of user core. Jan 20 02:27:50.579257 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 20 02:27:52.197332 sshd[4899]: Connection closed by 10.0.0.1 port 46578 Jan 20 02:27:52.200778 sshd-session[4896]: pam_unix(sshd:session): session closed for user core Jan 20 02:27:52.261463 systemd[1]: sshd@30-10.0.0.89:22-10.0.0.1:46578.service: Deactivated successfully. Jan 20 02:27:52.279733 systemd[1]: session-31.scope: Deactivated successfully. Jan 20 02:27:52.308232 systemd-logind[1533]: Session 31 logged out. Waiting for processes to exit. Jan 20 02:27:52.331298 systemd[1]: Started sshd@31-10.0.0.89:22-10.0.0.1:46590.service - OpenSSH per-connection server daemon (10.0.0.1:46590). Jan 20 02:27:52.360461 systemd-logind[1533]: Removed session 31. Jan 20 02:27:52.744679 sshd[4912]: Accepted publickey for core from 10.0.0.1 port 46590 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:52.758591 sshd-session[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:52.804876 systemd-logind[1533]: New session 32 of user core. Jan 20 02:27:52.840830 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 20 02:27:53.790877 sshd[4915]: Connection closed by 10.0.0.1 port 46590 Jan 20 02:27:53.794504 sshd-session[4912]: pam_unix(sshd:session): session closed for user core Jan 20 02:27:53.837444 systemd[1]: sshd@31-10.0.0.89:22-10.0.0.1:46590.service: Deactivated successfully. Jan 20 02:27:53.874511 systemd[1]: session-32.scope: Deactivated successfully. Jan 20 02:27:53.924732 systemd-logind[1533]: Session 32 logged out. Waiting for processes to exit. Jan 20 02:27:53.978137 systemd-logind[1533]: Removed session 32. Jan 20 02:27:58.114305 kubelet[2900]: E0120 02:27:58.112593 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:27:58.855267 systemd[1]: Started sshd@32-10.0.0.89:22-10.0.0.1:48056.service - OpenSSH per-connection server daemon (10.0.0.1:48056). Jan 20 02:27:59.299510 sshd[4930]: Accepted publickey for core from 10.0.0.1 port 48056 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:27:59.319523 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:27:59.377845 systemd-logind[1533]: New session 33 of user core. 
Jan 20 02:27:59.420189 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 20 02:28:00.053571 sshd[4934]: Connection closed by 10.0.0.1 port 48056 Jan 20 02:28:00.058436 sshd-session[4930]: pam_unix(sshd:session): session closed for user core Jan 20 02:28:00.079243 systemd[1]: sshd@32-10.0.0.89:22-10.0.0.1:48056.service: Deactivated successfully. Jan 20 02:28:00.123863 kubelet[2900]: E0120 02:28:00.119800 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:28:00.136278 systemd[1]: session-33.scope: Deactivated successfully. Jan 20 02:28:00.183913 systemd-logind[1533]: Session 33 logged out. Waiting for processes to exit. Jan 20 02:28:00.202602 systemd-logind[1533]: Removed session 33. Jan 20 02:28:05.119395 systemd[1]: Started sshd@33-10.0.0.89:22-10.0.0.1:42450.service - OpenSSH per-connection server daemon (10.0.0.1:42450). Jan 20 02:28:05.553409 sshd[4947]: Accepted publickey for core from 10.0.0.1 port 42450 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:28:05.563979 sshd-session[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:28:05.619964 systemd-logind[1533]: New session 34 of user core. Jan 20 02:28:05.663992 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 20 02:28:06.134103 kubelet[2900]: E0120 02:28:06.133440 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:28:06.392508 sshd[4950]: Connection closed by 10.0.0.1 port 42450 Jan 20 02:28:06.393532 sshd-session[4947]: pam_unix(sshd:session): session closed for user core Jan 20 02:28:06.444212 systemd[1]: sshd@33-10.0.0.89:22-10.0.0.1:42450.service: Deactivated successfully. Jan 20 02:28:06.483435 systemd[1]: session-34.scope: Deactivated successfully. Jan 20 02:28:06.494993 systemd-logind[1533]: Session 34 logged out. Waiting for processes to exit. Jan 20 02:28:06.515462 systemd-logind[1533]: Removed session 34. Jan 20 02:28:11.450672 systemd[1]: Started sshd@34-10.0.0.89:22-10.0.0.1:42454.service - OpenSSH per-connection server daemon (10.0.0.1:42454). Jan 20 02:28:11.936463 sshd[4964]: Accepted publickey for core from 10.0.0.1 port 42454 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:28:11.947713 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:28:11.998809 systemd-logind[1533]: New session 35 of user core. Jan 20 02:28:12.019779 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 20 02:28:12.756331 sshd[4967]: Connection closed by 10.0.0.1 port 42454 Jan 20 02:28:12.757383 sshd-session[4964]: pam_unix(sshd:session): session closed for user core Jan 20 02:28:12.785056 systemd[1]: sshd@34-10.0.0.89:22-10.0.0.1:42454.service: Deactivated successfully. Jan 20 02:28:12.808532 systemd[1]: session-35.scope: Deactivated successfully. Jan 20 02:28:12.837623 systemd-logind[1533]: Session 35 logged out. Waiting for processes to exit. Jan 20 02:28:12.847549 systemd-logind[1533]: Removed session 35. 
Jan 20 02:28:15.130829 kubelet[2900]: E0120 02:28:15.121526 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:28:15.130829 kubelet[2900]: E0120 02:28:15.124186 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:28:17.881741 systemd[1]: Started sshd@35-10.0.0.89:22-10.0.0.1:35540.service - OpenSSH per-connection server daemon (10.0.0.1:35540). Jan 20 02:28:18.690497 sshd[4983]: Accepted publickey for core from 10.0.0.1 port 35540 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:28:18.690053 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:28:18.747086 systemd-logind[1533]: New session 36 of user core. Jan 20 02:28:18.790398 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 20 02:28:19.860116 sshd[4986]: Connection closed by 10.0.0.1 port 35540 Jan 20 02:28:19.864291 sshd-session[4983]: pam_unix(sshd:session): session closed for user core Jan 20 02:28:19.896403 systemd-logind[1533]: Session 36 logged out. Waiting for processes to exit. Jan 20 02:28:19.898831 systemd[1]: sshd@35-10.0.0.89:22-10.0.0.1:35540.service: Deactivated successfully. Jan 20 02:28:19.921662 systemd[1]: session-36.scope: Deactivated successfully. Jan 20 02:28:19.944552 systemd-logind[1533]: Removed session 36. Jan 20 02:28:24.958232 systemd[1]: Started sshd@36-10.0.0.89:22-10.0.0.1:41404.service - OpenSSH per-connection server daemon (10.0.0.1:41404). Jan 20 02:28:25.316934 sshd[5000]: Accepted publickey for core from 10.0.0.1 port 41404 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:28:25.327612 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:28:25.349428 systemd-logind[1533]: New session 37 of user core. Jan 20 02:28:25.367997 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 20 02:28:26.312315 sshd[5003]: Connection closed by 10.0.0.1 port 41404 Jan 20 02:28:26.313387 sshd-session[5000]: pam_unix(sshd:session): session closed for user core Jan 20 02:28:26.331983 systemd[1]: sshd@36-10.0.0.89:22-10.0.0.1:41404.service: Deactivated successfully. Jan 20 02:28:26.347367 systemd[1]: session-37.scope: Deactivated successfully. Jan 20 02:28:26.362549 systemd-logind[1533]: Session 37 logged out. Waiting for processes to exit. Jan 20 02:28:26.414773 systemd-logind[1533]: Removed session 37. Jan 20 02:28:31.372917 systemd[1]: Started sshd@37-10.0.0.89:22-10.0.0.1:41420.service - OpenSSH per-connection server daemon (10.0.0.1:41420). Jan 20 02:28:31.729212 sshd[5018]: Accepted publickey for core from 10.0.0.1 port 41420 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:28:31.735962 sshd-session[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:28:31.785931 systemd-logind[1533]: New session 38 of user core. Jan 20 02:28:31.806521 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jan 20 02:28:32.115712 kubelet[2900]: E0120 02:28:32.114834 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:28:33.051479 sshd[5021]: Connection closed by 10.0.0.1 port 41420 Jan 20 02:28:33.067421 sshd-session[5018]: pam_unix(sshd:session): session closed for user core Jan 20 02:28:33.112826 systemd[1]: sshd@37-10.0.0.89:22-10.0.0.1:41420.service: Deactivated successfully. Jan 20 02:28:33.146877 systemd[1]: session-38.scope: Deactivated successfully. Jan 20 02:28:33.166620 systemd-logind[1533]: Session 38 logged out. Waiting for processes to exit. Jan 20 02:28:33.192911 systemd-logind[1533]: Removed session 38. Jan 20 02:28:34.698743 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Jan 20 02:28:34.997675 systemd-tmpfiles[5038]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 02:28:34.997731 systemd-tmpfiles[5038]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 02:28:34.998446 systemd-tmpfiles[5038]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 02:28:35.012445 systemd-tmpfiles[5038]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 02:28:35.040719 systemd-tmpfiles[5038]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 02:28:35.048695 systemd-tmpfiles[5038]: ACLs are not supported, ignoring. Jan 20 02:28:35.048827 systemd-tmpfiles[5038]: ACLs are not supported, ignoring. Jan 20 02:28:35.127818 systemd-tmpfiles[5038]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 02:28:35.127840 systemd-tmpfiles[5038]: Skipping /boot Jan 20 02:28:35.185854 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Jan 20 02:28:35.191286 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Jan 20 02:28:35.220977 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Jan 20 02:28:38.140517 systemd[1]: Started sshd@38-10.0.0.89:22-10.0.0.1:58784.service - OpenSSH per-connection server daemon (10.0.0.1:58784). Jan 20 02:28:38.699187 sshd[5042]: Accepted publickey for core from 10.0.0.1 port 58784 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:28:38.708698 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:28:38.767854 systemd-logind[1533]: New session 39 of user core. Jan 20 02:28:38.811086 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 20 02:28:40.434095 sshd[5045]: Connection closed by 10.0.0.1 port 58784 Jan 20 02:28:40.442925 sshd-session[5042]: pam_unix(sshd:session): session closed for user core Jan 20 02:28:40.493431 systemd[1]: sshd@38-10.0.0.89:22-10.0.0.1:58784.service: Deactivated successfully. Jan 20 02:28:40.496745 systemd[1]: session-39.scope: Deactivated successfully. Jan 20 02:28:40.526668 systemd-logind[1533]: Session 39 logged out. Waiting for processes to exit. Jan 20 02:28:40.535626 systemd-logind[1533]: Removed session 39. Jan 20 02:28:45.535618 systemd[1]: Started sshd@39-10.0.0.89:22-10.0.0.1:37958.service - OpenSSH per-connection server daemon (10.0.0.1:37958). 
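The systemd-tmpfiles warnings during systemd-tmpfiles-clean.service above are benign: more than one snippet declares the same path, and every claim after the first is ignored. The following Go sketch reproduces that duplicate check across the usual tmpfiles.d directories; systemd's real precedence rules (/etc over /run over /usr/lib, keyed by snippet file name) are deliberately omitted to keep it short.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Reports duplicate path declarations across tmpfiles.d snippets,
// similar in spirit to systemd-tmpfiles' "Duplicate line for path
// ..., ignoring" warning. Snippet-name precedence is not modeled.
func main() {
	seen := map[string]string{} // path -> "file:line" of first claim
	dirs := []string{"/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"}
	for _, dir := range dirs {
		matches, _ := filepath.Glob(filepath.Join(dir, "*.conf"))
		for _, conf := range matches {
			f, err := os.Open(conf)
			if err != nil {
				continue
			}
			sc := bufio.NewScanner(f)
			for n := 1; sc.Scan(); n++ {
				line := strings.TrimSpace(sc.Text())
				if line == "" || strings.HasPrefix(line, "#") {
					continue
				}
				fields := strings.Fields(line)
				if len(fields) < 2 {
					continue
				}
				path := fields[1]
				loc := fmt.Sprintf("%s:%d", conf, n)
				if first, dup := seen[path]; dup {
					fmt.Printf("%s: duplicate line for path %q (first at %s), ignoring\n",
						loc, path, first)
				} else {
					seen[path] = loc
				}
			}
			f.Close()
		}
	}
}
```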
Jan 20 02:28:46.008400 sshd[5060]: Accepted publickey for core from 10.0.0.1 port 37958 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:28:46.028707 sshd-session[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:28:46.104635 systemd-logind[1533]: New session 40 of user core. Jan 20 02:28:46.150852 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 20 02:28:46.995394 sshd[5064]: Connection closed by 10.0.0.1 port 37958 Jan 20 02:28:46.997364 sshd-session[5060]: pam_unix(sshd:session): session closed for user core Jan 20 02:28:47.010854 systemd-logind[1533]: Session 40 logged out. Waiting for processes to exit. Jan 20 02:28:47.016720 systemd[1]: sshd@39-10.0.0.89:22-10.0.0.1:37958.service: Deactivated successfully. Jan 20 02:28:47.036836 systemd[1]: session-40.scope: Deactivated successfully. Jan 20 02:28:47.070799 systemd-logind[1533]: Removed session 40. Jan 20 02:28:51.116664 kubelet[2900]: E0120 02:28:51.114302 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:28:52.083978 systemd[1]: Started sshd@40-10.0.0.89:22-10.0.0.1:37972.service - OpenSSH per-connection server daemon (10.0.0.1:37972). Jan 20 02:28:52.486784 sshd[5078]: Accepted publickey for core from 10.0.0.1 port 37972 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:28:52.490967 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:28:52.523171 systemd-logind[1533]: New session 41 of user core. Jan 20 02:28:52.542111 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 20 02:28:53.195642 sshd[5082]: Connection closed by 10.0.0.1 port 37972 Jan 20 02:28:53.214299 sshd-session[5078]: pam_unix(sshd:session): session closed for user core Jan 20 02:28:53.281596 systemd[1]: sshd@40-10.0.0.89:22-10.0.0.1:37972.service: Deactivated successfully. Jan 20 02:28:53.302295 systemd[1]: session-41.scope: Deactivated successfully. Jan 20 02:28:53.312785 systemd-logind[1533]: Session 41 logged out. Waiting for processes to exit. Jan 20 02:28:53.330622 systemd-logind[1533]: Removed session 41. Jan 20 02:28:58.322732 systemd[1]: Started sshd@41-10.0.0.89:22-10.0.0.1:33912.service - OpenSSH per-connection server daemon (10.0.0.1:33912). Jan 20 02:28:58.693866 sshd[5098]: Accepted publickey for core from 10.0.0.1 port 33912 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:28:58.704757 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:28:58.756732 systemd-logind[1533]: New session 42 of user core. Jan 20 02:28:58.793527 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 20 02:28:59.566064 sshd[5101]: Connection closed by 10.0.0.1 port 33912 Jan 20 02:28:59.568370 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Jan 20 02:28:59.622200 systemd[1]: sshd@41-10.0.0.89:22-10.0.0.1:33912.service: Deactivated successfully. Jan 20 02:28:59.693949 systemd[1]: session-42.scope: Deactivated successfully. Jan 20 02:28:59.712167 systemd-logind[1533]: Session 42 logged out. Waiting for processes to exit. Jan 20 02:28:59.734347 systemd-logind[1533]: Removed session 42. Jan 20 02:29:04.633358 systemd[1]: Started sshd@42-10.0.0.89:22-10.0.0.1:45162.service - OpenSSH per-connection server daemon (10.0.0.1:45162). 
Jan 20 02:29:04.820687 sshd[5117]: Accepted publickey for core from 10.0.0.1 port 45162 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:29:04.831381 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:29:04.885907 systemd-logind[1533]: New session 43 of user core. Jan 20 02:29:04.921529 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 20 02:29:05.890144 sshd[5121]: Connection closed by 10.0.0.1 port 45162 Jan 20 02:29:05.897993 sshd-session[5117]: pam_unix(sshd:session): session closed for user core Jan 20 02:29:05.985586 systemd[1]: sshd@42-10.0.0.89:22-10.0.0.1:45162.service: Deactivated successfully. Jan 20 02:29:05.996828 systemd[1]: session-43.scope: Deactivated successfully. Jan 20 02:29:06.007379 systemd-logind[1533]: Session 43 logged out. Waiting for processes to exit. Jan 20 02:29:06.030069 systemd-logind[1533]: Removed session 43. Jan 20 02:29:10.978722 systemd[1]: Started sshd@43-10.0.0.89:22-10.0.0.1:45164.service - OpenSSH per-connection server daemon (10.0.0.1:45164). Jan 20 02:29:11.318694 sshd[5136]: Accepted publickey for core from 10.0.0.1 port 45164 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:29:11.334853 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:29:11.388907 systemd-logind[1533]: New session 44 of user core. Jan 20 02:29:11.405166 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 20 02:29:12.309927 sshd[5139]: Connection closed by 10.0.0.1 port 45164 Jan 20 02:29:12.325872 sshd-session[5136]: pam_unix(sshd:session): session closed for user core Jan 20 02:29:12.371466 systemd[1]: sshd@43-10.0.0.89:22-10.0.0.1:45164.service: Deactivated successfully. Jan 20 02:29:12.392999 systemd[1]: session-44.scope: Deactivated successfully. Jan 20 02:29:12.434433 systemd-logind[1533]: Session 44 logged out. Waiting for processes to exit. Jan 20 02:29:12.441850 systemd-logind[1533]: Removed session 44. Jan 20 02:29:15.114391 kubelet[2900]: E0120 02:29:15.112231 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:29:17.419282 systemd[1]: Started sshd@44-10.0.0.89:22-10.0.0.1:46766.service - OpenSSH per-connection server daemon (10.0.0.1:46766). Jan 20 02:29:18.630956 sshd[5154]: Accepted publickey for core from 10.0.0.1 port 46766 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:29:18.660249 sshd-session[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:29:18.741932 systemd-logind[1533]: New session 45 of user core. Jan 20 02:29:18.775179 systemd[1]: Started session-45.scope - Session 45 of User core. Jan 20 02:29:19.603146 sshd[5157]: Connection closed by 10.0.0.1 port 46766 Jan 20 02:29:19.615756 sshd-session[5154]: pam_unix(sshd:session): session closed for user core Jan 20 02:29:19.668592 systemd[1]: sshd@44-10.0.0.89:22-10.0.0.1:46766.service: Deactivated successfully. Jan 20 02:29:19.690669 systemd[1]: session-45.scope: Deactivated successfully. Jan 20 02:29:19.711781 systemd-logind[1533]: Session 45 logged out. Waiting for processes to exit. Jan 20 02:29:19.730488 systemd-logind[1533]: Removed session 45. 
Jan 20 02:29:23.127797 kubelet[2900]: E0120 02:29:23.126222 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:29:24.116721 kubelet[2900]: E0120 02:29:24.114596 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:29:24.776992 systemd[1]: Started sshd@45-10.0.0.89:22-10.0.0.1:44638.service - OpenSSH per-connection server daemon (10.0.0.1:44638). Jan 20 02:29:25.423000 sshd[5174]: Accepted publickey for core from 10.0.0.1 port 44638 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:29:25.435207 sshd-session[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:29:25.503791 systemd-logind[1533]: New session 46 of user core. Jan 20 02:29:25.529596 systemd[1]: Started session-46.scope - Session 46 of User core. Jan 20 02:29:26.472431 sshd[5177]: Connection closed by 10.0.0.1 port 44638 Jan 20 02:29:26.493202 sshd-session[5174]: pam_unix(sshd:session): session closed for user core Jan 20 02:29:26.542807 systemd[1]: sshd@45-10.0.0.89:22-10.0.0.1:44638.service: Deactivated successfully. Jan 20 02:29:26.565185 systemd[1]: session-46.scope: Deactivated successfully. Jan 20 02:29:26.580317 systemd-logind[1533]: Session 46 logged out. Waiting for processes to exit. Jan 20 02:29:26.607643 systemd-logind[1533]: Removed session 46. Jan 20 02:29:27.113946 kubelet[2900]: E0120 02:29:27.112622 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:29:31.603217 systemd[1]: Started sshd@46-10.0.0.89:22-10.0.0.1:44650.service - OpenSSH per-connection server daemon (10.0.0.1:44650). Jan 20 02:29:32.175351 sshd[5192]: Accepted publickey for core from 10.0.0.1 port 44650 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:29:32.200535 sshd-session[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:29:32.318400 systemd-logind[1533]: New session 47 of user core. Jan 20 02:29:32.351557 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 20 02:29:33.639105 sshd[5195]: Connection closed by 10.0.0.1 port 44650 Jan 20 02:29:33.650179 sshd-session[5192]: pam_unix(sshd:session): session closed for user core Jan 20 02:29:33.690691 systemd[1]: sshd@46-10.0.0.89:22-10.0.0.1:44650.service: Deactivated successfully. Jan 20 02:29:33.692356 systemd-logind[1533]: Session 47 logged out. Waiting for processes to exit. Jan 20 02:29:33.694683 systemd[1]: session-47.scope: Deactivated successfully. Jan 20 02:29:33.738738 systemd-logind[1533]: Removed session 47. Jan 20 02:29:34.130806 kubelet[2900]: E0120 02:29:34.117614 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:29:35.116128 kubelet[2900]: E0120 02:29:35.110425 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:29:38.784470 systemd[1]: Started sshd@47-10.0.0.89:22-10.0.0.1:48084.service - OpenSSH per-connection server daemon (10.0.0.1:48084). 
Jan 20 02:29:39.314132 sshd[5208]: Accepted publickey for core from 10.0.0.1 port 48084 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:29:39.321617 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:29:39.404680 systemd-logind[1533]: New session 48 of user core. Jan 20 02:29:39.434722 systemd[1]: Started session-48.scope - Session 48 of User core. Jan 20 02:29:40.392693 sshd[5211]: Connection closed by 10.0.0.1 port 48084 Jan 20 02:29:40.409653 sshd-session[5208]: pam_unix(sshd:session): session closed for user core Jan 20 02:29:40.474776 systemd[1]: sshd@47-10.0.0.89:22-10.0.0.1:48084.service: Deactivated successfully. Jan 20 02:29:40.517258 systemd[1]: session-48.scope: Deactivated successfully. Jan 20 02:29:40.528739 systemd-logind[1533]: Session 48 logged out. Waiting for processes to exit. Jan 20 02:29:40.572611 systemd-logind[1533]: Removed session 48. Jan 20 02:29:43.739211 kubelet[2900]: E0120 02:29:43.715423 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:29:44.639174 kubelet[2900]: E0120 02:29:44.626251 2900 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.284s" Jan 20 02:29:45.604541 systemd[1]: Started sshd@48-10.0.0.89:22-10.0.0.1:51014.service - OpenSSH per-connection server daemon (10.0.0.1:51014). Jan 20 02:29:46.309979 sshd[5230]: Accepted publickey for core from 10.0.0.1 port 51014 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:29:46.325233 sshd-session[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:29:46.411814 systemd-logind[1533]: New session 49 of user core. Jan 20 02:29:46.489272 systemd[1]: Started session-49.scope - Session 49 of User core. Jan 20 02:29:48.058168 sshd[5235]: Connection closed by 10.0.0.1 port 51014 Jan 20 02:29:48.057601 sshd-session[5230]: pam_unix(sshd:session): session closed for user core Jan 20 02:29:48.090412 systemd[1]: sshd@48-10.0.0.89:22-10.0.0.1:51014.service: Deactivated successfully. Jan 20 02:29:48.113637 systemd[1]: session-49.scope: Deactivated successfully. Jan 20 02:29:48.147515 systemd-logind[1533]: Session 49 logged out. Waiting for processes to exit. Jan 20 02:29:48.172787 systemd-logind[1533]: Removed session 49. Jan 20 02:29:53.204750 systemd[1]: Started sshd@49-10.0.0.89:22-10.0.0.1:51020.service - OpenSSH per-connection server daemon (10.0.0.1:51020). Jan 20 02:29:53.860470 sshd[5249]: Accepted publickey for core from 10.0.0.1 port 51020 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:29:53.878628 sshd-session[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:29:53.942188 systemd-logind[1533]: New session 50 of user core. Jan 20 02:29:53.979764 systemd[1]: Started session-50.scope - Session 50 of User core. Jan 20 02:29:55.095131 sshd[5252]: Connection closed by 10.0.0.1 port 51020 Jan 20 02:29:55.093398 sshd-session[5249]: pam_unix(sshd:session): session closed for user core Jan 20 02:29:55.142421 systemd[1]: sshd@49-10.0.0.89:22-10.0.0.1:51020.service: Deactivated successfully. Jan 20 02:29:55.169899 systemd[1]: session-50.scope: Deactivated successfully. Jan 20 02:29:55.205741 systemd-logind[1533]: Session 50 logged out. Waiting for processes to exit. 
Jan 20 02:29:55.224190 systemd-logind[1533]: Removed session 50. Jan 20 02:30:00.126123 kubelet[2900]: E0120 02:30:00.119414 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:00.202837 systemd[1]: Started sshd@50-10.0.0.89:22-10.0.0.1:35770.service - OpenSSH per-connection server daemon (10.0.0.1:35770). Jan 20 02:30:00.772236 sshd[5269]: Accepted publickey for core from 10.0.0.1 port 35770 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:30:00.776604 sshd-session[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:00.816426 systemd-logind[1533]: New session 51 of user core. Jan 20 02:30:00.872875 systemd[1]: Started session-51.scope - Session 51 of User core. Jan 20 02:30:01.833612 sshd[5272]: Connection closed by 10.0.0.1 port 35770 Jan 20 02:30:01.843118 sshd-session[5269]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:01.901542 systemd[1]: sshd@50-10.0.0.89:22-10.0.0.1:35770.service: Deactivated successfully. Jan 20 02:30:01.910481 systemd[1]: session-51.scope: Deactivated successfully. Jan 20 02:30:01.923465 systemd-logind[1533]: Session 51 logged out. Waiting for processes to exit. Jan 20 02:30:01.941561 systemd[1]: Started sshd@51-10.0.0.89:22-10.0.0.1:35780.service - OpenSSH per-connection server daemon (10.0.0.1:35780). Jan 20 02:30:01.960787 systemd-logind[1533]: Removed session 51. Jan 20 02:30:02.285987 sshd[5286]: Accepted publickey for core from 10.0.0.1 port 35780 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:30:02.285224 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:02.341827 systemd-logind[1533]: New session 52 of user core. Jan 20 02:30:02.394305 systemd[1]: Started session-52.scope - Session 52 of User core. Jan 20 02:30:04.291475 sshd[5289]: Connection closed by 10.0.0.1 port 35780 Jan 20 02:30:04.289684 sshd-session[5286]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:04.320313 systemd[1]: sshd@51-10.0.0.89:22-10.0.0.1:35780.service: Deactivated successfully. Jan 20 02:30:04.337630 systemd[1]: session-52.scope: Deactivated successfully. Jan 20 02:30:04.360571 systemd-logind[1533]: Session 52 logged out. Waiting for processes to exit. Jan 20 02:30:04.386167 systemd[1]: Started sshd@52-10.0.0.89:22-10.0.0.1:35788.service - OpenSSH per-connection server daemon (10.0.0.1:35788). Jan 20 02:30:04.405432 systemd-logind[1533]: Removed session 52. Jan 20 02:30:05.123218 sshd[5301]: Accepted publickey for core from 10.0.0.1 port 35788 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:30:05.135907 sshd-session[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:05.181810 systemd-logind[1533]: New session 53 of user core. Jan 20 02:30:05.227458 systemd[1]: Started session-53.scope - Session 53 of User core. Jan 20 02:30:10.069755 sshd[5304]: Connection closed by 10.0.0.1 port 35788 Jan 20 02:30:10.072093 sshd-session[5301]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:10.115563 systemd[1]: sshd@52-10.0.0.89:22-10.0.0.1:35788.service: Deactivated successfully. Jan 20 02:30:10.128792 systemd[1]: session-53.scope: Deactivated successfully. Jan 20 02:30:10.133316 systemd[1]: session-53.scope: Consumed 1.143s CPU time, 45.4M memory peak. 
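The "Consumed 1.143s CPU time, 45.4M memory peak" summary for session-53.scope above is systemd's cgroup accounting, reported when the unit is released. The sketch below reads the same counters for a live scope straight from the cgroup v2 filesystem; the scope path is a placeholder, and memory.peak assumes a kernel new enough (roughly 5.19+) to expose that file.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

func main() {
	// Placeholder unit path under the cgroup v2 hierarchy.
	base := "/sys/fs/cgroup/user.slice/user-500.slice/session-53.scope"

	// cpu.stat's usage_usec backs the "Consumed ... CPU time" figure.
	stat, err := os.ReadFile(base + "/cpu.stat")
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Split(string(stat), "\n") {
		if v, ok := strings.CutPrefix(line, "usage_usec "); ok {
			usec, _ := strconv.ParseInt(v, 10, 64)
			fmt.Printf("CPU time: %.3fs\n", float64(usec)/1e6)
		}
	}

	// memory.peak backs the "memory peak" figure where available.
	if peak, err := os.ReadFile(base + "/memory.peak"); err == nil {
		b, _ := strconv.ParseInt(strings.TrimSpace(string(peak)), 10, 64)
		fmt.Printf("memory peak: %.1fM\n", float64(b)/(1024*1024))
	}
}
```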
Jan 20 02:30:10.177618 systemd-logind[1533]: Session 53 logged out. Waiting for processes to exit. Jan 20 02:30:10.201514 systemd[1]: Started sshd@53-10.0.0.89:22-10.0.0.1:53928.service - OpenSSH per-connection server daemon (10.0.0.1:53928). Jan 20 02:30:10.208186 systemd-logind[1533]: Removed session 53. Jan 20 02:30:10.563411 sshd[5327]: Accepted publickey for core from 10.0.0.1 port 53928 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:30:10.565817 sshd-session[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:10.639884 systemd-logind[1533]: New session 54 of user core. Jan 20 02:30:10.670366 systemd[1]: Started session-54.scope - Session 54 of User core. Jan 20 02:30:12.375367 sshd[5330]: Connection closed by 10.0.0.1 port 53928 Jan 20 02:30:12.377620 sshd-session[5327]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:12.507942 systemd[1]: sshd@53-10.0.0.89:22-10.0.0.1:53928.service: Deactivated successfully. Jan 20 02:30:12.515390 systemd[1]: session-54.scope: Deactivated successfully. Jan 20 02:30:12.518142 systemd-logind[1533]: Session 54 logged out. Waiting for processes to exit. Jan 20 02:30:12.526464 systemd[1]: Started sshd@54-10.0.0.89:22-10.0.0.1:53932.service - OpenSSH per-connection server daemon (10.0.0.1:53932). Jan 20 02:30:12.530421 systemd-logind[1533]: Removed session 54. Jan 20 02:30:12.814282 sshd[5341]: Accepted publickey for core from 10.0.0.1 port 53932 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:30:12.819795 sshd-session[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:12.856712 systemd-logind[1533]: New session 55 of user core. Jan 20 02:30:12.867507 systemd[1]: Started session-55.scope - Session 55 of User core. Jan 20 02:30:13.632133 sshd[5344]: Connection closed by 10.0.0.1 port 53932 Jan 20 02:30:13.627935 sshd-session[5341]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:13.661870 systemd[1]: sshd@54-10.0.0.89:22-10.0.0.1:53932.service: Deactivated successfully. Jan 20 02:30:13.711812 systemd[1]: session-55.scope: Deactivated successfully. Jan 20 02:30:13.743534 systemd-logind[1533]: Session 55 logged out. Waiting for processes to exit. Jan 20 02:30:13.773321 systemd-logind[1533]: Removed session 55. Jan 20 02:30:16.128461 kubelet[2900]: E0120 02:30:16.120542 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:18.726535 systemd[1]: Started sshd@55-10.0.0.89:22-10.0.0.1:58866.service - OpenSSH per-connection server daemon (10.0.0.1:58866). Jan 20 02:30:19.173511 sshd[5360]: Accepted publickey for core from 10.0.0.1 port 58866 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:30:19.181936 sshd-session[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:19.233989 systemd-logind[1533]: New session 56 of user core. Jan 20 02:30:19.271281 systemd[1]: Started session-56.scope - Session 56 of User core. Jan 20 02:30:20.481937 sshd[5363]: Connection closed by 10.0.0.1 port 58866 Jan 20 02:30:20.482958 sshd-session[5360]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:20.504198 systemd[1]: sshd@55-10.0.0.89:22-10.0.0.1:58866.service: Deactivated successfully. Jan 20 02:30:20.533443 systemd[1]: session-56.scope: Deactivated successfully. 
Jan 20 02:30:20.586970 systemd-logind[1533]: Session 56 logged out. Waiting for processes to exit. Jan 20 02:30:20.626666 systemd-logind[1533]: Removed session 56. Jan 20 02:30:25.625795 systemd[1]: Started sshd@56-10.0.0.89:22-10.0.0.1:33466.service - OpenSSH per-connection server daemon (10.0.0.1:33466). Jan 20 02:30:25.968881 sshd[5377]: Accepted publickey for core from 10.0.0.1 port 33466 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:30:25.980910 sshd-session[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:26.023948 systemd-logind[1533]: New session 57 of user core. Jan 20 02:30:26.059736 systemd[1]: Started session-57.scope - Session 57 of User core. Jan 20 02:30:26.643214 sshd[5380]: Connection closed by 10.0.0.1 port 33466 Jan 20 02:30:26.658832 sshd-session[5377]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:26.695656 systemd[1]: sshd@56-10.0.0.89:22-10.0.0.1:33466.service: Deactivated successfully. Jan 20 02:30:26.724959 systemd[1]: session-57.scope: Deactivated successfully. Jan 20 02:30:26.749119 systemd-logind[1533]: Session 57 logged out. Waiting for processes to exit. Jan 20 02:30:26.765344 systemd-logind[1533]: Removed session 57. Jan 20 02:30:31.697560 systemd[1]: Started sshd@57-10.0.0.89:22-10.0.0.1:33472.service - OpenSSH per-connection server daemon (10.0.0.1:33472). Jan 20 02:30:32.007900 sshd[5394]: Accepted publickey for core from 10.0.0.1 port 33472 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:30:32.022796 sshd-session[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:32.060700 systemd-logind[1533]: New session 58 of user core. Jan 20 02:30:32.081487 systemd[1]: Started session-58.scope - Session 58 of User core. Jan 20 02:30:33.454341 sshd[5397]: Connection closed by 10.0.0.1 port 33472 Jan 20 02:30:33.456945 sshd-session[5394]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:33.489807 systemd[1]: sshd@57-10.0.0.89:22-10.0.0.1:33472.service: Deactivated successfully. Jan 20 02:30:33.515363 systemd[1]: session-58.scope: Deactivated successfully. Jan 20 02:30:33.557228 systemd-logind[1533]: Session 58 logged out. Waiting for processes to exit. Jan 20 02:30:33.559788 systemd-logind[1533]: Removed session 58. Jan 20 02:30:38.561150 systemd[1]: Started sshd@58-10.0.0.89:22-10.0.0.1:60032.service - OpenSSH per-connection server daemon (10.0.0.1:60032). Jan 20 02:30:39.091447 sshd[5410]: Accepted publickey for core from 10.0.0.1 port 60032 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:30:39.103931 sshd-session[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:39.209786 systemd-logind[1533]: New session 59 of user core. Jan 20 02:30:39.247508 systemd[1]: Started session-59.scope - Session 59 of User core. Jan 20 02:30:40.179107 sshd[5413]: Connection closed by 10.0.0.1 port 60032 Jan 20 02:30:40.185686 sshd-session[5410]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:40.261281 systemd[1]: sshd@58-10.0.0.89:22-10.0.0.1:60032.service: Deactivated successfully. Jan 20 02:30:40.518924 systemd[1]: session-59.scope: Deactivated successfully. Jan 20 02:30:40.561956 systemd-logind[1533]: Session 59 logged out. Waiting for processes to exit. Jan 20 02:30:40.583069 systemd-logind[1533]: Removed session 59. 
Jan 20 02:30:44.262643 kubelet[2900]: E0120 02:30:44.259780 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:45.282267 systemd[1]: Started sshd@59-10.0.0.89:22-10.0.0.1:48552.service - OpenSSH per-connection server daemon (10.0.0.1:48552). Jan 20 02:30:45.820381 sshd[5426]: Accepted publickey for core from 10.0.0.1 port 48552 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:30:45.843520 sshd-session[5426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:45.937658 systemd-logind[1533]: New session 60 of user core. Jan 20 02:30:46.000171 systemd[1]: Started session-60.scope - Session 60 of User core. Jan 20 02:30:46.124528 kubelet[2900]: E0120 02:30:46.114723 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:47.052617 sshd[5431]: Connection closed by 10.0.0.1 port 48552 Jan 20 02:30:47.054077 sshd-session[5426]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:47.100829 systemd[1]: sshd@59-10.0.0.89:22-10.0.0.1:48552.service: Deactivated successfully. Jan 20 02:30:47.118813 kubelet[2900]: E0120 02:30:47.116143 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:47.157340 systemd[1]: session-60.scope: Deactivated successfully. Jan 20 02:30:47.229159 systemd-logind[1533]: Session 60 logged out. Waiting for processes to exit. Jan 20 02:30:47.244799 systemd-logind[1533]: Removed session 60. Jan 20 02:30:48.139363 kubelet[2900]: E0120 02:30:48.137410 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:49.115255 kubelet[2900]: E0120 02:30:49.111867 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:51.114320 kubelet[2900]: E0120 02:30:51.112975 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:52.177905 systemd[1]: Started sshd@60-10.0.0.89:22-10.0.0.1:48566.service - OpenSSH per-connection server daemon (10.0.0.1:48566). Jan 20 02:30:52.630743 sshd[5445]: Accepted publickey for core from 10.0.0.1 port 48566 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:30:52.658664 sshd-session[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:52.705603 systemd-logind[1533]: New session 61 of user core. Jan 20 02:30:52.725088 systemd[1]: Started session-61.scope - Session 61 of User core. Jan 20 02:30:53.401365 sshd[5448]: Connection closed by 10.0.0.1 port 48566 Jan 20 02:30:53.402633 sshd-session[5445]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:53.433549 systemd[1]: sshd@60-10.0.0.89:22-10.0.0.1:48566.service: Deactivated successfully. Jan 20 02:30:53.446887 systemd[1]: session-61.scope: Deactivated successfully. Jan 20 02:30:53.455328 systemd-logind[1533]: Session 61 logged out. Waiting for processes to exit. 
Jan 20 02:30:53.463201 systemd-logind[1533]: Removed session 61. Jan 20 02:30:58.520342 systemd[1]: Started sshd@61-10.0.0.89:22-10.0.0.1:36170.service - OpenSSH per-connection server daemon (10.0.0.1:36170). Jan 20 02:30:58.973638 sshd[5462]: Accepted publickey for core from 10.0.0.1 port 36170 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:30:59.022458 sshd-session[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:59.104974 systemd-logind[1533]: New session 62 of user core. Jan 20 02:30:59.188216 systemd[1]: Started session-62.scope - Session 62 of User core. Jan 20 02:31:00.285555 sshd[5465]: Connection closed by 10.0.0.1 port 36170 Jan 20 02:31:00.288493 sshd-session[5462]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:00.317363 systemd[1]: sshd@61-10.0.0.89:22-10.0.0.1:36170.service: Deactivated successfully. Jan 20 02:31:00.339547 systemd[1]: session-62.scope: Deactivated successfully. Jan 20 02:31:00.365573 systemd-logind[1533]: Session 62 logged out. Waiting for processes to exit. Jan 20 02:31:00.383769 systemd-logind[1533]: Removed session 62. Jan 20 02:31:05.357172 systemd[1]: Started sshd@62-10.0.0.89:22-10.0.0.1:52508.service - OpenSSH per-connection server daemon (10.0.0.1:52508). Jan 20 02:31:05.679488 sshd[5479]: Accepted publickey for core from 10.0.0.1 port 52508 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:31:05.713304 sshd-session[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:05.791399 systemd-logind[1533]: New session 63 of user core. Jan 20 02:31:05.834492 systemd[1]: Started session-63.scope - Session 63 of User core. Jan 20 02:31:06.853961 sshd[5482]: Connection closed by 10.0.0.1 port 52508 Jan 20 02:31:06.857287 sshd-session[5479]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:06.934419 systemd[1]: sshd@62-10.0.0.89:22-10.0.0.1:52508.service: Deactivated successfully. Jan 20 02:31:06.945429 systemd[1]: session-63.scope: Deactivated successfully. Jan 20 02:31:06.999598 systemd-logind[1533]: Session 63 logged out. Waiting for processes to exit. Jan 20 02:31:07.021845 systemd-logind[1533]: Removed session 63. Jan 20 02:31:11.946148 systemd[1]: Started sshd@63-10.0.0.89:22-10.0.0.1:52524.service - OpenSSH per-connection server daemon (10.0.0.1:52524). Jan 20 02:31:12.345720 sshd[5496]: Accepted publickey for core from 10.0.0.1 port 52524 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:31:12.371207 sshd-session[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:12.440779 systemd-logind[1533]: New session 64 of user core. Jan 20 02:31:12.478713 systemd[1]: Started session-64.scope - Session 64 of User core. Jan 20 02:31:13.552861 sshd[5499]: Connection closed by 10.0.0.1 port 52524 Jan 20 02:31:13.546170 sshd-session[5496]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:13.571096 systemd-logind[1533]: Session 64 logged out. Waiting for processes to exit. Jan 20 02:31:13.584859 systemd[1]: sshd@63-10.0.0.89:22-10.0.0.1:52524.service: Deactivated successfully. Jan 20 02:31:13.614206 systemd[1]: session-64.scope: Deactivated successfully. Jan 20 02:31:13.627130 systemd-logind[1533]: Removed session 64. 
Jan 20 02:31:14.229613 kubelet[2900]: E0120 02:31:14.228922 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:18.661113 systemd[1]: Started sshd@64-10.0.0.89:22-10.0.0.1:49668.service - OpenSSH per-connection server daemon (10.0.0.1:49668). Jan 20 02:31:19.169330 sshd[5517]: Accepted publickey for core from 10.0.0.1 port 49668 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:31:19.189777 sshd-session[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:19.224421 systemd-logind[1533]: New session 65 of user core. Jan 20 02:31:19.242272 systemd[1]: Started session-65.scope - Session 65 of User core. Jan 20 02:31:20.236661 sshd[5520]: Connection closed by 10.0.0.1 port 49668 Jan 20 02:31:20.247130 sshd-session[5517]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:20.308831 systemd[1]: sshd@64-10.0.0.89:22-10.0.0.1:49668.service: Deactivated successfully. Jan 20 02:31:20.332114 systemd[1]: session-65.scope: Deactivated successfully. Jan 20 02:31:20.355681 systemd-logind[1533]: Session 65 logged out. Waiting for processes to exit. Jan 20 02:31:20.378628 systemd-logind[1533]: Removed session 65. Jan 20 02:31:25.329930 systemd[1]: Started sshd@65-10.0.0.89:22-10.0.0.1:36848.service - OpenSSH per-connection server daemon (10.0.0.1:36848). Jan 20 02:31:25.936177 sshd[5533]: Accepted publickey for core from 10.0.0.1 port 36848 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:31:25.950539 sshd-session[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:26.022697 systemd-logind[1533]: New session 66 of user core. Jan 20 02:31:26.057964 systemd[1]: Started session-66.scope - Session 66 of User core. Jan 20 02:31:26.942707 sshd[5536]: Connection closed by 10.0.0.1 port 36848 Jan 20 02:31:26.940376 sshd-session[5533]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:26.971423 systemd[1]: sshd@65-10.0.0.89:22-10.0.0.1:36848.service: Deactivated successfully. Jan 20 02:31:26.976844 systemd[1]: session-66.scope: Deactivated successfully. Jan 20 02:31:26.981222 systemd-logind[1533]: Session 66 logged out. Waiting for processes to exit. Jan 20 02:31:26.983965 systemd-logind[1533]: Removed session 66. Jan 20 02:31:32.024843 systemd[1]: Started sshd@66-10.0.0.89:22-10.0.0.1:36856.service - OpenSSH per-connection server daemon (10.0.0.1:36856). Jan 20 02:31:32.364927 sshd[5550]: Accepted publickey for core from 10.0.0.1 port 36856 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:31:32.381421 sshd-session[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:32.484876 systemd-logind[1533]: New session 67 of user core. Jan 20 02:31:32.532603 systemd[1]: Started session-67.scope - Session 67 of User core. Jan 20 02:31:33.710902 sshd[5553]: Connection closed by 10.0.0.1 port 36856 Jan 20 02:31:33.712991 sshd-session[5550]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:33.745515 systemd[1]: sshd@66-10.0.0.89:22-10.0.0.1:36856.service: Deactivated successfully. Jan 20 02:31:33.749541 systemd[1]: session-67.scope: Deactivated successfully. Jan 20 02:31:33.754801 systemd-logind[1533]: Session 67 logged out. Waiting for processes to exit. Jan 20 02:31:33.758522 systemd-logind[1533]: Removed session 67. 
Jan 20 02:31:35.111952 kubelet[2900]: E0120 02:31:35.111815 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:38.905102 systemd[1]: Started sshd@67-10.0.0.89:22-10.0.0.1:34888.service - OpenSSH per-connection server daemon (10.0.0.1:34888). Jan 20 02:31:39.440907 sshd[5567]: Accepted publickey for core from 10.0.0.1 port 34888 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:31:39.437296 sshd-session[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:39.503735 systemd-logind[1533]: New session 68 of user core. Jan 20 02:31:39.548370 systemd[1]: Started session-68.scope - Session 68 of User core. Jan 20 02:31:40.277102 sshd[5570]: Connection closed by 10.0.0.1 port 34888 Jan 20 02:31:40.273667 sshd-session[5567]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:40.334312 systemd[1]: sshd@67-10.0.0.89:22-10.0.0.1:34888.service: Deactivated successfully. Jan 20 02:31:40.338229 systemd[1]: session-68.scope: Deactivated successfully. Jan 20 02:31:40.376718 systemd-logind[1533]: Session 68 logged out. Waiting for processes to exit. Jan 20 02:31:40.408076 systemd[1]: Started sshd@68-10.0.0.89:22-10.0.0.1:34904.service - OpenSSH per-connection server daemon (10.0.0.1:34904). Jan 20 02:31:40.414812 systemd-logind[1533]: Removed session 68. Jan 20 02:31:40.819989 sshd[5583]: Accepted publickey for core from 10.0.0.1 port 34904 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:31:40.826390 sshd-session[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:40.885771 systemd-logind[1533]: New session 69 of user core. Jan 20 02:31:40.933348 systemd[1]: Started session-69.scope - Session 69 of User core. Jan 20 02:31:46.908160 containerd[1567]: time="2026-01-20T02:31:46.907527795Z" level=info msg="StopContainer for \"72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e\" with timeout 30 (s)" Jan 20 02:31:46.913455 containerd[1567]: time="2026-01-20T02:31:46.913401781Z" level=info msg="Stop container \"72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e\" with signal terminated" Jan 20 02:31:47.056146 systemd[1]: cri-containerd-72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e.scope: Deactivated successfully. Jan 20 02:31:47.062183 systemd[1]: cri-containerd-72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e.scope: Consumed 4.363s CPU time, 26.9M memory peak, 4K written to disk. 
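The StopContainer "with timeout 30 (s)" followed by "Stop container ... with signal terminated" below is the usual graceful-stop contract: deliver SIGTERM, wait out the grace period, escalate to SIGKILL if the process is still alive. Here is a process-level Go sketch of that pattern; it is the general technique, not containerd's shim-based implementation.

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout mirrors the graceful-stop contract at the process
// level: SIGTERM, wait up to `timeout`, then escalate to SIGKILL.
func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(timeout):
		// Grace period elapsed; force-kill, as after a failed
		// graceful stop.
		if err := cmd.Process.Kill(); err != nil {
			return err
		}
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithTimeout(cmd, 30*time.Second))
}
```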
Jan 20 02:31:47.071668 containerd[1567]: time="2026-01-20T02:31:47.071254233Z" level=info msg="received container exit event container_id:\"72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e\" id:\"72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e\" pid:3564 exited_at:{seconds:1768876307 nanos:70344172}" Jan 20 02:31:47.115867 kubelet[2900]: E0120 02:31:47.111777 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:47.145372 containerd[1567]: time="2026-01-20T02:31:47.144317361Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 02:31:47.224312 containerd[1567]: time="2026-01-20T02:31:47.218529028Z" level=info msg="StopContainer for \"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e\" with timeout 2 (s)" Jan 20 02:31:47.224312 containerd[1567]: time="2026-01-20T02:31:47.220558046Z" level=info msg="Stop container \"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e\" with signal terminated" Jan 20 02:31:47.342839 systemd-networkd[1474]: lxc_health: Link DOWN Jan 20 02:31:47.342852 systemd-networkd[1474]: lxc_health: Lost carrier Jan 20 02:31:47.469875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e-rootfs.mount: Deactivated successfully. Jan 20 02:31:47.660168 systemd[1]: cri-containerd-f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e.scope: Deactivated successfully. Jan 20 02:31:47.660753 systemd[1]: cri-containerd-f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e.scope: Consumed 29.102s CPU time, 145.7M memory peak, 524K read from disk, 13.3M written to disk. Jan 20 02:31:47.675283 containerd[1567]: time="2026-01-20T02:31:47.674986868Z" level=info msg="received container exit event container_id:\"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e\" id:\"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e\" pid:3491 exited_at:{seconds:1768876307 nanos:671597101}" Jan 20 02:31:47.735082 sshd[5586]: Connection closed by 10.0.0.1 port 34904 Jan 20 02:31:47.760623 sshd-session[5583]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:47.798454 containerd[1567]: time="2026-01-20T02:31:47.797867457Z" level=info msg="StopContainer for \"72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e\" returns successfully" Jan 20 02:31:47.813746 containerd[1567]: time="2026-01-20T02:31:47.813703157Z" level=info msg="StopPodSandbox for \"a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25\"" Jan 20 02:31:47.834823 containerd[1567]: time="2026-01-20T02:31:47.834758076Z" level=info msg="Container to stop \"72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:31:47.872323 systemd[1]: sshd@68-10.0.0.89:22-10.0.0.1:34904.service: Deactivated successfully. Jan 20 02:31:47.884305 systemd[1]: session-69.scope: Deactivated successfully. Jan 20 02:31:47.899463 systemd[1]: session-69.scope: Consumed 1.161s CPU time, 26.5M memory peak. Jan 20 02:31:47.927547 systemd-logind[1533]: Session 69 logged out. Waiting for processes to exit. 
Jan 20 02:31:48.028105 systemd[1]: Started sshd@69-10.0.0.89:22-10.0.0.1:58690.service - OpenSSH per-connection server daemon (10.0.0.1:58690). Jan 20 02:31:48.049993 systemd[1]: cri-containerd-a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25.scope: Deactivated successfully. Jan 20 02:31:48.096859 systemd-logind[1533]: Removed session 69. Jan 20 02:31:48.124551 containerd[1567]: time="2026-01-20T02:31:48.103146049Z" level=info msg="received sandbox exit event container_id:\"a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25\" id:\"a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25\" exit_status:137 exited_at:{seconds:1768876308 nanos:102452177}" monitor_name=podsandbox Jan 20 02:31:48.367322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e-rootfs.mount: Deactivated successfully. Jan 20 02:31:48.505103 sshd[5666]: Accepted publickey for core from 10.0.0.1 port 58690 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:31:48.513308 sshd-session[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:48.567303 systemd-logind[1533]: New session 70 of user core. Jan 20 02:31:48.592700 containerd[1567]: time="2026-01-20T02:31:48.572371992Z" level=info msg="StopContainer for \"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e\" returns successfully" Jan 20 02:31:48.592700 containerd[1567]: time="2026-01-20T02:31:48.572953617Z" level=info msg="StopPodSandbox for \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\"" Jan 20 02:31:48.592700 containerd[1567]: time="2026-01-20T02:31:48.573107201Z" level=info msg="Container to stop \"8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:31:48.592700 containerd[1567]: time="2026-01-20T02:31:48.573125585Z" level=info msg="Container to stop \"bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:31:48.592700 containerd[1567]: time="2026-01-20T02:31:48.573137166Z" level=info msg="Container to stop \"d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:31:48.592700 containerd[1567]: time="2026-01-20T02:31:48.573149088Z" level=info msg="Container to stop \"1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:31:48.592700 containerd[1567]: time="2026-01-20T02:31:48.573163185Z" level=info msg="Container to stop \"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 02:31:48.623913 systemd[1]: Started session-70.scope - Session 70 of User core. Jan 20 02:31:48.640287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25-rootfs.mount: Deactivated successfully. Jan 20 02:31:48.661318 systemd[1]: cri-containerd-36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b.scope: Deactivated successfully. 
Jan 20 02:31:48.667817 containerd[1567]: time="2026-01-20T02:31:48.667765827Z" level=info msg="received sandbox exit event container_id:\"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" id:\"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" exit_status:137 exited_at:{seconds:1768876308 nanos:657450973}" monitor_name=podsandbox Jan 20 02:31:48.772439 containerd[1567]: time="2026-01-20T02:31:48.771168124Z" level=info msg="shim disconnected" id=a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25 namespace=k8s.io Jan 20 02:31:48.772439 containerd[1567]: time="2026-01-20T02:31:48.771266306Z" level=warning msg="cleaning up after shim disconnected" id=a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25 namespace=k8s.io Jan 20 02:31:48.772439 containerd[1567]: time="2026-01-20T02:31:48.771282956Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 02:31:49.094758 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25-shm.mount: Deactivated successfully. Jan 20 02:31:49.131814 containerd[1567]: time="2026-01-20T02:31:49.097880026Z" level=info msg="TearDown network for sandbox \"a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25\" successfully" Jan 20 02:31:49.131814 containerd[1567]: time="2026-01-20T02:31:49.097921192Z" level=info msg="StopPodSandbox for \"a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25\" returns successfully" Jan 20 02:31:49.132607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b-rootfs.mount: Deactivated successfully. Jan 20 02:31:49.152968 containerd[1567]: time="2026-01-20T02:31:49.146967215Z" level=info msg="received sandbox container exit event sandbox_id:\"a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25\" exit_status:137 exited_at:{seconds:1768876308 nanos:102452177}" monitor_name=criService Jan 20 02:31:49.199908 containerd[1567]: time="2026-01-20T02:31:49.199791574Z" level=info msg="shim disconnected" id=36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b namespace=k8s.io Jan 20 02:31:49.200382 containerd[1567]: time="2026-01-20T02:31:49.200210678Z" level=warning msg="cleaning up after shim disconnected" id=36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b namespace=k8s.io Jan 20 02:31:49.200382 containerd[1567]: time="2026-01-20T02:31:49.200235053Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 02:31:49.288371 kubelet[2900]: I0120 02:31:49.288324 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4bv9\" (UniqueName: \"kubernetes.io/projected/8e769453-5f0d-4e1d-8910-c192acbf2294-kube-api-access-l4bv9\") pod \"8e769453-5f0d-4e1d-8910-c192acbf2294\" (UID: \"8e769453-5f0d-4e1d-8910-c192acbf2294\") " Jan 20 02:31:49.289907 kubelet[2900]: I0120 02:31:49.289875 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e769453-5f0d-4e1d-8910-c192acbf2294-cilium-config-path\") pod \"8e769453-5f0d-4e1d-8910-c192acbf2294\" (UID: \"8e769453-5f0d-4e1d-8910-c192acbf2294\") " Jan 20 02:31:49.360259 kubelet[2900]: I0120 02:31:49.353096 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e769453-5f0d-4e1d-8910-c192acbf2294-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") 
pod "8e769453-5f0d-4e1d-8910-c192acbf2294" (UID: "8e769453-5f0d-4e1d-8910-c192acbf2294"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 02:31:49.362363 systemd[1]: var-lib-kubelet-pods-8e769453\x2d5f0d\x2d4e1d\x2d8910\x2dc192acbf2294-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl4bv9.mount: Deactivated successfully. Jan 20 02:31:49.402565 kubelet[2900]: I0120 02:31:49.394522 2900 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e769453-5f0d-4e1d-8910-c192acbf2294-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 20 02:31:49.448554 kubelet[2900]: I0120 02:31:49.447428 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e769453-5f0d-4e1d-8910-c192acbf2294-kube-api-access-l4bv9" (OuterVolumeSpecName: "kube-api-access-l4bv9") pod "8e769453-5f0d-4e1d-8910-c192acbf2294" (UID: "8e769453-5f0d-4e1d-8910-c192acbf2294"). InnerVolumeSpecName "kube-api-access-l4bv9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 02:31:49.456381 containerd[1567]: time="2026-01-20T02:31:49.455370981Z" level=info msg="received sandbox container exit event sandbox_id:\"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" exit_status:137 exited_at:{seconds:1768876308 nanos:657450973}" monitor_name=criService Jan 20 02:31:49.480306 containerd[1567]: time="2026-01-20T02:31:49.480148438Z" level=info msg="TearDown network for sandbox \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" successfully" Jan 20 02:31:49.480306 containerd[1567]: time="2026-01-20T02:31:49.480195415Z" level=info msg="StopPodSandbox for \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" returns successfully" Jan 20 02:31:49.501737 kubelet[2900]: I0120 02:31:49.498407 2900 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l4bv9\" (UniqueName: \"kubernetes.io/projected/8e769453-5f0d-4e1d-8910-c192acbf2294-kube-api-access-l4bv9\") on node \"localhost\" DevicePath \"\"" Jan 20 02:31:49.499106 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b-shm.mount: Deactivated successfully. Jan 20 02:31:49.565343 kubelet[2900]: I0120 02:31:49.550201 2900 scope.go:117] "RemoveContainer" containerID="72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e" Jan 20 02:31:49.686908 containerd[1567]: time="2026-01-20T02:31:49.683737105Z" level=info msg="RemoveContainer for \"72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e\"" Jan 20 02:31:49.690283 systemd[1]: Removed slice kubepods-besteffort-pod8e769453_5f0d_4e1d_8910_c192acbf2294.slice - libcontainer container kubepods-besteffort-pod8e769453_5f0d_4e1d_8910_c192acbf2294.slice. Jan 20 02:31:49.690430 systemd[1]: kubepods-besteffort-pod8e769453_5f0d_4e1d_8910_c192acbf2294.slice: Consumed 4.498s CPU time, 27.1M memory peak, 4K written to disk. 
Jan 20 02:31:49.718829 containerd[1567]: time="2026-01-20T02:31:49.718778595Z" level=info msg="RemoveContainer for \"72f6bcc1e94f194880fd8088919ad27744fe9c4943545207da8c14999e65726e\" returns successfully" Jan 20 02:31:49.819452 kubelet[2900]: I0120 02:31:49.819413 2900 scope.go:117] "RemoveContainer" containerID="f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e" Jan 20 02:31:49.856584 containerd[1567]: time="2026-01-20T02:31:49.847504984Z" level=info msg="RemoveContainer for \"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e\"" Jan 20 02:31:49.923107 kubelet[2900]: I0120 02:31:49.920170 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-cilium-config-path\") pod \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " Jan 20 02:31:49.923107 kubelet[2900]: I0120 02:31:49.920222 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-xtables-lock\") pod \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " Jan 20 02:31:49.923107 kubelet[2900]: I0120 02:31:49.920252 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5fhs\" (UniqueName: \"kubernetes.io/projected/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-kube-api-access-w5fhs\") pod \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " Jan 20 02:31:49.923107 kubelet[2900]: I0120 02:31:49.920274 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-cilium-cgroup\") pod \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " Jan 20 02:31:49.923107 kubelet[2900]: I0120 02:31:49.920295 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-lib-modules\") pod \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " Jan 20 02:31:49.923107 kubelet[2900]: I0120 02:31:49.920327 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-hubble-tls\") pod \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " Jan 20 02:31:49.923479 kubelet[2900]: I0120 02:31:49.920349 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-cilium-run\") pod \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " Jan 20 02:31:49.923479 kubelet[2900]: I0120 02:31:49.920374 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-clustermesh-secrets\") pod \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " Jan 20 02:31:49.923479 kubelet[2900]: I0120 02:31:49.920454 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-bpf-maps\") pod \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " Jan 20 02:31:49.923479 kubelet[2900]: I0120 02:31:49.920476 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-cni-path\") pod \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " Jan 20 02:31:49.923479 kubelet[2900]: I0120 02:31:49.920499 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-etc-cni-netd\") pod \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " Jan 20 02:31:49.923479 kubelet[2900]: I0120 02:31:49.920521 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-host-proc-sys-kernel\") pod \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " Jan 20 02:31:49.923768 kubelet[2900]: I0120 02:31:49.920541 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-hostproc\") pod \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " Jan 20 02:31:49.923768 kubelet[2900]: I0120 02:31:49.920561 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-host-proc-sys-net\") pod \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\" (UID: \"f005b0f2-4c88-40c6-a2d4-a180bd513b5f\") " Jan 20 02:31:49.923768 kubelet[2900]: I0120 02:31:49.920712 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f005b0f2-4c88-40c6-a2d4-a180bd513b5f" (UID: "f005b0f2-4c88-40c6-a2d4-a180bd513b5f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:31:49.925072 kubelet[2900]: I0120 02:31:49.924913 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f005b0f2-4c88-40c6-a2d4-a180bd513b5f" (UID: "f005b0f2-4c88-40c6-a2d4-a180bd513b5f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:31:49.939946 containerd[1567]: time="2026-01-20T02:31:49.928291735Z" level=info msg="RemoveContainer for \"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e\" returns successfully" Jan 20 02:31:49.940189 kubelet[2900]: I0120 02:31:49.930570 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f005b0f2-4c88-40c6-a2d4-a180bd513b5f" (UID: "f005b0f2-4c88-40c6-a2d4-a180bd513b5f"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:31:49.940189 kubelet[2900]: I0120 02:31:49.931147 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f005b0f2-4c88-40c6-a2d4-a180bd513b5f" (UID: "f005b0f2-4c88-40c6-a2d4-a180bd513b5f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:31:49.940189 kubelet[2900]: I0120 02:31:49.932117 2900 scope.go:117] "RemoveContainer" containerID="1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e" Jan 20 02:31:49.963215 kubelet[2900]: I0120 02:31:49.963154 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f005b0f2-4c88-40c6-a2d4-a180bd513b5f" (UID: "f005b0f2-4c88-40c6-a2d4-a180bd513b5f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:31:49.963814 kubelet[2900]: I0120 02:31:49.963609 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f005b0f2-4c88-40c6-a2d4-a180bd513b5f" (UID: "f005b0f2-4c88-40c6-a2d4-a180bd513b5f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:31:49.963814 kubelet[2900]: I0120 02:31:49.963709 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-cni-path" (OuterVolumeSpecName: "cni-path") pod "f005b0f2-4c88-40c6-a2d4-a180bd513b5f" (UID: "f005b0f2-4c88-40c6-a2d4-a180bd513b5f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:31:49.963814 kubelet[2900]: I0120 02:31:49.963738 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f005b0f2-4c88-40c6-a2d4-a180bd513b5f" (UID: "f005b0f2-4c88-40c6-a2d4-a180bd513b5f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:31:49.963814 kubelet[2900]: I0120 02:31:49.963770 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f005b0f2-4c88-40c6-a2d4-a180bd513b5f" (UID: "f005b0f2-4c88-40c6-a2d4-a180bd513b5f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:31:49.963814 kubelet[2900]: I0120 02:31:49.963790 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-hostproc" (OuterVolumeSpecName: "hostproc") pod "f005b0f2-4c88-40c6-a2d4-a180bd513b5f" (UID: "f005b0f2-4c88-40c6-a2d4-a180bd513b5f"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 02:31:49.966348 kubelet[2900]: I0120 02:31:49.966123 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f005b0f2-4c88-40c6-a2d4-a180bd513b5f" (UID: "f005b0f2-4c88-40c6-a2d4-a180bd513b5f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 02:31:50.022531 kubelet[2900]: I0120 02:31:50.021969 2900 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 20 02:31:50.022531 kubelet[2900]: I0120 02:31:50.022134 2900 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 20 02:31:50.022531 kubelet[2900]: I0120 02:31:50.022151 2900 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 20 02:31:50.022531 kubelet[2900]: I0120 02:31:50.022252 2900 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 20 02:31:50.022531 kubelet[2900]: I0120 02:31:50.022266 2900 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 20 02:31:50.022531 kubelet[2900]: I0120 02:31:50.022278 2900 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 20 02:31:50.022531 kubelet[2900]: I0120 02:31:50.022371 2900 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 20 02:31:50.022531 kubelet[2900]: I0120 02:31:50.022384 2900 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 20 02:31:50.067914 kubelet[2900]: I0120 02:31:50.022395 2900 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 20 02:31:50.067914 kubelet[2900]: I0120 02:31:50.022410 2900 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 20 02:31:50.067914 kubelet[2900]: I0120 02:31:50.022504 2900 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 20 02:31:50.130101 containerd[1567]: time="2026-01-20T02:31:50.096444412Z" level=info 
msg="RemoveContainer for \"1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e\"" Jan 20 02:31:50.101855 systemd[1]: var-lib-kubelet-pods-f005b0f2\x2d4c88\x2d40c6\x2da2d4\x2da180bd513b5f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw5fhs.mount: Deactivated successfully. Jan 20 02:31:50.112794 systemd[1]: var-lib-kubelet-pods-f005b0f2\x2d4c88\x2d40c6\x2da2d4\x2da180bd513b5f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 20 02:31:50.113146 systemd[1]: var-lib-kubelet-pods-f005b0f2\x2d4c88\x2d40c6\x2da2d4\x2da180bd513b5f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 20 02:31:50.137085 containerd[1567]: time="2026-01-20T02:31:50.136357315Z" level=info msg="RemoveContainer for \"1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e\" returns successfully" Jan 20 02:31:50.141559 kubelet[2900]: I0120 02:31:50.134592 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f005b0f2-4c88-40c6-a2d4-a180bd513b5f" (UID: "f005b0f2-4c88-40c6-a2d4-a180bd513b5f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 02:31:50.141559 kubelet[2900]: I0120 02:31:50.135531 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-kube-api-access-w5fhs" (OuterVolumeSpecName: "kube-api-access-w5fhs") pod "f005b0f2-4c88-40c6-a2d4-a180bd513b5f" (UID: "f005b0f2-4c88-40c6-a2d4-a180bd513b5f"). InnerVolumeSpecName "kube-api-access-w5fhs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 02:31:50.141559 kubelet[2900]: I0120 02:31:50.136558 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f005b0f2-4c88-40c6-a2d4-a180bd513b5f" (UID: "f005b0f2-4c88-40c6-a2d4-a180bd513b5f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 02:31:50.143456 kubelet[2900]: I0120 02:31:50.143358 2900 scope.go:117] "RemoveContainer" containerID="d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51" Jan 20 02:31:50.146331 kubelet[2900]: I0120 02:31:50.145941 2900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e769453-5f0d-4e1d-8910-c192acbf2294" path="/var/lib/kubelet/pods/8e769453-5f0d-4e1d-8910-c192acbf2294/volumes" Jan 20 02:31:50.169780 containerd[1567]: time="2026-01-20T02:31:50.165154022Z" level=info msg="RemoveContainer for \"d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51\"" Jan 20 02:31:50.172601 systemd[1]: Removed slice kubepods-burstable-podf005b0f2_4c88_40c6_a2d4_a180bd513b5f.slice - libcontainer container kubepods-burstable-podf005b0f2_4c88_40c6_a2d4_a180bd513b5f.slice. Jan 20 02:31:50.172798 systemd[1]: kubepods-burstable-podf005b0f2_4c88_40c6_a2d4_a180bd513b5f.slice: Consumed 29.499s CPU time, 146M memory peak, 956K read from disk, 13.3M written to disk. 
Jan 20 02:31:50.233159 kubelet[2900]: I0120 02:31:50.232898 2900 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w5fhs\" (UniqueName: \"kubernetes.io/projected/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-kube-api-access-w5fhs\") on node \"localhost\" DevicePath \"\"" Jan 20 02:31:50.233159 kubelet[2900]: I0120 02:31:50.233112 2900 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 20 02:31:50.233159 kubelet[2900]: I0120 02:31:50.233132 2900 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f005b0f2-4c88-40c6-a2d4-a180bd513b5f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 20 02:31:50.246896 containerd[1567]: time="2026-01-20T02:31:50.244465740Z" level=info msg="RemoveContainer for \"d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51\" returns successfully" Jan 20 02:31:50.255344 kubelet[2900]: I0120 02:31:50.248888 2900 scope.go:117] "RemoveContainer" containerID="bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23" Jan 20 02:31:50.258118 containerd[1567]: time="2026-01-20T02:31:50.257973718Z" level=info msg="RemoveContainer for \"bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23\"" Jan 20 02:31:50.281930 containerd[1567]: time="2026-01-20T02:31:50.281319971Z" level=info msg="RemoveContainer for \"bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23\" returns successfully" Jan 20 02:31:50.282666 kubelet[2900]: I0120 02:31:50.282145 2900 scope.go:117] "RemoveContainer" containerID="8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00" Jan 20 02:31:50.321743 containerd[1567]: time="2026-01-20T02:31:50.316939602Z" level=info msg="RemoveContainer for \"8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00\"" Jan 20 02:31:50.399470 containerd[1567]: time="2026-01-20T02:31:50.390391897Z" level=info msg="RemoveContainer for \"8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00\" returns successfully" Jan 20 02:31:50.399707 kubelet[2900]: I0120 02:31:50.391731 2900 scope.go:117] "RemoveContainer" containerID="f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e" Jan 20 02:31:50.467714 containerd[1567]: time="2026-01-20T02:31:50.392212292Z" level=error msg="ContainerStatus for \"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e\": not found" Jan 20 02:31:50.479208 kubelet[2900]: E0120 02:31:50.469953 2900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e\": not found" containerID="f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e" Jan 20 02:31:50.479208 kubelet[2900]: I0120 02:31:50.470225 2900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e"} err="failed to get container status \"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"f1bfef8f8a0eceee0baacf613dc76bc785b60d35353428d57a968826f5668e1e\": not found" Jan 20 02:31:50.479208 kubelet[2900]: I0120 02:31:50.470347 2900 scope.go:117] "RemoveContainer" containerID="1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e" Jan 20 02:31:50.479208 kubelet[2900]: E0120 02:31:50.473483 2900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e\": not found" containerID="1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e" Jan 20 02:31:50.479208 kubelet[2900]: I0120 02:31:50.473517 2900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e"} err="failed to get container status \"1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e\": rpc error: code = NotFound desc = an error occurred when try to find container \"1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e\": not found" Jan 20 02:31:50.479208 kubelet[2900]: I0120 02:31:50.473542 2900 scope.go:117] "RemoveContainer" containerID="d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51" Jan 20 02:31:50.479698 containerd[1567]: time="2026-01-20T02:31:50.473312987Z" level=error msg="ContainerStatus for \"1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1218cbfc8b64c1fcc5be8d5441639f197b22f92972f3d82dcf0363aae81e876e\": not found" Jan 20 02:31:50.479698 containerd[1567]: time="2026-01-20T02:31:50.473805147Z" level=error msg="ContainerStatus for \"d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51\": not found" Jan 20 02:31:50.479698 containerd[1567]: time="2026-01-20T02:31:50.474787362Z" level=error msg="ContainerStatus for \"bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23\": not found" Jan 20 02:31:50.479698 containerd[1567]: time="2026-01-20T02:31:50.475257462Z" level=error msg="ContainerStatus for \"8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00\": not found" Jan 20 02:31:50.479882 kubelet[2900]: E0120 02:31:50.474472 2900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51\": not found" containerID="d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51" Jan 20 02:31:50.479882 kubelet[2900]: I0120 02:31:50.474501 2900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51"} err="failed to get container status \"d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"d1db26e4fc9521d2633ad5628d4da9a432f4266763c51f59f75e16bd57958d51\": not found" Jan 20 02:31:50.479882 kubelet[2900]: I0120 02:31:50.474527 2900 scope.go:117] "RemoveContainer" containerID="bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23" Jan 20 02:31:50.479882 kubelet[2900]: E0120 02:31:50.474915 2900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23\": not found" containerID="bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23" Jan 20 02:31:50.479882 kubelet[2900]: I0120 02:31:50.474949 2900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23"} err="failed to get container status \"bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc615b2fb3ad05b67e66f16594847d7ebf4b73ab9625c4076656a83dceee0f23\": not found" Jan 20 02:31:50.479882 kubelet[2900]: I0120 02:31:50.474977 2900 scope.go:117] "RemoveContainer" containerID="8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00" Jan 20 02:31:50.499948 kubelet[2900]: E0120 02:31:50.499122 2900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00\": not found" containerID="8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00" Jan 20 02:31:50.499948 kubelet[2900]: I0120 02:31:50.499361 2900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00"} err="failed to get container status \"8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f82e0a00ba40374fbe5a0fdfd868f68467c910ed0f46b20b15113a3bf4aee00\": not found" Jan 20 02:31:51.242233 kubelet[2900]: E0120 02:31:51.241989 2900 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:31:52.170137 kubelet[2900]: I0120 02:31:52.168796 2900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f005b0f2-4c88-40c6-a2d4-a180bd513b5f" path="/var/lib/kubelet/pods/f005b0f2-4c88-40c6-a2d4-a180bd513b5f/volumes" Jan 20 02:31:53.131124 kubelet[2900]: E0120 02:31:53.128560 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:53.700421 sshd[5696]: Connection closed by 10.0.0.1 port 58690 Jan 20 02:31:53.702493 sshd-session[5666]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:53.989251 systemd[1]: sshd@69-10.0.0.89:22-10.0.0.1:58690.service: Deactivated successfully. Jan 20 02:31:54.039970 systemd[1]: session-70.scope: Deactivated successfully. Jan 20 02:31:54.040403 systemd[1]: session-70.scope: Consumed 1.330s CPU time, 27.5M memory peak. 
Jan 20 02:31:54.087755 kubelet[2900]: I0120 02:31:54.064376 2900 memory_manager.go:355] "RemoveStaleState removing state" podUID="f005b0f2-4c88-40c6-a2d4-a180bd513b5f" containerName="cilium-agent" Jan 20 02:31:54.087755 kubelet[2900]: I0120 02:31:54.064411 2900 memory_manager.go:355] "RemoveStaleState removing state" podUID="8e769453-5f0d-4e1d-8910-c192acbf2294" containerName="cilium-operator" Jan 20 02:31:54.161741 systemd-logind[1533]: Session 70 logged out. Waiting for processes to exit. Jan 20 02:31:54.197470 kubelet[2900]: I0120 02:31:54.197091 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/676cc2ee-4cd7-49c7-80e5-8a589e724b26-cilium-run\") pod \"cilium-8jllv\" (UID: \"676cc2ee-4cd7-49c7-80e5-8a589e724b26\") " pod="kube-system/cilium-8jllv" Jan 20 02:31:54.197470 kubelet[2900]: I0120 02:31:54.197132 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/676cc2ee-4cd7-49c7-80e5-8a589e724b26-bpf-maps\") pod \"cilium-8jllv\" (UID: \"676cc2ee-4cd7-49c7-80e5-8a589e724b26\") " pod="kube-system/cilium-8jllv" Jan 20 02:31:54.197470 kubelet[2900]: I0120 02:31:54.197157 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/676cc2ee-4cd7-49c7-80e5-8a589e724b26-cilium-config-path\") pod \"cilium-8jllv\" (UID: \"676cc2ee-4cd7-49c7-80e5-8a589e724b26\") " pod="kube-system/cilium-8jllv" Jan 20 02:31:54.197470 kubelet[2900]: I0120 02:31:54.197184 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/676cc2ee-4cd7-49c7-80e5-8a589e724b26-xtables-lock\") pod \"cilium-8jllv\" (UID: \"676cc2ee-4cd7-49c7-80e5-8a589e724b26\") " pod="kube-system/cilium-8jllv" Jan 20 02:31:54.197470 kubelet[2900]: I0120 02:31:54.197204 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/676cc2ee-4cd7-49c7-80e5-8a589e724b26-host-proc-sys-net\") pod \"cilium-8jllv\" (UID: \"676cc2ee-4cd7-49c7-80e5-8a589e724b26\") " pod="kube-system/cilium-8jllv" Jan 20 02:31:54.197470 kubelet[2900]: I0120 02:31:54.197226 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkh8p\" (UniqueName: \"kubernetes.io/projected/676cc2ee-4cd7-49c7-80e5-8a589e724b26-kube-api-access-tkh8p\") pod \"cilium-8jllv\" (UID: \"676cc2ee-4cd7-49c7-80e5-8a589e724b26\") " pod="kube-system/cilium-8jllv" Jan 20 02:31:54.204001 kubelet[2900]: I0120 02:31:54.197246 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/676cc2ee-4cd7-49c7-80e5-8a589e724b26-hostproc\") pod \"cilium-8jllv\" (UID: \"676cc2ee-4cd7-49c7-80e5-8a589e724b26\") " pod="kube-system/cilium-8jllv" Jan 20 02:31:54.204001 kubelet[2900]: I0120 02:31:54.197264 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/676cc2ee-4cd7-49c7-80e5-8a589e724b26-cilium-cgroup\") pod \"cilium-8jllv\" (UID: \"676cc2ee-4cd7-49c7-80e5-8a589e724b26\") " pod="kube-system/cilium-8jllv" Jan 20 02:31:54.204001 kubelet[2900]: I0120 02:31:54.197285 2900 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/676cc2ee-4cd7-49c7-80e5-8a589e724b26-clustermesh-secrets\") pod \"cilium-8jllv\" (UID: \"676cc2ee-4cd7-49c7-80e5-8a589e724b26\") " pod="kube-system/cilium-8jllv" Jan 20 02:31:54.204001 kubelet[2900]: I0120 02:31:54.197304 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/676cc2ee-4cd7-49c7-80e5-8a589e724b26-hubble-tls\") pod \"cilium-8jllv\" (UID: \"676cc2ee-4cd7-49c7-80e5-8a589e724b26\") " pod="kube-system/cilium-8jllv" Jan 20 02:31:54.204001 kubelet[2900]: I0120 02:31:54.197325 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/676cc2ee-4cd7-49c7-80e5-8a589e724b26-lib-modules\") pod \"cilium-8jllv\" (UID: \"676cc2ee-4cd7-49c7-80e5-8a589e724b26\") " pod="kube-system/cilium-8jllv" Jan 20 02:31:54.204001 kubelet[2900]: I0120 02:31:54.197345 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/676cc2ee-4cd7-49c7-80e5-8a589e724b26-cilium-ipsec-secrets\") pod \"cilium-8jllv\" (UID: \"676cc2ee-4cd7-49c7-80e5-8a589e724b26\") " pod="kube-system/cilium-8jllv" Jan 20 02:31:54.204361 kubelet[2900]: I0120 02:31:54.197364 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/676cc2ee-4cd7-49c7-80e5-8a589e724b26-cni-path\") pod \"cilium-8jllv\" (UID: \"676cc2ee-4cd7-49c7-80e5-8a589e724b26\") " pod="kube-system/cilium-8jllv" Jan 20 02:31:54.208957 kubelet[2900]: I0120 02:31:54.204619 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/676cc2ee-4cd7-49c7-80e5-8a589e724b26-etc-cni-netd\") pod \"cilium-8jllv\" (UID: \"676cc2ee-4cd7-49c7-80e5-8a589e724b26\") " pod="kube-system/cilium-8jllv" Jan 20 02:31:54.208957 kubelet[2900]: I0120 02:31:54.204670 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/676cc2ee-4cd7-49c7-80e5-8a589e724b26-host-proc-sys-kernel\") pod \"cilium-8jllv\" (UID: \"676cc2ee-4cd7-49c7-80e5-8a589e724b26\") " pod="kube-system/cilium-8jllv" Jan 20 02:31:54.224280 systemd[1]: Started sshd@70-10.0.0.89:22-10.0.0.1:58702.service - OpenSSH per-connection server daemon (10.0.0.1:58702). Jan 20 02:31:54.237362 systemd-logind[1533]: Removed session 70. Jan 20 02:31:54.422977 systemd[1]: Created slice kubepods-burstable-pod676cc2ee_4cd7_49c7_80e5_8a589e724b26.slice - libcontainer container kubepods-burstable-pod676cc2ee_4cd7_49c7_80e5_8a589e724b26.slice. 
Jan 20 02:31:55.200113 sshd[5748]: Accepted publickey for core from 10.0.0.1 port 58702 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:31:55.236711 containerd[1567]: time="2026-01-20T02:31:55.193980854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8jllv,Uid:676cc2ee-4cd7-49c7-80e5-8a589e724b26,Namespace:kube-system,Attempt:0,}" Jan 20 02:31:55.302120 kubelet[2900]: E0120 02:31:55.192900 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:55.224878 sshd-session[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:55.419486 systemd-logind[1533]: New session 71 of user core. Jan 20 02:31:55.504229 systemd[1]: Started session-71.scope - Session 71 of User core. Jan 20 02:31:55.876378 sshd[5756]: Connection closed by 10.0.0.1 port 58702 Jan 20 02:31:55.882372 sshd-session[5748]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:55.912090 containerd[1567]: time="2026-01-20T02:31:55.902831582Z" level=info msg="connecting to shim 93bf0065aa5398d807e78f2a48f4e5195ec050d466f8d03e561d31912c3fad73" address="unix:///run/containerd/s/90941e433a00dedd1d0a939fed77e7b55d56eeee88730e2909492d5cc47cab86" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:31:55.981971 systemd[1]: sshd@70-10.0.0.89:22-10.0.0.1:58702.service: Deactivated successfully. Jan 20 02:31:56.080192 systemd[1]: session-71.scope: Deactivated successfully. Jan 20 02:31:56.103669 systemd-logind[1533]: Session 71 logged out. Waiting for processes to exit. Jan 20 02:31:56.140791 systemd[1]: Started sshd@71-10.0.0.89:22-10.0.0.1:42190.service - OpenSSH per-connection server daemon (10.0.0.1:42190). Jan 20 02:31:56.175634 systemd-logind[1533]: Removed session 71. Jan 20 02:31:56.275823 kubelet[2900]: E0120 02:31:56.273517 2900 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:31:56.395442 systemd[1]: Started cri-containerd-93bf0065aa5398d807e78f2a48f4e5195ec050d466f8d03e561d31912c3fad73.scope - libcontainer container 93bf0065aa5398d807e78f2a48f4e5195ec050d466f8d03e561d31912c3fad73. Jan 20 02:31:56.954183 sshd[5790]: Accepted publickey for core from 10.0.0.1 port 42190 ssh2: RSA SHA256:BMhLTsdZI1Yg9CtONFct84Vkunhwf+VD9Wd68FSWc3I Jan 20 02:31:56.972507 sshd-session[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:57.085442 systemd-logind[1533]: New session 72 of user core. Jan 20 02:31:57.133644 systemd[1]: Started session-72.scope - Session 72 of User core. 
Jan 20 02:31:57.379204 containerd[1567]: time="2026-01-20T02:31:57.357511050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8jllv,Uid:676cc2ee-4cd7-49c7-80e5-8a589e724b26,Namespace:kube-system,Attempt:0,} returns sandbox id \"93bf0065aa5398d807e78f2a48f4e5195ec050d466f8d03e561d31912c3fad73\"" Jan 20 02:31:57.415151 kubelet[2900]: E0120 02:31:57.415001 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:57.493169 containerd[1567]: time="2026-01-20T02:31:57.491311358Z" level=info msg="CreateContainer within sandbox \"93bf0065aa5398d807e78f2a48f4e5195ec050d466f8d03e561d31912c3fad73\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 20 02:31:57.898948 containerd[1567]: time="2026-01-20T02:31:57.898346948Z" level=info msg="Container ee3c8dc7f4ff1e958df1a78ecde596c9f3f972ac4e1dbc72cd65b573f9d0f554: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:31:58.139137 containerd[1567]: time="2026-01-20T02:31:58.136078467Z" level=info msg="StopPodSandbox for \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\"" Jan 20 02:31:58.139137 containerd[1567]: time="2026-01-20T02:31:58.136277386Z" level=info msg="TearDown network for sandbox \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" successfully" Jan 20 02:31:58.139137 containerd[1567]: time="2026-01-20T02:31:58.136301089Z" level=info msg="StopPodSandbox for \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" returns successfully" Jan 20 02:31:58.193487 containerd[1567]: time="2026-01-20T02:31:58.176973225Z" level=info msg="RemovePodSandbox for \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\"" Jan 20 02:31:58.193487 containerd[1567]: time="2026-01-20T02:31:58.177415794Z" level=info msg="Forcibly stopping sandbox \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\"" Jan 20 02:31:58.195290 containerd[1567]: time="2026-01-20T02:31:58.195168995Z" level=info msg="TearDown network for sandbox \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" successfully" Jan 20 02:31:58.273772 containerd[1567]: time="2026-01-20T02:31:58.268515902Z" level=info msg="CreateContainer within sandbox \"93bf0065aa5398d807e78f2a48f4e5195ec050d466f8d03e561d31912c3fad73\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ee3c8dc7f4ff1e958df1a78ecde596c9f3f972ac4e1dbc72cd65b573f9d0f554\"" Jan 20 02:31:58.281799 containerd[1567]: time="2026-01-20T02:31:58.269458518Z" level=info msg="Ensure that sandbox 36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b in task-service has been cleanup successfully" Jan 20 02:31:58.307947 containerd[1567]: time="2026-01-20T02:31:58.303296644Z" level=info msg="StartContainer for \"ee3c8dc7f4ff1e958df1a78ecde596c9f3f972ac4e1dbc72cd65b573f9d0f554\"" Jan 20 02:31:58.371665 containerd[1567]: time="2026-01-20T02:31:58.347801532Z" level=info msg="connecting to shim ee3c8dc7f4ff1e958df1a78ecde596c9f3f972ac4e1dbc72cd65b573f9d0f554" address="unix:///run/containerd/s/90941e433a00dedd1d0a939fed77e7b55d56eeee88730e2909492d5cc47cab86" protocol=ttrpc version=3 Jan 20 02:31:58.414098 containerd[1567]: time="2026-01-20T02:31:58.413970252Z" level=info msg="RemovePodSandbox \"36f398e90115d3b5a52e50f6f66d9370052024ceecd8ef74be7b4ed5b449b27b\" returns successfully" Jan 20 02:31:58.448405 containerd[1567]: time="2026-01-20T02:31:58.446462121Z" level=info 
msg="StopPodSandbox for \"a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25\"" Jan 20 02:31:58.465607 containerd[1567]: time="2026-01-20T02:31:58.465485631Z" level=info msg="TearDown network for sandbox \"a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25\" successfully" Jan 20 02:31:58.465819 containerd[1567]: time="2026-01-20T02:31:58.465790926Z" level=info msg="StopPodSandbox for \"a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25\" returns successfully" Jan 20 02:31:58.530245 containerd[1567]: time="2026-01-20T02:31:58.517809445Z" level=info msg="RemovePodSandbox for \"a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25\"" Jan 20 02:31:58.530245 containerd[1567]: time="2026-01-20T02:31:58.523955888Z" level=info msg="Forcibly stopping sandbox \"a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25\"" Jan 20 02:31:58.530245 containerd[1567]: time="2026-01-20T02:31:58.524231979Z" level=info msg="TearDown network for sandbox \"a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25\" successfully" Jan 20 02:31:58.672461 containerd[1567]: time="2026-01-20T02:31:58.672403636Z" level=info msg="Ensure that sandbox a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25 in task-service has been cleanup successfully" Jan 20 02:31:58.760463 containerd[1567]: time="2026-01-20T02:31:58.759777408Z" level=info msg="RemovePodSandbox \"a9d6faff47842e26959e9fb3ba2ed41f0543d7ca175de93c2afe864512dedb25\" returns successfully" Jan 20 02:31:59.185360 systemd[1]: Started cri-containerd-ee3c8dc7f4ff1e958df1a78ecde596c9f3f972ac4e1dbc72cd65b573f9d0f554.scope - libcontainer container ee3c8dc7f4ff1e958df1a78ecde596c9f3f972ac4e1dbc72cd65b573f9d0f554. Jan 20 02:31:59.296109 kubelet[2900]: I0120 02:31:59.295432 2900 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T02:31:59Z","lastTransitionTime":"2026-01-20T02:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 20 02:32:00.385844 containerd[1567]: time="2026-01-20T02:32:00.380272836Z" level=info msg="StartContainer for \"ee3c8dc7f4ff1e958df1a78ecde596c9f3f972ac4e1dbc72cd65b573f9d0f554\" returns successfully" Jan 20 02:32:00.495227 systemd[1]: cri-containerd-ee3c8dc7f4ff1e958df1a78ecde596c9f3f972ac4e1dbc72cd65b573f9d0f554.scope: Deactivated successfully. Jan 20 02:32:00.507675 containerd[1567]: time="2026-01-20T02:32:00.503512441Z" level=info msg="received container exit event container_id:\"ee3c8dc7f4ff1e958df1a78ecde596c9f3f972ac4e1dbc72cd65b573f9d0f554\" id:\"ee3c8dc7f4ff1e958df1a78ecde596c9f3f972ac4e1dbc72cd65b573f9d0f554\" pid:5833 exited_at:{seconds:1768876320 nanos:502927628}" Jan 20 02:32:00.868374 kubelet[2900]: E0120 02:32:00.864330 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:01.046360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee3c8dc7f4ff1e958df1a78ecde596c9f3f972ac4e1dbc72cd65b573f9d0f554-rootfs.mount: Deactivated successfully. 
Jan 20 02:32:01.358287 kubelet[2900]: E0120 02:32:01.343745 2900 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:32:01.957771 kubelet[2900]: E0120 02:32:01.946610 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:02.065777 containerd[1567]: time="2026-01-20T02:32:02.038182307Z" level=info msg="CreateContainer within sandbox \"93bf0065aa5398d807e78f2a48f4e5195ec050d466f8d03e561d31912c3fad73\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 20 02:32:02.332379 containerd[1567]: time="2026-01-20T02:32:02.304463893Z" level=info msg="Container f5d8f0a3430f04d75e1efd7376c7d98694b00dca156889e579a20293d897d5d9: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:32:02.495258 containerd[1567]: time="2026-01-20T02:32:02.489363173Z" level=info msg="CreateContainer within sandbox \"93bf0065aa5398d807e78f2a48f4e5195ec050d466f8d03e561d31912c3fad73\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f5d8f0a3430f04d75e1efd7376c7d98694b00dca156889e579a20293d897d5d9\"" Jan 20 02:32:02.508631 containerd[1567]: time="2026-01-20T02:32:02.503634921Z" level=info msg="StartContainer for \"f5d8f0a3430f04d75e1efd7376c7d98694b00dca156889e579a20293d897d5d9\"" Jan 20 02:32:02.529723 containerd[1567]: time="2026-01-20T02:32:02.524737949Z" level=info msg="connecting to shim f5d8f0a3430f04d75e1efd7376c7d98694b00dca156889e579a20293d897d5d9" address="unix:///run/containerd/s/90941e433a00dedd1d0a939fed77e7b55d56eeee88730e2909492d5cc47cab86" protocol=ttrpc version=3 Jan 20 02:32:03.019683 systemd[1]: Started cri-containerd-f5d8f0a3430f04d75e1efd7376c7d98694b00dca156889e579a20293d897d5d9.scope - libcontainer container f5d8f0a3430f04d75e1efd7376c7d98694b00dca156889e579a20293d897d5d9. Jan 20 02:32:03.808234 containerd[1567]: time="2026-01-20T02:32:03.805441026Z" level=info msg="StartContainer for \"f5d8f0a3430f04d75e1efd7376c7d98694b00dca156889e579a20293d897d5d9\" returns successfully" Jan 20 02:32:03.954277 systemd[1]: cri-containerd-f5d8f0a3430f04d75e1efd7376c7d98694b00dca156889e579a20293d897d5d9.scope: Deactivated successfully. Jan 20 02:32:04.020437 containerd[1567]: time="2026-01-20T02:32:04.015647414Z" level=info msg="received container exit event container_id:\"f5d8f0a3430f04d75e1efd7376c7d98694b00dca156889e579a20293d897d5d9\" id:\"f5d8f0a3430f04d75e1efd7376c7d98694b00dca156889e579a20293d897d5d9\" pid:5879 exited_at:{seconds:1768876324 nanos:2449039}" Jan 20 02:32:05.044131 kubelet[2900]: E0120 02:32:05.044083 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:05.579292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5d8f0a3430f04d75e1efd7376c7d98694b00dca156889e579a20293d897d5d9-rootfs.mount: Deactivated successfully. 
Jan 20 02:32:06.135363 kubelet[2900]: E0120 02:32:06.135310 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:06.166462 containerd[1567]: time="2026-01-20T02:32:06.166291607Z" level=info msg="CreateContainer within sandbox \"93bf0065aa5398d807e78f2a48f4e5195ec050d466f8d03e561d31912c3fad73\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 20 02:32:06.362930 kubelet[2900]: E0120 02:32:06.362842 2900 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:32:06.568340 containerd[1567]: time="2026-01-20T02:32:06.568285503Z" level=info msg="Container 962ad05eeff77dec641ab7d9d3bbdd76299142c99891db7a4a533bc3f0fb793b: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:32:06.720711 containerd[1567]: time="2026-01-20T02:32:06.718241559Z" level=info msg="CreateContainer within sandbox \"93bf0065aa5398d807e78f2a48f4e5195ec050d466f8d03e561d31912c3fad73\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"962ad05eeff77dec641ab7d9d3bbdd76299142c99891db7a4a533bc3f0fb793b\"" Jan 20 02:32:06.733219 containerd[1567]: time="2026-01-20T02:32:06.733166407Z" level=info msg="StartContainer for \"962ad05eeff77dec641ab7d9d3bbdd76299142c99891db7a4a533bc3f0fb793b\"" Jan 20 02:32:06.747688 containerd[1567]: time="2026-01-20T02:32:06.747628278Z" level=info msg="connecting to shim 962ad05eeff77dec641ab7d9d3bbdd76299142c99891db7a4a533bc3f0fb793b" address="unix:///run/containerd/s/90941e433a00dedd1d0a939fed77e7b55d56eeee88730e2909492d5cc47cab86" protocol=ttrpc version=3 Jan 20 02:32:06.961808 systemd[1]: Started cri-containerd-962ad05eeff77dec641ab7d9d3bbdd76299142c99891db7a4a533bc3f0fb793b.scope - libcontainer container 962ad05eeff77dec641ab7d9d3bbdd76299142c99891db7a4a533bc3f0fb793b. Jan 20 02:32:07.126977 kubelet[2900]: E0120 02:32:07.113327 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-dnkt2" podUID="03a5d512-f8bc-4887-a449-4983961e6308" Jan 20 02:32:08.167762 kubelet[2900]: E0120 02:32:08.133698 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-n7pbk" podUID="70822b44-fdad-4ab3-a09a-888003a4ded6" Jan 20 02:32:08.722678 containerd[1567]: time="2026-01-20T02:32:08.711822865Z" level=info msg="StartContainer for \"962ad05eeff77dec641ab7d9d3bbdd76299142c99891db7a4a533bc3f0fb793b\" returns successfully" Jan 20 02:32:08.810302 systemd[1]: cri-containerd-962ad05eeff77dec641ab7d9d3bbdd76299142c99891db7a4a533bc3f0fb793b.scope: Deactivated successfully. 
Jan 20 02:32:08.848867 containerd[1567]: time="2026-01-20T02:32:08.845998734Z" level=info msg="received container exit event container_id:\"962ad05eeff77dec641ab7d9d3bbdd76299142c99891db7a4a533bc3f0fb793b\" id:\"962ad05eeff77dec641ab7d9d3bbdd76299142c99891db7a4a533bc3f0fb793b\" pid:5922 exited_at:{seconds:1768876328 nanos:844879204}"
Jan 20 02:32:09.132094 kubelet[2900]: E0120 02:32:09.130619 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-dnkt2" podUID="03a5d512-f8bc-4887-a449-4983961e6308"
Jan 20 02:32:09.148416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-962ad05eeff77dec641ab7d9d3bbdd76299142c99891db7a4a533bc3f0fb793b-rootfs.mount: Deactivated successfully.
Jan 20 02:32:09.493422 kubelet[2900]: E0120 02:32:09.490704 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:09.541248 containerd[1567]: time="2026-01-20T02:32:09.535347016Z" level=info msg="CreateContainer within sandbox \"93bf0065aa5398d807e78f2a48f4e5195ec050d466f8d03e561d31912c3fad73\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 20 02:32:09.817604 containerd[1567]: time="2026-01-20T02:32:09.809418417Z" level=info msg="Container 18d4aab3f4fd3e65764ab2533aec0db311d362432e3c2d5020a0eeafdd9ae8ad: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:32:10.002897 containerd[1567]: time="2026-01-20T02:32:10.002783483Z" level=info msg="CreateContainer within sandbox \"93bf0065aa5398d807e78f2a48f4e5195ec050d466f8d03e561d31912c3fad73\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"18d4aab3f4fd3e65764ab2533aec0db311d362432e3c2d5020a0eeafdd9ae8ad\""
Jan 20 02:32:10.023935 containerd[1567]: time="2026-01-20T02:32:10.007538557Z" level=info msg="StartContainer for \"18d4aab3f4fd3e65764ab2533aec0db311d362432e3c2d5020a0eeafdd9ae8ad\""
Jan 20 02:32:10.023935 containerd[1567]: time="2026-01-20T02:32:10.009368020Z" level=info msg="connecting to shim 18d4aab3f4fd3e65764ab2533aec0db311d362432e3c2d5020a0eeafdd9ae8ad" address="unix:///run/containerd/s/90941e433a00dedd1d0a939fed77e7b55d56eeee88730e2909492d5cc47cab86" protocol=ttrpc version=3
Jan 20 02:32:10.121626 kubelet[2900]: E0120 02:32:10.116356 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-n7pbk" podUID="70822b44-fdad-4ab3-a09a-888003a4ded6"
Jan 20 02:32:10.364575 systemd[1]: Started cri-containerd-18d4aab3f4fd3e65764ab2533aec0db311d362432e3c2d5020a0eeafdd9ae8ad.scope - libcontainer container 18d4aab3f4fd3e65764ab2533aec0db311d362432e3c2d5020a0eeafdd9ae8ad.
Jan 20 02:32:11.149081 kubelet[2900]: E0120 02:32:11.127260 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-dnkt2" podUID="03a5d512-f8bc-4887-a449-4983961e6308"
Jan 20 02:32:11.305184 containerd[1567]: time="2026-01-20T02:32:11.299965294Z" level=info msg="StartContainer for \"18d4aab3f4fd3e65764ab2533aec0db311d362432e3c2d5020a0eeafdd9ae8ad\" returns successfully"
Jan 20 02:32:11.319879 systemd[1]: cri-containerd-18d4aab3f4fd3e65764ab2533aec0db311d362432e3c2d5020a0eeafdd9ae8ad.scope: Deactivated successfully.
Jan 20 02:32:11.351544 containerd[1567]: time="2026-01-20T02:32:11.344206536Z" level=info msg="received container exit event container_id:\"18d4aab3f4fd3e65764ab2533aec0db311d362432e3c2d5020a0eeafdd9ae8ad\" id:\"18d4aab3f4fd3e65764ab2533aec0db311d362432e3c2d5020a0eeafdd9ae8ad\" pid:5963 exited_at:{seconds:1768876331 nanos:343690245}"
Jan 20 02:32:11.400902 kubelet[2900]: E0120 02:32:11.390288 2900 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 02:32:11.832221 kubelet[2900]: E0120 02:32:11.831718 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:11.880482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18d4aab3f4fd3e65764ab2533aec0db311d362432e3c2d5020a0eeafdd9ae8ad-rootfs.mount: Deactivated successfully.
Jan 20 02:32:12.123559 kubelet[2900]: E0120 02:32:12.117129 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-n7pbk" podUID="70822b44-fdad-4ab3-a09a-888003a4ded6"
Jan 20 02:32:12.987562 kubelet[2900]: E0120 02:32:12.986191 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:13.050571 containerd[1567]: time="2026-01-20T02:32:13.041293622Z" level=info msg="CreateContainer within sandbox \"93bf0065aa5398d807e78f2a48f4e5195ec050d466f8d03e561d31912c3fad73\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 20 02:32:13.116455 kubelet[2900]: E0120 02:32:13.116352 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-dnkt2" podUID="03a5d512-f8bc-4887-a449-4983961e6308"
Jan 20 02:32:13.293373 containerd[1567]: time="2026-01-20T02:32:13.293200095Z" level=info msg="Container b3b93476b1ef3b900609e9414012e7c24d6a1c3d879d2bbe96c40867cc25a131: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:32:13.345666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2517249.mount: Deactivated successfully.
Jan 20 02:32:13.492865 containerd[1567]: time="2026-01-20T02:32:13.492449109Z" level=info msg="CreateContainer within sandbox \"93bf0065aa5398d807e78f2a48f4e5195ec050d466f8d03e561d31912c3fad73\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b3b93476b1ef3b900609e9414012e7c24d6a1c3d879d2bbe96c40867cc25a131\""
Jan 20 02:32:13.517550 containerd[1567]: time="2026-01-20T02:32:13.514493299Z" level=info msg="StartContainer for \"b3b93476b1ef3b900609e9414012e7c24d6a1c3d879d2bbe96c40867cc25a131\""
Jan 20 02:32:13.547937 containerd[1567]: time="2026-01-20T02:32:13.547672651Z" level=info msg="connecting to shim b3b93476b1ef3b900609e9414012e7c24d6a1c3d879d2bbe96c40867cc25a131" address="unix:///run/containerd/s/90941e433a00dedd1d0a939fed77e7b55d56eeee88730e2909492d5cc47cab86" protocol=ttrpc version=3
Jan 20 02:32:13.899380 systemd[1]: Started cri-containerd-b3b93476b1ef3b900609e9414012e7c24d6a1c3d879d2bbe96c40867cc25a131.scope - libcontainer container b3b93476b1ef3b900609e9414012e7c24d6a1c3d879d2bbe96c40867cc25a131.
Jan 20 02:32:14.127153 kubelet[2900]: E0120 02:32:14.126376 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-n7pbk" podUID="70822b44-fdad-4ab3-a09a-888003a4ded6"
Jan 20 02:32:14.476601 containerd[1567]: time="2026-01-20T02:32:14.476514067Z" level=info msg="StartContainer for \"b3b93476b1ef3b900609e9414012e7c24d6a1c3d879d2bbe96c40867cc25a131\" returns successfully"
Jan 20 02:32:15.143820 kubelet[2900]: E0120 02:32:15.134064 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-dnkt2" podUID="03a5d512-f8bc-4887-a449-4983961e6308"
Jan 20 02:32:16.126705 kubelet[2900]: E0120 02:32:16.123559 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-n7pbk" podUID="70822b44-fdad-4ab3-a09a-888003a4ded6"
Jan 20 02:32:16.307101 kubelet[2900]: E0120 02:32:16.296805 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:16.489931 kubelet[2900]: I0120 02:32:16.489559 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8jllv" podStartSLOduration=23.489535595 podStartE2EDuration="23.489535595s" podCreationTimestamp="2026-01-20 02:31:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:32:16.461331953 +0000 UTC m=+741.863665170" watchObservedRunningTime="2026-01-20 02:32:16.489535595 +0000 UTC m=+741.891868802"
Jan 20 02:32:17.123132 kubelet[2900]: E0120 02:32:17.112477 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:17.323520 kubelet[2900]: E0120 02:32:17.320984 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:18.126659 kubelet[2900]: E0120 02:32:18.114649 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:18.818131 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jan 20 02:32:21.135122 kubelet[2900]: E0120 02:32:21.123726 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:25.218753 kubelet[2900]: E0120 02:32:25.208191 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:36.133438 kubelet[2900]: E0120 02:32:36.117196 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:47.183681 systemd-networkd[1474]: lxc_health: Link UP
Jan 20 02:32:47.234089 kubelet[2900]: E0120 02:32:47.214900 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:47.218574 systemd-networkd[1474]: lxc_health: Gained carrier
Jan 20 02:32:48.282123 kubelet[2900]: E0120 02:32:48.271653 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:32:48.461635 systemd-networkd[1474]: lxc_health: Gained IPv6LL
Jan 20 02:32:52.420730 kubelet[2900]: E0120 02:32:52.419581 2900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:33:01.773501 sshd[5811]: Connection closed by 10.0.0.1 port 42190
Jan 20 02:33:01.778459 sshd-session[5790]: pam_unix(sshd:session): session closed for user core
Jan 20 02:33:01.813451 systemd[1]: sshd@71-10.0.0.89:22-10.0.0.1:42190.service: Deactivated successfully.
Jan 20 02:33:01.829878 systemd[1]: session-72.scope: Deactivated successfully.
Jan 20 02:33:01.830352 systemd[1]: session-72.scope: Consumed 1.769s CPU time, 30.6M memory peak.
Jan 20 02:33:01.863213 systemd-logind[1533]: Session 72 logged out. Waiting for processes to exit.
Jan 20 02:33:01.880969 systemd-logind[1533]: Removed session 72.