Mar 12 01:57:16.641826 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Mar 11 23:10:29 -00 2026 Mar 12 01:57:16.642232 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=523ecccd411fd77fb4e35365aed2f15bc4e80e4a859c0ead1a8e49984aa5098c Mar 12 01:57:16.642249 kernel: BIOS-provided physical RAM map: Mar 12 01:57:16.645790 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 12 01:57:16.645807 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 12 01:57:16.645854 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 12 01:57:16.645865 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 12 01:57:16.645875 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 12 01:57:16.645916 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Mar 12 01:57:16.645927 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Mar 12 01:57:16.645966 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Mar 12 01:57:16.646007 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Mar 12 01:57:16.646018 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Mar 12 01:57:16.646048 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Mar 12 01:57:16.646061 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Mar 12 01:57:16.646072 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 12 01:57:16.646127 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Mar 12 
01:57:16.646138 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Mar 12 01:57:16.646149 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Mar 12 01:57:16.646160 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Mar 12 01:57:16.646171 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Mar 12 01:57:16.646182 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 12 01:57:16.646193 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Mar 12 01:57:16.646205 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 12 01:57:16.646217 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Mar 12 01:57:16.646226 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 12 01:57:16.646274 kernel: NX (Execute Disable) protection: active Mar 12 01:57:16.646285 kernel: APIC: Static calls initialized Mar 12 01:57:16.646343 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Mar 12 01:57:16.646353 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Mar 12 01:57:16.646364 kernel: extended physical RAM map: Mar 12 01:57:16.646374 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 12 01:57:16.646387 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 12 01:57:16.646397 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 12 01:57:16.646407 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Mar 12 01:57:16.646418 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 12 01:57:16.646429 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Mar 12 01:57:16.646479 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Mar 12 01:57:16.646519 kernel: reserve setup_data: [mem 
0x0000000000900000-0x000000009b2e3017] usable Mar 12 01:57:16.646531 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Mar 12 01:57:16.646577 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Mar 12 01:57:16.653944 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Mar 12 01:57:16.653974 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Mar 12 01:57:16.653986 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Mar 12 01:57:16.654000 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Mar 12 01:57:16.654011 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Mar 12 01:57:16.654022 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Mar 12 01:57:16.654034 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 12 01:57:16.654047 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Mar 12 01:57:16.654060 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Mar 12 01:57:16.654121 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Mar 12 01:57:16.654134 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Mar 12 01:57:16.654147 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Mar 12 01:57:16.654158 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 12 01:57:16.654170 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Mar 12 01:57:16.654182 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 12 01:57:16.654195 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Mar 12 01:57:16.654244 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] 
reserved Mar 12 01:57:16.654291 kernel: efi: EFI v2.7 by EDK II Mar 12 01:57:16.654303 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Mar 12 01:57:16.654346 kernel: random: crng init done Mar 12 01:57:16.654394 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Mar 12 01:57:16.654436 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Mar 12 01:57:16.654449 kernel: secureboot: Secure boot disabled Mar 12 01:57:16.654461 kernel: SMBIOS 2.8 present. Mar 12 01:57:16.654471 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Mar 12 01:57:16.654484 kernel: DMI: Memory slots populated: 1/1 Mar 12 01:57:16.654495 kernel: Hypervisor detected: KVM Mar 12 01:57:16.654507 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Mar 12 01:57:16.654519 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 12 01:57:16.654531 kernel: kvm-clock: using sched offset of 131333841073 cycles Mar 12 01:57:16.654543 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 12 01:57:16.654597 kernel: tsc: Detected 2445.426 MHz processor Mar 12 01:57:16.654610 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 12 01:57:16.662709 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 12 01:57:16.662753 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Mar 12 01:57:16.662768 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 12 01:57:16.662782 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 12 01:57:16.662794 kernel: Using GB pages for direct mapping Mar 12 01:57:16.662860 kernel: ACPI: Early table checksum verification disabled Mar 12 01:57:16.662906 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 12 01:57:16.662919 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 12 01:57:16.662934 
kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:57:16.662945 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:57:16.662958 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 12 01:57:16.662969 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:57:16.663024 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:57:16.663040 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:57:16.663052 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:57:16.663064 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 12 01:57:16.663079 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 12 01:57:16.663089 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Mar 12 01:57:16.663102 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 12 01:57:16.663162 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 12 01:57:16.663174 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 12 01:57:16.663187 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 12 01:57:16.663199 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 12 01:57:16.663212 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 12 01:57:16.663224 kernel: No NUMA configuration found Mar 12 01:57:16.663236 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Mar 12 01:57:16.663288 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Mar 12 01:57:16.663302 kernel: Zone ranges: Mar 12 01:57:16.663315 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 12 01:57:16.663329 kernel: DMA32 [mem 
0x0000000001000000-0x000000009cedbfff] Mar 12 01:57:16.663339 kernel: Normal empty Mar 12 01:57:16.663351 kernel: Device empty Mar 12 01:57:16.663365 kernel: Movable zone start for each node Mar 12 01:57:16.663376 kernel: Early memory node ranges Mar 12 01:57:16.663425 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 12 01:57:16.663470 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 12 01:57:16.663482 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 12 01:57:16.663495 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Mar 12 01:57:16.663508 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Mar 12 01:57:16.663519 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Mar 12 01:57:16.663533 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Mar 12 01:57:16.663580 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Mar 12 01:57:16.663721 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Mar 12 01:57:16.663736 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 12 01:57:16.663846 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 12 01:57:16.663891 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 12 01:57:16.663905 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 12 01:57:16.663919 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Mar 12 01:57:16.663931 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Mar 12 01:57:16.663942 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Mar 12 01:57:16.663956 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Mar 12 01:57:16.664009 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Mar 12 01:57:16.664021 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 12 01:57:16.664034 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 12 01:57:16.664047 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 
Mar 12 01:57:16.664098 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 12 01:57:16.664110 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 12 01:57:16.664121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 12 01:57:16.664133 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 12 01:57:16.664144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 12 01:57:16.664156 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 12 01:57:16.664168 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 12 01:57:16.664214 kernel: TSC deadline timer available Mar 12 01:57:16.664225 kernel: CPU topo: Max. logical packages: 1 Mar 12 01:57:16.664237 kernel: CPU topo: Max. logical dies: 1 Mar 12 01:57:16.664252 kernel: CPU topo: Max. dies per package: 1 Mar 12 01:57:16.664263 kernel: CPU topo: Max. threads per core: 1 Mar 12 01:57:16.664276 kernel: CPU topo: Num. cores per package: 4 Mar 12 01:57:16.664287 kernel: CPU topo: Num. 
threads per package: 4 Mar 12 01:57:16.664336 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Mar 12 01:57:16.664347 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 12 01:57:16.664359 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 12 01:57:16.664371 kernel: kvm-guest: setup PV sched yield Mar 12 01:57:16.664383 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Mar 12 01:57:16.664396 kernel: Booting paravirtualized kernel on KVM Mar 12 01:57:16.664407 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 12 01:57:16.664449 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 12 01:57:16.664461 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Mar 12 01:57:16.664473 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Mar 12 01:57:16.664485 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 12 01:57:16.664496 kernel: kvm-guest: PV spinlocks enabled Mar 12 01:57:16.664508 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 12 01:57:16.664545 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=523ecccd411fd77fb4e35365aed2f15bc4e80e4a859c0ead1a8e49984aa5098c Mar 12 01:57:16.664583 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 12 01:57:16.664595 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 12 01:57:16.664607 kernel: Fallback order for Node 0: 0 Mar 12 01:57:16.664692 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 641450 Mar 12 01:57:16.664705 kernel: Policy zone: DMA32 Mar 12 01:57:16.664718 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 12 01:57:16.664730 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 12 01:57:16.673929 kernel: ftrace: allocating 40130 entries in 157 pages Mar 12 01:57:16.673946 kernel: ftrace: allocated 157 pages with 5 groups Mar 12 01:57:16.673961 kernel: Dynamic Preempt: voluntary Mar 12 01:57:16.673975 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 12 01:57:16.673989 kernel: rcu: RCU event tracing is enabled. Mar 12 01:57:16.674032 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 12 01:57:16.674045 kernel: Trampoline variant of Tasks RCU enabled. Mar 12 01:57:16.674099 kernel: Rude variant of Tasks RCU enabled. Mar 12 01:57:16.674113 kernel: Tracing variant of Tasks RCU enabled. Mar 12 01:57:16.674125 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 12 01:57:16.674136 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 12 01:57:16.674187 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 12 01:57:16.674206 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 12 01:57:16.674218 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 12 01:57:16.674231 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 12 01:57:16.674282 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 12 01:57:16.674295 kernel: Console: colour dummy device 80x25 Mar 12 01:57:16.674309 kernel: printk: legacy console [ttyS0] enabled Mar 12 01:57:16.674321 kernel: ACPI: Core revision 20240827 Mar 12 01:57:16.674334 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 12 01:57:16.674347 kernel: APIC: Switch to symmetric I/O mode setup Mar 12 01:57:16.674359 kernel: x2apic enabled Mar 12 01:57:16.674421 kernel: APIC: Switched APIC routing to: physical x2apic Mar 12 01:57:16.674435 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 12 01:57:16.674448 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 12 01:57:16.674462 kernel: kvm-guest: setup PV IPIs Mar 12 01:57:16.674474 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 12 01:57:16.674487 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Mar 12 01:57:16.674499 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Mar 12 01:57:16.674557 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 12 01:57:16.674571 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 12 01:57:16.674583 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 12 01:57:16.674595 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 12 01:57:16.674606 kernel: Spectre V2 : Mitigation: Retpolines Mar 12 01:57:16.698874 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 12 01:57:16.698907 kernel: Speculative Store Bypass: Vulnerable Mar 12 01:57:16.698922 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
Mar 12 01:57:16.699002 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 12 01:57:16.699052 kernel: active return thunk: srso_alias_return_thunk Mar 12 01:57:16.699068 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 12 01:57:16.699078 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 12 01:57:16.699091 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 12 01:57:16.699104 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 12 01:57:16.699116 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 12 01:57:16.699129 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 12 01:57:16.699143 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 12 01:57:16.699155 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 12 01:57:16.699214 kernel: Freeing SMP alternatives memory: 32K Mar 12 01:57:16.699225 kernel: pid_max: default: 32768 minimum: 301 Mar 12 01:57:16.699237 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Mar 12 01:57:16.699250 kernel: landlock: Up and running. Mar 12 01:57:16.699261 kernel: SELinux: Initializing. Mar 12 01:57:16.699275 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 12 01:57:16.699286 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 12 01:57:16.699298 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 12 01:57:16.699310 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 12 01:57:16.699322 kernel: signal: max sigframe size: 1776 Mar 12 01:57:16.699380 kernel: rcu: Hierarchical SRCU implementation. 
Mar 12 01:57:16.699396 kernel: rcu: Max phase no-delay instances is 400. Mar 12 01:57:16.699409 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Mar 12 01:57:16.699421 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 12 01:57:16.699433 kernel: smp: Bringing up secondary CPUs ... Mar 12 01:57:16.699444 kernel: smpboot: x86: Booting SMP configuration: Mar 12 01:57:16.699456 kernel: .... node #0, CPUs: #1 #2 #3 Mar 12 01:57:16.699517 kernel: smp: Brought up 1 node, 4 CPUs Mar 12 01:57:16.699529 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 12 01:57:16.699542 kernel: Memory: 2439048K/2565800K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 120812K reserved, 0K cma-reserved) Mar 12 01:57:16.699554 kernel: devtmpfs: initialized Mar 12 01:57:16.699566 kernel: x86/mm: Memory block size: 128MB Mar 12 01:57:16.699578 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 12 01:57:16.699591 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 12 01:57:16.699720 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Mar 12 01:57:16.699734 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 12 01:57:16.699746 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Mar 12 01:57:16.699758 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 12 01:57:16.699770 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 12 01:57:16.699782 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 12 01:57:16.699794 kernel: pinctrl core: initialized pinctrl subsystem Mar 12 01:57:16.699846 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 12 01:57:16.699858 kernel: audit: 
initializing netlink subsys (disabled) Mar 12 01:57:16.699871 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 12 01:57:16.699882 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 12 01:57:16.699894 kernel: audit: type=2000 audit(1773280495.880:1): state=initialized audit_enabled=0 res=1 Mar 12 01:57:16.699906 kernel: cpuidle: using governor menu Mar 12 01:57:16.699918 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 12 01:57:16.699968 kernel: dca service started, version 1.12.1 Mar 12 01:57:16.699981 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Mar 12 01:57:16.699994 kernel: PCI: Using configuration type 1 for base access Mar 12 01:57:16.700006 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 12 01:57:16.700018 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 12 01:57:16.700030 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 12 01:57:16.700043 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 12 01:57:16.700096 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 12 01:57:16.700110 kernel: ACPI: Added _OSI(Module Device) Mar 12 01:57:16.700122 kernel: ACPI: Added _OSI(Processor Device) Mar 12 01:57:16.700134 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 12 01:57:16.700146 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 12 01:57:16.700157 kernel: ACPI: Interpreter enabled Mar 12 01:57:16.700169 kernel: ACPI: PM: (supports S0 S3 S5) Mar 12 01:57:16.700222 kernel: ACPI: Using IOAPIC for interrupt routing Mar 12 01:57:16.700234 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 12 01:57:16.700249 kernel: PCI: Using E820 reservations for host bridge windows Mar 12 01:57:16.700263 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 12 01:57:16.700274 
kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 12 01:57:16.700943 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 12 01:57:16.701319 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 12 01:57:16.712272 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 12 01:57:16.712318 kernel: PCI host bridge to bus 0000:00 Mar 12 01:57:16.712778 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 12 01:57:16.713154 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 12 01:57:16.713445 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 12 01:57:16.713866 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Mar 12 01:57:16.714114 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Mar 12 01:57:16.714354 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Mar 12 01:57:16.714590 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 12 01:57:16.721413 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Mar 12 01:57:16.734422 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Mar 12 01:57:16.743396 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Mar 12 01:57:16.745037 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Mar 12 01:57:16.745365 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Mar 12 01:57:16.745772 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 12 01:57:16.752507 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 82031 usecs Mar 12 01:57:16.752962 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Mar 12 01:57:16.753272 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Mar 12 01:57:16.753590 
kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Mar 12 01:57:16.758880 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Mar 12 01:57:16.759219 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Mar 12 01:57:16.759506 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Mar 12 01:57:16.760059 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Mar 12 01:57:16.760346 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Mar 12 01:57:16.761884 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Mar 12 01:57:16.762189 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Mar 12 01:57:16.762507 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Mar 12 01:57:16.766379 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Mar 12 01:57:16.766816 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Mar 12 01:57:16.767129 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Mar 12 01:57:16.767430 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 12 01:57:16.767826 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 73242 usecs Mar 12 01:57:16.768151 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Mar 12 01:57:16.768517 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Mar 12 01:57:16.768916 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Mar 12 01:57:16.769319 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Mar 12 01:57:16.782829 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Mar 12 01:57:16.782877 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 12 01:57:16.782890 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 12 01:57:16.782946 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 
12 01:57:16.782959 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 12 01:57:16.782970 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 12 01:57:16.782982 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 12 01:57:16.782994 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 12 01:57:16.783006 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 12 01:57:16.783018 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 12 01:57:16.783055 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 12 01:57:16.783067 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 12 01:57:16.783079 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 12 01:57:16.783090 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 12 01:57:16.783102 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 12 01:57:16.783113 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 12 01:57:16.783125 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 12 01:57:16.783160 kernel: iommu: Default domain type: Translated Mar 12 01:57:16.783172 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 12 01:57:16.783183 kernel: efivars: Registered efivars operations Mar 12 01:57:16.783195 kernel: PCI: Using ACPI for IRQ routing Mar 12 01:57:16.783207 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 12 01:57:16.783219 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 12 01:57:16.783230 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Mar 12 01:57:16.783264 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Mar 12 01:57:16.783276 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Mar 12 01:57:16.783287 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Mar 12 01:57:16.783299 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Mar 12 01:57:16.783311 kernel: e820: reserve RAM 
buffer [mem 0x9ce91000-0x9fffffff] Mar 12 01:57:16.783322 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Mar 12 01:57:16.783767 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 12 01:57:16.784117 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 12 01:57:16.784410 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 12 01:57:16.784431 kernel: vgaarb: loaded Mar 12 01:57:16.784445 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 12 01:57:16.784458 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 12 01:57:16.784472 kernel: clocksource: Switched to clocksource kvm-clock Mar 12 01:57:16.784486 kernel: VFS: Disk quotas dquot_6.6.0 Mar 12 01:57:16.784553 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 12 01:57:16.784567 kernel: pnp: PnP ACPI init Mar 12 01:57:16.784976 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Mar 12 01:57:16.785000 kernel: hrtimer: interrupt took 6572878 ns Mar 12 01:57:16.785013 kernel: pnp: PnP ACPI: found 6 devices Mar 12 01:57:16.785027 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 12 01:57:16.785088 kernel: NET: Registered PF_INET protocol family Mar 12 01:57:16.785103 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 12 01:57:16.785320 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 12 01:57:16.785364 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 12 01:57:16.785378 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 12 01:57:16.785392 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 12 01:57:16.785406 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 12 01:57:16.785454 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) 
Mar 12 01:57:16.785469 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 01:57:16.785484 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 12 01:57:16.794728 kernel: NET: Registered PF_XDP protocol family
Mar 12 01:57:16.795147 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Mar 12 01:57:16.795450 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Mar 12 01:57:16.795902 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 12 01:57:16.796945 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 12 01:57:16.797197 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 12 01:57:16.797489 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 12 01:57:16.800257 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 12 01:57:16.808701 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 12 01:57:16.808759 kernel: PCI: CLS 0 bytes, default 64
Mar 12 01:57:16.808825 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 12 01:57:16.808841 kernel: Initialise system trusted keyrings
Mar 12 01:57:16.808854 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 12 01:57:16.808867 kernel: Key type asymmetric registered
Mar 12 01:57:16.808883 kernel: Asymmetric key parser 'x509' registered
Mar 12 01:57:16.808894 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 12 01:57:16.808909 kernel: io scheduler mq-deadline registered
Mar 12 01:57:16.808922 kernel: io scheduler kyber registered
Mar 12 01:57:16.808974 kernel: io scheduler bfq registered
Mar 12 01:57:16.808990 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 12 01:57:16.809003 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 12 01:57:16.809059 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 12 01:57:16.809107 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 12 01:57:16.809123 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 12 01:57:16.809137 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 12 01:57:16.809148 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 12 01:57:16.809200 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 12 01:57:16.809214 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 12 01:57:16.809562 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 12 01:57:16.809720 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 12 01:57:16.810021 kernel: rtc_cmos 00:04: registered as rtc0
Mar 12 01:57:16.810316 kernel: rtc_cmos 00:04: setting system clock to 2026-03-12T01:56:50 UTC (1773280610)
Mar 12 01:57:16.810603 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 12 01:57:16.810710 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 12 01:57:16.810726 kernel: efifb: probing for efifb
Mar 12 01:57:16.810780 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 12 01:57:16.810795 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 12 01:57:16.810808 kernel: efifb: scrolling: redraw
Mar 12 01:57:16.810822 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 12 01:57:16.810835 kernel: Console: switching to colour frame buffer device 160x50
Mar 12 01:57:16.810849 kernel: fb0: EFI VGA frame buffer device
Mar 12 01:57:16.810863 kernel: pstore: Using crash dump compression: deflate
Mar 12 01:57:16.810877 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 12 01:57:16.810934 kernel: NET: Registered PF_INET6 protocol family
Mar 12 01:57:16.810947 kernel: Segment Routing with IPv6
Mar 12 01:57:16.810961 kernel: In-situ OAM (IOAM) with IPv6
Mar 12 01:57:16.810973 kernel: NET: Registered PF_PACKET protocol family
Mar 12 01:57:16.810987 kernel: Key type dns_resolver registered
Mar 12 01:57:16.811000 kernel: IPI shorthand broadcast: enabled
Mar 12 01:57:16.811013 kernel: sched_clock: Marking stable (59609137881, 16877857602)->(122280429224, -45793433741)
Mar 12 01:57:16.811067 kernel: registered taskstats version 1
Mar 12 01:57:16.811081 kernel: Loading compiled-in X.509 certificates
Mar 12 01:57:16.811094 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 27c66ec625717b70411c2f435cd24369467ba361'
Mar 12 01:57:16.811108 kernel: Demotion targets for Node 0: null
Mar 12 01:57:16.811121 kernel: Key type .fscrypt registered
Mar 12 01:57:16.811136 kernel: Key type fscrypt-provisioning registered
Mar 12 01:57:16.811148 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 12 01:57:16.811169 kernel: ima: Allocated hash algorithm: sha1
Mar 12 01:57:16.811181 kernel: ima: No architecture policies found
Mar 12 01:57:16.811195 kernel: clk: Disabling unused clocks
Mar 12 01:57:16.811210 kernel: Freeing unused kernel image (initmem) memory: 15540K
Mar 12 01:57:16.811223 kernel: Write protecting the kernel read-only data: 47104k
Mar 12 01:57:16.811236 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K
Mar 12 01:57:16.811250 kernel: Run /init as init process
Mar 12 01:57:16.811268 kernel: with arguments:
Mar 12 01:57:16.811281 kernel: /init
Mar 12 01:57:16.811293 kernel: with environment:
Mar 12 01:57:16.811308 kernel: HOME=/
Mar 12 01:57:16.811319 kernel: TERM=linux
Mar 12 01:57:16.811333 kernel: SCSI subsystem initialized
Mar 12 01:57:16.811345 kernel: libata version 3.00 loaded.
Mar 12 01:57:16.822150 kernel: ahci 0000:00:1f.2: version 3.0
Mar 12 01:57:16.822204 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 12 01:57:16.822539 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Mar 12 01:57:16.823117 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Mar 12 01:57:16.823404 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 12 01:57:16.823952 kernel: scsi host0: ahci
Mar 12 01:57:16.829474 kernel: scsi host1: ahci
Mar 12 01:57:16.835396 kernel: scsi host2: ahci
Mar 12 01:57:16.835819 kernel: scsi host3: ahci
Mar 12 01:57:16.836141 kernel: scsi host4: ahci
Mar 12 01:57:16.836456 kernel: scsi host5: ahci
Mar 12 01:57:16.836481 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Mar 12 01:57:16.836560 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Mar 12 01:57:16.836575 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Mar 12 01:57:16.836590 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Mar 12 01:57:16.836604 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Mar 12 01:57:16.836734 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Mar 12 01:57:16.836750 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 12 01:57:16.836771 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 12 01:57:16.836785 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 12 01:57:16.836798 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 12 01:57:16.836812 kernel: ata3.00: LPM support broken, forcing max_power
Mar 12 01:57:16.836866 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 12 01:57:16.836881 kernel: ata3.00: applying bridge limits
Mar 12 01:57:16.836896 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 12 01:57:16.836910 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 12 01:57:16.850140 kernel: ata3.00: LPM support broken, forcing max_power
Mar 12 01:57:16.850158 kernel: ata3.00: configured for UDMA/100
Mar 12 01:57:16.850756 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 12 01:57:16.851085 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 12 01:57:16.851392 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Mar 12 01:57:16.851422 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 12 01:57:16.851438 kernel: GPT:16515071 != 27000831
Mar 12 01:57:16.851453 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 12 01:57:16.851467 kernel: GPT:16515071 != 27000831
Mar 12 01:57:16.851481 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 12 01:57:16.851495 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:57:16.851941 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 12 01:57:16.851974 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 12 01:57:16.852293 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 12 01:57:16.852318 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 12 01:57:16.852331 kernel: device-mapper: uevent: version 1.0.3
Mar 12 01:57:16.852345 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 12 01:57:16.852359 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Mar 12 01:57:16.852373 kernel: raid6: avx2x4 gen() 6351 MB/s
Mar 12 01:57:16.852448 kernel: raid6: avx2x2 gen() 13792 MB/s
Mar 12 01:57:16.852463 kernel: raid6: avx2x1 gen() 7223 MB/s
Mar 12 01:57:16.852475 kernel: raid6: using algorithm avx2x2 gen() 13792 MB/s
Mar 12 01:57:16.852488 kernel: raid6: .... xor() 7607 MB/s, rmw enabled
Mar 12 01:57:16.852503 kernel: raid6: using avx2x2 recovery algorithm
Mar 12 01:57:16.852515 kernel: xor: automatically using best checksumming function avx
Mar 12 01:57:16.852528 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 12 01:57:16.852543 kernel: BTRFS: device fsid ad1c13c8-099c-49d4-a7a5-b9c588697917 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (182)
Mar 12 01:57:16.852698 kernel: BTRFS info (device dm-0): first mount of filesystem ad1c13c8-099c-49d4-a7a5-b9c588697917
Mar 12 01:57:16.852720 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:57:16.852735 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 12 01:57:16.852751 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 12 01:57:16.852766 kernel: loop: module loaded
Mar 12 01:57:16.852780 kernel: loop0: detected capacity change from 0 to 100544
Mar 12 01:57:16.852794 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 12 01:57:16.852902 systemd[1]: Successfully made /usr/ read-only.
Mar 12 01:57:16.852922 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 12 01:57:16.852938 systemd[1]: Detected virtualization kvm.
Mar 12 01:57:16.852950 systemd[1]: Detected architecture x86-64.
Mar 12 01:57:16.852965 systemd[1]: Running in initrd.
Mar 12 01:57:16.853019 systemd[1]: No hostname configured, using default hostname.
Mar 12 01:57:16.853036 systemd[1]: Hostname set to .
Mar 12 01:57:16.853051 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Mar 12 01:57:16.853066 systemd[1]: Queued start job for default target initrd.target.
Mar 12 01:57:16.853081 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Mar 12 01:57:16.853096 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 01:57:16.853111 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 01:57:16.853181 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 12 01:57:16.853197 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 01:57:16.853214 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 12 01:57:16.853227 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 12 01:57:16.853242 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 01:57:16.853262 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 01:57:16.853277 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 12 01:57:16.853293 systemd[1]: Reached target paths.target - Path Units.
Mar 12 01:57:16.853305 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 01:57:16.853318 systemd[1]: Reached target swap.target - Swaps.
Mar 12 01:57:16.853331 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 01:57:16.853343 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 01:57:16.853363 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 01:57:16.853375 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Mar 12 01:57:16.853390 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 12 01:57:16.853404 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 12 01:57:16.853419 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 01:57:16.853476 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 01:57:16.853493 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 01:57:16.853512 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 01:57:16.853526 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 12 01:57:16.853541 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 12 01:57:16.853554 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 01:57:16.853569 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 12 01:57:16.853583 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 12 01:57:16.853601 systemd[1]: Starting systemd-fsck-usr.service...
Mar 12 01:57:16.853613 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 01:57:16.870783 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 01:57:16.870803 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:57:16.871006 systemd-journald[321]: Collecting audit messages is enabled.
Mar 12 01:57:16.871046 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 12 01:57:16.871064 kernel: audit: type=1130 audit(1773280636.704:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:16.871085 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 01:57:16.871099 systemd[1]: Finished systemd-fsck-usr.service.
Mar 12 01:57:16.871115 kernel: audit: type=1130 audit(1773280636.804:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:16.871132 systemd-journald[321]: Journal started
Mar 12 01:57:16.871160 systemd-journald[321]: Runtime Journal (/run/log/journal/07a4dfdf54f04a6593ab42ed4a9c545f) is 6M, max 48M, 42M free.
Mar 12 01:57:16.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:16.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:16.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:16.910169 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 01:57:16.910274 kernel: audit: type=1130 audit(1773280636.895:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:16.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:17.017836 kernel: audit: type=1130 audit(1773280636.959:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:17.016010 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 01:57:17.053164 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 01:57:17.590493 systemd-tmpfiles[333]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 12 01:57:17.682600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:57:17.868762 kernel: audit: type=1130 audit(1773280637.770:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:17.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:17.897444 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 01:57:18.038882 kernel: audit: type=1130 audit(1773280637.938:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:17.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:18.043022 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 01:57:18.174974 kernel: audit: type=1130 audit(1773280638.075:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:18.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:18.214986 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 01:57:18.374381 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 01:57:19.058282 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 01:57:19.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:19.149547 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:57:19.378445 kernel: audit: type=1130 audit(1773280639.111:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:19.378529 kernel: audit: type=1130 audit(1773280639.253:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:19.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:19.356554 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 12 01:57:19.539913 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 12 01:57:19.634947 kernel: Bridge firewalling registered
Mar 12 01:57:19.646945 systemd-modules-load[325]: Inserted module 'br_netfilter'
Mar 12 01:57:19.674359 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 01:57:19.705880 dracut-cmdline[353]: dracut-109
Mar 12 01:57:19.705880 dracut-cmdline[353]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=523ecccd411fd77fb4e35365aed2f15bc4e80e4a859c0ead1a8e49984aa5098c
Mar 12 01:57:19.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:19.889031 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 01:57:19.986022 kernel: audit: type=1130 audit(1773280639.874:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:20.171883 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 01:57:20.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:20.211000 audit: BPF prog-id=6 op=LOAD
Mar 12 01:57:20.216472 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 01:57:20.667614 systemd-resolved[412]: Positive Trust Anchors:
Mar 12 01:57:20.670797 systemd-resolved[412]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 01:57:20.675802 systemd-resolved[412]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Mar 12 01:57:20.675858 systemd-resolved[412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 01:57:20.804459 kernel: Loading iSCSI transport class v2.0-870.
Mar 12 01:57:20.854232 systemd-resolved[412]: Defaulting to hostname 'linux'.
Mar 12 01:57:20.871824 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 01:57:20.932564 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 01:57:20.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:21.197009 kernel: iscsi: registered transport (tcp)
Mar 12 01:57:21.418237 kernel: iscsi: registered transport (qla4xxx)
Mar 12 01:57:21.418338 kernel: QLogic iSCSI HBA Driver
Mar 12 01:57:21.963216 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 12 01:57:22.205759 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 01:57:22.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:22.295796 kernel: kauditd_printk_skb: 3 callbacks suppressed
Mar 12 01:57:22.295884 kernel: audit: type=1130 audit(1773280642.272:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:22.297584 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 12 01:57:23.515991 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 12 01:57:23.608447 kernel: audit: type=1130 audit(1773280643.539:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:23.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:23.569818 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 12 01:57:23.770188 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 12 01:57:24.500574 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 01:57:24.802138 kernel: audit: type=1130 audit(1773280644.561:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:24.802180 kernel: audit: type=1334 audit(1773280644.561:18): prog-id=7 op=LOAD
Mar 12 01:57:24.802202 kernel: audit: type=1334 audit(1773280644.561:19): prog-id=8 op=LOAD
Mar 12 01:57:24.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:24.561000 audit: BPF prog-id=7 op=LOAD
Mar 12 01:57:24.561000 audit: BPF prog-id=8 op=LOAD
Mar 12 01:57:24.587023 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 01:57:25.201204 systemd-udevd[585]: Using default interface naming scheme 'v257'.
Mar 12 01:57:25.327020 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 01:57:25.369943 kernel: audit: type=1130 audit(1773280645.339:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:25.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:25.365116 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 12 01:57:25.479521 dracut-pre-trigger[643]: rd.md=0: removing MD RAID activation
Mar 12 01:57:25.665935 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 01:57:25.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:25.691809 kernel: audit: type=1130 audit(1773280645.671:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:25.685319 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 01:57:25.739904 kernel: audit: type=1334 audit(1773280645.673:22): prog-id=9 op=LOAD
Mar 12 01:57:25.673000 audit: BPF prog-id=9 op=LOAD
Mar 12 01:57:25.824338 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 01:57:26.058082 kernel: audit: type=1130 audit(1773280645.836:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:25.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:25.869845 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 01:57:26.603341 systemd-networkd[722]: lo: Link UP
Mar 12 01:57:26.604368 systemd-networkd[722]: lo: Gained carrier
Mar 12 01:57:26.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:26.613957 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 01:57:26.677963 kernel: audit: type=1130 audit(1773280646.633:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:26.634300 systemd[1]: Reached target network.target - Network.
Mar 12 01:57:26.671468 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 01:57:26.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:26.711120 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 12 01:57:27.225415 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 12 01:57:27.345236 kernel: cryptd: max_cpu_qlen set to 1000
Mar 12 01:57:27.443338 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 12 01:57:27.523763 kernel: AES CTR mode by8 optimization enabled
Mar 12 01:57:27.522247 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 12 01:57:27.603803 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 12 01:57:27.656227 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 12 01:57:27.666045 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 01:57:27.668182 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:57:27.726859 kernel: kauditd_printk_skb: 1 callbacks suppressed
Mar 12 01:57:27.726956 kernel: audit: type=1131 audit(1773280647.696:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:27.726983 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Mar 12 01:57:27.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:27.697530 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:57:27.764083 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:57:27.805248 disk-uuid[824]: Primary Header is updated.
Mar 12 01:57:27.805248 disk-uuid[824]: Secondary Entries is updated.
Mar 12 01:57:27.805248 disk-uuid[824]: Secondary Header is updated.
Mar 12 01:57:27.831897 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:57:27.832150 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:57:27.912022 kernel: audit: type=1130 audit(1773280647.879:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:27.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:27.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:27.920237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:57:27.994404 kernel: audit: type=1131 audit(1773280647.879:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:28.177231 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Mar 12 01:57:28.177247 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 01:57:28.180583 systemd-networkd[722]: eth0: Link UP Mar 12 01:57:28.184847 systemd-networkd[722]: eth0: Gained carrier Mar 12 01:57:28.184870 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Mar 12 01:57:28.255220 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 12 01:57:28.309752 kernel: audit: type=1130 audit(1773280648.269:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:28.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:28.314005 systemd-networkd[722]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 12 01:57:28.714387 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 12 01:57:28.853157 kernel: audit: type=1130 audit(1773280648.727:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:28.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:28.855488 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 01:57:28.898952 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:57:28.952139 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 01:57:29.076982 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 12 01:57:29.182217 disk-uuid[826]: Warning: The kernel is still using the old partition table. Mar 12 01:57:29.182217 disk-uuid[826]: The new table will be used at the next reboot or after you Mar 12 01:57:29.182217 disk-uuid[826]: run partprobe(8) or kpartx(8) Mar 12 01:57:29.182217 disk-uuid[826]: The operation has completed successfully. Mar 12 01:57:29.275471 systemd[1]: disk-uuid.service: Deactivated successfully. 
Mar 12 01:57:29.278774 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 12 01:57:29.314465 systemd-networkd[722]: eth0: Gained IPv6LL Mar 12 01:57:29.434311 kernel: audit: type=1130 audit(1773280649.324:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:29.434355 kernel: audit: type=1131 audit(1773280649.324:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:29.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:29.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:29.346284 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 12 01:57:29.460146 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 12 01:57:29.590224 kernel: audit: type=1130 audit(1773280649.494:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:29.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 12 01:57:29.750084 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863) Mar 12 01:57:29.780247 kernel: BTRFS info (device vda6): first mount of filesystem d64519df-9ce6-4cf6-bf8e-5fb2a1565a05 Mar 12 01:57:29.780316 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:57:29.810374 kernel: BTRFS info (device vda6): turning on async discard Mar 12 01:57:29.810458 kernel: BTRFS info (device vda6): enabling free space tree Mar 12 01:57:29.882103 kernel: BTRFS info (device vda6): last unmount of filesystem d64519df-9ce6-4cf6-bf8e-5fb2a1565a05 Mar 12 01:57:29.905434 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 12 01:57:29.968438 kernel: audit: type=1130 audit(1773280649.911:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:29.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:29.963854 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 12 01:57:31.660522 ignition[882]: Ignition 2.24.0 Mar 12 01:57:31.660570 ignition[882]: Stage: fetch-offline Mar 12 01:57:31.660760 ignition[882]: no configs at "/usr/lib/ignition/base.d" Mar 12 01:57:31.660793 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:57:31.661177 ignition[882]: parsed url from cmdline: "" Mar 12 01:57:31.661184 ignition[882]: no config URL provided Mar 12 01:57:31.661193 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Mar 12 01:57:31.661210 ignition[882]: no config at "/usr/lib/ignition/user.ign" Mar 12 01:57:31.661427 ignition[882]: op(1): [started] loading QEMU firmware config module Mar 12 01:57:31.661435 ignition[882]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 12 01:57:31.732560 ignition[882]: op(1): [finished] loading QEMU firmware config module Mar 12 01:57:31.871989 ignition[882]: parsing config with SHA512: 4d45f3ce8c60066eb7015949a05b4de61e32d684517fa12851869fb57c7bfcb3a19cd0d6892c14d01da157cac67a217efbe32a493a36cf8dedf3dcf5ec8c7e27 Mar 12 01:57:31.907149 unknown[882]: fetched base config from "system" Mar 12 01:57:31.907325 unknown[882]: fetched user config from "qemu" Mar 12 01:57:31.913278 ignition[882]: fetch-offline: fetch-offline passed Mar 12 01:57:31.913480 ignition[882]: Ignition finished successfully Mar 12 01:57:31.926484 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 01:57:31.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:31.972289 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Mar 12 01:57:32.048476 kernel: audit: type=1130 audit(1773280651.966:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:31.984130 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 12 01:57:32.580542 ignition[892]: Ignition 2.24.0 Mar 12 01:57:32.580674 ignition[892]: Stage: kargs Mar 12 01:57:32.581290 ignition[892]: no configs at "/usr/lib/ignition/base.d" Mar 12 01:57:32.581307 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:57:32.588012 ignition[892]: kargs: kargs passed Mar 12 01:57:32.588095 ignition[892]: Ignition finished successfully Mar 12 01:57:32.641012 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 12 01:57:32.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:32.680066 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 12 01:57:32.889963 ignition[900]: Ignition 2.24.0 Mar 12 01:57:32.889981 ignition[900]: Stage: disks Mar 12 01:57:32.890181 ignition[900]: no configs at "/usr/lib/ignition/base.d" Mar 12 01:57:32.890198 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:57:32.896270 ignition[900]: disks: disks passed Mar 12 01:57:32.955600 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 12 01:57:33.180582 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 12 01:57:33.181060 kernel: audit: type=1130 audit(1773280653.111:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 12 01:57:33.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:32.896363 ignition[900]: Ignition finished successfully Mar 12 01:57:33.191057 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 12 01:57:33.204977 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 12 01:57:33.205080 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 01:57:33.205195 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 01:57:33.205251 systemd[1]: Reached target basic.target - Basic System. Mar 12 01:57:33.435125 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 12 01:57:33.831876 systemd-fsck[910]: ROOT: clean, 15/456736 files, 38230/456704 blocks Mar 12 01:57:33.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:33.848300 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 12 01:57:33.950982 kernel: audit: type=1130 audit(1773280653.902:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:33.914931 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 12 01:57:34.834446 kernel: EXT4-fs (vda9): mounted filesystem c5c45003-d11d-424b-9b69-4546cce1fe00 r/w with ordered data mode. Quota mode: none. Mar 12 01:57:34.837568 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 12 01:57:34.850379 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Mar 12 01:57:34.896354 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 01:57:34.914579 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 12 01:57:34.924002 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 12 01:57:34.924081 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 12 01:57:34.924129 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 01:57:34.982840 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 12 01:57:34.993795 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 12 01:57:35.039201 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (919) Mar 12 01:57:35.050513 kernel: BTRFS info (device vda6): first mount of filesystem d64519df-9ce6-4cf6-bf8e-5fb2a1565a05 Mar 12 01:57:35.050621 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:57:35.116020 kernel: BTRFS info (device vda6): turning on async discard Mar 12 01:57:35.116114 kernel: BTRFS info (device vda6): enabling free space tree Mar 12 01:57:35.126024 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 12 01:57:36.196375 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 12 01:57:36.253585 kernel: audit: type=1130 audit(1773280656.204:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:36.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 12 01:57:36.221210 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 12 01:57:36.286110 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 12 01:57:36.364545 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 12 01:57:36.384535 kernel: BTRFS info (device vda6): last unmount of filesystem d64519df-9ce6-4cf6-bf8e-5fb2a1565a05 Mar 12 01:57:36.535564 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 12 01:57:36.549547 ignition[1017]: INFO : Ignition 2.24.0 Mar 12 01:57:36.549547 ignition[1017]: INFO : Stage: mount Mar 12 01:57:36.579052 kernel: audit: type=1130 audit(1773280656.548:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:36.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:36.579177 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:57:36.579177 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:57:36.579177 ignition[1017]: INFO : mount: mount passed Mar 12 01:57:36.579177 ignition[1017]: INFO : Ignition finished successfully Mar 12 01:57:36.626314 kernel: audit: type=1130 audit(1773280656.589:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:36.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:36.581114 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Mar 12 01:57:36.614114 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 12 01:57:36.715964 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 01:57:36.793795 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1028) Mar 12 01:57:36.811839 kernel: BTRFS info (device vda6): first mount of filesystem d64519df-9ce6-4cf6-bf8e-5fb2a1565a05 Mar 12 01:57:36.811944 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:57:36.853974 kernel: BTRFS info (device vda6): turning on async discard Mar 12 01:57:36.854066 kernel: BTRFS info (device vda6): enabling free space tree Mar 12 01:57:36.860569 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 12 01:57:37.036909 ignition[1044]: INFO : Ignition 2.24.0 Mar 12 01:57:37.036909 ignition[1044]: INFO : Stage: files Mar 12 01:57:37.036909 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:57:37.036909 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:57:37.069444 ignition[1044]: DEBUG : files: compiled without relabeling support, skipping Mar 12 01:57:37.069444 ignition[1044]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 12 01:57:37.069444 ignition[1044]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 12 01:57:37.111596 ignition[1044]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 12 01:57:37.111596 ignition[1044]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 12 01:57:37.150132 ignition[1044]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 12 01:57:37.149348 unknown[1044]: wrote ssh authorized keys file for user: core Mar 12 01:57:37.175533 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" 
Mar 12 01:57:37.175533 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 12 01:57:37.460300 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 12 01:57:38.129879 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 12 01:57:38.129879 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 12 01:57:38.129879 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 12 01:57:38.129879 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 12 01:57:38.129879 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 12 01:57:38.129879 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 01:57:38.270408 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 01:57:38.270408 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 01:57:38.270408 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 01:57:38.270408 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 01:57:38.270408 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 
01:57:38.270408 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 12 01:57:38.270408 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 12 01:57:38.270408 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 12 01:57:38.270408 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 12 01:57:38.753193 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 12 01:57:40.940385 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 12 01:57:40.940385 ignition[1044]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 12 01:57:40.992829 ignition[1044]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 01:57:41.023837 ignition[1044]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 01:57:41.023837 ignition[1044]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 12 01:57:41.023837 ignition[1044]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 12 01:57:41.023837 ignition[1044]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 12 01:57:41.023837 ignition[1044]: INFO : 
files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 12 01:57:41.023837 ignition[1044]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 12 01:57:41.023837 ignition[1044]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 12 01:57:41.228598 ignition[1044]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 12 01:57:41.265277 ignition[1044]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 12 01:57:41.265277 ignition[1044]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 12 01:57:41.265277 ignition[1044]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 12 01:57:41.265277 ignition[1044]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 12 01:57:41.265277 ignition[1044]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 12 01:57:41.265277 ignition[1044]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 12 01:57:41.265277 ignition[1044]: INFO : files: files passed Mar 12 01:57:41.265277 ignition[1044]: INFO : Ignition finished successfully Mar 12 01:57:41.554286 kernel: audit: type=1130 audit(1773280661.273:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:41.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 12 01:57:41.255884 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 12 01:57:41.285552 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 12 01:57:41.423591 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 12 01:57:41.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:41.551076 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 12 01:57:41.704954 kernel: audit: type=1130 audit(1773280661.632:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:41.705000 kernel: audit: type=1131 audit(1773280661.632:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:41.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:41.598992 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 12 01:57:41.714834 initrd-setup-root-after-ignition[1076]: grep: /sysroot/oem/oem-release: No such file or directory Mar 12 01:57:41.731422 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:57:41.731422 initrd-setup-root-after-ignition[1078]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:57:41.751266 initrd-setup-root-after-ignition[1082]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:57:41.830219 kernel: audit: type=1130 audit(1773280661.771:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:41.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:41.747533 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 01:57:41.782427 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 12 01:57:41.851117 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 12 01:57:42.078060 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 12 01:57:42.078338 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 12 01:57:42.217262 kernel: audit: type=1130 audit(1773280662.105:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 12 01:57:42.217469 kernel: audit: type=1131 audit(1773280662.108:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:42.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:42.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:42.113960 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 12 01:57:42.254490 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 12 01:57:42.281938 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 12 01:57:42.348277 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 12 01:57:42.734881 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 01:57:42.862988 kernel: audit: type=1130 audit(1773280662.776:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:42.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:42.954447 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 12 01:57:44.264563 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. 
Mar 12 01:57:44.615499 kernel: audit: type=1131 audit(1773280664.302:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:44.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:44.265148 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:57:44.302118 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:57:44.302358 systemd[1]: Stopped target timers.target - Timer Units. Mar 12 01:57:44.302541 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 12 01:57:44.302892 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 01:57:44.303420 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 12 01:57:44.303587 systemd[1]: Stopped target basic.target - Basic System. Mar 12 01:57:44.993856 kernel: audit: type=1131 audit(1773280664.936:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:44.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:44.303902 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 12 01:57:44.304064 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 01:57:44.304209 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 12 01:57:44.304344 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Mar 12 01:57:44.304474 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 12 01:57:44.304604 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 01:57:44.651595 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 12 01:57:44.677940 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 12 01:57:44.705383 systemd[1]: Stopped target swap.target - Swaps. Mar 12 01:57:44.907777 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 12 01:57:44.908065 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 12 01:57:45.180184 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 12 01:57:45.210513 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:57:45.248473 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 12 01:57:45.258391 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:57:45.358030 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 12 01:57:45.460307 kernel: audit: type=1131 audit(1773280665.392:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:45.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:45.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:45.358425 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 12 01:57:45.393243 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Mar 12 01:57:45.393534 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 01:57:45.451045 systemd[1]: Stopped target paths.target - Path Units.
Mar 12 01:57:45.483157 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 12 01:57:45.506883 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 01:57:45.550106 systemd[1]: Stopped target slices.target - Slice Units.
Mar 12 01:57:45.556932 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 12 01:57:45.632330 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 12 01:57:45.637344 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 01:57:45.659923 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 12 01:57:45.660150 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 01:57:45.666486 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Mar 12 01:57:45.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:45.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:45.666721 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Mar 12 01:57:45.696301 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 12 01:57:45.696535 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 01:57:45.704537 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 12 01:57:45.704909 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 12 01:57:45.709287 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 12 01:57:45.826101 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 12 01:57:45.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:45.826504 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 01:57:45.860494 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 12 01:57:45.934116 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 12 01:57:45.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:45.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:45.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:45.934548 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 01:57:46.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:45.954490 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 12 01:57:45.954930 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 01:57:45.967334 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 12 01:57:45.967612 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 01:57:46.007523 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 12 01:57:46.007888 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 12 01:57:46.056360 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 12 01:57:46.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.190505 ignition[1102]: INFO : Ignition 2.24.0
Mar 12 01:57:46.190505 ignition[1102]: INFO : Stage: umount
Mar 12 01:57:46.190505 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 01:57:46.190505 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:57:46.190505 ignition[1102]: INFO : umount: umount passed
Mar 12 01:57:46.190505 ignition[1102]: INFO : Ignition finished successfully
Mar 12 01:57:46.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.153194 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 12 01:57:46.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.153380 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 12 01:57:46.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.203183 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 12 01:57:46.395581 kernel: kauditd_printk_skb: 13 callbacks suppressed
Mar 12 01:57:46.395718 kernel: audit: type=1131 audit(1773280666.282:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.395742 kernel: audit: type=1131 audit(1773280666.324:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.395763 kernel: audit: type=1131 audit(1773280666.336:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.209349 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 12 01:57:46.224043 systemd[1]: Stopped target network.target - Network.
Mar 12 01:57:46.241194 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 12 01:57:46.241365 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 12 01:57:46.248606 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 12 01:57:46.249323 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 12 01:57:46.258265 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 12 01:57:46.659445 kernel: audit: type=1131 audit(1773280666.585:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.258378 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 12 01:57:46.729500 kernel: audit: type=1131 audit(1773280666.673:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.729552 kernel: audit: type=1334 audit(1773280666.691:70): prog-id=9 op=UNLOAD
Mar 12 01:57:46.729574 kernel: audit: type=1334 audit(1773280666.696:71): prog-id=6 op=UNLOAD
Mar 12 01:57:46.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.691000 audit: BPF prog-id=9 op=UNLOAD
Mar 12 01:57:46.696000 audit: BPF prog-id=6 op=UNLOAD
Mar 12 01:57:46.282772 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 12 01:57:46.282951 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 12 01:57:46.325198 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 12 01:57:46.325329 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 12 01:57:46.952502 kernel: audit: type=1131 audit(1773280666.862:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.952555 kernel: audit: type=1131 audit(1773280666.945:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.339906 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 12 01:57:47.133004 kernel: audit: type=1131 audit(1773280667.032:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:47.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.410977 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 12 01:57:46.560383 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 12 01:57:47.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.560751 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 12 01:57:46.627330 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 12 01:57:46.627541 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 12 01:57:46.696886 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 12 01:57:47.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.740222 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 12 01:57:46.740331 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 01:57:47.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.752301 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 12 01:57:46.800471 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 12 01:57:46.801515 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 01:57:46.869452 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 12 01:57:47.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:46.869750 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 12 01:57:46.946466 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 12 01:57:46.946587 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 12 01:57:47.034265 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 01:57:47.168404 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 12 01:57:47.168909 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 01:57:47.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:47.251121 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 12 01:57:47.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:47.251248 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 12 01:57:47.280154 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 12 01:57:47.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:47.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:47.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:47.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:47.280236 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 01:57:47.280341 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 12 01:57:47.280439 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 01:57:47.281717 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 12 01:57:47.281804 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 12 01:57:47.381339 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 01:57:47.382490 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:57:47.479615 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 12 01:57:47.507786 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 12 01:57:47.508045 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 01:57:47.508854 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 12 01:57:47.508951 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 01:57:47.583470 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 01:57:47.583592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:57:47.643955 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 12 01:57:47.644165 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 12 01:57:47.646437 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 12 01:57:47.646577 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 12 01:57:47.648000 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 12 01:57:47.693695 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 12 01:57:48.066592 systemd-journald[321]: Received SIGTERM from PID 1 (systemd).
Mar 12 01:57:47.898532 systemd[1]: Switching root.
Mar 12 01:57:48.070938 systemd-journald[321]: Journal stopped
Mar 12 01:57:56.160454 kernel: SELinux: policy capability network_peer_controls=1
Mar 12 01:57:56.160742 kernel: SELinux: policy capability open_perms=1
Mar 12 01:57:56.160839 kernel: SELinux: policy capability extended_socket_class=1
Mar 12 01:57:56.161026 kernel: SELinux: policy capability always_check_network=0
Mar 12 01:57:56.161103 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 12 01:57:56.161131 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 12 01:57:56.161150 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 12 01:57:56.161175 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 12 01:57:56.161244 kernel: SELinux: policy capability userspace_initial_context=0
Mar 12 01:57:56.161268 systemd[1]: Successfully loaded SELinux policy in 198.066ms.
Mar 12 01:57:56.161343 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 37.229ms.
Mar 12 01:57:56.161366 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 12 01:57:56.161385 systemd[1]: Detected virtualization kvm.
Mar 12 01:57:56.161403 systemd[1]: Detected architecture x86-64.
Mar 12 01:57:56.161422 systemd[1]: Detected first boot.
Mar 12 01:57:56.161491 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Mar 12 01:57:56.161512 zram_generator::config[1148]: No configuration found.
Mar 12 01:57:56.161578 kernel: Guest personality initialized and is inactive
Mar 12 01:57:56.161598 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 12 01:57:56.161616 kernel: Initialized host personality
Mar 12 01:57:56.161719 kernel: NET: Registered PF_VSOCK protocol family
Mar 12 01:57:56.161740 systemd[1]: Populated /etc with preset unit settings.
Mar 12 01:57:56.161811 kernel: kauditd_printk_skb: 16 callbacks suppressed
Mar 12 01:57:56.161832 kernel: audit: type=1334 audit(1773280672.745:91): prog-id=12 op=LOAD
Mar 12 01:57:56.161857 kernel: audit: type=1334 audit(1773280672.746:92): prog-id=3 op=UNLOAD
Mar 12 01:57:56.163203 kernel: audit: type=1334 audit(1773280672.746:93): prog-id=13 op=LOAD
Mar 12 01:57:56.163230 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 12 01:57:56.163258 kernel: audit: type=1334 audit(1773280672.746:94): prog-id=14 op=LOAD
Mar 12 01:57:56.163278 kernel: audit: type=1334 audit(1773280672.746:95): prog-id=4 op=UNLOAD
Mar 12 01:57:56.163356 kernel: audit: type=1334 audit(1773280672.746:96): prog-id=5 op=UNLOAD
Mar 12 01:57:56.163378 kernel: audit: type=1131 audit(1773280672.757:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:56.163397 kernel: audit: type=1334 audit(1773280672.843:98): prog-id=12 op=UNLOAD
Mar 12 01:57:56.163416 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 12 01:57:56.163436 kernel: audit: type=1130 audit(1773280672.960:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:56.163454 kernel: audit: type=1131 audit(1773280672.960:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:56.163531 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 12 01:57:56.163563 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 12 01:57:56.163583 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 12 01:57:56.163603 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 12 01:57:56.163717 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 12 01:57:56.163741 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 12 01:57:56.163813 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 12 01:57:56.163835 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 12 01:57:56.163854 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 12 01:57:56.163926 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 01:57:56.163947 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 01:57:56.163968 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 12 01:57:56.164039 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 12 01:57:56.164060 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 12 01:57:56.164080 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 01:57:56.164099 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 12 01:57:56.164118 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 01:57:56.164136 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 01:57:56.164156 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 12 01:57:56.164176 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 12 01:57:56.164245 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 12 01:57:56.164268 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 12 01:57:56.164288 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 01:57:56.164307 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 01:57:56.164326 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Mar 12 01:57:56.164345 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 01:57:56.164364 systemd[1]: Reached target swap.target - Swaps.
Mar 12 01:57:56.164433 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 12 01:57:56.164455 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 12 01:57:56.164474 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 12 01:57:56.164533 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Mar 12 01:57:56.164554 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Mar 12 01:57:56.164573 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 01:57:56.164592 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Mar 12 01:57:56.164725 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Mar 12 01:57:56.164747 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 01:57:56.164766 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 01:57:56.164785 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 12 01:57:56.164804 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 12 01:57:56.164823 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 12 01:57:56.164841 systemd[1]: Mounting media.mount - External Media Directory...
Mar 12 01:57:56.169179 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 01:57:56.169207 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 12 01:57:56.169228 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 12 01:57:56.169247 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 12 01:57:56.169268 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 12 01:57:56.169288 systemd[1]: Reached target machines.target - Containers.
Mar 12 01:57:56.169308 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 12 01:57:56.169410 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 01:57:56.169433 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 01:57:56.169453 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 12 01:57:56.169471 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 01:57:56.169490 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 01:57:56.169509 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 01:57:56.169528 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 12 01:57:56.169601 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 01:57:56.169957 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 12 01:57:56.169989 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 12 01:57:56.170009 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 12 01:57:56.170028 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 12 01:57:56.170047 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 12 01:57:56.170124 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 01:57:56.170146 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 01:57:56.170165 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 01:57:56.170184 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 12 01:57:56.170254 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 12 01:57:56.170275 kernel: fuse: init (API version 7.41)
Mar 12 01:57:56.170295 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 12 01:57:56.170314 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 01:57:56.170375 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 01:57:56.170395 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 12 01:57:56.170415 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 12 01:57:56.170482 systemd[1]: Mounted media.mount - External Media Directory.
Mar 12 01:57:56.170503 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 12 01:57:56.170522 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 12 01:57:56.170541 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 12 01:57:56.170560 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 12 01:57:56.170583 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 01:57:56.170727 kernel: ACPI: bus type drm_connector registered
Mar 12 01:57:56.170799 systemd-journald[1234]: Collecting audit messages is enabled.
Mar 12 01:57:56.170836 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 12 01:57:56.173994 systemd-journald[1234]: Journal started
Mar 12 01:57:56.174087 systemd-journald[1234]: Runtime Journal (/run/log/journal/07a4dfdf54f04a6593ab42ed4a9c545f) is 6M, max 48M, 42M free.
Mar 12 01:57:54.026000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Mar 12 01:57:55.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:55.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:55.455000 audit: BPF prog-id=14 op=UNLOAD
Mar 12 01:57:55.455000 audit: BPF prog-id=13 op=UNLOAD
Mar 12 01:57:55.464000 audit: BPF prog-id=15 op=LOAD
Mar 12 01:57:55.476000 audit: BPF prog-id=16 op=LOAD
Mar 12 01:57:55.507000 audit: BPF prog-id=17 op=LOAD
Mar 12 01:57:56.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:56.151000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Mar 12 01:57:56.151000 audit[1234]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff02ea5e50 a2=4000 a3=0 items=0 ppid=1 pid=1234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 12 01:57:56.151000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Mar 12 01:57:56.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 12 01:57:52.647272 systemd[1]: Queued start job for default target multi-user.target.
Mar 12 01:57:52.750502 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 12 01:57:52.757459 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 12 01:57:52.758400 systemd[1]: systemd-journald.service: Consumed 2.540s CPU time.
Mar 12 01:57:56.196718 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 12 01:57:56.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:56.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:56.267028 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 01:57:56.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:56.285124 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:57:56.286522 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:57:56.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:56.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:56.311812 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 01:57:56.322806 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 01:57:56.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 12 01:57:56.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:56.354180 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:57:56.354590 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:57:56.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:56.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:56.409743 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 12 01:57:56.439383 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 12 01:57:56.479476 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:57:56.480041 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:57:56.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:56.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:56.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 12 01:57:56.544254 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 01:57:56.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:56.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:56.575772 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 12 01:57:56.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:56.616199 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 12 01:57:56.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:56.656238 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 12 01:57:56.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:56.697386 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Mar 12 01:57:56.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:56.818346 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 12 01:57:56.851324 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Mar 12 01:57:56.890390 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 12 01:57:57.033242 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 12 01:57:57.066125 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 12 01:57:57.066262 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 01:57:57.128459 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 12 01:57:57.150418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:57:57.154504 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Mar 12 01:57:57.177313 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 12 01:57:57.224285 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 12 01:57:57.260614 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 01:57:57.281314 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 12 01:57:57.324466 systemd-journald[1234]: Time spent on flushing to /var/log/journal/07a4dfdf54f04a6593ab42ed4a9c545f is 124.920ms for 1234 entries. 
Mar 12 01:57:57.324466 systemd-journald[1234]: System Journal (/var/log/journal/07a4dfdf54f04a6593ab42ed4a9c545f) is 8M, max 163.5M, 155.5M free. Mar 12 01:57:57.494100 systemd-journald[1234]: Received client request to flush runtime journal. Mar 12 01:57:57.349320 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 01:57:57.379308 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 01:57:57.429335 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 12 01:57:57.471248 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 12 01:57:57.495321 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 12 01:57:57.510611 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 12 01:57:57.530123 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 12 01:57:57.566981 kernel: loop1: detected capacity change from 0 to 50784 Mar 12 01:57:57.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:57.588839 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 12 01:57:57.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:57.629832 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 12 01:57:57.667737 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 12 01:57:57.707294 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 12 01:57:57.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:57.801445 kernel: loop2: detected capacity change from 0 to 111560 Mar 12 01:57:57.929079 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 12 01:57:57.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:58.009365 kernel: kauditd_printk_skb: 34 callbacks suppressed Mar 12 01:57:58.010527 kernel: audit: type=1130 audit(1773280677.955:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:57.999104 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Mar 12 01:57:57.987000 audit: BPF prog-id=18 op=LOAD Mar 12 01:57:57.987000 audit: BPF prog-id=19 op=LOAD Mar 12 01:57:57.987000 audit: BPF prog-id=20 op=LOAD Mar 12 01:57:58.044076 kernel: audit: type=1334 audit(1773280677.987:134): prog-id=18 op=LOAD Mar 12 01:57:58.044184 kernel: audit: type=1334 audit(1773280677.987:135): prog-id=19 op=LOAD Mar 12 01:57:58.044226 kernel: audit: type=1334 audit(1773280677.987:136): prog-id=20 op=LOAD Mar 12 01:57:58.195873 kernel: loop3: detected capacity change from 0 to 228704 Mar 12 01:57:58.196074 kernel: audit: type=1334 audit(1773280678.153:137): prog-id=21 op=LOAD Mar 12 01:57:58.153000 audit: BPF prog-id=21 op=LOAD Mar 12 01:57:58.179506 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 12 01:57:58.263941 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Mar 12 01:57:58.278531 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 12 01:57:58.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:58.361828 kernel: audit: type=1130 audit(1773280678.303:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:58.347143 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Mar 12 01:57:58.332000 audit: BPF prog-id=22 op=LOAD Mar 12 01:57:58.363768 kernel: audit: type=1334 audit(1773280678.332:139): prog-id=22 op=LOAD Mar 12 01:57:58.332000 audit: BPF prog-id=23 op=LOAD Mar 12 01:57:58.332000 audit: BPF prog-id=24 op=LOAD Mar 12 01:57:58.367151 kernel: audit: type=1334 audit(1773280678.332:140): prog-id=23 op=LOAD Mar 12 01:57:58.367223 kernel: audit: type=1334 audit(1773280678.332:141): prog-id=24 op=LOAD Mar 12 01:57:58.405000 audit: BPF prog-id=25 op=LOAD Mar 12 01:57:58.412145 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 12 01:57:58.405000 audit: BPF prog-id=26 op=LOAD Mar 12 01:57:58.405000 audit: BPF prog-id=27 op=LOAD Mar 12 01:57:58.423290 kernel: audit: type=1334 audit(1773280678.405:142): prog-id=25 op=LOAD Mar 12 01:57:58.507514 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 12 01:57:58.512528 kernel: loop4: detected capacity change from 0 to 50784 Mar 12 01:57:58.757065 kernel: loop5: detected capacity change from 0 to 111560 Mar 12 01:57:58.855731 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Mar 12 01:57:58.855807 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. 
Mar 12 01:57:58.951742 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:57:58.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:59.016379 kernel: loop6: detected capacity change from 0 to 228704 Mar 12 01:57:59.105154 (sd-merge)[1293]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Mar 12 01:57:59.187233 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 12 01:57:59.213028 (sd-merge)[1293]: Merged extensions into '/usr'. Mar 12 01:57:59.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:57:59.310773 systemd[1]: Reload requested from client PID 1269 ('systemd-sysext') (unit systemd-sysext.service)... Mar 12 01:57:59.312299 systemd[1]: Reloading... Mar 12 01:57:59.345105 systemd-nsresourced[1291]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Mar 12 01:58:00.176016 zram_generator::config[1333]: No configuration found. Mar 12 01:58:00.601313 systemd-oomd[1285]: No swap; memory pressure usage will be degraded Mar 12 01:58:00.894313 systemd-resolved[1286]: Positive Trust Anchors: Mar 12 01:58:00.897174 systemd-resolved[1286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 01:58:00.901778 systemd-resolved[1286]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Mar 12 01:58:00.925082 systemd-resolved[1286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 01:58:01.068157 systemd-resolved[1286]: Defaulting to hostname 'linux'. Mar 12 01:58:01.932205 systemd[1]: Reloading finished in 2617 ms. Mar 12 01:58:02.004727 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Mar 12 01:58:02.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:58:02.055286 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Mar 12 01:58:02.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:58:02.085834 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 01:58:02.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:58:02.134867 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Mar 12 01:58:02.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:58:02.155265 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 12 01:58:02.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:58:02.187534 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:58:02.234783 systemd[1]: Starting ensure-sysext.service... Mar 12 01:58:02.250721 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 01:58:02.258000 audit: BPF prog-id=8 op=UNLOAD Mar 12 01:58:02.259000 audit: BPF prog-id=7 op=UNLOAD Mar 12 01:58:02.269000 audit: BPF prog-id=28 op=LOAD Mar 12 01:58:02.269000 audit: BPF prog-id=29 op=LOAD Mar 12 01:58:02.274872 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Mar 12 01:58:02.293000 audit: BPF prog-id=30 op=LOAD Mar 12 01:58:02.293000 audit: BPF prog-id=22 op=UNLOAD Mar 12 01:58:02.293000 audit: BPF prog-id=31 op=LOAD Mar 12 01:58:02.293000 audit: BPF prog-id=32 op=LOAD Mar 12 01:58:02.296000 audit: BPF prog-id=23 op=UNLOAD Mar 12 01:58:02.296000 audit: BPF prog-id=24 op=UNLOAD Mar 12 01:58:02.297000 audit: BPF prog-id=33 op=LOAD Mar 12 01:58:02.297000 audit: BPF prog-id=15 op=UNLOAD Mar 12 01:58:02.298000 audit: BPF prog-id=34 op=LOAD Mar 12 01:58:02.298000 audit: BPF prog-id=35 op=LOAD Mar 12 01:58:02.298000 audit: BPF prog-id=16 op=UNLOAD Mar 12 01:58:02.298000 audit: BPF prog-id=17 op=UNLOAD Mar 12 01:58:02.301000 audit: BPF prog-id=36 op=LOAD Mar 12 01:58:02.301000 audit: BPF prog-id=18 op=UNLOAD Mar 12 01:58:02.301000 audit: BPF prog-id=37 op=LOAD Mar 12 01:58:02.302000 audit: BPF prog-id=38 op=LOAD Mar 12 01:58:02.304000 audit: BPF prog-id=19 op=UNLOAD Mar 12 01:58:02.304000 audit: BPF prog-id=20 op=UNLOAD Mar 12 01:58:02.304000 audit: BPF prog-id=39 op=LOAD Mar 12 01:58:02.304000 audit: BPF prog-id=25 op=UNLOAD Mar 12 01:58:02.305000 audit: BPF prog-id=40 op=LOAD Mar 12 01:58:02.305000 audit: BPF prog-id=41 op=LOAD Mar 12 01:58:02.319000 audit: BPF prog-id=26 op=UNLOAD Mar 12 01:58:02.319000 audit: BPF prog-id=27 op=UNLOAD Mar 12 01:58:02.321195 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 12 01:58:02.320000 audit: BPF prog-id=42 op=LOAD Mar 12 01:58:02.320000 audit: BPF prog-id=21 op=UNLOAD Mar 12 01:58:02.321446 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 12 01:58:02.323546 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 12 01:58:02.330508 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Mar 12 01:58:02.330761 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. 
Mar 12 01:58:02.334152 systemd[1]: Reload requested from client PID 1374 ('systemctl') (unit ensure-sysext.service)... Mar 12 01:58:02.334421 systemd[1]: Reloading... Mar 12 01:58:02.348855 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 01:58:02.351074 systemd-tmpfiles[1375]: Skipping /boot Mar 12 01:58:02.394872 systemd-udevd[1376]: Using default interface naming scheme 'v257'. Mar 12 01:58:02.403024 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 01:58:02.403047 systemd-tmpfiles[1375]: Skipping /boot Mar 12 01:58:02.581739 zram_generator::config[1409]: No configuration found. Mar 12 01:58:03.060862 kernel: mousedev: PS/2 mouse device common for all mice Mar 12 01:58:03.201810 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 12 01:58:03.271163 kernel: ACPI: button: Power Button [PWRF] Mar 12 01:58:03.326727 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 12 01:58:03.346453 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 12 01:58:03.347195 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 12 01:58:03.830335 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 12 01:58:03.831232 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 12 01:58:03.848606 systemd[1]: Reloading finished in 1512 ms. Mar 12 01:58:03.920213 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:58:03.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 12 01:58:03.966500 kernel: kauditd_printk_skb: 39 callbacks suppressed Mar 12 01:58:03.966572 kernel: audit: type=1130 audit(1773280683.952:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:58:03.957000 audit: BPF prog-id=43 op=LOAD Mar 12 01:58:04.045839 kernel: audit: type=1334 audit(1773280683.957:183): prog-id=43 op=LOAD Mar 12 01:58:04.046003 kernel: audit: type=1334 audit(1773280683.957:184): prog-id=39 op=UNLOAD Mar 12 01:58:03.957000 audit: BPF prog-id=39 op=UNLOAD Mar 12 01:58:04.065996 kernel: audit: type=1334 audit(1773280683.957:185): prog-id=44 op=LOAD Mar 12 01:58:03.957000 audit: BPF prog-id=44 op=LOAD Mar 12 01:58:04.081563 kernel: audit: type=1334 audit(1773280683.957:186): prog-id=45 op=LOAD Mar 12 01:58:03.957000 audit: BPF prog-id=45 op=LOAD Mar 12 01:58:03.957000 audit: BPF prog-id=40 op=UNLOAD Mar 12 01:58:04.095014 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Mar 12 01:58:04.108292 kernel: audit: type=1334 audit(1773280683.957:187): prog-id=40 op=UNLOAD Mar 12 01:58:04.108334 kernel: audit: type=1334 audit(1773280683.957:188): prog-id=41 op=UNLOAD Mar 12 01:58:04.108363 kernel: audit: type=1334 audit(1773280683.962:189): prog-id=46 op=LOAD Mar 12 01:58:04.108383 kernel: audit: type=1334 audit(1773280683.962:190): prog-id=36 op=UNLOAD Mar 12 01:58:04.108403 kernel: audit: type=1334 audit(1773280683.962:191): prog-id=47 op=LOAD Mar 12 01:58:03.957000 audit: BPF prog-id=41 op=UNLOAD Mar 12 01:58:03.962000 audit: BPF prog-id=46 op=LOAD Mar 12 01:58:03.962000 audit: BPF prog-id=36 op=UNLOAD Mar 12 01:58:03.962000 audit: BPF prog-id=47 op=LOAD Mar 12 01:58:03.962000 audit: BPF prog-id=48 op=LOAD Mar 12 01:58:03.962000 audit: BPF prog-id=37 op=UNLOAD Mar 12 01:58:03.962000 audit: BPF prog-id=38 op=UNLOAD Mar 12 01:58:03.981000 audit: BPF prog-id=49 op=LOAD Mar 12 01:58:03.981000 audit: BPF prog-id=30 op=UNLOAD Mar 12 01:58:03.986000 audit: BPF prog-id=50 op=LOAD Mar 12 01:58:03.986000 audit: BPF prog-id=51 op=LOAD Mar 12 01:58:03.986000 audit: BPF prog-id=31 op=UNLOAD Mar 12 01:58:03.986000 audit: BPF prog-id=32 op=UNLOAD Mar 12 01:58:03.991000 audit: BPF prog-id=52 op=LOAD Mar 12 01:58:03.991000 audit: BPF prog-id=53 op=LOAD Mar 12 01:58:03.991000 audit: BPF prog-id=28 op=UNLOAD Mar 12 01:58:03.991000 audit: BPF prog-id=29 op=UNLOAD Mar 12 01:58:03.996000 audit: BPF prog-id=54 op=LOAD Mar 12 01:58:03.996000 audit: BPF prog-id=33 op=UNLOAD Mar 12 01:58:03.996000 audit: BPF prog-id=55 op=LOAD Mar 12 01:58:03.996000 audit: BPF prog-id=56 op=LOAD Mar 12 01:58:03.996000 audit: BPF prog-id=34 op=UNLOAD Mar 12 01:58:03.996000 audit: BPF prog-id=35 op=UNLOAD Mar 12 01:58:03.996000 audit: BPF prog-id=57 op=LOAD Mar 12 01:58:03.996000 audit: BPF prog-id=42 op=UNLOAD Mar 12 01:58:04.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:58:04.371993 systemd[1]: Finished ensure-sysext.service. Mar 12 01:58:04.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 12 01:58:04.484002 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:58:04.490303 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 12 01:58:04.598322 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 12 01:58:04.656607 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:58:04.694470 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:58:04.739068 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 01:58:04.807366 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:58:04.854544 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:58:04.866786 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:58:04.867111 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Mar 12 01:58:04.911180 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 12 01:58:04.952307 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Mar 12 01:58:04.964432 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 12 01:58:04.972405 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 12 01:58:05.015000 audit: BPF prog-id=58 op=LOAD Mar 12 01:58:05.038583 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 12 01:58:05.043000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 12 01:58:05.043000 audit[1517]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd1b2479d0 a2=420 a3=0 items=0 ppid=1490 pid=1517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 12 01:58:05.043000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 12 01:58:05.056784 augenrules[1517]: No rules Mar 12 01:58:05.069453 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 12 01:58:05.089577 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 12 01:58:05.173288 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:58:05.192098 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:58:05.210312 systemd[1]: audit-rules.service: Deactivated successfully. Mar 12 01:58:05.217579 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 12 01:58:05.260909 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Mar 12 01:58:05.263214 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:58:05.292464 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 01:58:05.293786 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 01:58:05.318515 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:58:05.328776 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:58:05.333477 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:58:05.336448 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:58:05.339149 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 12 01:58:05.340489 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 12 01:58:05.398064 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 12 01:58:05.488557 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 12 01:58:05.550905 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 01:58:05.551107 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 01:58:05.551155 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 12 01:58:06.055984 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 12 01:58:06.119790 systemd-networkd[1523]: lo: Link UP Mar 12 01:58:06.119813 systemd-networkd[1523]: lo: Gained carrier Mar 12 01:58:06.139586 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 01:58:06.142024 systemd-networkd[1523]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Mar 12 01:58:06.144771 systemd-networkd[1523]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 01:58:06.152326 systemd-networkd[1523]: eth0: Link UP Mar 12 01:58:06.155500 systemd[1]: Reached target network.target - Network. Mar 12 01:58:06.156412 systemd-networkd[1523]: eth0: Gained carrier Mar 12 01:58:06.156451 systemd-networkd[1523]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Mar 12 01:58:06.190050 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 12 01:58:06.244924 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 12 01:58:06.303425 systemd-networkd[1523]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 12 01:58:06.375617 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 12 01:58:06.399358 systemd[1]: Reached target time-set.target - System Time Set. Mar 12 01:58:07.875867 systemd-timesyncd[1525]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 12 01:58:07.876039 systemd-timesyncd[1525]: Initial clock synchronization to Thu 2026-03-12 01:58:07.875466 UTC. Mar 12 01:58:07.877201 systemd-resolved[1286]: Clock change detected. Flushing caches. Mar 12 01:58:07.955534 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Mar 12 01:58:09.627752 systemd-networkd[1523]: eth0: Gained IPv6LL Mar 12 01:58:09.729851 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 12 01:58:09.761006 systemd[1]: Reached target network-online.target - Network is Online. Mar 12 01:58:11.376663 kernel: kvm_amd: TSC scaling supported Mar 12 01:58:11.383552 kernel: kvm_amd: Nested Virtualization enabled Mar 12 01:58:11.400082 kernel: kvm_amd: Nested Paging enabled Mar 12 01:58:11.434711 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 12 01:58:11.434823 kernel: kvm_amd: PMU virtualization is disabled Mar 12 01:58:13.611385 ldconfig[1514]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 12 01:58:13.686075 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 12 01:58:13.733722 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 12 01:58:14.331324 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 12 01:58:14.359154 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 01:58:14.392080 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 12 01:58:14.430221 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 12 01:58:14.481809 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 12 01:58:14.527839 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 12 01:58:14.564840 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 12 01:58:14.608315 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Mar 12 01:58:14.624803 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. 
Mar 12 01:58:14.646287 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 12 01:58:14.669532 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 12 01:58:14.670862 systemd[1]: Reached target paths.target - Path Units. Mar 12 01:58:14.690306 systemd[1]: Reached target timers.target - Timer Units. Mar 12 01:58:14.713191 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 12 01:58:14.735030 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 12 01:58:14.752844 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 12 01:58:14.774405 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 12 01:58:14.814493 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 12 01:58:14.846476 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 12 01:58:14.863324 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 12 01:58:14.900451 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 12 01:58:14.937494 systemd[1]: Reached target sockets.target - Socket Units. Mar 12 01:58:14.961914 systemd[1]: Reached target basic.target - Basic System. Mar 12 01:58:14.982296 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 12 01:58:14.982355 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 12 01:58:14.989458 systemd[1]: Starting containerd.service - containerd container runtime... Mar 12 01:58:15.013901 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 12 01:58:15.044799 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Mar 12 01:58:15.119361 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 12 01:58:15.172083 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 12 01:58:15.208473 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 12 01:58:15.247922 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 12 01:58:15.270844 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 12 01:58:15.318700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:58:15.333066 jq[1563]: false Mar 12 01:58:15.353255 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 12 01:58:15.356833 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing passwd entry cache Mar 12 01:58:15.357525 oslogin_cache_refresh[1565]: Refreshing passwd entry cache Mar 12 01:58:15.398372 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting users, quitting Mar 12 01:58:15.398372 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 12 01:58:15.398347 oslogin_cache_refresh[1565]: Failure getting users, quitting Mar 12 01:58:15.398387 oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 12 01:58:15.398801 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing group entry cache Mar 12 01:58:15.398699 oslogin_cache_refresh[1565]: Refreshing group entry cache Mar 12 01:58:15.401567 extend-filesystems[1564]: Found /dev/vda6 Mar 12 01:58:15.413460 extend-filesystems[1564]: Found /dev/vda9 Mar 12 01:58:15.418237 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Mar 12 01:58:15.426699 extend-filesystems[1564]: Checking size of /dev/vda9 Mar 12 01:58:15.439256 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting groups, quitting Mar 12 01:58:15.439256 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 12 01:58:15.435743 oslogin_cache_refresh[1565]: Failure getting groups, quitting Mar 12 01:58:15.435771 oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 12 01:58:15.466227 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 12 01:58:15.471314 extend-filesystems[1564]: Resized partition /dev/vda9 Mar 12 01:58:15.498040 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 12 01:58:15.538153 extend-filesystems[1579]: resize2fs 1.47.3 (8-Jul-2025) Mar 12 01:58:15.911549 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 12 01:58:16.019817 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 12 01:58:16.030135 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 12 01:58:16.039803 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Mar 12 01:58:16.066498 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 12 01:58:16.089503 systemd[1]: Starting update-engine.service - Update Engine... Mar 12 01:58:16.188306 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 12 01:58:16.231870 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 12 01:58:16.267479 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Mar 12 01:58:16.268408 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 12 01:58:16.276454 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 12 01:58:16.277335 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 12 01:58:16.402285 systemd[1]: motdgen.service: Deactivated successfully. Mar 12 01:58:16.419268 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Mar 12 01:58:16.425921 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 12 01:58:16.559461 update_engine[1592]: I20260312 01:58:16.516312 1592 main.cc:92] Flatcar Update Engine starting Mar 12 01:58:16.466490 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 12 01:58:16.566534 extend-filesystems[1579]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 12 01:58:16.566534 extend-filesystems[1579]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 12 01:58:16.566534 extend-filesystems[1579]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Mar 12 01:58:16.513807 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 12 01:58:16.629263 jq[1595]: true Mar 12 01:58:16.630096 extend-filesystems[1564]: Resized filesystem in /dev/vda9 Mar 12 01:58:16.540414 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 12 01:58:16.582202 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 12 01:58:16.586193 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 12 01:58:16.820260 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 12 01:58:16.823251 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Mar 12 01:58:16.884346 jq[1612]: true Mar 12 01:58:16.929401 tar[1608]: linux-amd64/LICENSE Mar 12 01:58:16.954764 systemd-logind[1589]: Watching system buttons on /dev/input/event2 (Power Button) Mar 12 01:58:16.954810 systemd-logind[1589]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 12 01:58:16.973701 tar[1608]: linux-amd64/helm Mar 12 01:58:16.984425 systemd-logind[1589]: New seat seat0. Mar 12 01:58:17.006429 systemd[1]: Started systemd-logind.service - User Login Management. Mar 12 01:58:17.025519 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 12 01:58:17.030885 dbus-daemon[1561]: [system] SELinux support is enabled Mar 12 01:58:17.031770 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 12 01:58:17.054097 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 12 01:58:17.054232 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 12 01:58:17.113111 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 12 01:58:17.113433 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 12 01:58:17.204172 dbus-daemon[1561]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 12 01:58:17.221486 systemd[1]: Started update-engine.service - Update Engine. Mar 12 01:58:17.227098 update_engine[1592]: I20260312 01:58:17.222877 1592 update_check_scheduler.cc:74] Next update check in 11m57s Mar 12 01:58:17.411725 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 12 01:58:17.477567 sshd_keygen[1604]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 12 01:58:17.493677 bash[1646]: Updated "/home/core/.ssh/authorized_keys" Mar 12 01:58:17.497412 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 12 01:58:17.526437 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 12 01:58:17.649082 kernel: EDAC MC: Ver: 3.0.0 Mar 12 01:58:17.881519 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 12 01:58:17.948494 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 12 01:58:18.247215 systemd[1]: issuegen.service: Deactivated successfully. Mar 12 01:58:18.248698 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 12 01:58:18.309149 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 12 01:58:18.578091 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 12 01:58:18.619831 systemd[1]: Started sshd@0-10.0.0.28:22-10.0.0.1:47338.service - OpenSSH per-connection server daemon (10.0.0.1:47338). Mar 12 01:58:19.074096 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 12 01:58:19.314771 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 12 01:58:19.346259 locksmithd[1647]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 12 01:58:19.904456 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 12 01:58:19.993734 systemd[1]: Reached target getty.target - Login Prompts. Mar 12 01:58:20.728504 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 47338 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 01:58:20.750683 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:58:20.810938 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Mar 12 01:58:20.847265 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 12 01:58:20.884183 systemd-logind[1589]: New session 1 of user core. Mar 12 01:58:21.384509 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 12 01:58:21.478142 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 12 01:58:21.590889 (systemd)[1684]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:58:21.609960 systemd-logind[1589]: New session 2 of user core. Mar 12 01:58:21.791286 containerd[1613]: time="2026-03-12T01:58:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 12 01:58:21.795354 containerd[1613]: time="2026-03-12T01:58:21.795306102Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Mar 12 01:58:22.183567 containerd[1613]: time="2026-03-12T01:58:22.179698085Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t=15.072404ms Mar 12 01:58:22.183567 containerd[1613]: time="2026-03-12T01:58:22.179906142Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 12 01:58:22.183567 containerd[1613]: time="2026-03-12T01:58:22.180067975Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 12 01:58:22.183567 containerd[1613]: time="2026-03-12T01:58:22.180157041Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 12 01:58:22.183567 containerd[1613]: time="2026-03-12T01:58:22.180938129Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 12 01:58:22.183567 
containerd[1613]: time="2026-03-12T01:58:22.181340360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 12 01:58:22.183567 containerd[1613]: time="2026-03-12T01:58:22.181728935Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 12 01:58:22.183567 containerd[1613]: time="2026-03-12T01:58:22.181754724Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 12 01:58:22.198311 containerd[1613]: time="2026-03-12T01:58:22.196118855Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 12 01:58:22.198311 containerd[1613]: time="2026-03-12T01:58:22.196169068Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 12 01:58:22.198311 containerd[1613]: time="2026-03-12T01:58:22.196190869Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 12 01:58:22.198311 containerd[1613]: time="2026-03-12T01:58:22.196203953Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Mar 12 01:58:22.198311 containerd[1613]: time="2026-03-12T01:58:22.196773746Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Mar 12 01:58:22.198311 containerd[1613]: time="2026-03-12T01:58:22.196795718Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 12 01:58:22.198311 
containerd[1613]: time="2026-03-12T01:58:22.197145180Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 12 01:58:22.198311 containerd[1613]: time="2026-03-12T01:58:22.197745260Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 12 01:58:22.198311 containerd[1613]: time="2026-03-12T01:58:22.197794362Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 12 01:58:22.198311 containerd[1613]: time="2026-03-12T01:58:22.197808758Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 12 01:58:22.198311 containerd[1613]: time="2026-03-12T01:58:22.197851188Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 12 01:58:22.204960 containerd[1613]: time="2026-03-12T01:58:22.204039611Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 12 01:58:22.204960 containerd[1613]: time="2026-03-12T01:58:22.204709210Z" level=info msg="metadata content store policy set" policy=shared Mar 12 01:58:22.314550 containerd[1613]: time="2026-03-12T01:58:22.314056285Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 12 01:58:22.362310 containerd[1613]: time="2026-03-12T01:58:22.350905756Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Mar 12 01:58:22.362310 containerd[1613]: time="2026-03-12T01:58:22.359066659Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Mar 12 01:58:22.362310 containerd[1613]: 
time="2026-03-12T01:58:22.359210136Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 12 01:58:22.362310 containerd[1613]: time="2026-03-12T01:58:22.359235273Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 12 01:58:22.362310 containerd[1613]: time="2026-03-12T01:58:22.359325903Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 12 01:58:22.362310 containerd[1613]: time="2026-03-12T01:58:22.359415069Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 12 01:58:22.362310 containerd[1613]: time="2026-03-12T01:58:22.359498665Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 12 01:58:22.362310 containerd[1613]: time="2026-03-12T01:58:22.359697446Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 12 01:58:22.362310 containerd[1613]: time="2026-03-12T01:58:22.359782435Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 12 01:58:22.362310 containerd[1613]: time="2026-03-12T01:58:22.359870960Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 12 01:58:22.362310 containerd[1613]: time="2026-03-12T01:58:22.359889825Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 12 01:58:22.362310 containerd[1613]: time="2026-03-12T01:58:22.359905905Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 12 01:58:22.362310 containerd[1613]: time="2026-03-12T01:58:22.360047259Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 12 01:58:22.363242 containerd[1613]: 
time="2026-03-12T01:58:22.360546100Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 12 01:58:22.363242 containerd[1613]: time="2026-03-12T01:58:22.360829979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 12 01:58:22.363242 containerd[1613]: time="2026-03-12T01:58:22.361052615Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 12 01:58:22.363242 containerd[1613]: time="2026-03-12T01:58:22.361153474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 12 01:58:22.363242 containerd[1613]: time="2026-03-12T01:58:22.361173632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 12 01:58:22.363242 containerd[1613]: time="2026-03-12T01:58:22.361188609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 12 01:58:22.363242 containerd[1613]: time="2026-03-12T01:58:22.361274489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 12 01:58:22.363242 containerd[1613]: time="2026-03-12T01:58:22.361364999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 12 01:58:22.363242 containerd[1613]: time="2026-03-12T01:58:22.361390426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 12 01:58:22.363242 containerd[1613]: time="2026-03-12T01:58:22.361404352Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 12 01:58:22.363242 containerd[1613]: time="2026-03-12T01:58:22.361419480Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 12 01:58:22.363242 containerd[1613]: time="2026-03-12T01:58:22.361456079Z" level=info msg="loading plugin" 
id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 12 01:58:22.372909 containerd[1613]: time="2026-03-12T01:58:22.370162169Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 12 01:58:22.372909 containerd[1613]: time="2026-03-12T01:58:22.370353145Z" level=info msg="Start snapshots syncer" Mar 12 01:58:22.372909 containerd[1613]: time="2026-03-12T01:58:22.371821226Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 12 01:58:22.390255 containerd[1613]: time="2026-03-12T01:58:22.388500828Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnp
rivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 12 01:58:22.390255 containerd[1613]: time="2026-03-12T01:58:22.388891628Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 12 01:58:22.392042 containerd[1613]: time="2026-03-12T01:58:22.389479254Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 12 01:58:22.407768 containerd[1613]: time="2026-03-12T01:58:22.397451976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 12 01:58:22.407768 containerd[1613]: time="2026-03-12T01:58:22.397516566Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 12 01:58:22.407768 containerd[1613]: time="2026-03-12T01:58:22.397538708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 12 01:58:22.407768 containerd[1613]: time="2026-03-12T01:58:22.397554026Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 12 01:58:22.407768 containerd[1613]: time="2026-03-12T01:58:22.397704538Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 12 01:58:22.407768 containerd[1613]: time="2026-03-12T01:58:22.397727430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 12 01:58:22.407768 containerd[1613]: time="2026-03-12T01:58:22.397743880Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 12 01:58:22.407768 containerd[1613]: time="2026-03-12T01:58:22.397760242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 12 01:58:22.407768 containerd[1613]: time="2026-03-12T01:58:22.397917906Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 12 01:58:22.407768 containerd[1613]: time="2026-03-12T01:58:22.398215131Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 12 01:58:22.407768 containerd[1613]: time="2026-03-12T01:58:22.398245938Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 12 01:58:22.407768 containerd[1613]: time="2026-03-12T01:58:22.398261778Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 12 01:58:22.407768 containerd[1613]: time="2026-03-12T01:58:22.398278358Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 12 01:58:22.407768 containerd[1613]: time="2026-03-12T01:58:22.398289739Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 12 01:58:22.423834 containerd[1613]: time="2026-03-12T01:58:22.398305269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 12 01:58:22.423834 containerd[1613]: time="2026-03-12T01:58:22.398385328Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 12 01:58:22.423834 containerd[1613]: time="2026-03-12T01:58:22.398409824Z" level=info msg="runtime interface created" Mar 12 01:58:22.423834 containerd[1613]: 
time="2026-03-12T01:58:22.398418860Z" level=info msg="created NRI interface" Mar 12 01:58:22.423834 containerd[1613]: time="2026-03-12T01:58:22.398434470Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 12 01:58:22.423834 containerd[1613]: time="2026-03-12T01:58:22.398457092Z" level=info msg="Connect containerd service" Mar 12 01:58:22.423834 containerd[1613]: time="2026-03-12T01:58:22.398549435Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 12 01:58:22.423834 containerd[1613]: time="2026-03-12T01:58:22.416139536Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 12 01:58:23.089511 tar[1608]: linux-amd64/README.md Mar 12 01:58:23.489963 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 12 01:58:23.728293 systemd[1684]: Queued start job for default target default.target. Mar 12 01:58:23.755063 systemd[1684]: Created slice app.slice - User Application Slice. Mar 12 01:58:23.755125 systemd[1684]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Mar 12 01:58:23.755149 systemd[1684]: Reached target paths.target - Paths. Mar 12 01:58:23.755241 systemd[1684]: Reached target timers.target - Timers. Mar 12 01:58:23.771232 systemd[1684]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 12 01:58:23.805857 systemd[1684]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Mar 12 01:58:24.266339 systemd[1684]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 01:58:24.266526 systemd[1684]: Reached target sockets.target - Sockets. Mar 12 01:58:24.343409 systemd[1684]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. 
Mar 12 01:58:24.343788 systemd[1684]: Reached target basic.target - Basic System.
Mar 12 01:58:24.348284 systemd[1684]: Reached target default.target - Main User Target.
Mar 12 01:58:24.348378 systemd[1684]: Startup finished in 2.644s.
Mar 12 01:58:24.352521 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 12 01:58:24.419555 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 12 01:58:26.004103 systemd[1]: Started sshd@1-10.0.0.28:22-10.0.0.1:44784.service - OpenSSH per-connection server daemon (10.0.0.1:44784).
Mar 12 01:58:26.200262 containerd[1613]: time="2026-03-12T01:58:26.199400573Z" level=info msg="Start subscribing containerd event"
Mar 12 01:58:26.204158 containerd[1613]: time="2026-03-12T01:58:26.201478562Z" level=info msg="Start recovering state"
Mar 12 01:58:26.204158 containerd[1613]: time="2026-03-12T01:58:26.202326615Z" level=info msg="Start event monitor"
Mar 12 01:58:26.204158 containerd[1613]: time="2026-03-12T01:58:26.202422914Z" level=info msg="Start cni network conf syncer for default"
Mar 12 01:58:26.204158 containerd[1613]: time="2026-03-12T01:58:26.202441690Z" level=info msg="Start streaming server"
Mar 12 01:58:26.204158 containerd[1613]: time="2026-03-12T01:58:26.202454293Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 12 01:58:26.204158 containerd[1613]: time="2026-03-12T01:58:26.202517461Z" level=info msg="runtime interface starting up..."
Mar 12 01:58:26.204158 containerd[1613]: time="2026-03-12T01:58:26.202528532Z" level=info msg="starting plugins..."
Mar 12 01:58:26.204158 containerd[1613]: time="2026-03-12T01:58:26.202675456Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 12 01:58:26.206757 containerd[1613]: time="2026-03-12T01:58:26.206727930Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 12 01:58:26.206959 containerd[1613]: time="2026-03-12T01:58:26.206938874Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 12 01:58:26.207251 containerd[1613]: time="2026-03-12T01:58:26.207230728Z" level=info msg="containerd successfully booted in 4.417870s"
Mar 12 01:58:26.209354 systemd[1]: Started containerd.service - containerd container runtime.
Mar 12 01:58:26.642156 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 44784 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA
Mar 12 01:58:26.655480 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:58:26.770472 systemd-logind[1589]: New session 3 of user core.
Mar 12 01:58:26.803275 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 12 01:58:26.947635 sshd[1727]: Connection closed by 10.0.0.1 port 44784
Mar 12 01:58:26.948958 sshd-session[1720]: pam_unix(sshd:session): session closed for user core
Mar 12 01:58:27.528353 systemd[1]: sshd@1-10.0.0.28:22-10.0.0.1:44784.service: Deactivated successfully.
Mar 12 01:58:27.548817 systemd[1]: session-3.scope: Deactivated successfully.
Mar 12 01:58:27.577443 systemd-logind[1589]: Session 3 logged out. Waiting for processes to exit.
Mar 12 01:58:27.605467 systemd[1]: Started sshd@2-10.0.0.28:22-10.0.0.1:44790.service - OpenSSH per-connection server daemon (10.0.0.1:44790).
Mar 12 01:58:27.619518 systemd-logind[1589]: Removed session 3.
Mar 12 01:58:28.148258 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 44790 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA
Mar 12 01:58:28.174768 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:58:28.224746 systemd-logind[1589]: New session 4 of user core.
Mar 12 01:58:28.237278 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 12 01:58:28.563164 sshd[1737]: Connection closed by 10.0.0.1 port 44790
Mar 12 01:58:28.565480 sshd-session[1733]: pam_unix(sshd:session): session closed for user core
Mar 12 01:58:28.589917 systemd[1]: sshd@2-10.0.0.28:22-10.0.0.1:44790.service: Deactivated successfully.
Mar 12 01:58:28.598355 systemd[1]: session-4.scope: Deactivated successfully.
Mar 12 01:58:28.632736 systemd-logind[1589]: Session 4 logged out. Waiting for processes to exit.
Mar 12 01:58:28.642554 systemd-logind[1589]: Removed session 4.
Mar 12 01:58:31.187952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 01:58:31.189276 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 12 01:58:31.192211 systemd[1]: Startup finished in 1min 16.770s (kernel) + 38.663s (initrd) + 41.110s (userspace) = 2min 36.545s.
Mar 12 01:58:31.222883 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 01:58:36.705442 kubelet[1747]: E0312 01:58:36.703268 1747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 01:58:36.729736 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 01:58:36.730987 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 01:58:36.737341 systemd[1]: kubelet.service: Consumed 5.294s CPU time, 270.3M memory peak.
Mar 12 01:58:38.646734 systemd[1]: Started sshd@3-10.0.0.28:22-10.0.0.1:50622.service - OpenSSH per-connection server daemon (10.0.0.1:50622).
Mar 12 01:58:39.226230 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 50622 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA
Mar 12 01:58:39.232352 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:58:39.265922 systemd-logind[1589]: New session 5 of user core.
Mar 12 01:58:39.292486 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 12 01:58:39.486497 sshd[1761]: Connection closed by 10.0.0.1 port 50622
Mar 12 01:58:39.484369 sshd-session[1757]: pam_unix(sshd:session): session closed for user core
Mar 12 01:58:39.525815 systemd[1]: sshd@3-10.0.0.28:22-10.0.0.1:50622.service: Deactivated successfully.
Mar 12 01:58:39.530329 systemd[1]: session-5.scope: Deactivated successfully.
Mar 12 01:58:39.540342 systemd-logind[1589]: Session 5 logged out. Waiting for processes to exit.
Mar 12 01:58:39.549011 systemd[1]: Started sshd@4-10.0.0.28:22-10.0.0.1:48656.service - OpenSSH per-connection server daemon (10.0.0.1:48656).
Mar 12 01:58:39.567462 systemd-logind[1589]: Removed session 5.
Mar 12 01:58:39.881226 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 48656 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA
Mar 12 01:58:39.886270 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:58:39.960207 systemd-logind[1589]: New session 6 of user core.
Mar 12 01:58:40.079787 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 12 01:58:40.143826 sshd[1771]: Connection closed by 10.0.0.1 port 48656
Mar 12 01:58:40.146207 sshd-session[1767]: pam_unix(sshd:session): session closed for user core
Mar 12 01:58:40.178229 systemd[1]: sshd@4-10.0.0.28:22-10.0.0.1:48656.service: Deactivated successfully.
Mar 12 01:58:40.185441 systemd[1]: session-6.scope: Deactivated successfully.
Mar 12 01:58:40.196481 systemd-logind[1589]: Session 6 logged out. Waiting for processes to exit.
Mar 12 01:58:40.206321 systemd[1]: Started sshd@5-10.0.0.28:22-10.0.0.1:48658.service - OpenSSH per-connection server daemon (10.0.0.1:48658).
Mar 12 01:58:40.212561 systemd-logind[1589]: Removed session 6.
Mar 12 01:58:42.328360 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 48658 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA
Mar 12 01:58:42.435111 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:58:42.505969 systemd-logind[1589]: New session 7 of user core.
Mar 12 01:58:42.533738 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 12 01:58:42.695932 sshd[1781]: Connection closed by 10.0.0.1 port 48658
Mar 12 01:58:42.701840 sshd-session[1777]: pam_unix(sshd:session): session closed for user core
Mar 12 01:58:42.730028 systemd[1]: sshd@5-10.0.0.28:22-10.0.0.1:48658.service: Deactivated successfully.
Mar 12 01:58:42.739314 systemd[1]: session-7.scope: Deactivated successfully.
Mar 12 01:58:42.753540 systemd-logind[1589]: Session 7 logged out. Waiting for processes to exit.
Mar 12 01:58:42.782456 systemd[1]: Started sshd@6-10.0.0.28:22-10.0.0.1:48660.service - OpenSSH per-connection server daemon (10.0.0.1:48660).
Mar 12 01:58:42.790364 systemd-logind[1589]: Removed session 7.
Mar 12 01:58:43.115258 sshd[1787]: Accepted publickey for core from 10.0.0.1 port 48660 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA
Mar 12 01:58:43.107827 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:58:43.761448 systemd-logind[1589]: New session 8 of user core.
Mar 12 01:58:43.796520 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 12 01:58:43.985043 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 12 01:58:43.985847 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 01:58:47.023986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 12 01:58:47.063105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 01:58:52.507386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 01:58:52.658182 (kubelet)[1820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 01:58:52.938188 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 12 01:58:53.067002 (dockerd)[1827]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 12 01:58:53.963547 kubelet[1820]: E0312 01:58:53.961472 1820 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 01:58:54.117448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 01:58:54.126083 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 01:58:54.231510 systemd[1]: kubelet.service: Consumed 2.033s CPU time, 108.7M memory peak.
Mar 12 01:59:00.517143 dockerd[1827]: time="2026-03-12T01:59:00.515350836Z" level=info msg="Starting up"
Mar 12 01:59:00.545889 dockerd[1827]: time="2026-03-12T01:59:00.542917874Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 12 01:59:00.750301 dockerd[1827]: time="2026-03-12T01:59:00.747406505Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 12 01:59:02.008004 dockerd[1827]: time="2026-03-12T01:59:01.957870721Z" level=info msg="Loading containers: start."
Mar 12 01:59:03.802797 update_engine[1592]: I20260312 01:59:02.937721 1592 update_attempter.cc:509] Updating boot flags...
Mar 12 01:59:03.951842 kernel: Initializing XFRM netlink socket
Mar 12 01:59:04.154159 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 12 01:59:04.193331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 01:59:09.289486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 01:59:09.338229 (kubelet)[1943]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 01:59:09.678791 kubelet[1943]: E0312 01:59:09.678724 1943 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 01:59:09.708747 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 01:59:09.709160 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 01:59:09.712238 systemd[1]: kubelet.service: Consumed 2.100s CPU time, 109.5M memory peak.
Mar 12 01:59:10.870368 systemd-networkd[1523]: docker0: Link UP
Mar 12 01:59:10.931015 dockerd[1827]: time="2026-03-12T01:59:10.928144856Z" level=info msg="Loading containers: done."
Mar 12 01:59:11.136742 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck161922304-merged.mount: Deactivated successfully.
Mar 12 01:59:11.182296 dockerd[1827]: time="2026-03-12T01:59:11.182097300Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 12 01:59:11.182296 dockerd[1827]: time="2026-03-12T01:59:11.182285632Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 12 01:59:11.182870 dockerd[1827]: time="2026-03-12T01:59:11.182418453Z" level=info msg="Initializing buildkit"
Mar 12 01:59:11.751827 dockerd[1827]: time="2026-03-12T01:59:11.751075268Z" level=info msg="Completed buildkit initialization"
Mar 12 01:59:11.831027 dockerd[1827]: time="2026-03-12T01:59:11.830451931Z" level=info msg="Daemon has completed initialization"
Mar 12 01:59:11.832928 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 12 01:59:11.835225 dockerd[1827]: time="2026-03-12T01:59:11.834374311Z" level=info msg="API listen on /run/docker.sock"
Mar 12 01:59:15.202911 containerd[1613]: time="2026-03-12T01:59:15.198060578Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 12 01:59:18.257998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2341685667.mount: Deactivated successfully.
Mar 12 01:59:19.941868 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 12 01:59:19.964040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 01:59:32.493722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 01:59:32.578720 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 01:59:34.581390 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1898157569 wd_nsec: 1898156704
Mar 12 01:59:35.043768 kubelet[2146]: E0312 01:59:35.041120 2146 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 01:59:35.057848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 01:59:35.058988 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 01:59:35.062118 systemd[1]: kubelet.service: Consumed 3.929s CPU time, 110.8M memory peak.
Mar 12 01:59:46.741244 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 12 01:59:46.771538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 01:59:51.467533 containerd[1613]: time="2026-03-12T01:59:51.460288409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:59:51.467533 containerd[1613]: time="2026-03-12T01:59:51.476425313Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=29141427"
Mar 12 01:59:51.503460 containerd[1613]: time="2026-03-12T01:59:51.503267004Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:59:51.524812 containerd[1613]: time="2026-03-12T01:59:51.524533729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:59:51.532781 containerd[1613]: time="2026-03-12T01:59:51.532209481Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 36.324187932s"
Mar 12 01:59:51.536065 containerd[1613]: time="2026-03-12T01:59:51.535880272Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 12 01:59:51.876866 containerd[1613]: time="2026-03-12T01:59:51.859894556Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 12 01:59:54.149486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 01:59:54.210358 (kubelet)[2164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 01:59:58.553510 kubelet[2164]: E0312 01:59:58.552535 2164 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 01:59:58.589136 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 01:59:58.589861 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 01:59:58.604302 systemd[1]: kubelet.service: Consumed 3.833s CPU time, 109.9M memory peak.
Mar 12 02:00:08.755284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 12 02:00:08.813285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 02:00:14.321125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 02:00:14.412518 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 02:00:14.916242 kubelet[2184]: E0312 02:00:14.909964 2184 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 02:00:14.921553 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 02:00:14.923180 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 02:00:14.927524 systemd[1]: kubelet.service: Consumed 2.954s CPU time, 108.8M memory peak.
Mar 12 02:00:25.690413 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 12 02:00:25.737928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 02:00:29.217384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 02:00:29.297744 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 02:00:29.396974 containerd[1613]: time="2026-03-12T02:00:29.390458080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:00:29.432219 containerd[1613]: time="2026-03-12T02:00:29.432081632Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26017770"
Mar 12 02:00:29.453322 containerd[1613]: time="2026-03-12T02:00:29.453262880Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:00:29.511014 containerd[1613]: time="2026-03-12T02:00:29.510495834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:00:29.529030 containerd[1613]: time="2026-03-12T02:00:29.526877434Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 37.666572699s"
Mar 12 02:00:29.529030 containerd[1613]: time="2026-03-12T02:00:29.527026723Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 12 02:00:29.575259 containerd[1613]: time="2026-03-12T02:00:29.575199511Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 12 02:00:30.046241 kubelet[2200]: E0312 02:00:30.044086 2200 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 02:00:30.061336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 02:00:30.062151 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 02:00:30.073546 systemd[1]: kubelet.service: Consumed 1.582s CPU time, 107.6M memory peak.
Mar 12 02:00:40.247443 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 12 02:00:40.328268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 02:00:41.520335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 02:00:41.570800 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 02:00:42.182544 kubelet[2220]: E0312 02:00:42.180549 2220 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 02:00:42.191754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 02:00:42.192240 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 02:00:42.197299 systemd[1]: kubelet.service: Consumed 843ms CPU time, 110.2M memory peak.
Mar 12 02:00:50.686397 containerd[1613]: time="2026-03-12T02:00:50.682260182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:00:50.690269 containerd[1613]: time="2026-03-12T02:00:50.689078885Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20157361"
Mar 12 02:00:50.710831 containerd[1613]: time="2026-03-12T02:00:50.709381310Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:00:50.738308 containerd[1613]: time="2026-03-12T02:00:50.736912975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:00:50.751287 containerd[1613]: time="2026-03-12T02:00:50.744849816Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 21.169366374s"
Mar 12 02:00:50.751287 containerd[1613]: time="2026-03-12T02:00:50.748440757Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 12 02:00:50.789205 containerd[1613]: time="2026-03-12T02:00:50.787961921Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 12 02:00:52.436855 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Mar 12 02:00:52.485503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 02:00:53.485831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 02:00:53.571202 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 02:00:54.003881 kubelet[2235]: E0312 02:00:54.003000 2235 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 02:00:54.035533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 02:00:54.041156 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 02:00:54.050238 systemd[1]: kubelet.service: Consumed 576ms CPU time, 108.4M memory peak.
Mar 12 02:01:04.210719 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Mar 12 02:01:04.322552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 02:01:05.094347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3708558694.mount: Deactivated successfully.
Mar 12 02:01:06.534344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 02:01:06.705928 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 02:01:08.088528 kubelet[2260]: E0312 02:01:08.088060 2260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 02:01:08.127333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 02:01:08.137858 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 02:01:08.139880 systemd[1]: kubelet.service: Consumed 1.139s CPU time, 110.3M memory peak.
Mar 12 02:01:18.191844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Mar 12 02:01:18.214088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 02:01:19.672050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 02:01:19.716300 (kubelet)[2277]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 02:01:20.649372 kubelet[2277]: E0312 02:01:20.648550 2277 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 02:01:20.663022 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 02:01:20.664498 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 02:01:20.677692 systemd[1]: kubelet.service: Consumed 1.207s CPU time, 108.8M memory peak.
Mar 12 02:01:20.779012 containerd[1613]: time="2026-03-12T02:01:20.778924002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:01:20.794058 containerd[1613]: time="2026-03-12T02:01:20.793972179Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31825567"
Mar 12 02:01:20.805867 containerd[1613]: time="2026-03-12T02:01:20.800451564Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:01:20.823392 containerd[1613]: time="2026-03-12T02:01:20.821466819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:01:20.839673 containerd[1613]: time="2026-03-12T02:01:20.829483578Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 30.041348021s"
Mar 12 02:01:20.839673 containerd[1613]: time="2026-03-12T02:01:20.832919887Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 12 02:01:20.883477 containerd[1613]: time="2026-03-12T02:01:20.880158962Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 12 02:01:24.514908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776532624.mount: Deactivated successfully.
Mar 12 02:01:30.686778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Mar 12 02:01:30.750768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 02:01:32.808272 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 02:01:33.034914 (kubelet)[2344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 02:01:34.441018 kubelet[2344]: E0312 02:01:34.438494 2344 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 02:01:34.495030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 02:01:34.501026 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 02:01:34.510250 systemd[1]: kubelet.service: Consumed 1.388s CPU time, 110.2M memory peak.
Mar 12 02:01:36.393781 containerd[1613]: time="2026-03-12T02:01:36.391184621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:01:36.406028 containerd[1613]: time="2026-03-12T02:01:36.404033862Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20931441"
Mar 12 02:01:36.415975 containerd[1613]: time="2026-03-12T02:01:36.415778116Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:01:36.440658 containerd[1613]: time="2026-03-12T02:01:36.439812430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 02:01:36.442960 containerd[1613]: time="2026-03-12T02:01:36.441466993Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 15.555674909s"
Mar 12 02:01:36.442960 containerd[1613]: time="2026-03-12T02:01:36.441709937Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 12 02:01:36.450169 containerd[1613]: time="2026-03-12T02:01:36.449486149Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 12 02:01:38.443901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1883991193.mount: Deactivated successfully.
Mar 12 02:01:38.504798 containerd[1613]: time="2026-03-12T02:01:38.504362540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 02:01:38.511919 containerd[1613]: time="2026-03-12T02:01:38.511051369Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 12 02:01:38.528772 containerd[1613]: time="2026-03-12T02:01:38.522290555Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 02:01:38.644963 containerd[1613]: time="2026-03-12T02:01:38.643833435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 02:01:38.653237 containerd[1613]: time="2026-03-12T02:01:38.646033891Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.196491938s" Mar 12 02:01:38.653237 containerd[1613]: time="2026-03-12T02:01:38.646078173Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 12 02:01:38.719839 containerd[1613]: time="2026-03-12T02:01:38.695448993Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 12 02:01:42.844030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2406146770.mount: Deactivated 
successfully. Mar 12 02:01:44.727190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Mar 12 02:01:44.791957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:01:47.239326 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:01:47.303129 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 02:01:48.578260 kubelet[2377]: E0312 02:01:48.577393 2377 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 02:01:48.606118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 02:01:48.607941 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 02:01:48.610436 systemd[1]: kubelet.service: Consumed 1.590s CPU time, 110.5M memory peak. Mar 12 02:01:58.744331 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Mar 12 02:01:58.818979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 12 02:01:59.531004 containerd[1613]: time="2026-03-12T02:01:59.522039030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:01:59.539243 containerd[1613]: time="2026-03-12T02:01:59.539196449Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23707508" Mar 12 02:01:59.564259 containerd[1613]: time="2026-03-12T02:01:59.564188135Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:01:59.628338 containerd[1613]: time="2026-03-12T02:01:59.628262502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:01:59.630857 containerd[1613]: time="2026-03-12T02:01:59.629765413Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 20.924384158s" Mar 12 02:01:59.630857 containerd[1613]: time="2026-03-12T02:01:59.630805964Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 12 02:02:00.012028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 12 02:02:00.050197 (kubelet)[2451]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 02:02:01.439789 kubelet[2451]: E0312 02:02:01.437519 2451 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 02:02:01.501494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 02:02:01.502088 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 02:02:01.509929 systemd[1]: kubelet.service: Consumed 1.245s CPU time, 108.8M memory peak. Mar 12 02:02:11.685062 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Mar 12 02:02:11.699869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:02:14.232135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:02:14.306043 (kubelet)[2486]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 02:02:14.671948 kubelet[2486]: E0312 02:02:14.670277 2486 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 02:02:14.685568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 02:02:14.688881 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 02:02:14.689932 systemd[1]: kubelet.service: Consumed 972ms CPU time, 109.4M memory peak. 
Mar 12 02:02:16.910995 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:02:16.914010 systemd[1]: kubelet.service: Consumed 972ms CPU time, 109.4M memory peak. Mar 12 02:02:16.933323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:02:17.106998 systemd[1]: Reload requested from client PID 2503 ('systemctl') (unit session-8.scope)... Mar 12 02:02:17.107726 systemd[1]: Reloading... Mar 12 02:02:17.568361 zram_generator::config[2549]: No configuration found. Mar 12 02:02:18.650073 systemd[1]: Reloading finished in 1541 ms. Mar 12 02:02:19.057241 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:02:19.065902 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 02:02:19.066769 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:02:19.066861 systemd[1]: kubelet.service: Consumed 366ms CPU time, 98.4M memory peak. Mar 12 02:02:19.072539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:02:19.937387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:02:19.997958 (kubelet)[2599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 02:02:20.296331 kubelet[2599]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 02:02:20.296331 kubelet[2599]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 02:02:20.296331 kubelet[2599]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 02:02:20.299046 kubelet[2599]: I0312 02:02:20.296850 2599 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 02:02:21.739733 kubelet[2599]: I0312 02:02:21.737359 2599 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 12 02:02:21.739733 kubelet[2599]: I0312 02:02:21.738406 2599 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 02:02:21.743455 kubelet[2599]: I0312 02:02:21.742531 2599 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 02:02:21.977448 kubelet[2599]: I0312 02:02:21.976895 2599 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 02:02:21.981432 kubelet[2599]: E0312 02:02:21.979430 2599 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 02:02:22.083876 kubelet[2599]: I0312 02:02:22.079479 2599 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 02:02:22.122406 kubelet[2599]: I0312 02:02:22.118501 2599 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 12 02:02:22.125341 kubelet[2599]: I0312 02:02:22.121876 2599 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 02:02:22.131200 kubelet[2599]: I0312 02:02:22.124762 2599 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 02:02:22.131200 kubelet[2599]: I0312 02:02:22.127811 2599 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 02:02:22.131200 
kubelet[2599]: I0312 02:02:22.130391 2599 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 02:02:22.131200 kubelet[2599]: I0312 02:02:22.130934 2599 state_mem.go:36] "Initialized new in-memory state store" Mar 12 02:02:22.160427 kubelet[2599]: I0312 02:02:22.160002 2599 kubelet.go:480] "Attempting to sync node with API server" Mar 12 02:02:22.160427 kubelet[2599]: I0312 02:02:22.160251 2599 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 02:02:22.160427 kubelet[2599]: I0312 02:02:22.160305 2599 kubelet.go:386] "Adding apiserver pod source" Mar 12 02:02:22.166227 kubelet[2599]: I0312 02:02:22.164015 2599 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 02:02:22.178480 kubelet[2599]: E0312 02:02:22.172888 2599 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 02:02:22.178480 kubelet[2599]: E0312 02:02:22.174426 2599 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 02:02:22.195190 kubelet[2599]: I0312 02:02:22.194158 2599 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Mar 12 02:02:22.209890 kubelet[2599]: I0312 02:02:22.202498 2599 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 02:02:22.215869 kubelet[2599]: W0312 
02:02:22.212823 2599 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 12 02:02:22.258236 kubelet[2599]: I0312 02:02:22.257567 2599 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 12 02:02:22.259342 kubelet[2599]: I0312 02:02:22.259164 2599 server.go:1289] "Started kubelet" Mar 12 02:02:22.262357 kubelet[2599]: I0312 02:02:22.261326 2599 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 02:02:22.262357 kubelet[2599]: I0312 02:02:22.264346 2599 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 02:02:22.271278 kubelet[2599]: I0312 02:02:22.267493 2599 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 02:02:22.272896 kubelet[2599]: I0312 02:02:22.271939 2599 server.go:317] "Adding debug handlers to kubelet server" Mar 12 02:02:22.272896 kubelet[2599]: E0312 02:02:22.269245 2599 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189bf59f2a400275 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-12 02:02:22.257865333 +0000 UTC m=+2.223492897,LastTimestamp:2026-03-12 02:02:22.257865333 +0000 UTC m=+2.223492897,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 12 02:02:22.277757 kubelet[2599]: I0312 02:02:22.277381 2599 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 02:02:22.329844 
kubelet[2599]: I0312 02:02:22.278534 2599 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 02:02:22.354422 kubelet[2599]: I0312 02:02:22.328309 2599 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 12 02:02:22.382470 kubelet[2599]: I0312 02:02:22.381370 2599 factory.go:223] Registration of the systemd container factory successfully Mar 12 02:02:22.382470 kubelet[2599]: I0312 02:02:22.382009 2599 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 02:02:22.386235 kubelet[2599]: E0312 02:02:22.331350 2599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="200ms" Mar 12 02:02:22.386235 kubelet[2599]: E0312 02:02:22.385921 2599 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 02:02:22.386235 kubelet[2599]: E0312 02:02:22.328451 2599 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 02:02:22.386533 kubelet[2599]: I0312 02:02:22.328412 2599 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 12 02:02:22.387559 kubelet[2599]: I0312 02:02:22.386985 2599 reconciler.go:26] "Reconciler: start to sync state" Mar 12 02:02:22.405800 kubelet[2599]: E0312 02:02:22.405150 2599 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 02:02:22.430537 kubelet[2599]: I0312 02:02:22.426970 2599 factory.go:223] Registration of the containerd container factory successfully Mar 12 02:02:22.465295 kubelet[2599]: I0312 02:02:22.463356 2599 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 12 02:02:22.490518 kubelet[2599]: E0312 02:02:22.489837 2599 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 02:02:22.591936 kubelet[2599]: E0312 02:02:22.591454 2599 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 02:02:22.602981 kubelet[2599]: E0312 02:02:22.601160 2599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="400ms" Mar 12 02:02:22.635389 kubelet[2599]: I0312 02:02:22.634159 2599 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 02:02:22.635389 kubelet[2599]: I0312 02:02:22.634261 2599 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 02:02:22.635389 kubelet[2599]: I0312 02:02:22.634295 2599 state_mem.go:36] "Initialized new in-memory state store" Mar 12 02:02:22.683345 kubelet[2599]: I0312 02:02:22.683155 2599 policy_none.go:49] "None policy: Start" Mar 12 02:02:22.683505 kubelet[2599]: I0312 02:02:22.683424 2599 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 12 02:02:22.684039 kubelet[2599]: I0312 02:02:22.683865 2599 state_mem.go:35] "Initializing new in-memory state store" Mar 12 02:02:22.685549 kubelet[2599]: I0312 02:02:22.685380 2599 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Mar 12 02:02:22.685800 kubelet[2599]: I0312 02:02:22.685555 2599 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 12 02:02:22.685800 kubelet[2599]: I0312 02:02:22.685744 2599 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 12 02:02:22.685800 kubelet[2599]: I0312 02:02:22.685759 2599 kubelet.go:2436] "Starting kubelet main sync loop" Mar 12 02:02:22.686363 kubelet[2599]: E0312 02:02:22.685854 2599 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 02:02:22.696772 kubelet[2599]: E0312 02:02:22.692719 2599 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 02:02:22.703517 kubelet[2599]: E0312 02:02:22.701549 2599 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 02:02:22.753737 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 12 02:02:22.790936 kubelet[2599]: E0312 02:02:22.790235 2599 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 12 02:02:22.800153 kubelet[2599]: E0312 02:02:22.799562 2599 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 02:02:22.808327 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 12 02:02:22.830041 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 12 02:02:22.867827 kubelet[2599]: E0312 02:02:22.867330 2599 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 02:02:22.874371 kubelet[2599]: I0312 02:02:22.868537 2599 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 02:02:22.874371 kubelet[2599]: I0312 02:02:22.869444 2599 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 02:02:22.874371 kubelet[2599]: I0312 02:02:22.870465 2599 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 02:02:22.882508 kubelet[2599]: E0312 02:02:22.882006 2599 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 02:02:22.882936 kubelet[2599]: E0312 02:02:22.882520 2599 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 12 02:02:22.982947 kubelet[2599]: I0312 02:02:22.981952 2599 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 02:02:22.982947 kubelet[2599]: E0312 02:02:22.982877 2599 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Mar 12 02:02:23.006495 kubelet[2599]: E0312 02:02:23.006197 2599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="800ms" Mar 12 02:02:23.101381 kubelet[2599]: I0312 02:02:23.096972 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/f9b2c318e2916da5cdd0bf868525d4ad-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f9b2c318e2916da5cdd0bf868525d4ad\") " pod="kube-system/kube-apiserver-localhost" Mar 12 02:02:23.101381 kubelet[2599]: I0312 02:02:23.097193 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 02:02:23.101381 kubelet[2599]: I0312 02:02:23.097224 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 02:02:23.101381 kubelet[2599]: I0312 02:02:23.097245 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 02:02:23.101381 kubelet[2599]: I0312 02:02:23.097271 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 02:02:23.101895 kubelet[2599]: I0312 02:02:23.097495 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 12 02:02:23.101895 kubelet[2599]: I0312 02:02:23.097524 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9b2c318e2916da5cdd0bf868525d4ad-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f9b2c318e2916da5cdd0bf868525d4ad\") " pod="kube-system/kube-apiserver-localhost" Mar 12 02:02:23.130906 kubelet[2599]: I0312 02:02:23.103765 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 02:02:23.133467 kubelet[2599]: I0312 02:02:23.131328 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9b2c318e2916da5cdd0bf868525d4ad-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f9b2c318e2916da5cdd0bf868525d4ad\") " pod="kube-system/kube-apiserver-localhost" Mar 12 02:02:23.184040 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice. Mar 12 02:02:23.185839 systemd[1]: Created slice kubepods-burstable-podf9b2c318e2916da5cdd0bf868525d4ad.slice - libcontainer container kubepods-burstable-podf9b2c318e2916da5cdd0bf868525d4ad.slice. 
Mar 12 02:02:23.198364 kubelet[2599]: I0312 02:02:23.198294 2599 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 02:02:23.221889 kubelet[2599]: E0312 02:02:23.214507 2599 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Mar 12 02:02:23.246521 kubelet[2599]: E0312 02:02:23.243545 2599 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 02:02:23.256215 kubelet[2599]: E0312 02:02:23.252432 2599 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 02:02:23.263399 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice. 
Mar 12 02:02:23.287445 kubelet[2599]: E0312 02:02:23.285898 2599 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 02:02:23.292407 kubelet[2599]: E0312 02:02:23.291232 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:23.295709 containerd[1613]: time="2026-03-12T02:02:23.294941629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 12 02:02:23.550198 kubelet[2599]: E0312 02:02:23.547826 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:23.550350 containerd[1613]: time="2026-03-12T02:02:23.550153103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 12 02:02:23.558855 kubelet[2599]: E0312 02:02:23.558011 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:23.563151 containerd[1613]: time="2026-03-12T02:02:23.563011104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f9b2c318e2916da5cdd0bf868525d4ad,Namespace:kube-system,Attempt:0,}" Mar 12 02:02:23.563367 containerd[1613]: time="2026-03-12T02:02:23.563187815Z" level=info msg="connecting to shim a6098b94f1a7eadea1395e97421f12c39acea61c965817b4f2366f7a7f926405" address="unix:///run/containerd/s/d2fdb522566ad107655458427599cd4d50ce7f91008909a20bf9af0a29a4896c" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:02:23.585380 kubelet[2599]: 
E0312 02:02:23.584840 2599 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 02:02:23.636321 kubelet[2599]: E0312 02:02:23.636175 2599 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 02:02:23.645799 kubelet[2599]: I0312 02:02:23.644830 2599 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 02:02:23.654971 kubelet[2599]: E0312 02:02:23.652442 2599 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Mar 12 02:02:23.674314 kubelet[2599]: E0312 02:02:23.672852 2599 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 02:02:23.746557 kubelet[2599]: E0312 02:02:23.746381 2599 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 02:02:23.812781 kubelet[2599]: E0312 02:02:23.808851 2599 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="1.6s" Mar 12 02:02:23.867829 containerd[1613]: time="2026-03-12T02:02:23.867764574Z" level=info msg="connecting to shim 94e03b77954f3d63403e7018e79e6a9a9f56d01cf4e6df892629fcc53c3c34d5" address="unix:///run/containerd/s/9e0ec35891eed3399cf93a4554fac23dc348508b312db97a662cd42b8c614eac" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:02:23.981740 containerd[1613]: time="2026-03-12T02:02:23.981500828Z" level=info msg="connecting to shim 60463a50810cd14bf1ba83d258d5991db734ce0bcac0fce0b0195ec58472a16a" address="unix:///run/containerd/s/2b116200580f01b5c504e34b77c6795f9f8d5033770b169b9ab28e4ee075fc04" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:02:24.038499 systemd[1]: Started cri-containerd-a6098b94f1a7eadea1395e97421f12c39acea61c965817b4f2366f7a7f926405.scope - libcontainer container a6098b94f1a7eadea1395e97421f12c39acea61c965817b4f2366f7a7f926405. Mar 12 02:02:24.106364 kubelet[2599]: E0312 02:02:24.105143 2599 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 02:02:24.107949 systemd[1]: Started cri-containerd-94e03b77954f3d63403e7018e79e6a9a9f56d01cf4e6df892629fcc53c3c34d5.scope - libcontainer container 94e03b77954f3d63403e7018e79e6a9a9f56d01cf4e6df892629fcc53c3c34d5. Mar 12 02:02:24.227482 systemd[1]: Started cri-containerd-60463a50810cd14bf1ba83d258d5991db734ce0bcac0fce0b0195ec58472a16a.scope - libcontainer container 60463a50810cd14bf1ba83d258d5991db734ce0bcac0fce0b0195ec58472a16a. 
Mar 12 02:02:24.466517 kubelet[2599]: I0312 02:02:24.466458 2599 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 02:02:24.558954 kubelet[2599]: E0312 02:02:24.558335 2599 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Mar 12 02:02:24.698761 containerd[1613]: time="2026-03-12T02:02:24.698181443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"94e03b77954f3d63403e7018e79e6a9a9f56d01cf4e6df892629fcc53c3c34d5\"" Mar 12 02:02:24.715731 containerd[1613]: time="2026-03-12T02:02:24.714486246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6098b94f1a7eadea1395e97421f12c39acea61c965817b4f2366f7a7f926405\"" Mar 12 02:02:24.720349 kubelet[2599]: E0312 02:02:24.718141 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:24.720349 kubelet[2599]: E0312 02:02:24.719902 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:24.942939 containerd[1613]: time="2026-03-12T02:02:24.925827077Z" level=info msg="CreateContainer within sandbox \"94e03b77954f3d63403e7018e79e6a9a9f56d01cf4e6df892629fcc53c3c34d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 12 02:02:24.960242 containerd[1613]: time="2026-03-12T02:02:24.954242174Z" level=info msg="CreateContainer within sandbox \"a6098b94f1a7eadea1395e97421f12c39acea61c965817b4f2366f7a7f926405\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 12 02:02:25.290488 containerd[1613]: time="2026-03-12T02:02:25.289951203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f9b2c318e2916da5cdd0bf868525d4ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"60463a50810cd14bf1ba83d258d5991db734ce0bcac0fce0b0195ec58472a16a\"" Mar 12 02:02:25.316442 kubelet[2599]: E0312 02:02:25.316399 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:25.330364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1302236573.mount: Deactivated successfully. Mar 12 02:02:25.360996 containerd[1613]: time="2026-03-12T02:02:25.359353042Z" level=info msg="Container b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:02:25.491952 containerd[1613]: time="2026-03-12T02:02:25.398903630Z" level=info msg="CreateContainer within sandbox \"60463a50810cd14bf1ba83d258d5991db734ce0bcac0fce0b0195ec58472a16a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 12 02:02:25.536721 kubelet[2599]: E0312 02:02:25.534460 2599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="3.2s" Mar 12 02:02:25.564172 containerd[1613]: time="2026-03-12T02:02:25.549986906Z" level=info msg="Container f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:02:25.584756 kubelet[2599]: E0312 02:02:25.578737 2599 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial 
tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 02:02:25.607872 containerd[1613]: time="2026-03-12T02:02:25.607529843Z" level=info msg="CreateContainer within sandbox \"94e03b77954f3d63403e7018e79e6a9a9f56d01cf4e6df892629fcc53c3c34d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0\"" Mar 12 02:02:25.664168 containerd[1613]: time="2026-03-12T02:02:25.663267631Z" level=info msg="StartContainer for \"b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0\"" Mar 12 02:02:25.725137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount263004486.mount: Deactivated successfully. Mar 12 02:02:25.751930 containerd[1613]: time="2026-03-12T02:02:25.749831858Z" level=info msg="connecting to shim b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0" address="unix:///run/containerd/s/9e0ec35891eed3399cf93a4554fac23dc348508b312db97a662cd42b8c614eac" protocol=ttrpc version=3 Mar 12 02:02:25.886251 kubelet[2599]: E0312 02:02:25.841867 2599 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 02:02:25.889977 kubelet[2599]: E0312 02:02:25.889920 2599 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 02:02:25.905920 containerd[1613]: time="2026-03-12T02:02:25.902014325Z" level=info 
msg="CreateContainer within sandbox \"a6098b94f1a7eadea1395e97421f12c39acea61c965817b4f2366f7a7f926405\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8\"" Mar 12 02:02:25.943466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1616076038.mount: Deactivated successfully. Mar 12 02:02:25.963446 containerd[1613]: time="2026-03-12T02:02:25.963354569Z" level=info msg="Container 871b243ed1c4ed9f86c50e24f2cc5562bbac04e797b8b4c1b34b8ba495bbd81e: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:02:26.007272 containerd[1613]: time="2026-03-12T02:02:25.965974181Z" level=info msg="StartContainer for \"f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8\"" Mar 12 02:02:26.057366 containerd[1613]: time="2026-03-12T02:02:26.049006987Z" level=info msg="connecting to shim f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8" address="unix:///run/containerd/s/d2fdb522566ad107655458427599cd4d50ce7f91008909a20bf9af0a29a4896c" protocol=ttrpc version=3 Mar 12 02:02:26.125173 kubelet[2599]: E0312 02:02:26.122222 2599 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 02:02:26.135805 containerd[1613]: time="2026-03-12T02:02:26.134450926Z" level=info msg="CreateContainer within sandbox \"60463a50810cd14bf1ba83d258d5991db734ce0bcac0fce0b0195ec58472a16a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"871b243ed1c4ed9f86c50e24f2cc5562bbac04e797b8b4c1b34b8ba495bbd81e\"" Mar 12 02:02:26.140441 containerd[1613]: time="2026-03-12T02:02:26.138166869Z" level=info msg="StartContainer for \"871b243ed1c4ed9f86c50e24f2cc5562bbac04e797b8b4c1b34b8ba495bbd81e\"" 
Mar 12 02:02:26.158479 containerd[1613]: time="2026-03-12T02:02:26.155452403Z" level=info msg="connecting to shim 871b243ed1c4ed9f86c50e24f2cc5562bbac04e797b8b4c1b34b8ba495bbd81e" address="unix:///run/containerd/s/2b116200580f01b5c504e34b77c6795f9f8d5033770b169b9ab28e4ee075fc04" protocol=ttrpc version=3 Mar 12 02:02:26.322551 systemd[1]: Started cri-containerd-b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0.scope - libcontainer container b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0. Mar 12 02:02:26.598995 kubelet[2599]: I0312 02:02:26.597407 2599 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 02:02:26.620918 systemd[1]: Started cri-containerd-f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8.scope - libcontainer container f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8. Mar 12 02:02:26.696709 kubelet[2599]: E0312 02:02:26.694829 2599 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Mar 12 02:02:26.727955 systemd[1]: Started cri-containerd-871b243ed1c4ed9f86c50e24f2cc5562bbac04e797b8b4c1b34b8ba495bbd81e.scope - libcontainer container 871b243ed1c4ed9f86c50e24f2cc5562bbac04e797b8b4c1b34b8ba495bbd81e. 
Mar 12 02:02:27.998094 containerd[1613]: time="2026-03-12T02:02:27.983461337Z" level=info msg="StartContainer for \"b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0\" returns successfully" Mar 12 02:02:28.161284 containerd[1613]: time="2026-03-12T02:02:28.154846933Z" level=info msg="StartContainer for \"f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8\" returns successfully" Mar 12 02:02:28.161284 containerd[1613]: time="2026-03-12T02:02:28.157560763Z" level=info msg="StartContainer for \"871b243ed1c4ed9f86c50e24f2cc5562bbac04e797b8b4c1b34b8ba495bbd81e\" returns successfully" Mar 12 02:02:28.387364 kubelet[2599]: E0312 02:02:28.380820 2599 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 02:02:28.786770 kubelet[2599]: E0312 02:02:28.786418 2599 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 02:02:28.786770 kubelet[2599]: E0312 02:02:28.789413 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:28.810173 kubelet[2599]: E0312 02:02:28.809492 2599 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 02:02:28.812420 kubelet[2599]: E0312 02:02:28.812256 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:28.831737 kubelet[2599]: E0312 02:02:28.826317 
2599 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 02:02:28.831737 kubelet[2599]: E0312 02:02:28.826498 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:29.914977 kubelet[2599]: E0312 02:02:29.914322 2599 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 02:02:29.914977 kubelet[2599]: I0312 02:02:29.914893 2599 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 02:02:29.929755 kubelet[2599]: E0312 02:02:29.928945 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:29.930303 kubelet[2599]: E0312 02:02:29.930182 2599 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 02:02:29.930907 kubelet[2599]: E0312 02:02:29.930799 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:29.935832 kubelet[2599]: E0312 02:02:29.935559 2599 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 02:02:29.936286 kubelet[2599]: E0312 02:02:29.935969 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:31.008324 kubelet[2599]: E0312 02:02:31.002179 2599 kubelet.go:3305] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 02:02:31.112892 kubelet[2599]: E0312 02:02:31.030742 2599 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 02:02:31.112892 kubelet[2599]: E0312 02:02:31.031904 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:31.112892 kubelet[2599]: E0312 02:02:31.033210 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:32.917059 kubelet[2599]: E0312 02:02:32.899931 2599 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 12 02:02:34.259064 kubelet[2599]: E0312 02:02:34.234505 2599 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 02:02:34.340358 kubelet[2599]: E0312 02:02:34.339077 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:38.414291 kubelet[2599]: E0312 02:02:38.411035 2599 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 02:02:38.414291 kubelet[2599]: E0312 02:02:38.412744 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:38.536875 kubelet[2599]: E0312 02:02:38.534451 2599 nodelease.go:49] "Failed to get node 
when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 12 02:02:38.676095 kubelet[2599]: E0312 02:02:38.674888 2599 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189bf59f2a400275 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-12 02:02:22.257865333 +0000 UTC m=+2.223492897,LastTimestamp:2026-03-12 02:02:22.257865333 +0000 UTC m=+2.223492897,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 12 02:02:38.714238 kubelet[2599]: I0312 02:02:38.714120 2599 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 12 02:02:38.714238 kubelet[2599]: E0312 02:02:38.714239 2599 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 12 02:02:38.732125 kubelet[2599]: I0312 02:02:38.730816 2599 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 02:02:38.793899 kubelet[2599]: E0312 02:02:38.788480 2599 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189bf59f32b5378a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-12 02:02:22.399764362 +0000 UTC 
m=+2.365391906,LastTimestamp:2026-03-12 02:02:22.399764362 +0000 UTC m=+2.365391906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 12 02:02:38.829363 kubelet[2599]: E0312 02:02:38.818310 2599 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 12 02:02:38.829363 kubelet[2599]: I0312 02:02:38.818442 2599 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 02:02:38.855869 kubelet[2599]: E0312 02:02:38.853340 2599 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 12 02:02:38.855869 kubelet[2599]: I0312 02:02:38.853465 2599 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 02:02:39.010085 kubelet[2599]: E0312 02:02:39.008329 2599 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 12 02:02:39.234863 kubelet[2599]: I0312 02:02:39.232371 2599 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 02:02:39.244693 kubelet[2599]: E0312 02:02:39.243290 2599 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 12 02:02:39.244693 kubelet[2599]: E0312 02:02:39.243544 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:39.276792 kubelet[2599]: I0312 02:02:39.262151 2599 apiserver.go:52] "Watching apiserver" Mar 12 02:02:39.293521 kubelet[2599]: I0312 02:02:39.287860 2599 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 12 02:02:39.368776 kubelet[2599]: I0312 02:02:39.368409 2599 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 02:02:39.414040 kubelet[2599]: E0312 02:02:39.410891 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:40.259452 kubelet[2599]: E0312 02:02:40.252781 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:42.816472 kubelet[2599]: I0312 02:02:42.815543 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.8154123110000002 podStartE2EDuration="3.815412311s" podCreationTimestamp="2026-03-12 02:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:02:42.810149446 +0000 UTC m=+22.775777009" watchObservedRunningTime="2026-03-12 02:02:42.815412311 +0000 UTC m=+22.781039855" Mar 12 02:02:48.834003 systemd[1]: Reload requested from client PID 2890 ('systemctl') (unit session-8.scope)... Mar 12 02:02:48.834034 systemd[1]: Reloading... 
Mar 12 02:02:49.436271 kubelet[2599]: E0312 02:02:49.436228 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:49.657347 zram_generator::config[2936]: No configuration found. Mar 12 02:02:52.065239 systemd[1]: Reloading finished in 3230 ms. Mar 12 02:02:52.583067 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:02:52.717299 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 02:02:52.724425 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:02:52.724517 systemd[1]: kubelet.service: Consumed 6.719s CPU time, 136.9M memory peak. Mar 12 02:02:52.757408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:02:54.114183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:02:54.162945 (kubelet)[2981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 02:02:54.904180 kubelet[2981]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 02:02:54.904180 kubelet[2981]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 02:02:54.904180 kubelet[2981]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 12 02:02:54.904180 kubelet[2981]: I0312 02:02:54.900464 2981 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 02:02:55.041948 kubelet[2981]: I0312 02:02:55.041396 2981 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 12 02:02:55.041948 kubelet[2981]: I0312 02:02:55.041537 2981 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 02:02:55.042203 kubelet[2981]: I0312 02:02:55.042145 2981 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 02:02:55.064987 kubelet[2981]: I0312 02:02:55.062006 2981 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 12 02:02:55.107506 kubelet[2981]: I0312 02:02:55.102217 2981 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 02:02:55.736744 kubelet[2981]: I0312 02:02:55.736229 2981 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 02:02:55.916956 kubelet[2981]: I0312 02:02:55.916389 2981 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 12 02:02:55.925379 kubelet[2981]: I0312 02:02:55.917530 2981 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 02:02:55.925379 kubelet[2981]: I0312 02:02:55.918164 2981 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 02:02:55.925379 kubelet[2981]: I0312 02:02:55.918372 2981 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 02:02:55.925379 
kubelet[2981]: I0312 02:02:55.918388 2981 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 02:02:55.925379 kubelet[2981]: I0312 02:02:55.918455 2981 state_mem.go:36] "Initialized new in-memory state store" Mar 12 02:02:55.926169 kubelet[2981]: I0312 02:02:55.925234 2981 kubelet.go:480] "Attempting to sync node with API server" Mar 12 02:02:55.926169 kubelet[2981]: I0312 02:02:55.925260 2981 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 02:02:55.934020 kubelet[2981]: I0312 02:02:55.929318 2981 kubelet.go:386] "Adding apiserver pod source" Mar 12 02:02:55.934020 kubelet[2981]: I0312 02:02:55.929452 2981 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 02:02:55.983798 kubelet[2981]: I0312 02:02:55.983310 2981 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Mar 12 02:02:56.000408 kubelet[2981]: I0312 02:02:55.996426 2981 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 02:02:56.202945 kubelet[2981]: I0312 02:02:56.197521 2981 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 12 02:02:56.236920 kubelet[2981]: I0312 02:02:56.236307 2981 server.go:1289] "Started kubelet" Mar 12 02:02:56.242490 kubelet[2981]: I0312 02:02:56.238782 2981 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 02:02:56.290107 kubelet[2981]: I0312 02:02:56.263371 2981 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 02:02:56.295419 kubelet[2981]: I0312 02:02:56.293787 2981 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 02:02:56.352974 kubelet[2981]: I0312 02:02:56.352096 2981 server.go:317] "Adding debug handlers to kubelet server" Mar 12 02:02:56.356908 
kubelet[2981]: I0312 02:02:56.356431 2981 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 02:02:56.357016 kubelet[2981]: I0312 02:02:56.356926 2981 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 02:02:56.383201 kubelet[2981]: I0312 02:02:56.382221 2981 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 12 02:02:56.386799 kubelet[2981]: I0312 02:02:56.386080 2981 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 12 02:02:56.394756 kubelet[2981]: I0312 02:02:56.394134 2981 reconciler.go:26] "Reconciler: start to sync state" Mar 12 02:02:56.396078 kubelet[2981]: I0312 02:02:56.395787 2981 factory.go:223] Registration of the systemd container factory successfully Mar 12 02:02:56.396450 kubelet[2981]: I0312 02:02:56.396326 2981 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 02:02:56.413774 kubelet[2981]: E0312 02:02:56.413183 2981 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 02:02:56.414398 kubelet[2981]: I0312 02:02:56.414280 2981 factory.go:223] Registration of the containerd container factory successfully Mar 12 02:02:56.907344 kubelet[2981]: I0312 02:02:56.907205 2981 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 12 02:02:56.936173 kubelet[2981]: I0312 02:02:56.935545 2981 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 12 02:02:56.951134 kubelet[2981]: I0312 02:02:56.939314 2981 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 12 02:02:56.951134 kubelet[2981]: I0312 02:02:56.939359 2981 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 12 02:02:56.951134 kubelet[2981]: I0312 02:02:56.939373 2981 kubelet.go:2436] "Starting kubelet main sync loop" Mar 12 02:02:56.951134 kubelet[2981]: E0312 02:02:56.939443 2981 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 02:02:56.951134 kubelet[2981]: I0312 02:02:56.951104 2981 apiserver.go:52] "Watching apiserver" Mar 12 02:02:57.048152 kubelet[2981]: E0312 02:02:57.046964 2981 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 12 02:02:57.165207 kubelet[2981]: I0312 02:02:57.156518 2981 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 02:02:57.165207 kubelet[2981]: I0312 02:02:57.157195 2981 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 02:02:57.165207 kubelet[2981]: I0312 02:02:57.157227 2981 state_mem.go:36] "Initialized new in-memory state store" Mar 12 02:02:57.165207 kubelet[2981]: I0312 02:02:57.157923 2981 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 12 02:02:57.165207 kubelet[2981]: I0312 02:02:57.157942 2981 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 12 02:02:57.165207 kubelet[2981]: I0312 02:02:57.157969 2981 policy_none.go:49] "None policy: Start" Mar 12 02:02:57.165207 kubelet[2981]: I0312 02:02:57.157984 2981 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 12 02:02:57.165207 kubelet[2981]: I0312 02:02:57.157999 2981 state_mem.go:35] "Initializing new in-memory state store" Mar 12 02:02:57.165207 kubelet[2981]: I0312 02:02:57.158126 2981 
state_mem.go:75] "Updated machine memory state" Mar 12 02:02:57.249455 kubelet[2981]: E0312 02:02:57.248350 2981 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 12 02:02:57.380407 kubelet[2981]: E0312 02:02:57.380370 2981 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 02:02:57.385047 kubelet[2981]: I0312 02:02:57.383007 2981 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 02:02:57.385047 kubelet[2981]: I0312 02:02:57.383030 2981 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 02:02:57.385047 kubelet[2981]: I0312 02:02:57.383453 2981 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 02:02:57.385047 kubelet[2981]: I0312 02:02:57.384258 2981 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 02:02:57.407749 containerd[1613]: time="2026-03-12T02:02:57.406006990Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 12 02:02:57.426942 kubelet[2981]: I0312 02:02:57.426481 2981 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 02:02:57.430987 kubelet[2981]: E0312 02:02:57.430955 2981 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 12 02:02:57.687986 kubelet[2981]: I0312 02:02:57.683536 2981 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 02:02:57.692453 kubelet[2981]: I0312 02:02:57.692428 2981 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 02:02:57.702521 kubelet[2981]: I0312 02:02:57.697041 2981 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 02:02:57.754446 kubelet[2981]: I0312 02:02:57.754251 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9b2c318e2916da5cdd0bf868525d4ad-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f9b2c318e2916da5cdd0bf868525d4ad\") " pod="kube-system/kube-apiserver-localhost" Mar 12 02:02:57.754446 kubelet[2981]: I0312 02:02:57.754325 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9b2c318e2916da5cdd0bf868525d4ad-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f9b2c318e2916da5cdd0bf868525d4ad\") " pod="kube-system/kube-apiserver-localhost" Mar 12 02:02:57.754446 kubelet[2981]: I0312 02:02:57.754364 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9b2c318e2916da5cdd0bf868525d4ad-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f9b2c318e2916da5cdd0bf868525d4ad\") " pod="kube-system/kube-apiserver-localhost" Mar 12 02:02:57.838101 kubelet[2981]: I0312 02:02:57.837250 2981 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 12 02:02:57.868215 kubelet[2981]: I0312 02:02:57.867226 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1f449f1-b1d9-4596-98a2-386d02db86c2-xtables-lock\") pod \"kube-proxy-2xdhh\" (UID: \"b1f449f1-b1d9-4596-98a2-386d02db86c2\") " pod="kube-system/kube-proxy-2xdhh" Mar 12 02:02:57.868215 kubelet[2981]: I0312 02:02:57.867387 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1f449f1-b1d9-4596-98a2-386d02db86c2-lib-modules\") pod \"kube-proxy-2xdhh\" (UID: \"b1f449f1-b1d9-4596-98a2-386d02db86c2\") " pod="kube-system/kube-proxy-2xdhh" Mar 12 02:02:57.868215 kubelet[2981]: I0312 02:02:57.867426 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n9c8\" (UniqueName: \"kubernetes.io/projected/b1f449f1-b1d9-4596-98a2-386d02db86c2-kube-api-access-7n9c8\") pod \"kube-proxy-2xdhh\" (UID: \"b1f449f1-b1d9-4596-98a2-386d02db86c2\") " pod="kube-system/kube-proxy-2xdhh" Mar 12 02:02:57.868215 kubelet[2981]: I0312 02:02:57.867458 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 02:02:57.876166 kubelet[2981]: I0312 02:02:57.869502 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 02:02:57.876166 kubelet[2981]: I0312 02:02:57.869975 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 12 02:02:57.876166 kubelet[2981]: I0312 02:02:57.870100 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b1f449f1-b1d9-4596-98a2-386d02db86c2-kube-proxy\") pod \"kube-proxy-2xdhh\" (UID: \"b1f449f1-b1d9-4596-98a2-386d02db86c2\") " pod="kube-system/kube-proxy-2xdhh" Mar 12 02:02:57.877480 kubelet[2981]: I0312 02:02:57.877126 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 02:02:57.877542 kubelet[2981]: I0312 02:02:57.877520 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 02:02:57.877767 kubelet[2981]: I0312 02:02:57.877555 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 02:02:57.937003 kubelet[2981]: I0312 02:02:57.879515 2981 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 12 02:02:57.937003 kubelet[2981]: I0312 
02:02:57.879997 2981 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 12 02:02:57.884530 systemd[1]: Created slice kubepods-besteffort-podb1f449f1_b1d9_4596_98a2_386d02db86c2.slice - libcontainer container kubepods-besteffort-podb1f449f1_b1d9_4596_98a2_386d02db86c2.slice. Mar 12 02:02:58.018440 kubelet[2981]: E0312 02:02:58.017473 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:58.400143 kubelet[2981]: E0312 02:02:58.220321 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:58.524902 kubelet[2981]: E0312 02:02:58.523789 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:58.641356 kubelet[2981]: E0312 02:02:58.641149 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:58.663144 containerd[1613]: time="2026-03-12T02:02:58.653780738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2xdhh,Uid:b1f449f1-b1d9-4596-98a2-386d02db86c2,Namespace:kube-system,Attempt:0,}" Mar 12 02:02:59.021261 kubelet[2981]: I0312 02:02:59.019166 2981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.019043824 podStartE2EDuration="2.019043824s" podCreationTimestamp="2026-03-12 02:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:02:59.016537953 +0000 UTC m=+4.827140987" watchObservedRunningTime="2026-03-12 
02:02:59.019043824 +0000 UTC m=+4.829646808" Mar 12 02:02:59.120319 kubelet[2981]: E0312 02:02:59.037430 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:02:59.556270 kubelet[2981]: I0312 02:02:59.556096 2981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.556076273 podStartE2EDuration="2.556076273s" podCreationTimestamp="2026-03-12 02:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:02:59.330026302 +0000 UTC m=+5.140629286" watchObservedRunningTime="2026-03-12 02:02:59.556076273 +0000 UTC m=+5.366679266" Mar 12 02:03:00.067359 kubelet[2981]: E0312 02:03:00.064357 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:00.088201 kubelet[2981]: E0312 02:03:00.087959 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:00.088201 kubelet[2981]: E0312 02:03:00.088095 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:00.311315 containerd[1613]: time="2026-03-12T02:03:00.309973905Z" level=info msg="connecting to shim 062f1329590c460eae2de863a87934d8a6e674df95bec9bffdde83ff4377719b" address="unix:///run/containerd/s/2bccbd8016466ddc86ad2042ec58b7a47ba1244daab1dbe45a66d56e40d2a188" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:03:01.090083 kubelet[2981]: E0312 02:03:01.086284 2981 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:01.098753 kubelet[2981]: E0312 02:03:01.098528 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:01.725329 systemd[1]: Started cri-containerd-062f1329590c460eae2de863a87934d8a6e674df95bec9bffdde83ff4377719b.scope - libcontainer container 062f1329590c460eae2de863a87934d8a6e674df95bec9bffdde83ff4377719b. Mar 12 02:03:02.781162 containerd[1613]: time="2026-03-12T02:03:02.780157009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2xdhh,Uid:b1f449f1-b1d9-4596-98a2-386d02db86c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"062f1329590c460eae2de863a87934d8a6e674df95bec9bffdde83ff4377719b\"" Mar 12 02:03:02.790475 kubelet[2981]: E0312 02:03:02.785344 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:02.809971 containerd[1613]: time="2026-03-12T02:03:02.807718471Z" level=info msg="CreateContainer within sandbox \"062f1329590c460eae2de863a87934d8a6e674df95bec9bffdde83ff4377719b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 12 02:03:03.230281 containerd[1613]: time="2026-03-12T02:03:03.224022912Z" level=info msg="Container 5c817250469d2ed14f9c5588e41c9c2251cf48ef614be23f3a82af53043e577d: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:03:03.248562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount331977703.mount: Deactivated successfully. 
Mar 12 02:03:03.309650 containerd[1613]: time="2026-03-12T02:03:03.308315971Z" level=info msg="CreateContainer within sandbox \"062f1329590c460eae2de863a87934d8a6e674df95bec9bffdde83ff4377719b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5c817250469d2ed14f9c5588e41c9c2251cf48ef614be23f3a82af53043e577d\"" Mar 12 02:03:03.401515 containerd[1613]: time="2026-03-12T02:03:03.398115253Z" level=info msg="StartContainer for \"5c817250469d2ed14f9c5588e41c9c2251cf48ef614be23f3a82af53043e577d\"" Mar 12 02:03:03.457256 containerd[1613]: time="2026-03-12T02:03:03.444736355Z" level=info msg="connecting to shim 5c817250469d2ed14f9c5588e41c9c2251cf48ef614be23f3a82af53043e577d" address="unix:///run/containerd/s/2bccbd8016466ddc86ad2042ec58b7a47ba1244daab1dbe45a66d56e40d2a188" protocol=ttrpc version=3 Mar 12 02:03:03.716186 systemd[1]: Started cri-containerd-5c817250469d2ed14f9c5588e41c9c2251cf48ef614be23f3a82af53043e577d.scope - libcontainer container 5c817250469d2ed14f9c5588e41c9c2251cf48ef614be23f3a82af53043e577d. 
Mar 12 02:03:04.412182 kubelet[2981]: E0312 02:03:04.412091 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:04.592125 kubelet[2981]: E0312 02:03:04.590062 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:05.193924 containerd[1613]: time="2026-03-12T02:03:05.193168759Z" level=info msg="StartContainer for \"5c817250469d2ed14f9c5588e41c9c2251cf48ef614be23f3a82af53043e577d\" returns successfully" Mar 12 02:03:05.638554 kubelet[2981]: E0312 02:03:05.635156 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:05.733770 kubelet[2981]: I0312 02:03:05.732218 2981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2xdhh" podStartSLOduration=9.732196277 podStartE2EDuration="9.732196277s" podCreationTimestamp="2026-03-12 02:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:03:05.728027746 +0000 UTC m=+11.538630759" watchObservedRunningTime="2026-03-12 02:03:05.732196277 +0000 UTC m=+11.542799270" Mar 12 02:03:06.352494 systemd[1]: Created slice kubepods-burstable-pod71e234d3_42ce_4288_b45d_54487a188b83.slice - libcontainer container kubepods-burstable-pod71e234d3_42ce_4288_b45d_54487a188b83.slice. 
Mar 12 02:03:06.373474 kubelet[2981]: I0312 02:03:06.357872 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/71e234d3-42ce-4288-b45d-54487a188b83-run\") pod \"kube-flannel-ds-n6z8r\" (UID: \"71e234d3-42ce-4288-b45d-54487a188b83\") " pod="kube-flannel/kube-flannel-ds-n6z8r" Mar 12 02:03:06.373474 kubelet[2981]: I0312 02:03:06.357925 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txsbl\" (UniqueName: \"kubernetes.io/projected/71e234d3-42ce-4288-b45d-54487a188b83-kube-api-access-txsbl\") pod \"kube-flannel-ds-n6z8r\" (UID: \"71e234d3-42ce-4288-b45d-54487a188b83\") " pod="kube-flannel/kube-flannel-ds-n6z8r" Mar 12 02:03:06.373474 kubelet[2981]: I0312 02:03:06.357958 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/71e234d3-42ce-4288-b45d-54487a188b83-cni-plugin\") pod \"kube-flannel-ds-n6z8r\" (UID: \"71e234d3-42ce-4288-b45d-54487a188b83\") " pod="kube-flannel/kube-flannel-ds-n6z8r" Mar 12 02:03:06.373474 kubelet[2981]: I0312 02:03:06.357989 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/71e234d3-42ce-4288-b45d-54487a188b83-cni\") pod \"kube-flannel-ds-n6z8r\" (UID: \"71e234d3-42ce-4288-b45d-54487a188b83\") " pod="kube-flannel/kube-flannel-ds-n6z8r" Mar 12 02:03:06.373474 kubelet[2981]: I0312 02:03:06.358013 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71e234d3-42ce-4288-b45d-54487a188b83-xtables-lock\") pod \"kube-flannel-ds-n6z8r\" (UID: \"71e234d3-42ce-4288-b45d-54487a188b83\") " pod="kube-flannel/kube-flannel-ds-n6z8r" Mar 12 02:03:06.376129 kubelet[2981]: I0312 02:03:06.358041 2981 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/71e234d3-42ce-4288-b45d-54487a188b83-flannel-cfg\") pod \"kube-flannel-ds-n6z8r\" (UID: \"71e234d3-42ce-4288-b45d-54487a188b83\") " pod="kube-flannel/kube-flannel-ds-n6z8r" Mar 12 02:03:06.651952 kubelet[2981]: E0312 02:03:06.650477 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:06.697034 kubelet[2981]: E0312 02:03:06.696963 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:06.703092 containerd[1613]: time="2026-03-12T02:03:06.702976873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-n6z8r,Uid:71e234d3-42ce-4288-b45d-54487a188b83,Namespace:kube-flannel,Attempt:0,}" Mar 12 02:03:06.758336 sudo[1792]: pam_unix(sudo:session): session closed for user root Mar 12 02:03:06.775128 sshd[1791]: Connection closed by 10.0.0.1 port 48660 Mar 12 02:03:06.780306 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Mar 12 02:03:06.846993 systemd[1]: sshd@6-10.0.0.28:22-10.0.0.1:48660.service: Deactivated successfully. Mar 12 02:03:06.877434 systemd[1]: session-8.scope: Deactivated successfully. Mar 12 02:03:06.881471 systemd[1]: session-8.scope: Consumed 17.709s CPU time, 231M memory peak. Mar 12 02:03:06.894221 systemd-logind[1589]: Session 8 logged out. Waiting for processes to exit. Mar 12 02:03:06.909871 systemd-logind[1589]: Removed session 8. 
Mar 12 02:03:06.953738 containerd[1613]: time="2026-03-12T02:03:06.953131763Z" level=info msg="connecting to shim a8571d521643a4f99d36e9d1574c74ae5e76a44d0d4b823e0825b4794fe43c47" address="unix:///run/containerd/s/a3e8c70cbf0714ce6d28f091683efc2235a13ffa718af8a14e0ba76ecc04542e" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:03:07.168081 systemd[1]: Started cri-containerd-a8571d521643a4f99d36e9d1574c74ae5e76a44d0d4b823e0825b4794fe43c47.scope - libcontainer container a8571d521643a4f99d36e9d1574c74ae5e76a44d0d4b823e0825b4794fe43c47. Mar 12 02:03:07.422916 containerd[1613]: time="2026-03-12T02:03:07.422325046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-n6z8r,Uid:71e234d3-42ce-4288-b45d-54487a188b83,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"a8571d521643a4f99d36e9d1574c74ae5e76a44d0d4b823e0825b4794fe43c47\"" Mar 12 02:03:07.433307 kubelet[2981]: E0312 02:03:07.432677 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:07.442877 containerd[1613]: time="2026-03-12T02:03:07.441299225Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Mar 12 02:03:09.105301 kubelet[2981]: E0312 02:03:09.103180 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:09.321141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount306277659.mount: Deactivated successfully. 
Mar 12 02:03:10.006476 containerd[1613]: time="2026-03-12T02:03:10.004746522Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:03:10.012523 containerd[1613]: time="2026-03-12T02:03:10.010963215Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=3641610" Mar 12 02:03:10.129241 containerd[1613]: time="2026-03-12T02:03:10.118271071Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:03:10.305414 containerd[1613]: time="2026-03-12T02:03:10.303368900Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:03:10.381345 containerd[1613]: time="2026-03-12T02:03:10.361730209Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 2.920171162s" Mar 12 02:03:10.381345 containerd[1613]: time="2026-03-12T02:03:10.363511317Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Mar 12 02:03:10.803041 containerd[1613]: time="2026-03-12T02:03:10.802358859Z" level=info msg="CreateContainer within sandbox \"a8571d521643a4f99d36e9d1574c74ae5e76a44d0d4b823e0825b4794fe43c47\" for container 
&ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Mar 12 02:03:11.261972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3860143663.mount: Deactivated successfully. Mar 12 02:03:11.276172 containerd[1613]: time="2026-03-12T02:03:11.262368162Z" level=info msg="Container aefc11a9cdfe372b2487ed4ec6d1bbac82d54db6efae6a07683dc4c7aea57478: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:03:11.268226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2060273196.mount: Deactivated successfully. Mar 12 02:03:11.317110 containerd[1613]: time="2026-03-12T02:03:11.316526860Z" level=info msg="CreateContainer within sandbox \"a8571d521643a4f99d36e9d1574c74ae5e76a44d0d4b823e0825b4794fe43c47\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"aefc11a9cdfe372b2487ed4ec6d1bbac82d54db6efae6a07683dc4c7aea57478\"" Mar 12 02:03:11.325997 containerd[1613]: time="2026-03-12T02:03:11.324854463Z" level=info msg="StartContainer for \"aefc11a9cdfe372b2487ed4ec6d1bbac82d54db6efae6a07683dc4c7aea57478\"" Mar 12 02:03:11.326830 containerd[1613]: time="2026-03-12T02:03:11.326568878Z" level=info msg="connecting to shim aefc11a9cdfe372b2487ed4ec6d1bbac82d54db6efae6a07683dc4c7aea57478" address="unix:///run/containerd/s/a3e8c70cbf0714ce6d28f091683efc2235a13ffa718af8a14e0ba76ecc04542e" protocol=ttrpc version=3 Mar 12 02:03:11.458755 systemd[1]: Started cri-containerd-aefc11a9cdfe372b2487ed4ec6d1bbac82d54db6efae6a07683dc4c7aea57478.scope - libcontainer container aefc11a9cdfe372b2487ed4ec6d1bbac82d54db6efae6a07683dc4c7aea57478. Mar 12 02:03:11.763362 containerd[1613]: time="2026-03-12T02:03:11.761489873Z" level=info msg="StartContainer for \"aefc11a9cdfe372b2487ed4ec6d1bbac82d54db6efae6a07683dc4c7aea57478\" returns successfully" Mar 12 02:03:11.815267 systemd[1]: cri-containerd-aefc11a9cdfe372b2487ed4ec6d1bbac82d54db6efae6a07683dc4c7aea57478.scope: Deactivated successfully. 
Mar 12 02:03:11.824942 containerd[1613]: time="2026-03-12T02:03:11.819567271Z" level=info msg="received container exit event container_id:\"aefc11a9cdfe372b2487ed4ec6d1bbac82d54db6efae6a07683dc4c7aea57478\" id:\"aefc11a9cdfe372b2487ed4ec6d1bbac82d54db6efae6a07683dc4c7aea57478\" pid:3343 exited_at:{seconds:1773280991 nanos:814753870}" Mar 12 02:03:11.996883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aefc11a9cdfe372b2487ed4ec6d1bbac82d54db6efae6a07683dc4c7aea57478-rootfs.mount: Deactivated successfully. Mar 12 02:03:12.269379 kubelet[2981]: E0312 02:03:12.265373 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:12.305950 containerd[1613]: time="2026-03-12T02:03:12.304550622Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Mar 12 02:03:23.501007 containerd[1613]: time="2026-03-12T02:03:23.497172314Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:03:23.509339 containerd[1613]: time="2026-03-12T02:03:23.507836517Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=28415638" Mar 12 02:03:23.516125 containerd[1613]: time="2026-03-12T02:03:23.515346117Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:03:23.525986 containerd[1613]: time="2026-03-12T02:03:23.522992756Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:03:23.525986 containerd[1613]: time="2026-03-12T02:03:23.525888687Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" 
with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 11.221152711s" Mar 12 02:03:23.525986 containerd[1613]: time="2026-03-12T02:03:23.525928190Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Mar 12 02:03:23.557873 containerd[1613]: time="2026-03-12T02:03:23.557060888Z" level=info msg="CreateContainer within sandbox \"a8571d521643a4f99d36e9d1574c74ae5e76a44d0d4b823e0825b4794fe43c47\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 12 02:03:23.609405 systemd[1684]: Created slice background.slice - User Background Tasks Slice. Mar 12 02:03:23.614884 systemd[1684]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... 
Mar 12 02:03:23.621827 containerd[1613]: time="2026-03-12T02:03:23.621556284Z" level=info msg="Container 2852c93fb08be53f627874b2e52448b9d6fcd68d09662a023ee57c2837e466c0: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:03:23.659948 containerd[1613]: time="2026-03-12T02:03:23.657473486Z" level=info msg="CreateContainer within sandbox \"a8571d521643a4f99d36e9d1574c74ae5e76a44d0d4b823e0825b4794fe43c47\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2852c93fb08be53f627874b2e52448b9d6fcd68d09662a023ee57c2837e466c0\"" Mar 12 02:03:23.667512 containerd[1613]: time="2026-03-12T02:03:23.665481903Z" level=info msg="StartContainer for \"2852c93fb08be53f627874b2e52448b9d6fcd68d09662a023ee57c2837e466c0\"" Mar 12 02:03:23.668064 containerd[1613]: time="2026-03-12T02:03:23.667824918Z" level=info msg="connecting to shim 2852c93fb08be53f627874b2e52448b9d6fcd68d09662a023ee57c2837e466c0" address="unix:///run/containerd/s/a3e8c70cbf0714ce6d28f091683efc2235a13ffa718af8a14e0ba76ecc04542e" protocol=ttrpc version=3 Mar 12 02:03:23.759320 systemd[1]: Started cri-containerd-2852c93fb08be53f627874b2e52448b9d6fcd68d09662a023ee57c2837e466c0.scope - libcontainer container 2852c93fb08be53f627874b2e52448b9d6fcd68d09662a023ee57c2837e466c0. Mar 12 02:03:23.766332 systemd[1684]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. Mar 12 02:03:23.988168 systemd[1]: cri-containerd-2852c93fb08be53f627874b2e52448b9d6fcd68d09662a023ee57c2837e466c0.scope: Deactivated successfully. 
Mar 12 02:03:24.004400 kubelet[2981]: I0312 02:03:24.003427 2981 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 12 02:03:24.052282 containerd[1613]: time="2026-03-12T02:03:24.051283467Z" level=info msg="received container exit event container_id:\"2852c93fb08be53f627874b2e52448b9d6fcd68d09662a023ee57c2837e466c0\" id:\"2852c93fb08be53f627874b2e52448b9d6fcd68d09662a023ee57c2837e466c0\" pid:3428 exited_at:{seconds:1773281004 nanos:10466075}" Mar 12 02:03:24.063522 containerd[1613]: time="2026-03-12T02:03:24.063353476Z" level=info msg="StartContainer for \"2852c93fb08be53f627874b2e52448b9d6fcd68d09662a023ee57c2837e466c0\" returns successfully" Mar 12 02:03:24.249916 systemd[1]: Created slice kubepods-burstable-podbbe23a46_b13d_4115_898f_f66fb335e2b9.slice - libcontainer container kubepods-burstable-podbbe23a46_b13d_4115_898f_f66fb335e2b9.slice. Mar 12 02:03:24.285244 systemd[1]: Created slice kubepods-burstable-poddaa242c6_5f6b_48b3_ade5_5db15d7a2cf6.slice - libcontainer container kubepods-burstable-poddaa242c6_5f6b_48b3_ade5_5db15d7a2cf6.slice. 
Mar 12 02:03:24.315958 kubelet[2981]: I0312 02:03:24.315110 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbe23a46-b13d-4115-898f-f66fb335e2b9-config-volume\") pod \"coredns-674b8bbfcf-jqqnk\" (UID: \"bbe23a46-b13d-4115-898f-f66fb335e2b9\") " pod="kube-system/coredns-674b8bbfcf-jqqnk" Mar 12 02:03:24.315958 kubelet[2981]: I0312 02:03:24.315243 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v7bm\" (UniqueName: \"kubernetes.io/projected/bbe23a46-b13d-4115-898f-f66fb335e2b9-kube-api-access-5v7bm\") pod \"coredns-674b8bbfcf-jqqnk\" (UID: \"bbe23a46-b13d-4115-898f-f66fb335e2b9\") " pod="kube-system/coredns-674b8bbfcf-jqqnk" Mar 12 02:03:24.315958 kubelet[2981]: I0312 02:03:24.315294 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sggpd\" (UniqueName: \"kubernetes.io/projected/daa242c6-5f6b-48b3-ade5-5db15d7a2cf6-kube-api-access-sggpd\") pod \"coredns-674b8bbfcf-vcnmz\" (UID: \"daa242c6-5f6b-48b3-ade5-5db15d7a2cf6\") " pod="kube-system/coredns-674b8bbfcf-vcnmz" Mar 12 02:03:24.315958 kubelet[2981]: I0312 02:03:24.315323 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/daa242c6-5f6b-48b3-ade5-5db15d7a2cf6-config-volume\") pod \"coredns-674b8bbfcf-vcnmz\" (UID: \"daa242c6-5f6b-48b3-ade5-5db15d7a2cf6\") " pod="kube-system/coredns-674b8bbfcf-vcnmz" Mar 12 02:03:24.350160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2852c93fb08be53f627874b2e52448b9d6fcd68d09662a023ee57c2837e466c0-rootfs.mount: Deactivated successfully. 
Mar 12 02:03:24.565176 kubelet[2981]: E0312 02:03:24.565121 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:24.566534 containerd[1613]: time="2026-03-12T02:03:24.566339696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqqnk,Uid:bbe23a46-b13d-4115-898f-f66fb335e2b9,Namespace:kube-system,Attempt:0,}" Mar 12 02:03:24.602484 kubelet[2981]: E0312 02:03:24.602430 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:24.617990 containerd[1613]: time="2026-03-12T02:03:24.617833715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vcnmz,Uid:daa242c6-5f6b-48b3-ade5-5db15d7a2cf6,Namespace:kube-system,Attempt:0,}" Mar 12 02:03:24.681902 kubelet[2981]: E0312 02:03:24.681551 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:24.703908 containerd[1613]: time="2026-03-12T02:03:24.700489753Z" level=info msg="CreateContainer within sandbox \"a8571d521643a4f99d36e9d1574c74ae5e76a44d0d4b823e0825b4794fe43c47\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Mar 12 02:03:24.816128 containerd[1613]: time="2026-03-12T02:03:24.815186746Z" level=info msg="Container fbea47a6685148b3bbee69172aef28a6e568131633cbae3f5bad66bc53f3d3dd: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:03:24.905341 systemd[1]: run-netns-cni\x2da49cc297\x2dec37\x2d18fe\x2dcdcf\x2d8dbf5f1c1b57.mount: Deactivated successfully. 
Mar 12 02:03:24.917203 containerd[1613]: time="2026-03-12T02:03:24.916008128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vcnmz,Uid:daa242c6-5f6b-48b3-ade5-5db15d7a2cf6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c236b48a1f0d36598f7727932d13f5a8ddbea9fe96f4c7d586b67a3f2e4af179\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 12 02:03:24.906373 systemd[1]: run-netns-cni\x2df8daaef0\x2d69b1\x2d626a\x2d7a71\x2d3a59d3d96889.mount: Deactivated successfully. Mar 12 02:03:24.927262 kubelet[2981]: E0312 02:03:24.918337 2981 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c236b48a1f0d36598f7727932d13f5a8ddbea9fe96f4c7d586b67a3f2e4af179\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 12 02:03:24.927426 containerd[1613]: time="2026-03-12T02:03:24.926541088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqqnk,Uid:bbe23a46-b13d-4115-898f-f66fb335e2b9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a72ad607eb1550d57619dd13c7ab08657389ed83d5f9dd5c7c26984f2b5eef4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 12 02:03:24.930844 kubelet[2981]: E0312 02:03:24.930310 2981 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a72ad607eb1550d57619dd13c7ab08657389ed83d5f9dd5c7c26984f2b5eef4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 12 02:03:24.930844 kubelet[2981]: 
E0312 02:03:24.930456 2981 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a72ad607eb1550d57619dd13c7ab08657389ed83d5f9dd5c7c26984f2b5eef4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-jqqnk" Mar 12 02:03:24.930996 kubelet[2981]: E0312 02:03:24.930922 2981 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a72ad607eb1550d57619dd13c7ab08657389ed83d5f9dd5c7c26984f2b5eef4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-jqqnk" Mar 12 02:03:24.932474 kubelet[2981]: E0312 02:03:24.931840 2981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jqqnk_kube-system(bbe23a46-b13d-4115-898f-f66fb335e2b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jqqnk_kube-system(bbe23a46-b13d-4115-898f-f66fb335e2b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a72ad607eb1550d57619dd13c7ab08657389ed83d5f9dd5c7c26984f2b5eef4\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-jqqnk" podUID="bbe23a46-b13d-4115-898f-f66fb335e2b9" Mar 12 02:03:24.932474 kubelet[2981]: E0312 02:03:24.932528 2981 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c236b48a1f0d36598f7727932d13f5a8ddbea9fe96f4c7d586b67a3f2e4af179\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" 
pod="kube-system/coredns-674b8bbfcf-vcnmz" Mar 12 02:03:24.932474 kubelet[2981]: E0312 02:03:24.932566 2981 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c236b48a1f0d36598f7727932d13f5a8ddbea9fe96f4c7d586b67a3f2e4af179\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-vcnmz" Mar 12 02:03:24.934090 containerd[1613]: time="2026-03-12T02:03:24.928536288Z" level=info msg="CreateContainer within sandbox \"a8571d521643a4f99d36e9d1574c74ae5e76a44d0d4b823e0825b4794fe43c47\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"fbea47a6685148b3bbee69172aef28a6e568131633cbae3f5bad66bc53f3d3dd\"" Mar 12 02:03:24.939355 kubelet[2981]: E0312 02:03:24.938940 2981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vcnmz_kube-system(daa242c6-5f6b-48b3-ade5-5db15d7a2cf6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vcnmz_kube-system(daa242c6-5f6b-48b3-ade5-5db15d7a2cf6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c236b48a1f0d36598f7727932d13f5a8ddbea9fe96f4c7d586b67a3f2e4af179\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-vcnmz" podUID="daa242c6-5f6b-48b3-ade5-5db15d7a2cf6" Mar 12 02:03:24.943267 containerd[1613]: time="2026-03-12T02:03:24.942502369Z" level=info msg="StartContainer for \"fbea47a6685148b3bbee69172aef28a6e568131633cbae3f5bad66bc53f3d3dd\"" Mar 12 02:03:24.946933 containerd[1613]: time="2026-03-12T02:03:24.946456209Z" level=info msg="connecting to shim fbea47a6685148b3bbee69172aef28a6e568131633cbae3f5bad66bc53f3d3dd" 
address="unix:///run/containerd/s/a3e8c70cbf0714ce6d28f091683efc2235a13ffa718af8a14e0ba76ecc04542e" protocol=ttrpc version=3 Mar 12 02:03:25.318039 systemd[1]: Started cri-containerd-fbea47a6685148b3bbee69172aef28a6e568131633cbae3f5bad66bc53f3d3dd.scope - libcontainer container fbea47a6685148b3bbee69172aef28a6e568131633cbae3f5bad66bc53f3d3dd. Mar 12 02:03:25.927302 containerd[1613]: time="2026-03-12T02:03:25.926105331Z" level=info msg="StartContainer for \"fbea47a6685148b3bbee69172aef28a6e568131633cbae3f5bad66bc53f3d3dd\" returns successfully" Mar 12 02:03:28.369854 kubelet[2981]: E0312 02:03:28.368533 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:28.692273 systemd-networkd[1523]: flannel.1: Link UP Mar 12 02:03:28.693247 systemd-networkd[1523]: flannel.1: Gained carrier Mar 12 02:03:29.250044 kubelet[2981]: E0312 02:03:29.249328 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:30.315365 systemd-networkd[1523]: flannel.1: Gained IPv6LL Mar 12 02:03:37.952132 kubelet[2981]: E0312 02:03:37.944892 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:37.978503 containerd[1613]: time="2026-03-12T02:03:37.968269673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vcnmz,Uid:daa242c6-5f6b-48b3-ade5-5db15d7a2cf6,Namespace:kube-system,Attempt:0,}" Mar 12 02:03:39.028110 kubelet[2981]: E0312 02:03:39.025430 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:39.066566 containerd[1613]: 
time="2026-03-12T02:03:39.064496805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqqnk,Uid:bbe23a46-b13d-4115-898f-f66fb335e2b9,Namespace:kube-system,Attempt:0,}" Mar 12 02:03:39.252207 kernel: cni0: port 1(veth089a8445) entered blocking state Mar 12 02:03:39.252368 kernel: cni0: port 1(veth089a8445) entered disabled state Mar 12 02:03:39.258996 systemd-networkd[1523]: cni0: Link UP Mar 12 02:03:39.259009 systemd-networkd[1523]: cni0: Gained carrier Mar 12 02:03:39.316141 kernel: veth089a8445: entered allmulticast mode Mar 12 02:03:39.359455 kernel: veth089a8445: entered promiscuous mode Mar 12 02:03:39.498867 systemd-networkd[1523]: veth089a8445: Link UP Mar 12 02:03:39.500386 systemd-networkd[1523]: cni0: Lost carrier Mar 12 02:03:39.982443 kernel: cni0: port 1(veth089a8445) entered blocking state Mar 12 02:03:39.982568 kernel: cni0: port 1(veth089a8445) entered forwarding state Mar 12 02:03:40.478115 systemd-networkd[1523]: veth089a8445: Gained carrier Mar 12 02:03:41.146385 systemd-networkd[1523]: cni0: Gained carrier Mar 12 02:03:41.183192 systemd-networkd[1523]: cni0: Gained IPv6LL Mar 12 02:03:42.063898 kernel: cni0: port 2(veth4a3ae576) entered blocking state Mar 12 02:03:42.522379 kernel: cni0: port 2(veth4a3ae576) entered disabled state Mar 12 02:03:42.522489 kernel: veth4a3ae576: entered allmulticast mode Mar 12 02:03:42.522528 kernel: veth4a3ae576: entered promiscuous mode Mar 12 02:03:42.679078 systemd-networkd[1523]: veth089a8445: Gained IPv6LL Mar 12 02:03:42.787181 kubelet[2981]: E0312 02:03:42.787091 2981 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.818s" Mar 12 02:03:42.899871 systemd-networkd[1523]: veth4a3ae576: Link UP Mar 12 02:03:42.975531 containerd[1613]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface 
{}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000104950), "name":"cbr0", "type":"bridge"} Mar 12 02:03:42.975531 containerd[1613]: delegateAdd: netconf sent to delegate plugin: Mar 12 02:03:43.305362 kernel: cni0: port 2(veth4a3ae576) entered blocking state Mar 12 02:03:43.818047 kernel: cni0: port 2(veth4a3ae576) entered forwarding state Mar 12 02:03:43.800427 systemd-networkd[1523]: veth4a3ae576: Gained carrier Mar 12 02:03:44.393329 containerd[1613]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Mar 12 02:03:44.393329 containerd[1613]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000012870), "name":"cbr0", "type":"bridge"} Mar 12 02:03:44.393329 containerd[1613]: delegateAdd: netconf sent to delegate plugin: Mar 12 02:03:44.823159 systemd-networkd[1523]: veth4a3ae576: Gained IPv6LL Mar 12 02:03:45.704895 containerd[1613]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-12T02:03:45.704827380Z" level=info msg="connecting to shim c47ecfcc90fb635ccf52bfbe1936c39f4e72a93f0cabe0c48b976a6e1c64052e" address="unix:///run/containerd/s/0a26ab3b64030d73fe58fe7b9dd974d9f2559203830a79770a8e7155adc55846" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:03:47.458500 systemd[1]: Started cri-containerd-c47ecfcc90fb635ccf52bfbe1936c39f4e72a93f0cabe0c48b976a6e1c64052e.scope - libcontainer container c47ecfcc90fb635ccf52bfbe1936c39f4e72a93f0cabe0c48b976a6e1c64052e. Mar 12 02:03:47.934071 containerd[1613]: time="2026-03-12T02:03:47.917885072Z" level=info msg="connecting to shim dc140b4da21d3893f97f8c818874fee84e462dfe8b65427f476126a4678f5ebf" address="unix:///run/containerd/s/2dc4c6de71813a6ebf0edf5d760045fa7ed198893af0284f68cec66cc1c573ba" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:03:48.477409 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 02:03:50.320418 systemd[1]: Started cri-containerd-dc140b4da21d3893f97f8c818874fee84e462dfe8b65427f476126a4678f5ebf.scope - libcontainer container dc140b4da21d3893f97f8c818874fee84e462dfe8b65427f476126a4678f5ebf. 
Mar 12 02:03:51.495057 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 02:03:52.349858 containerd[1613]: time="2026-03-12T02:03:52.341532001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vcnmz,Uid:daa242c6-5f6b-48b3-ade5-5db15d7a2cf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c47ecfcc90fb635ccf52bfbe1936c39f4e72a93f0cabe0c48b976a6e1c64052e\"" Mar 12 02:03:52.434477 containerd[1613]: time="2026-03-12T02:03:52.434102443Z" level=error msg="get state for dc140b4da21d3893f97f8c818874fee84e462dfe8b65427f476126a4678f5ebf" error="context deadline exceeded" Mar 12 02:03:52.434477 containerd[1613]: time="2026-03-12T02:03:52.434165290Z" level=warning msg="unknown status" status=0 Mar 12 02:03:52.503376 kubelet[2981]: E0312 02:03:52.503201 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:52.897884 containerd[1613]: time="2026-03-12T02:03:52.897555820Z" level=info msg="CreateContainer within sandbox \"c47ecfcc90fb635ccf52bfbe1936c39f4e72a93f0cabe0c48b976a6e1c64052e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 02:03:53.389082 containerd[1613]: time="2026-03-12T02:03:53.388300998Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Mar 12 02:03:54.046436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3042855887.mount: Deactivated successfully. 
Mar 12 02:03:54.450413 kubelet[2981]: E0312 02:03:54.265360 2981 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbe23a46_b13d_4115_898f_f66fb335e2b9.slice/cri-containerd-dc140b4da21d3893f97f8c818874fee84e462dfe8b65427f476126a4678f5ebf.scope\": RecentStats: unable to find data in memory cache]" Mar 12 02:03:54.896349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount990407222.mount: Deactivated successfully. Mar 12 02:03:54.923054 containerd[1613]: time="2026-03-12T02:03:54.920505350Z" level=info msg="Container cda89d7a2130029cf6fc1358fe9effedad1d0736ad39fe67eaa5cd6b5f274056: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:03:55.588218 containerd[1613]: time="2026-03-12T02:03:55.574972806Z" level=info msg="CreateContainer within sandbox \"c47ecfcc90fb635ccf52bfbe1936c39f4e72a93f0cabe0c48b976a6e1c64052e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cda89d7a2130029cf6fc1358fe9effedad1d0736ad39fe67eaa5cd6b5f274056\"" Mar 12 02:03:55.588218 containerd[1613]: time="2026-03-12T02:03:55.577000670Z" level=info msg="StartContainer for \"cda89d7a2130029cf6fc1358fe9effedad1d0736ad39fe67eaa5cd6b5f274056\"" Mar 12 02:03:55.634558 containerd[1613]: time="2026-03-12T02:03:55.634126847Z" level=info msg="connecting to shim cda89d7a2130029cf6fc1358fe9effedad1d0736ad39fe67eaa5cd6b5f274056" address="unix:///run/containerd/s/0a26ab3b64030d73fe58fe7b9dd974d9f2559203830a79770a8e7155adc55846" protocol=ttrpc version=3 Mar 12 02:03:55.728273 containerd[1613]: time="2026-03-12T02:03:55.680023798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqqnk,Uid:bbe23a46-b13d-4115-898f-f66fb335e2b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc140b4da21d3893f97f8c818874fee84e462dfe8b65427f476126a4678f5ebf\"" Mar 12 02:03:55.728422 kubelet[2981]: E0312 02:03:55.696854 2981 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:03:55.886135 containerd[1613]: time="2026-03-12T02:03:55.854337997Z" level=info msg="CreateContainer within sandbox \"dc140b4da21d3893f97f8c818874fee84e462dfe8b65427f476126a4678f5ebf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 02:03:56.830448 systemd[1]: Started cri-containerd-cda89d7a2130029cf6fc1358fe9effedad1d0736ad39fe67eaa5cd6b5f274056.scope - libcontainer container cda89d7a2130029cf6fc1358fe9effedad1d0736ad39fe67eaa5cd6b5f274056. Mar 12 02:03:57.146484 containerd[1613]: time="2026-03-12T02:03:57.140518673Z" level=info msg="Container c43e0feeee0c75e4a989044de1bf34d18583f88fdeb08925b3b627319303062c: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:03:58.239191 containerd[1613]: time="2026-03-12T02:03:58.164088039Z" level=info msg="CreateContainer within sandbox \"dc140b4da21d3893f97f8c818874fee84e462dfe8b65427f476126a4678f5ebf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c43e0feeee0c75e4a989044de1bf34d18583f88fdeb08925b3b627319303062c\"" Mar 12 02:03:58.305343 containerd[1613]: time="2026-03-12T02:03:58.302224493Z" level=info msg="StartContainer for \"c43e0feeee0c75e4a989044de1bf34d18583f88fdeb08925b3b627319303062c\"" Mar 12 02:03:58.400488 containerd[1613]: time="2026-03-12T02:03:58.400434977Z" level=info msg="connecting to shim c43e0feeee0c75e4a989044de1bf34d18583f88fdeb08925b3b627319303062c" address="unix:///run/containerd/s/2dc4c6de71813a6ebf0edf5d760045fa7ed198893af0284f68cec66cc1c573ba" protocol=ttrpc version=3 Mar 12 02:03:58.930391 containerd[1613]: time="2026-03-12T02:03:58.923232916Z" level=error msg="get state for cda89d7a2130029cf6fc1358fe9effedad1d0736ad39fe67eaa5cd6b5f274056" error="context deadline exceeded" Mar 12 02:03:58.930391 containerd[1613]: time="2026-03-12T02:03:58.923466671Z" level=warning msg="unknown status" status=0 Mar 12 
02:03:59.277257 containerd[1613]: time="2026-03-12T02:03:59.276320683Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Mar 12 02:03:59.717499 systemd[1]: Started cri-containerd-c43e0feeee0c75e4a989044de1bf34d18583f88fdeb08925b3b627319303062c.scope - libcontainer container c43e0feeee0c75e4a989044de1bf34d18583f88fdeb08925b3b627319303062c. Mar 12 02:04:00.059137 containerd[1613]: time="2026-03-12T02:04:00.050470749Z" level=info msg="StartContainer for \"cda89d7a2130029cf6fc1358fe9effedad1d0736ad39fe67eaa5cd6b5f274056\" returns successfully" Mar 12 02:04:00.463047 kubelet[2981]: E0312 02:04:00.455492 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:04:00.989401 kubelet[2981]: I0312 02:04:00.985217 2981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-n6z8r" podStartSLOduration=38.892004431 podStartE2EDuration="54.985189907s" podCreationTimestamp="2026-03-12 02:03:06 +0000 UTC" firstStartedPulling="2026-03-12 02:03:07.438236897 +0000 UTC m=+13.248839880" lastFinishedPulling="2026-03-12 02:03:23.531422373 +0000 UTC m=+29.342025356" observedRunningTime="2026-03-12 02:03:28.494183881 +0000 UTC m=+34.304786863" watchObservedRunningTime="2026-03-12 02:04:00.985189907 +0000 UTC m=+66.795792920" Mar 12 02:04:01.620532 kubelet[2981]: E0312 02:04:01.615199 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:04:01.771373 containerd[1613]: time="2026-03-12T02:04:01.769407561Z" level=error msg="get state for c43e0feeee0c75e4a989044de1bf34d18583f88fdeb08925b3b627319303062c" error="context deadline exceeded" Mar 12 02:04:01.771373 containerd[1613]: time="2026-03-12T02:04:01.769538274Z" level=warning msg="unknown status" status=0 Mar 12 
02:04:02.056256 containerd[1613]: time="2026-03-12T02:04:02.038243768Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Mar 12 02:04:03.145562 kubelet[2981]: E0312 02:04:03.137198 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:04:04.278326 containerd[1613]: time="2026-03-12T02:04:04.278266193Z" level=info msg="StartContainer for \"c43e0feeee0c75e4a989044de1bf34d18583f88fdeb08925b3b627319303062c\" returns successfully" Mar 12 02:04:05.822245 kubelet[2981]: E0312 02:04:05.821211 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:04:06.159243 kubelet[2981]: I0312 02:04:06.152475 2981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vcnmz" podStartSLOduration=70.152454943 podStartE2EDuration="1m10.152454943s" podCreationTimestamp="2026-03-12 02:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:04:01.070567143 +0000 UTC m=+66.881170136" watchObservedRunningTime="2026-03-12 02:04:06.152454943 +0000 UTC m=+71.963057926" Mar 12 02:04:07.063137 kubelet[2981]: E0312 02:04:07.053253 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:04:13.088417 kubelet[2981]: E0312 02:04:13.088001 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:04:13.425134 kubelet[2981]: I0312 02:04:13.409304 2981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-674b8bbfcf-jqqnk" podStartSLOduration=77.409282713 podStartE2EDuration="1m17.409282713s" podCreationTimestamp="2026-03-12 02:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:04:06.168382295 +0000 UTC m=+71.978985278" watchObservedRunningTime="2026-03-12 02:04:13.409282713 +0000 UTC m=+79.219885716" Mar 12 02:04:17.178970 kubelet[2981]: E0312 02:04:17.159462 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:04:18.025310 kubelet[2981]: E0312 02:04:17.957544 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:04:21.952203 kubelet[2981]: E0312 02:04:21.952091 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:04:28.682422 kubelet[2981]: E0312 02:04:28.679051 2981 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.681s" Mar 12 02:04:30.095170 kubelet[2981]: E0312 02:04:30.094342 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:04:32.067531 kubelet[2981]: E0312 02:04:32.020447 2981 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.726s" Mar 12 02:04:32.067531 kubelet[2981]: E0312 02:04:32.097445 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 
02:04:40.181555 kubelet[2981]: E0312 02:04:40.167511 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:04:50.014355 kubelet[2981]: E0312 02:04:50.009331 2981 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.056s" Mar 12 02:05:08.182885 kubelet[2981]: E0312 02:05:08.179233 2981 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.052s" Mar 12 02:05:18.982489 kubelet[2981]: E0312 02:05:18.978862 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:05:31.949292 kubelet[2981]: E0312 02:05:31.947400 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:05:34.942562 kubelet[2981]: E0312 02:05:34.941523 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:05:35.942452 kubelet[2981]: E0312 02:05:35.942171 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:05:38.967902 kubelet[2981]: E0312 02:05:38.960871 2981 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-controller-manager-localhost.189bf5a789c51153 kube-system 652 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:8747e1f8a49a618fbc1324a8fe2d3754,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DNSConfigForming,Message:Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-12 02:02:58 +0000 UTC,LastTimestamp:2026-03-12 02:05:31.947280102 +0000 UTC m=+157.757883246,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 12 02:05:41.609818 kubelet[2981]: E0312 02:05:41.609537 2981 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Mar 12 02:05:42.069224 systemd[1]: cri-containerd-b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0.scope: Deactivated successfully. Mar 12 02:05:42.070213 systemd[1]: cri-containerd-b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0.scope: Consumed 13.527s CPU time, 56.9M memory peak. Mar 12 02:05:42.081313 containerd[1613]: time="2026-03-12T02:05:42.081022643Z" level=info msg="received container exit event container_id:\"b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0\" id:\"b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0\" pid:2804 exit_status:1 exited_at:{seconds:1773281142 nanos:77397716}" Mar 12 02:05:42.180166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0-rootfs.mount: Deactivated successfully. 
Mar 12 02:05:42.950035 kubelet[2981]: E0312 02:05:42.949403 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:05:43.716543 kubelet[2981]: E0312 02:05:43.716313 2981 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again" Mar 12 02:05:43.732860 kubelet[2981]: E0312 02:05:43.732476 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:05:43.853904 systemd[1]: cri-containerd-f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8.scope: Deactivated successfully. Mar 12 02:05:43.855029 systemd[1]: cri-containerd-f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8.scope: Consumed 12.271s CPU time, 22.2M memory peak. 
Mar 12 02:05:43.917014 containerd[1613]: time="2026-03-12T02:05:43.916965028Z" level=info msg="received container exit event container_id:\"f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8\" id:\"f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8\" pid:2822 exit_status:1 exited_at:{seconds:1773281143 nanos:862254297}" Mar 12 02:05:44.373182 kubelet[2981]: I0312 02:05:44.372480 2981 scope.go:117] "RemoveContainer" containerID="b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0" Mar 12 02:05:44.373182 kubelet[2981]: E0312 02:05:44.373035 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:05:44.380044 kubelet[2981]: E0312 02:05:44.378014 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:05:44.450855 containerd[1613]: time="2026-03-12T02:05:44.448899928Z" level=info msg="CreateContainer within sandbox \"94e03b77954f3d63403e7018e79e6a9a9f56d01cf4e6df892629fcc53c3c34d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 12 02:05:44.791818 containerd[1613]: time="2026-03-12T02:05:44.790276377Z" level=info msg="Container f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:05:44.800930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2934501105.mount: Deactivated successfully. Mar 12 02:05:44.956462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8-rootfs.mount: Deactivated successfully. 
Mar 12 02:05:45.406514 containerd[1613]: time="2026-03-12T02:05:45.406197541Z" level=info msg="CreateContainer within sandbox \"94e03b77954f3d63403e7018e79e6a9a9f56d01cf4e6df892629fcc53c3c34d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2\"" Mar 12 02:05:45.473809 containerd[1613]: time="2026-03-12T02:05:45.472333258Z" level=info msg="StartContainer for \"f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2\"" Mar 12 02:05:45.489486 containerd[1613]: time="2026-03-12T02:05:45.489431861Z" level=info msg="connecting to shim f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2" address="unix:///run/containerd/s/9e0ec35891eed3399cf93a4554fac23dc348508b312db97a662cd42b8c614eac" protocol=ttrpc version=3 Mar 12 02:05:45.929966 kubelet[2981]: I0312 02:05:45.929911 2981 scope.go:117] "RemoveContainer" containerID="f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8" Mar 12 02:05:45.939974 kubelet[2981]: E0312 02:05:45.939923 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:05:46.087267 containerd[1613]: time="2026-03-12T02:05:46.087209108Z" level=info msg="CreateContainer within sandbox \"a6098b94f1a7eadea1395e97421f12c39acea61c965817b4f2366f7a7f926405\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 12 02:05:46.286452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount949720054.mount: Deactivated successfully. 
Mar 12 02:05:46.301984 containerd[1613]: time="2026-03-12T02:05:46.289855371Z" level=info msg="Container 51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:05:46.331326 systemd[1]: Started cri-containerd-f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2.scope - libcontainer container f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2. Mar 12 02:05:46.561832 containerd[1613]: time="2026-03-12T02:05:46.526963093Z" level=info msg="CreateContainer within sandbox \"a6098b94f1a7eadea1395e97421f12c39acea61c965817b4f2366f7a7f926405\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f\"" Mar 12 02:05:46.666859 containerd[1613]: time="2026-03-12T02:05:46.658868640Z" level=info msg="StartContainer for \"51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f\"" Mar 12 02:05:46.699936 containerd[1613]: time="2026-03-12T02:05:46.699456344Z" level=info msg="connecting to shim 51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f" address="unix:///run/containerd/s/d2fdb522566ad107655458427599cd4d50ce7f91008909a20bf9af0a29a4896c" protocol=ttrpc version=3 Mar 12 02:05:48.583913 containerd[1613]: time="2026-03-12T02:05:48.580843087Z" level=error msg="get state for f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2" error="context deadline exceeded" Mar 12 02:05:48.583913 containerd[1613]: time="2026-03-12T02:05:48.580910532Z" level=warning msg="unknown status" status=0 Mar 12 02:05:48.912888 containerd[1613]: time="2026-03-12T02:05:48.907014493Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Mar 12 02:05:49.051366 systemd[1]: Started cri-containerd-51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f.scope - libcontainer container 51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f. 
Mar 12 02:05:49.810897 containerd[1613]: time="2026-03-12T02:05:49.808302790Z" level=info msg="StartContainer for \"f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2\" returns successfully" Mar 12 02:05:50.141916 kubelet[2981]: E0312 02:05:50.140246 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:05:50.996472 kubelet[2981]: E0312 02:05:50.985564 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:05:51.061456 containerd[1613]: time="2026-03-12T02:05:51.061216143Z" level=info msg="StartContainer for \"51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f\" returns successfully" Mar 12 02:05:51.296414 kubelet[2981]: E0312 02:05:51.289548 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:05:52.354533 kubelet[2981]: E0312 02:05:52.341543 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:05:53.382371 kubelet[2981]: E0312 02:05:53.354490 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:05:54.417966 kubelet[2981]: E0312 02:05:54.416000 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:05:59.830033 kubelet[2981]: E0312 02:05:59.829948 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:06:04.485339 kubelet[2981]: E0312 02:06:04.470081 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:06:09.814514 kubelet[2981]: E0312 02:06:09.807411 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:06:10.402998 kubelet[2981]: E0312 02:06:10.400885 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:06:38.584187 kubelet[2981]: I0312 02:06:38.404491 2981 request.go:752] "Waited before sending request" delay="4.544195229s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://10.0.0.28:6443/api/v1/namespaces/kube-system/events" Mar 12 02:06:39.363889 systemd[1]: cri-containerd-f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2.scope: Deactivated successfully. Mar 12 02:06:40.140825 systemd[1]: cri-containerd-f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2.scope: Consumed 4.540s CPU time, 18.4M memory peak. 
Mar 12 02:06:44.846998 kubelet[2981]: E0312 02:06:44.609095 2981 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Mar 12 02:06:59.129265 containerd[1613]: time="2026-03-12T02:06:50.223290407Z" level=error msg="post event" error="context deadline exceeded" Mar 12 02:06:59.129265 containerd[1613]: time="2026-03-12T02:06:57.457235020Z" level=error msg="forward event" error="context deadline exceeded" Mar 12 02:07:04.137850 containerd[1613]: time="2026-03-12T02:07:03.548946974Z" level=error msg="ttrpc: received message on inactive stream" stream=17 Mar 12 02:07:05.865911 containerd[1613]: time="2026-03-12T02:07:05.094308597Z" level=error msg="forward event" error="context deadline exceeded" Mar 12 02:07:09.450143 containerd[1613]: time="2026-03-12T02:06:58.754278958Z" level=info msg="received container exit event container_id:\"f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2\" id:\"f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2\" pid:4345 exit_status:1 exited_at:{seconds:1773281204 nanos:874072531}" Mar 12 02:07:11.528202 containerd[1613]: time="2026-03-12T02:07:09.691409354Z" level=error msg="ttrpc: received message on inactive stream" stream=21 Mar 12 02:07:11.528202 containerd[1613]: time="2026-03-12T02:07:09.691553953Z" level=error msg="ttrpc: received message on inactive stream" stream=19 Mar 12 02:07:18.040431 kubelet[2981]: E0312 02:07:18.038525 2981 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Mar 12 02:07:20.231394 containerd[1613]: time="2026-03-12T02:07:20.226911108Z" level=error msg="get state for f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2" error="context deadline exceeded" Mar 12 02:07:20.935030 containerd[1613]: 
time="2026-03-12T02:07:20.310048123Z" level=error msg="ttrpc: received message on inactive stream" stream=25 Mar 12 02:07:20.935030 containerd[1613]: time="2026-03-12T02:07:20.558377103Z" level=warning msg="unknown status" status=0 Mar 12 02:07:26.013424 containerd[1613]: time="2026-03-12T02:07:25.629054361Z" level=info msg="container event discarded" container=94e03b77954f3d63403e7018e79e6a9a9f56d01cf4e6df892629fcc53c3c34d5 type=CONTAINER_CREATED_EVENT Mar 12 02:07:27.146413 containerd[1613]: time="2026-03-12T02:07:26.153354736Z" level=info msg="container event discarded" container=94e03b77954f3d63403e7018e79e6a9a9f56d01cf4e6df892629fcc53c3c34d5 type=CONTAINER_STARTED_EVENT Mar 12 02:07:27.711431 containerd[1613]: time="2026-03-12T02:07:27.707390155Z" level=error msg="get state for f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2" error="context deadline exceeded" Mar 12 02:07:27.711431 containerd[1613]: time="2026-03-12T02:07:27.707793828Z" level=warning msg="unknown status" status=0 Mar 12 02:07:27.711431 containerd[1613]: time="2026-03-12T02:07:27.745925746Z" level=info msg="container event discarded" container=a6098b94f1a7eadea1395e97421f12c39acea61c965817b4f2366f7a7f926405 type=CONTAINER_CREATED_EVENT Mar 12 02:07:27.711431 containerd[1613]: time="2026-03-12T02:07:27.746210256Z" level=info msg="container event discarded" container=a6098b94f1a7eadea1395e97421f12c39acea61c965817b4f2366f7a7f926405 type=CONTAINER_STARTED_EVENT Mar 12 02:07:27.711431 containerd[1613]: time="2026-03-12T02:07:27.746223681Z" level=info msg="container event discarded" container=60463a50810cd14bf1ba83d258d5991db734ce0bcac0fce0b0195ec58472a16a type=CONTAINER_CREATED_EVENT Mar 12 02:07:27.711431 containerd[1613]: time="2026-03-12T02:07:27.746406683Z" level=info msg="container event discarded" container=60463a50810cd14bf1ba83d258d5991db734ce0bcac0fce0b0195ec58472a16a type=CONTAINER_STARTED_EVENT Mar 12 02:07:27.711431 containerd[1613]: time="2026-03-12T02:07:27.746422652Z" 
level=info msg="container event discarded" container=b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0 type=CONTAINER_CREATED_EVENT Mar 12 02:07:27.711431 containerd[1613]: time="2026-03-12T02:07:27.746478547Z" level=info msg="container event discarded" container=f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8 type=CONTAINER_CREATED_EVENT Mar 12 02:07:27.942977 containerd[1613]: time="2026-03-12T02:07:27.834271969Z" level=error msg="ttrpc: received message on inactive stream" stream=29 Mar 12 02:07:28.006084 containerd[1613]: time="2026-03-12T02:07:27.924483249Z" level=info msg="container event discarded" container=871b243ed1c4ed9f86c50e24f2cc5562bbac04e797b8b4c1b34b8ba495bbd81e type=CONTAINER_CREATED_EVENT Mar 12 02:07:28.020883 containerd[1613]: time="2026-03-12T02:07:28.010140167Z" level=info msg="container event discarded" container=b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0 type=CONTAINER_STARTED_EVENT Mar 12 02:07:28.362439 containerd[1613]: time="2026-03-12T02:07:28.150502523Z" level=info msg="container event discarded" container=f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8 type=CONTAINER_STARTED_EVENT Mar 12 02:07:29.318884 containerd[1613]: time="2026-03-12T02:07:28.695096761Z" level=info msg="container event discarded" container=871b243ed1c4ed9f86c50e24f2cc5562bbac04e797b8b4c1b34b8ba495bbd81e type=CONTAINER_STARTED_EVENT Mar 12 02:07:29.343416 containerd[1613]: time="2026-03-12T02:07:29.327417572Z" level=error msg="failed to drain init process f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2 io" error="context deadline exceeded" runtime=io.containerd.runc.v2 Mar 12 02:07:29.350808 containerd[1613]: time="2026-03-12T02:07:29.348029503Z" level=error msg="failed to handle container TaskExit event container_id:\"f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2\" id:\"f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2\" pid:4345 exit_status:1 
exited_at:{seconds:1773281204 nanos:874072531}" error="failed to stop container: failed to delete task: context deadline exceeded" Mar 12 02:07:29.395659 containerd[1613]: time="2026-03-12T02:07:29.395415456Z" level=error msg="ttrpc: received message on inactive stream" stream=33 Mar 12 02:07:29.405878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2-rootfs.mount: Deactivated successfully. Mar 12 02:07:29.537312 kubelet[2981]: E0312 02:07:29.537264 2981 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m14.569s" Mar 12 02:07:29.545681 kubelet[2981]: E0312 02:07:29.544565 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:07:29.553687 kubelet[2981]: E0312 02:07:29.553157 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:07:29.554124 kubelet[2981]: E0312 02:07:29.554099 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:07:29.559120 kubelet[2981]: E0312 02:07:29.559086 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:07:29.575270 kubelet[2981]: E0312 02:07:29.559943 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:07:29.575270 kubelet[2981]: E0312 02:07:29.563866 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:07:29.593986 kubelet[2981]: E0312 02:07:29.593942 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:07:30.441301 kubelet[2981]: E0312 02:07:30.440393 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:07:30.441301 kubelet[2981]: E0312 02:07:30.440497 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:07:30.441301 kubelet[2981]: E0312 02:07:30.441023 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:07:31.272496 containerd[1613]: time="2026-03-12T02:07:31.266266282Z" level=info msg="TaskExit event container_id:\"f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2\" id:\"f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2\" pid:4345 exit_status:1 exited_at:{seconds:1773281204 nanos:874072531}" Mar 12 02:07:32.576945 kubelet[2981]: I0312 02:07:32.571387 2981 scope.go:117] "RemoveContainer" containerID="b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0" Mar 12 02:07:32.576945 kubelet[2981]: I0312 02:07:32.571913 2981 scope.go:117] "RemoveContainer" containerID="f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2" Mar 12 02:07:32.576945 kubelet[2981]: E0312 02:07:32.572035 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:07:32.617861 
containerd[1613]: time="2026-03-12T02:07:32.616767283Z" level=info msg="CreateContainer within sandbox \"94e03b77954f3d63403e7018e79e6a9a9f56d01cf4e6df892629fcc53c3c34d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Mar 12 02:07:32.636807 containerd[1613]: time="2026-03-12T02:07:32.634274797Z" level=info msg="RemoveContainer for \"b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0\"" Mar 12 02:07:32.679296 containerd[1613]: time="2026-03-12T02:07:32.679095245Z" level=info msg="RemoveContainer for \"b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0\" returns successfully" Mar 12 02:07:32.718720 containerd[1613]: time="2026-03-12T02:07:32.718079475Z" level=info msg="Container ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:07:32.768121 containerd[1613]: time="2026-03-12T02:07:32.767740174Z" level=info msg="CreateContainer within sandbox \"94e03b77954f3d63403e7018e79e6a9a9f56d01cf4e6df892629fcc53c3c34d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0\"" Mar 12 02:07:32.785066 containerd[1613]: time="2026-03-12T02:07:32.772465715Z" level=info msg="StartContainer for \"ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0\"" Mar 12 02:07:32.785066 containerd[1613]: time="2026-03-12T02:07:32.779457991Z" level=info msg="connecting to shim ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0" address="unix:///run/containerd/s/9e0ec35891eed3399cf93a4554fac23dc348508b312db97a662cd42b8c614eac" protocol=ttrpc version=3 Mar 12 02:07:33.017856 systemd[1]: Started cri-containerd-ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0.scope - libcontainer container ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0. 
Mar 12 02:07:33.320855 containerd[1613]: time="2026-03-12T02:07:33.319502754Z" level=info msg="StartContainer for \"ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0\" returns successfully" Mar 12 02:07:33.613045 kubelet[2981]: E0312 02:07:33.610331 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:07:34.706400 kubelet[2981]: E0312 02:07:34.705015 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:07:44.442227 kubelet[2981]: E0312 02:07:44.442177 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:08:02.864858 containerd[1613]: time="2026-03-12T02:08:02.790967374Z" level=info msg="container event discarded" container=062f1329590c460eae2de863a87934d8a6e674df95bec9bffdde83ff4377719b type=CONTAINER_CREATED_EVENT Mar 12 02:08:02.864858 containerd[1613]: time="2026-03-12T02:08:02.791381376Z" level=info msg="container event discarded" container=062f1329590c460eae2de863a87934d8a6e674df95bec9bffdde83ff4377719b type=CONTAINER_STARTED_EVENT Mar 12 02:08:03.620399 containerd[1613]: time="2026-03-12T02:08:03.517231836Z" level=info msg="container event discarded" container=5c817250469d2ed14f9c5588e41c9c2251cf48ef614be23f3a82af53043e577d type=CONTAINER_CREATED_EVENT Mar 12 02:08:03.742117 kubelet[2981]: E0312 02:08:03.742071 2981 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.694s" Mar 12 02:08:06.131149 containerd[1613]: time="2026-03-12T02:08:06.120951987Z" level=info msg="container event discarded" container=5c817250469d2ed14f9c5588e41c9c2251cf48ef614be23f3a82af53043e577d 
type=CONTAINER_STARTED_EVENT Mar 12 02:08:07.443437 containerd[1613]: time="2026-03-12T02:08:07.437040384Z" level=info msg="container event discarded" container=a8571d521643a4f99d36e9d1574c74ae5e76a44d0d4b823e0825b4794fe43c47 type=CONTAINER_CREATED_EVENT Mar 12 02:08:07.443437 containerd[1613]: time="2026-03-12T02:08:07.437409031Z" level=info msg="container event discarded" container=a8571d521643a4f99d36e9d1574c74ae5e76a44d0d4b823e0825b4794fe43c47 type=CONTAINER_STARTED_EVENT Mar 12 02:08:11.323237 containerd[1613]: time="2026-03-12T02:08:11.322086488Z" level=info msg="container event discarded" container=aefc11a9cdfe372b2487ed4ec6d1bbac82d54db6efae6a07683dc4c7aea57478 type=CONTAINER_CREATED_EVENT Mar 12 02:08:11.775550 containerd[1613]: time="2026-03-12T02:08:11.774012577Z" level=info msg="container event discarded" container=aefc11a9cdfe372b2487ed4ec6d1bbac82d54db6efae6a07683dc4c7aea57478 type=CONTAINER_STARTED_EVENT Mar 12 02:08:12.119068 containerd[1613]: time="2026-03-12T02:08:12.118490003Z" level=info msg="container event discarded" container=aefc11a9cdfe372b2487ed4ec6d1bbac82d54db6efae6a07683dc4c7aea57478 type=CONTAINER_STOPPED_EVENT Mar 12 02:08:23.669174 containerd[1613]: time="2026-03-12T02:08:23.669042604Z" level=info msg="container event discarded" container=2852c93fb08be53f627874b2e52448b9d6fcd68d09662a023ee57c2837e466c0 type=CONTAINER_CREATED_EVENT Mar 12 02:08:24.071261 containerd[1613]: time="2026-03-12T02:08:24.071163115Z" level=info msg="container event discarded" container=2852c93fb08be53f627874b2e52448b9d6fcd68d09662a023ee57c2837e466c0 type=CONTAINER_STARTED_EVENT Mar 12 02:08:24.549898 containerd[1613]: time="2026-03-12T02:08:24.548290248Z" level=info msg="container event discarded" container=2852c93fb08be53f627874b2e52448b9d6fcd68d09662a023ee57c2837e466c0 type=CONTAINER_STOPPED_EVENT Mar 12 02:08:24.935378 containerd[1613]: time="2026-03-12T02:08:24.935289572Z" level=info msg="container event discarded" 
container=fbea47a6685148b3bbee69172aef28a6e568131633cbae3f5bad66bc53f3d3dd type=CONTAINER_CREATED_EVENT Mar 12 02:08:25.954067 containerd[1613]: time="2026-03-12T02:08:25.946759180Z" level=info msg="container event discarded" container=fbea47a6685148b3bbee69172aef28a6e568131633cbae3f5bad66bc53f3d3dd type=CONTAINER_STARTED_EVENT Mar 12 02:08:31.945327 kubelet[2981]: E0312 02:08:31.943191 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:08:39.955412 kubelet[2981]: E0312 02:08:39.951191 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:08:43.286395 systemd[1]: Started sshd@7-10.0.0.28:22-10.0.0.1:38336.service - OpenSSH per-connection server daemon (10.0.0.1:38336). Mar 12 02:08:44.785505 sshd[4850]: Accepted publickey for core from 10.0.0.1 port 38336 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:08:44.838191 sshd-session[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:08:45.120028 systemd-logind[1589]: New session 9 of user core. Mar 12 02:08:45.285425 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 12 02:08:46.599233 sshd[4857]: Connection closed by 10.0.0.1 port 38336 Mar 12 02:08:46.597344 sshd-session[4850]: pam_unix(sshd:session): session closed for user core Mar 12 02:08:46.626228 systemd[1]: sshd@7-10.0.0.28:22-10.0.0.1:38336.service: Deactivated successfully. Mar 12 02:08:46.635992 systemd[1]: session-9.scope: Deactivated successfully. Mar 12 02:08:46.642398 systemd-logind[1589]: Session 9 logged out. Waiting for processes to exit. Mar 12 02:08:46.654968 systemd-logind[1589]: Removed session 9. 
Mar 12 02:08:47.945355 kubelet[2981]: E0312 02:08:47.945062 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:08:47.950291 kubelet[2981]: E0312 02:08:47.948324 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:08:51.724505 systemd[1]: Started sshd@8-10.0.0.28:22-10.0.0.1:56434.service - OpenSSH per-connection server daemon (10.0.0.1:56434). Mar 12 02:08:51.977807 kubelet[2981]: E0312 02:08:51.977231 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:08:52.287206 sshd[4895]: Accepted publickey for core from 10.0.0.1 port 56434 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:08:52.294381 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:08:52.328791 containerd[1613]: time="2026-03-12T02:08:52.328139511Z" level=info msg="container event discarded" container=c47ecfcc90fb635ccf52bfbe1936c39f4e72a93f0cabe0c48b976a6e1c64052e type=CONTAINER_CREATED_EVENT Mar 12 02:08:52.379395 containerd[1613]: time="2026-03-12T02:08:52.376422728Z" level=info msg="container event discarded" container=c47ecfcc90fb635ccf52bfbe1936c39f4e72a93f0cabe0c48b976a6e1c64052e type=CONTAINER_STARTED_EVENT Mar 12 02:08:52.389374 systemd-logind[1589]: New session 10 of user core. Mar 12 02:08:52.434267 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 12 02:08:53.727537 sshd[4913]: Connection closed by 10.0.0.1 port 56434 Mar 12 02:08:53.710544 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Mar 12 02:08:53.795551 systemd[1]: sshd@8-10.0.0.28:22-10.0.0.1:56434.service: Deactivated successfully. Mar 12 02:08:53.815229 systemd[1]: session-10.scope: Deactivated successfully. Mar 12 02:08:53.830565 systemd-logind[1589]: Session 10 logged out. Waiting for processes to exit. Mar 12 02:08:53.849494 systemd-logind[1589]: Removed session 10. Mar 12 02:08:55.429412 containerd[1613]: time="2026-03-12T02:08:55.428388816Z" level=info msg="container event discarded" container=cda89d7a2130029cf6fc1358fe9effedad1d0736ad39fe67eaa5cd6b5f274056 type=CONTAINER_CREATED_EVENT Mar 12 02:08:55.694440 containerd[1613]: time="2026-03-12T02:08:55.691849138Z" level=info msg="container event discarded" container=dc140b4da21d3893f97f8c818874fee84e462dfe8b65427f476126a4678f5ebf type=CONTAINER_CREATED_EVENT Mar 12 02:08:55.694440 containerd[1613]: time="2026-03-12T02:08:55.692039083Z" level=info msg="container event discarded" container=dc140b4da21d3893f97f8c818874fee84e462dfe8b65427f476126a4678f5ebf type=CONTAINER_STARTED_EVENT Mar 12 02:08:55.961552 kubelet[2981]: E0312 02:08:55.956350 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:08:58.106902 containerd[1613]: time="2026-03-12T02:08:58.104160909Z" level=info msg="container event discarded" container=c43e0feeee0c75e4a989044de1bf34d18583f88fdeb08925b3b627319303062c type=CONTAINER_CREATED_EVENT Mar 12 02:08:58.775554 systemd[1]: Started sshd@9-10.0.0.28:22-10.0.0.1:56446.service - OpenSSH per-connection server daemon (10.0.0.1:56446). 
Mar 12 02:08:59.475870 sshd[4950]: Accepted publickey for core from 10.0.0.1 port 56446 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:08:59.487142 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:08:59.636520 systemd-logind[1589]: New session 11 of user core. Mar 12 02:08:59.668280 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 12 02:09:00.046118 containerd[1613]: time="2026-03-12T02:09:00.040516954Z" level=info msg="container event discarded" container=cda89d7a2130029cf6fc1358fe9effedad1d0736ad39fe67eaa5cd6b5f274056 type=CONTAINER_STARTED_EVENT Mar 12 02:09:00.657891 sshd[4954]: Connection closed by 10.0.0.1 port 56446 Mar 12 02:09:00.655865 sshd-session[4950]: pam_unix(sshd:session): session closed for user core Mar 12 02:09:00.725462 systemd[1]: sshd@9-10.0.0.28:22-10.0.0.1:56446.service: Deactivated successfully. Mar 12 02:09:00.738049 systemd[1]: session-11.scope: Deactivated successfully. Mar 12 02:09:00.763113 systemd-logind[1589]: Session 11 logged out. Waiting for processes to exit. Mar 12 02:09:00.780071 systemd-logind[1589]: Removed session 11. Mar 12 02:09:03.951197 kubelet[2981]: E0312 02:09:03.951150 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:09:04.177836 containerd[1613]: time="2026-03-12T02:09:04.173147388Z" level=info msg="container event discarded" container=c43e0feeee0c75e4a989044de1bf34d18583f88fdeb08925b3b627319303062c type=CONTAINER_STARTED_EVENT Mar 12 02:09:05.787105 systemd[1]: Started sshd@10-10.0.0.28:22-10.0.0.1:59256.service - OpenSSH per-connection server daemon (10.0.0.1:59256). 
Mar 12 02:09:06.555851 sshd[4993]: Accepted publickey for core from 10.0.0.1 port 59256 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:09:06.593515 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:09:06.695086 systemd-logind[1589]: New session 12 of user core. Mar 12 02:09:06.748472 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 12 02:09:08.717180 sshd[5005]: Connection closed by 10.0.0.1 port 59256 Mar 12 02:09:08.718364 sshd-session[4993]: pam_unix(sshd:session): session closed for user core Mar 12 02:09:08.775546 systemd[1]: sshd@10-10.0.0.28:22-10.0.0.1:59256.service: Deactivated successfully. Mar 12 02:09:08.853893 systemd[1]: session-12.scope: Deactivated successfully. Mar 12 02:09:08.902149 systemd-logind[1589]: Session 12 logged out. Waiting for processes to exit. Mar 12 02:09:08.935875 systemd-logind[1589]: Removed session 12. Mar 12 02:09:13.961362 systemd[1]: Started sshd@11-10.0.0.28:22-10.0.0.1:47114.service - OpenSSH per-connection server daemon (10.0.0.1:47114). Mar 12 02:09:15.249122 sshd[5040]: Accepted publickey for core from 10.0.0.1 port 47114 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:09:15.309320 sshd-session[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:09:15.461401 systemd-logind[1589]: New session 13 of user core. Mar 12 02:09:15.522894 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 12 02:09:17.703189 sshd[5058]: Connection closed by 10.0.0.1 port 47114 Mar 12 02:09:17.726856 sshd-session[5040]: pam_unix(sshd:session): session closed for user core Mar 12 02:09:17.854246 systemd[1]: sshd@11-10.0.0.28:22-10.0.0.1:47114.service: Deactivated successfully. Mar 12 02:09:17.869304 systemd[1]: session-13.scope: Deactivated successfully. Mar 12 02:09:17.882388 systemd-logind[1589]: Session 13 logged out. Waiting for processes to exit. 
Mar 12 02:09:18.006836 systemd-logind[1589]: Removed session 13. Mar 12 02:09:22.904836 systemd[1]: Started sshd@12-10.0.0.28:22-10.0.0.1:42138.service - OpenSSH per-connection server daemon (10.0.0.1:42138). Mar 12 02:09:24.016819 sshd[5098]: Accepted publickey for core from 10.0.0.1 port 42138 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:09:24.067017 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:09:24.320394 systemd-logind[1589]: New session 14 of user core. Mar 12 02:09:24.438471 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 12 02:09:25.806877 sshd[5102]: Connection closed by 10.0.0.1 port 42138 Mar 12 02:09:25.818814 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Mar 12 02:09:25.930525 systemd[1]: sshd@12-10.0.0.28:22-10.0.0.1:42138.service: Deactivated successfully. Mar 12 02:09:25.960772 systemd[1]: session-14.scope: Deactivated successfully. Mar 12 02:09:25.994021 systemd-logind[1589]: Session 14 logged out. Waiting for processes to exit. Mar 12 02:09:26.081224 systemd-logind[1589]: Removed session 14. Mar 12 02:09:30.997567 systemd[1]: Started sshd@13-10.0.0.28:22-10.0.0.1:48492.service - OpenSSH per-connection server daemon (10.0.0.1:48492). Mar 12 02:09:31.761044 sshd[5138]: Accepted publickey for core from 10.0.0.1 port 48492 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:09:31.759565 sshd-session[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:09:31.886350 systemd-logind[1589]: New session 15 of user core. Mar 12 02:09:31.979356 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 12 02:09:34.195528 sshd[5156]: Connection closed by 10.0.0.1 port 48492 Mar 12 02:09:34.192874 sshd-session[5138]: pam_unix(sshd:session): session closed for user core Mar 12 02:09:34.263463 systemd[1]: sshd@13-10.0.0.28:22-10.0.0.1:48492.service: Deactivated successfully. Mar 12 02:09:34.293793 systemd[1]: session-15.scope: Deactivated successfully. Mar 12 02:09:34.352856 systemd-logind[1589]: Session 15 logged out. Waiting for processes to exit. Mar 12 02:09:34.376969 systemd-logind[1589]: Removed session 15. Mar 12 02:09:39.388932 systemd[1]: Started sshd@14-10.0.0.28:22-10.0.0.1:53784.service - OpenSSH per-connection server daemon (10.0.0.1:53784). Mar 12 02:09:40.362500 sshd[5202]: Accepted publickey for core from 10.0.0.1 port 53784 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:09:40.369990 sshd-session[5202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:09:40.443945 systemd-logind[1589]: New session 16 of user core. Mar 12 02:09:40.499990 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 12 02:09:41.865879 sshd[5206]: Connection closed by 10.0.0.1 port 53784 Mar 12 02:09:41.857371 sshd-session[5202]: pam_unix(sshd:session): session closed for user core Mar 12 02:09:41.946461 systemd[1]: sshd@14-10.0.0.28:22-10.0.0.1:53784.service: Deactivated successfully. Mar 12 02:09:42.019554 systemd[1]: session-16.scope: Deactivated successfully. Mar 12 02:09:42.046460 systemd-logind[1589]: Session 16 logged out. Waiting for processes to exit. Mar 12 02:09:42.092827 systemd-logind[1589]: Removed session 16. Mar 12 02:09:43.087702 kubelet[2981]: E0312 02:09:43.043059 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:09:47.126566 systemd[1]: Started sshd@15-10.0.0.28:22-10.0.0.1:53794.service - OpenSSH per-connection server daemon (10.0.0.1:53794). 
Mar 12 02:09:48.644558 sshd[5241]: Accepted publickey for core from 10.0.0.1 port 53794 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:09:48.679900 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:09:48.811806 systemd-logind[1589]: New session 17 of user core. Mar 12 02:09:48.857283 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 12 02:09:50.762938 sshd[5260]: Connection closed by 10.0.0.1 port 53794 Mar 12 02:09:50.764366 sshd-session[5241]: pam_unix(sshd:session): session closed for user core Mar 12 02:09:50.836337 systemd[1]: sshd@15-10.0.0.28:22-10.0.0.1:53794.service: Deactivated successfully. Mar 12 02:09:50.887944 systemd[1]: session-17.scope: Deactivated successfully. Mar 12 02:09:50.903082 systemd-logind[1589]: Session 17 logged out. Waiting for processes to exit. Mar 12 02:09:50.944859 systemd-logind[1589]: Removed session 17. Mar 12 02:09:56.047109 systemd[1]: Started sshd@16-10.0.0.28:22-10.0.0.1:47784.service - OpenSSH per-connection server daemon (10.0.0.1:47784). Mar 12 02:09:56.863789 sshd[5299]: Accepted publickey for core from 10.0.0.1 port 47784 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:09:56.928980 sshd-session[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:09:57.019503 systemd-logind[1589]: New session 18 of user core. Mar 12 02:09:57.096449 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 12 02:09:59.073989 sshd[5305]: Connection closed by 10.0.0.1 port 47784 Mar 12 02:09:59.071036 sshd-session[5299]: pam_unix(sshd:session): session closed for user core Mar 12 02:09:59.250401 systemd[1]: sshd@16-10.0.0.28:22-10.0.0.1:47784.service: Deactivated successfully. Mar 12 02:09:59.313179 systemd[1]: session-18.scope: Deactivated successfully. Mar 12 02:09:59.365023 systemd-logind[1589]: Session 18 logged out. Waiting for processes to exit. 
Mar 12 02:09:59.394014 systemd-logind[1589]: Removed session 18. Mar 12 02:10:03.948997 kubelet[2981]: E0312 02:10:03.946453 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:10:04.183842 systemd[1]: Started sshd@17-10.0.0.28:22-10.0.0.1:57420.service - OpenSSH per-connection server daemon (10.0.0.1:57420). Mar 12 02:10:04.765818 sshd[5341]: Accepted publickey for core from 10.0.0.1 port 57420 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:10:04.800868 sshd-session[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:10:04.966151 systemd-logind[1589]: New session 19 of user core. Mar 12 02:10:05.011099 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 12 02:10:06.567126 sshd[5351]: Connection closed by 10.0.0.1 port 57420 Mar 12 02:10:06.568042 sshd-session[5341]: pam_unix(sshd:session): session closed for user core Mar 12 02:10:06.581082 systemd-logind[1589]: Session 19 logged out. Waiting for processes to exit. Mar 12 02:10:06.582531 systemd[1]: sshd@17-10.0.0.28:22-10.0.0.1:57420.service: Deactivated successfully. Mar 12 02:10:06.594026 systemd[1]: session-19.scope: Deactivated successfully. Mar 12 02:10:06.625075 systemd-logind[1589]: Removed session 19. 
Mar 12 02:10:08.984430 kubelet[2981]: E0312 02:10:08.959036 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:10:10.995240 kubelet[2981]: E0312 02:10:10.995192 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:10:11.822164 systemd[1]: Started sshd@18-10.0.0.28:22-10.0.0.1:44432.service - OpenSSH per-connection server daemon (10.0.0.1:44432). Mar 12 02:10:12.620747 sshd[5400]: Accepted publickey for core from 10.0.0.1 port 44432 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:10:12.645255 sshd-session[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:10:12.719038 systemd-logind[1589]: New session 20 of user core. Mar 12 02:10:12.771476 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 12 02:10:12.992793 kubelet[2981]: E0312 02:10:12.992397 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:10:14.432907 sshd[5406]: Connection closed by 10.0.0.1 port 44432 Mar 12 02:10:14.424140 sshd-session[5400]: pam_unix(sshd:session): session closed for user core Mar 12 02:10:14.515273 systemd[1]: sshd@18-10.0.0.28:22-10.0.0.1:44432.service: Deactivated successfully. Mar 12 02:10:14.552241 systemd[1]: session-20.scope: Deactivated successfully. Mar 12 02:10:14.581082 systemd-logind[1589]: Session 20 logged out. Waiting for processes to exit. Mar 12 02:10:14.596922 systemd-logind[1589]: Removed session 20. 
Mar 12 02:10:14.736419 update_engine[1592]: I20260312 02:10:14.713217 1592 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 12 02:10:14.736419 update_engine[1592]: I20260312 02:10:14.728189 1592 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 12 02:10:14.767156 update_engine[1592]: I20260312 02:10:14.767102 1592 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 12 02:10:14.843885 update_engine[1592]: I20260312 02:10:14.842190 1592 omaha_request_params.cc:62] Current group set to beta Mar 12 02:10:15.026534 update_engine[1592]: I20260312 02:10:14.997431 1592 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 12 02:10:15.026534 update_engine[1592]: I20260312 02:10:15.011099 1592 update_attempter.cc:643] Scheduling an action processor start. Mar 12 02:10:15.026534 update_engine[1592]: I20260312 02:10:15.011169 1592 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 12 02:10:15.043853 update_engine[1592]: I20260312 02:10:15.043802 1592 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 12 02:10:15.048464 update_engine[1592]: I20260312 02:10:15.048415 1592 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 12 02:10:15.048809 update_engine[1592]: I20260312 02:10:15.048561 1592 omaha_request_action.cc:272] Request: Mar 12 02:10:15.048809 update_engine[1592]: Mar 12 02:10:15.048809 update_engine[1592]: Mar 12 02:10:15.048809 update_engine[1592]: Mar 12 02:10:15.048809 update_engine[1592]: Mar 12 02:10:15.048809 update_engine[1592]: Mar 12 02:10:15.048809 update_engine[1592]: Mar 12 02:10:15.048809 update_engine[1592]: Mar 12 02:10:15.048809 update_engine[1592]: Mar 12 02:10:15.052118 update_engine[1592]: I20260312 02:10:15.049242 1592 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 12 02:10:15.217128 locksmithd[1647]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 12 02:10:15.578890 update_engine[1592]: I20260312 02:10:15.397879 1592 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 12 02:10:16.460073 update_engine[1592]: I20260312 02:10:16.451857 1592 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 12 02:10:16.843884 update_engine[1592]: E20260312 02:10:16.705948 1592 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Mar 12 02:10:16.911874 update_engine[1592]: I20260312 02:10:16.872232 1592 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 12 02:10:18.148892 kubelet[2981]: E0312 02:10:18.148523 2981 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.105s" Mar 12 02:10:19.067224 kubelet[2981]: E0312 02:10:19.026489 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:10:19.674810 systemd[1]: Started sshd@19-10.0.0.28:22-10.0.0.1:48270.service - OpenSSH per-connection server daemon (10.0.0.1:48270). Mar 12 02:10:21.056018 sshd[5433]: Accepted publickey for core from 10.0.0.1 port 48270 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:10:21.090427 sshd-session[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:10:21.274223 systemd-logind[1589]: New session 21 of user core. Mar 12 02:10:21.335450 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 12 02:10:23.490950 sshd[5450]: Connection closed by 10.0.0.1 port 48270 Mar 12 02:10:23.494949 sshd-session[5433]: pam_unix(sshd:session): session closed for user core Mar 12 02:10:23.558064 systemd[1]: sshd@19-10.0.0.28:22-10.0.0.1:48270.service: Deactivated successfully. 
Mar 12 02:10:23.583934 systemd[1]: session-21.scope: Deactivated successfully. Mar 12 02:10:23.596043 systemd-logind[1589]: Session 21 logged out. Waiting for processes to exit. Mar 12 02:10:23.599506 systemd-logind[1589]: Removed session 21. Mar 12 02:10:26.661157 update_engine[1592]: I20260312 02:10:26.653100 1592 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 12 02:10:26.661157 update_engine[1592]: I20260312 02:10:26.653497 1592 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 12 02:10:26.687731 update_engine[1592]: I20260312 02:10:26.685269 1592 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 12 02:10:26.713862 update_engine[1592]: E20260312 02:10:26.712010 1592 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Mar 12 02:10:26.720123 update_engine[1592]: I20260312 02:10:26.718858 1592 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 12 02:10:28.769918 systemd[1]: Started sshd@20-10.0.0.28:22-10.0.0.1:48282.service - OpenSSH per-connection server daemon (10.0.0.1:48282). Mar 12 02:10:30.561762 sshd[5486]: Accepted publickey for core from 10.0.0.1 port 48282 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:10:30.581993 sshd-session[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:10:30.788130 systemd-logind[1589]: New session 22 of user core. Mar 12 02:10:30.913199 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 12 02:10:32.009812 kubelet[2981]: E0312 02:10:32.008971 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:10:34.162043 sshd[5490]: Connection closed by 10.0.0.1 port 48282 Mar 12 02:10:34.152305 sshd-session[5486]: pam_unix(sshd:session): session closed for user core Mar 12 02:10:34.247226 systemd[1]: sshd@20-10.0.0.28:22-10.0.0.1:48282.service: Deactivated successfully. Mar 12 02:10:34.280167 systemd[1]: session-22.scope: Deactivated successfully. Mar 12 02:10:34.327279 systemd-logind[1589]: Session 22 logged out. Waiting for processes to exit. Mar 12 02:10:34.344842 systemd-logind[1589]: Removed session 22. Mar 12 02:10:36.658795 update_engine[1592]: I20260312 02:10:36.657186 1592 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 12 02:10:36.658795 update_engine[1592]: I20260312 02:10:36.657549 1592 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 12 02:10:36.715090 update_engine[1592]: I20260312 02:10:36.712935 1592 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 12 02:10:36.735953 update_engine[1592]: E20260312 02:10:36.731215 1592 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Mar 12 02:10:36.735953 update_engine[1592]: I20260312 02:10:36.731353 1592 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 12 02:10:39.263297 systemd[1]: Started sshd@21-10.0.0.28:22-10.0.0.1:50832.service - OpenSSH per-connection server daemon (10.0.0.1:50832). Mar 12 02:10:40.211564 sshd[5547]: Accepted publickey for core from 10.0.0.1 port 50832 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:10:40.232179 sshd-session[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:10:40.386123 systemd-logind[1589]: New session 23 of user core. 
Mar 12 02:10:40.461182 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 12 02:10:42.758767 sshd[5551]: Connection closed by 10.0.0.1 port 50832 Mar 12 02:10:42.749850 sshd-session[5547]: pam_unix(sshd:session): session closed for user core Mar 12 02:10:42.835893 systemd[1]: sshd@21-10.0.0.28:22-10.0.0.1:50832.service: Deactivated successfully. Mar 12 02:10:42.887089 systemd[1]: session-23.scope: Deactivated successfully. Mar 12 02:10:42.911722 systemd-logind[1589]: Session 23 logged out. Waiting for processes to exit. Mar 12 02:10:42.929355 systemd-logind[1589]: Removed session 23. Mar 12 02:10:43.850248 containerd[1613]: time="2026-03-12T02:10:43.850164285Z" level=info msg="container event discarded" container=b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0 type=CONTAINER_STOPPED_EVENT Mar 12 02:10:45.089295 containerd[1613]: time="2026-03-12T02:10:45.080537372Z" level=info msg="container event discarded" container=f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2 type=CONTAINER_CREATED_EVENT Mar 12 02:10:45.545260 containerd[1613]: time="2026-03-12T02:10:45.520290833Z" level=info msg="container event discarded" container=f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8 type=CONTAINER_STOPPED_EVENT Mar 12 02:10:46.527946 containerd[1613]: time="2026-03-12T02:10:46.527380501Z" level=info msg="container event discarded" container=51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f type=CONTAINER_CREATED_EVENT Mar 12 02:10:46.661854 update_engine[1592]: I20260312 02:10:46.651281 1592 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 12 02:10:46.667878 update_engine[1592]: I20260312 02:10:46.663916 1592 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 12 02:10:46.667878 update_engine[1592]: I20260312 02:10:46.665093 1592 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 12 02:10:46.708968 update_engine[1592]: E20260312 02:10:46.706037 1592 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Mar 12 02:10:46.708968 update_engine[1592]: I20260312 02:10:46.706183 1592 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 12 02:10:46.708968 update_engine[1592]: I20260312 02:10:46.706203 1592 omaha_request_action.cc:617] Omaha request response: Mar 12 02:10:46.709384 update_engine[1592]: E20260312 02:10:46.709352 1592 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 12 02:10:46.727733 update_engine[1592]: I20260312 02:10:46.720010 1592 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 12 02:10:46.727733 update_engine[1592]: I20260312 02:10:46.720041 1592 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 12 02:10:46.727733 update_engine[1592]: I20260312 02:10:46.720053 1592 update_attempter.cc:306] Processing Done. Mar 12 02:10:46.727733 update_engine[1592]: E20260312 02:10:46.720271 1592 update_attempter.cc:619] Update failed. Mar 12 02:10:46.727733 update_engine[1592]: I20260312 02:10:46.720804 1592 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 12 02:10:46.727733 update_engine[1592]: I20260312 02:10:46.720822 1592 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 12 02:10:46.727733 update_engine[1592]: I20260312 02:10:46.720835 1592 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Mar 12 02:10:46.727733 update_engine[1592]: I20260312 02:10:46.720920 1592 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 12 02:10:46.727733 update_engine[1592]: I20260312 02:10:46.720953 1592 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 12 02:10:46.727733 update_engine[1592]: I20260312 02:10:46.720965 1592 omaha_request_action.cc:272] Request: Mar 12 02:10:46.727733 update_engine[1592]: Mar 12 02:10:46.727733 update_engine[1592]: Mar 12 02:10:46.727733 update_engine[1592]: Mar 12 02:10:46.727733 update_engine[1592]: Mar 12 02:10:46.727733 update_engine[1592]: Mar 12 02:10:46.727733 update_engine[1592]: Mar 12 02:10:46.727733 update_engine[1592]: I20260312 02:10:46.720979 1592 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 12 02:10:46.727733 update_engine[1592]: I20260312 02:10:46.721267 1592 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 12 02:10:46.732945 locksmithd[1647]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 12 02:10:46.734329 update_engine[1592]: I20260312 02:10:46.734292 1592 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 12 02:10:46.763122 update_engine[1592]: E20260312 02:10:46.763057 1592 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Mar 12 02:10:46.763392 update_engine[1592]: I20260312 02:10:46.763362 1592 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 12 02:10:46.763803 update_engine[1592]: I20260312 02:10:46.763778 1592 omaha_request_action.cc:617] Omaha request response: Mar 12 02:10:46.763899 update_engine[1592]: I20260312 02:10:46.763877 1592 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 12 02:10:46.763978 update_engine[1592]: I20260312 02:10:46.763957 1592 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 12 02:10:46.764050 update_engine[1592]: I20260312 02:10:46.764027 1592 update_attempter.cc:306] Processing Done. Mar 12 02:10:46.764135 update_engine[1592]: I20260312 02:10:46.764113 1592 update_attempter.cc:310] Error event sent. Mar 12 02:10:46.764566 update_engine[1592]: I20260312 02:10:46.764314 1592 update_check_scheduler.cc:74] Next update check in 47m34s Mar 12 02:10:46.766134 locksmithd[1647]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 12 02:10:47.888271 systemd[1]: Started sshd@22-10.0.0.28:22-10.0.0.1:50840.service - OpenSSH per-connection server daemon (10.0.0.1:50840). Mar 12 02:10:48.660143 sshd[5591]: Accepted publickey for core from 10.0.0.1 port 50840 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:10:48.677546 sshd-session[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:10:48.891398 systemd-logind[1589]: New session 24 of user core. Mar 12 02:10:48.930159 systemd[1]: Started session-24.scope - Session 24 of User core. 
Mar 12 02:10:49.756054 containerd[1613]: time="2026-03-12T02:10:49.755951460Z" level=info msg="container event discarded" container=f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2 type=CONTAINER_STARTED_EVENT Mar 12 02:10:51.086793 containerd[1613]: time="2026-03-12T02:10:51.066906335Z" level=info msg="container event discarded" container=51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f type=CONTAINER_STARTED_EVENT Mar 12 02:10:51.250088 sshd[5604]: Connection closed by 10.0.0.1 port 50840 Mar 12 02:10:51.255146 sshd-session[5591]: pam_unix(sshd:session): session closed for user core Mar 12 02:10:51.315095 systemd[1]: sshd@22-10.0.0.28:22-10.0.0.1:50840.service: Deactivated successfully. Mar 12 02:10:51.342028 systemd[1]: session-24.scope: Deactivated successfully. Mar 12 02:10:51.417375 systemd-logind[1589]: Session 24 logged out. Waiting for processes to exit. Mar 12 02:10:51.438213 systemd-logind[1589]: Removed session 24. Mar 12 02:10:56.361045 systemd[1]: Started sshd@23-10.0.0.28:22-10.0.0.1:34332.service - OpenSSH per-connection server daemon (10.0.0.1:34332). Mar 12 02:10:56.392790 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Mar 12 02:10:56.882206 systemd-tmpfiles[5645]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 12 02:10:56.882353 systemd-tmpfiles[5645]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 12 02:10:56.917192 systemd-tmpfiles[5645]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 12 02:10:57.055155 systemd-tmpfiles[5645]: ACLs are not supported, ignoring. Mar 12 02:10:57.055280 systemd-tmpfiles[5645]: ACLs are not supported, ignoring. Mar 12 02:10:57.273847 systemd-tmpfiles[5645]: Detected autofs mount point /boot during canonicalization of boot. 
Mar 12 02:10:57.274189 systemd-tmpfiles[5645]: Skipping /boot Mar 12 02:10:57.455376 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Mar 12 02:10:57.472301 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Mar 12 02:10:57.589466 sshd[5644]: Accepted publickey for core from 10.0.0.1 port 34332 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:10:57.649204 sshd-session[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:10:57.727992 systemd-logind[1589]: New session 25 of user core. Mar 12 02:10:57.762047 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 12 02:10:59.572203 sshd[5654]: Connection closed by 10.0.0.1 port 34332 Mar 12 02:10:59.570405 sshd-session[5644]: pam_unix(sshd:session): session closed for user core Mar 12 02:10:59.656998 systemd-logind[1589]: Session 25 logged out. Waiting for processes to exit. Mar 12 02:10:59.665171 systemd[1]: sshd@23-10.0.0.28:22-10.0.0.1:34332.service: Deactivated successfully. Mar 12 02:10:59.715345 systemd[1]: session-25.scope: Deactivated successfully. Mar 12 02:10:59.735483 systemd-logind[1589]: Removed session 25. Mar 12 02:10:59.947267 kubelet[2981]: E0312 02:10:59.946387 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:11:04.647274 systemd[1]: Started sshd@24-10.0.0.28:22-10.0.0.1:55046.service - OpenSSH per-connection server daemon (10.0.0.1:55046). Mar 12 02:11:05.017944 sshd[5695]: Accepted publickey for core from 10.0.0.1 port 55046 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:11:05.036010 sshd-session[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:11:05.090324 systemd-logind[1589]: New session 26 of user core. 
Mar 12 02:11:05.122559 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 12 02:11:05.618355 sshd[5703]: Connection closed by 10.0.0.1 port 55046 Mar 12 02:11:05.619702 sshd-session[5695]: pam_unix(sshd:session): session closed for user core Mar 12 02:11:05.638934 systemd[1]: sshd@24-10.0.0.28:22-10.0.0.1:55046.service: Deactivated successfully. Mar 12 02:11:05.649054 systemd[1]: session-26.scope: Deactivated successfully. Mar 12 02:11:05.662503 systemd-logind[1589]: Session 26 logged out. Waiting for processes to exit. Mar 12 02:11:05.668103 systemd-logind[1589]: Removed session 26. Mar 12 02:11:10.679176 systemd[1]: Started sshd@25-10.0.0.28:22-10.0.0.1:48900.service - OpenSSH per-connection server daemon (10.0.0.1:48900). Mar 12 02:11:11.108528 sshd[5749]: Accepted publickey for core from 10.0.0.1 port 48900 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:11:11.122301 sshd-session[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:11:11.158710 systemd-logind[1589]: New session 27 of user core. Mar 12 02:11:11.177132 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 12 02:11:11.835165 sshd[5753]: Connection closed by 10.0.0.1 port 48900 Mar 12 02:11:11.836237 sshd-session[5749]: pam_unix(sshd:session): session closed for user core Mar 12 02:11:11.859326 systemd[1]: sshd@25-10.0.0.28:22-10.0.0.1:48900.service: Deactivated successfully. Mar 12 02:11:11.903458 systemd[1]: session-27.scope: Deactivated successfully. Mar 12 02:11:11.932013 systemd-logind[1589]: Session 27 logged out. Waiting for processes to exit. Mar 12 02:11:11.947134 systemd-logind[1589]: Removed session 27. 
Mar 12 02:11:12.947858 kubelet[2981]: E0312 02:11:12.945379 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:11:16.941010 systemd[1]: Started sshd@26-10.0.0.28:22-10.0.0.1:48908.service - OpenSSH per-connection server daemon (10.0.0.1:48908). Mar 12 02:11:17.344997 sshd[5787]: Accepted publickey for core from 10.0.0.1 port 48908 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:11:17.340245 sshd-session[5787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:11:17.376996 systemd-logind[1589]: New session 28 of user core. Mar 12 02:11:17.395177 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 12 02:11:17.755066 sshd[5791]: Connection closed by 10.0.0.1 port 48908 Mar 12 02:11:17.757990 sshd-session[5787]: pam_unix(sshd:session): session closed for user core Mar 12 02:11:17.780316 systemd[1]: sshd@26-10.0.0.28:22-10.0.0.1:48908.service: Deactivated successfully. Mar 12 02:11:17.794116 systemd[1]: session-28.scope: Deactivated successfully. Mar 12 02:11:17.800782 systemd-logind[1589]: Session 28 logged out. Waiting for processes to exit. Mar 12 02:11:17.819498 systemd-logind[1589]: Removed session 28. Mar 12 02:11:20.961220 kubelet[2981]: E0312 02:11:20.961172 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:11:22.930266 systemd[1]: Started sshd@27-10.0.0.28:22-10.0.0.1:36232.service - OpenSSH per-connection server daemon (10.0.0.1:36232). 
Mar 12 02:11:23.491789 sshd[5825]: Accepted publickey for core from 10.0.0.1 port 36232 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:11:23.503387 sshd-session[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:11:23.603078 systemd-logind[1589]: New session 29 of user core. Mar 12 02:11:23.632243 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 12 02:11:24.206419 sshd[5835]: Connection closed by 10.0.0.1 port 36232 Mar 12 02:11:24.206295 sshd-session[5825]: pam_unix(sshd:session): session closed for user core Mar 12 02:11:24.265087 systemd[1]: sshd@27-10.0.0.28:22-10.0.0.1:36232.service: Deactivated successfully. Mar 12 02:11:24.273495 systemd[1]: session-29.scope: Deactivated successfully. Mar 12 02:11:24.289245 systemd-logind[1589]: Session 29 logged out. Waiting for processes to exit. Mar 12 02:11:24.295894 systemd[1]: Started sshd@28-10.0.0.28:22-10.0.0.1:36238.service - OpenSSH per-connection server daemon (10.0.0.1:36238). Mar 12 02:11:24.312858 systemd-logind[1589]: Removed session 29. Mar 12 02:11:24.614885 sshd[5851]: Accepted publickey for core from 10.0.0.1 port 36238 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:11:24.620227 sshd-session[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:11:24.666249 systemd-logind[1589]: New session 30 of user core. Mar 12 02:11:24.690244 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 12 02:11:25.961167 sshd[5855]: Connection closed by 10.0.0.1 port 36238 Mar 12 02:11:25.958159 sshd-session[5851]: pam_unix(sshd:session): session closed for user core Mar 12 02:11:26.076978 systemd[1]: sshd@28-10.0.0.28:22-10.0.0.1:36238.service: Deactivated successfully. Mar 12 02:11:26.100103 systemd[1]: session-30.scope: Deactivated successfully. Mar 12 02:11:26.110891 systemd-logind[1589]: Session 30 logged out. Waiting for processes to exit. 
Mar 12 02:11:26.131183 systemd[1]: Started sshd@29-10.0.0.28:22-10.0.0.1:36242.service - OpenSSH per-connection server daemon (10.0.0.1:36242). Mar 12 02:11:26.140925 systemd-logind[1589]: Removed session 30. Mar 12 02:11:26.421031 sshd[5869]: Accepted publickey for core from 10.0.0.1 port 36242 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:11:26.420135 sshd-session[5869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:11:26.449483 systemd-logind[1589]: New session 31 of user core. Mar 12 02:11:26.503516 systemd[1]: Started session-31.scope - Session 31 of User core. Mar 12 02:11:26.887433 sshd[5885]: Connection closed by 10.0.0.1 port 36242 Mar 12 02:11:26.889494 sshd-session[5869]: pam_unix(sshd:session): session closed for user core Mar 12 02:11:26.915521 systemd[1]: sshd@29-10.0.0.28:22-10.0.0.1:36242.service: Deactivated successfully. Mar 12 02:11:26.933295 systemd[1]: session-31.scope: Deactivated successfully. Mar 12 02:11:26.959135 systemd-logind[1589]: Session 31 logged out. Waiting for processes to exit. Mar 12 02:11:26.986904 systemd-logind[1589]: Removed session 31. Mar 12 02:11:30.990414 kubelet[2981]: E0312 02:11:30.967996 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:11:31.003184 kubelet[2981]: E0312 02:11:31.001727 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:11:31.936214 systemd[1]: Started sshd@30-10.0.0.28:22-10.0.0.1:49338.service - OpenSSH per-connection server daemon (10.0.0.1:49338). 
Mar 12 02:11:32.309135 sshd[5918]: Accepted publickey for core from 10.0.0.1 port 49338 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:11:32.315884 sshd-session[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:11:32.396846 systemd-logind[1589]: New session 32 of user core. Mar 12 02:11:32.429323 systemd[1]: Started session-32.scope - Session 32 of User core. Mar 12 02:11:33.305370 sshd[5922]: Connection closed by 10.0.0.1 port 49338 Mar 12 02:11:33.314249 sshd-session[5918]: pam_unix(sshd:session): session closed for user core Mar 12 02:11:33.349197 systemd-logind[1589]: Session 32 logged out. Waiting for processes to exit. Mar 12 02:11:33.349431 systemd[1]: sshd@30-10.0.0.28:22-10.0.0.1:49338.service: Deactivated successfully. Mar 12 02:11:33.380259 systemd[1]: session-32.scope: Deactivated successfully. Mar 12 02:11:33.407994 systemd-logind[1589]: Removed session 32. Mar 12 02:11:38.392969 systemd[1]: Started sshd@31-10.0.0.28:22-10.0.0.1:49350.service - OpenSSH per-connection server daemon (10.0.0.1:49350). Mar 12 02:11:39.003505 sshd[5957]: Accepted publickey for core from 10.0.0.1 port 49350 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:11:39.016512 sshd-session[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:11:39.129147 systemd-logind[1589]: New session 33 of user core. Mar 12 02:11:39.151818 systemd[1]: Started session-33.scope - Session 33 of User core. Mar 12 02:11:40.468867 sshd[5967]: Connection closed by 10.0.0.1 port 49350 Mar 12 02:11:40.500016 sshd-session[5957]: pam_unix(sshd:session): session closed for user core Mar 12 02:11:40.596148 systemd[1]: sshd@31-10.0.0.28:22-10.0.0.1:49350.service: Deactivated successfully. Mar 12 02:11:40.643224 systemd[1]: session-33.scope: Deactivated successfully. Mar 12 02:11:40.652923 systemd-logind[1589]: Session 33 logged out. Waiting for processes to exit. 
Mar 12 02:11:40.697900 systemd-logind[1589]: Removed session 33. Mar 12 02:11:42.016022 kubelet[2981]: E0312 02:11:42.007081 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:11:45.512370 systemd[1]: Started sshd@32-10.0.0.28:22-10.0.0.1:49830.service - OpenSSH per-connection server daemon (10.0.0.1:49830). Mar 12 02:11:45.939039 sshd[6001]: Accepted publickey for core from 10.0.0.1 port 49830 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:11:45.951410 sshd-session[6001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:11:46.007028 systemd-logind[1589]: New session 34 of user core. Mar 12 02:11:46.040906 systemd[1]: Started session-34.scope - Session 34 of User core. Mar 12 02:11:46.547986 sshd[6005]: Connection closed by 10.0.0.1 port 49830 Mar 12 02:11:46.545184 sshd-session[6001]: pam_unix(sshd:session): session closed for user core Mar 12 02:11:46.560224 systemd[1]: sshd@32-10.0.0.28:22-10.0.0.1:49830.service: Deactivated successfully. Mar 12 02:11:46.564067 systemd[1]: session-34.scope: Deactivated successfully. Mar 12 02:11:46.588507 systemd-logind[1589]: Session 34 logged out. Waiting for processes to exit. Mar 12 02:11:46.610149 systemd-logind[1589]: Removed session 34. Mar 12 02:11:51.632709 systemd[1]: Started sshd@33-10.0.0.28:22-10.0.0.1:37390.service - OpenSSH per-connection server daemon (10.0.0.1:37390). Mar 12 02:11:52.224242 sshd[6039]: Accepted publickey for core from 10.0.0.1 port 37390 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:11:52.234501 sshd-session[6039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:11:52.288501 systemd-logind[1589]: New session 35 of user core. Mar 12 02:11:52.309934 systemd[1]: Started session-35.scope - Session 35 of User core. 
Mar 12 02:11:53.328076 sshd[6043]: Connection closed by 10.0.0.1 port 37390 Mar 12 02:11:53.329119 sshd-session[6039]: pam_unix(sshd:session): session closed for user core Mar 12 02:11:53.346491 systemd[1]: sshd@33-10.0.0.28:22-10.0.0.1:37390.service: Deactivated successfully. Mar 12 02:11:53.525100 systemd[1]: session-35.scope: Deactivated successfully. Mar 12 02:11:53.541858 systemd-logind[1589]: Session 35 logged out. Waiting for processes to exit. Mar 12 02:11:53.589977 systemd-logind[1589]: Removed session 35. Mar 12 02:11:58.413161 systemd[1]: Started sshd@34-10.0.0.28:22-10.0.0.1:37398.service - OpenSSH per-connection server daemon (10.0.0.1:37398). Mar 12 02:11:58.838147 sshd[6093]: Accepted publickey for core from 10.0.0.1 port 37398 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:11:58.852418 sshd-session[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:11:58.908198 systemd-logind[1589]: New session 36 of user core. Mar 12 02:11:58.937146 systemd[1]: Started session-36.scope - Session 36 of User core. Mar 12 02:11:59.857381 sshd[6097]: Connection closed by 10.0.0.1 port 37398 Mar 12 02:11:59.859198 sshd-session[6093]: pam_unix(sshd:session): session closed for user core Mar 12 02:11:59.907397 systemd-logind[1589]: Session 36 logged out. Waiting for processes to exit. Mar 12 02:11:59.915359 systemd[1]: sshd@34-10.0.0.28:22-10.0.0.1:37398.service: Deactivated successfully. Mar 12 02:11:59.930533 systemd[1]: session-36.scope: Deactivated successfully. Mar 12 02:11:59.949974 systemd-logind[1589]: Removed session 36. Mar 12 02:12:01.950072 kubelet[2981]: E0312 02:12:01.946296 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:12:04.946526 systemd[1]: Started sshd@35-10.0.0.28:22-10.0.0.1:54148.service - OpenSSH per-connection server daemon (10.0.0.1:54148). 
Mar 12 02:12:05.691337 sshd[6137]: Accepted publickey for core from 10.0.0.1 port 54148 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:12:05.729268 sshd-session[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:12:05.781906 systemd-logind[1589]: New session 37 of user core. Mar 12 02:12:05.822370 systemd[1]: Started session-37.scope - Session 37 of User core. Mar 12 02:12:06.714197 sshd[6141]: Connection closed by 10.0.0.1 port 54148 Mar 12 02:12:06.713071 sshd-session[6137]: pam_unix(sshd:session): session closed for user core Mar 12 02:12:06.804467 systemd[1]: sshd@35-10.0.0.28:22-10.0.0.1:54148.service: Deactivated successfully. Mar 12 02:12:06.837716 systemd[1]: session-37.scope: Deactivated successfully. Mar 12 02:12:06.867816 systemd-logind[1589]: Session 37 logged out. Waiting for processes to exit. Mar 12 02:12:06.883806 systemd-logind[1589]: Removed session 37. Mar 12 02:12:11.798354 systemd[1]: Started sshd@36-10.0.0.28:22-10.0.0.1:60464.service - OpenSSH per-connection server daemon (10.0.0.1:60464). Mar 12 02:12:12.221139 sshd[6177]: Accepted publickey for core from 10.0.0.1 port 60464 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:12:12.229559 sshd-session[6177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:12:12.288216 systemd-logind[1589]: New session 38 of user core. Mar 12 02:12:12.306922 systemd[1]: Started session-38.scope - Session 38 of User core. Mar 12 02:12:13.004037 sshd[6181]: Connection closed by 10.0.0.1 port 60464 Mar 12 02:12:13.006367 sshd-session[6177]: pam_unix(sshd:session): session closed for user core Mar 12 02:12:13.048487 systemd[1]: sshd@36-10.0.0.28:22-10.0.0.1:60464.service: Deactivated successfully. Mar 12 02:12:13.087958 systemd[1]: session-38.scope: Deactivated successfully. Mar 12 02:12:13.108460 systemd-logind[1589]: Session 38 logged out. Waiting for processes to exit. 
Mar 12 02:12:13.121464 systemd-logind[1589]: Removed session 38. Mar 12 02:12:18.107748 systemd[1]: Started sshd@37-10.0.0.28:22-10.0.0.1:60476.service - OpenSSH per-connection server daemon (10.0.0.1:60476). Mar 12 02:12:18.605085 sshd[6215]: Accepted publickey for core from 10.0.0.1 port 60476 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:12:18.613112 sshd-session[6215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:12:18.655755 systemd-logind[1589]: New session 39 of user core. Mar 12 02:12:18.688692 systemd[1]: Started session-39.scope - Session 39 of User core. Mar 12 02:12:19.793388 sshd[6219]: Connection closed by 10.0.0.1 port 60476 Mar 12 02:12:19.800979 sshd-session[6215]: pam_unix(sshd:session): session closed for user core Mar 12 02:12:19.855063 systemd[1]: sshd@37-10.0.0.28:22-10.0.0.1:60476.service: Deactivated successfully. Mar 12 02:12:19.894339 systemd[1]: session-39.scope: Deactivated successfully. Mar 12 02:12:19.901291 systemd-logind[1589]: Session 39 logged out. Waiting for processes to exit. Mar 12 02:12:19.923746 systemd-logind[1589]: Removed session 39. Mar 12 02:12:19.940444 kubelet[2981]: E0312 02:12:19.940395 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:12:24.900793 systemd[1]: Started sshd@38-10.0.0.28:22-10.0.0.1:56142.service - OpenSSH per-connection server daemon (10.0.0.1:56142). Mar 12 02:12:25.640392 sshd[6271]: Accepted publickey for core from 10.0.0.1 port 56142 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:12:25.666410 sshd-session[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:12:25.783434 systemd-logind[1589]: New session 40 of user core. Mar 12 02:12:25.814479 systemd[1]: Started session-40.scope - Session 40 of User core. 
Mar 12 02:12:27.003389 sshd[6281]: Connection closed by 10.0.0.1 port 56142 Mar 12 02:12:27.007057 sshd-session[6271]: pam_unix(sshd:session): session closed for user core Mar 12 02:12:27.035029 systemd[1]: sshd@38-10.0.0.28:22-10.0.0.1:56142.service: Deactivated successfully. Mar 12 02:12:27.060553 systemd[1]: session-40.scope: Deactivated successfully. Mar 12 02:12:27.085155 systemd-logind[1589]: Session 40 logged out. Waiting for processes to exit. Mar 12 02:12:27.096717 systemd-logind[1589]: Removed session 40. Mar 12 02:12:27.947909 kubelet[2981]: E0312 02:12:27.947857 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:12:31.897285 containerd[1613]: time="2026-03-12T02:12:31.892765786Z" level=info msg="container event discarded" container=f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2 type=CONTAINER_STOPPED_EVENT Mar 12 02:12:32.104849 systemd[1]: Started sshd@39-10.0.0.28:22-10.0.0.1:36820.service - OpenSSH per-connection server daemon (10.0.0.1:36820). Mar 12 02:12:32.666372 sshd[6316]: Accepted publickey for core from 10.0.0.1 port 36820 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:12:32.701535 containerd[1613]: time="2026-03-12T02:12:32.693391834Z" level=info msg="container event discarded" container=b19e3dac97e3eb341d035a997015489b64d7dbaffa185e69f9e6445654759df0 type=CONTAINER_DELETED_EVENT Mar 12 02:12:32.699713 sshd-session[6316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:12:32.795316 systemd-logind[1589]: New session 41 of user core. 
Mar 12 02:12:32.806318 containerd[1613]: time="2026-03-12T02:12:32.800799360Z" level=info msg="container event discarded" container=ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0 type=CONTAINER_CREATED_EVENT Mar 12 02:12:32.892376 systemd[1]: Started session-41.scope - Session 41 of User core. Mar 12 02:12:33.325660 containerd[1613]: time="2026-03-12T02:12:33.325165034Z" level=info msg="container event discarded" container=ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0 type=CONTAINER_STARTED_EVENT Mar 12 02:12:33.959817 sshd[6321]: Connection closed by 10.0.0.1 port 36820 Mar 12 02:12:33.958489 sshd-session[6316]: pam_unix(sshd:session): session closed for user core Mar 12 02:12:34.005178 systemd[1]: sshd@39-10.0.0.28:22-10.0.0.1:36820.service: Deactivated successfully. Mar 12 02:12:34.012697 systemd[1]: session-41.scope: Deactivated successfully. Mar 12 02:12:34.028316 systemd-logind[1589]: Session 41 logged out. Waiting for processes to exit. Mar 12 02:12:34.048224 systemd-logind[1589]: Removed session 41. Mar 12 02:12:38.953951 kubelet[2981]: E0312 02:12:38.953902 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:12:39.045878 systemd[1]: Started sshd@40-10.0.0.28:22-10.0.0.1:47116.service - OpenSSH per-connection server daemon (10.0.0.1:47116). Mar 12 02:12:39.634180 sshd[6357]: Accepted publickey for core from 10.0.0.1 port 47116 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:12:39.643416 sshd-session[6357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:12:39.734090 systemd-logind[1589]: New session 42 of user core. Mar 12 02:12:39.760928 systemd[1]: Started session-42.scope - Session 42 of User core. 
Mar 12 02:12:40.359083 sshd[6361]: Connection closed by 10.0.0.1 port 47116 Mar 12 02:12:40.358370 sshd-session[6357]: pam_unix(sshd:session): session closed for user core Mar 12 02:12:40.401506 systemd-logind[1589]: Session 42 logged out. Waiting for processes to exit. Mar 12 02:12:40.402769 systemd[1]: sshd@40-10.0.0.28:22-10.0.0.1:47116.service: Deactivated successfully. Mar 12 02:12:40.429399 systemd[1]: session-42.scope: Deactivated successfully. Mar 12 02:12:40.452189 systemd-logind[1589]: Removed session 42. Mar 12 02:12:45.468320 systemd[1]: Started sshd@41-10.0.0.28:22-10.0.0.1:47124.service - OpenSSH per-connection server daemon (10.0.0.1:47124). Mar 12 02:12:46.237192 sshd[6394]: Accepted publickey for core from 10.0.0.1 port 47124 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:12:46.288466 sshd-session[6394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:12:46.361142 systemd-logind[1589]: New session 43 of user core. Mar 12 02:12:46.418390 systemd[1]: Started session-43.scope - Session 43 of User core. Mar 12 02:12:47.420875 sshd[6418]: Connection closed by 10.0.0.1 port 47124 Mar 12 02:12:47.425296 sshd-session[6394]: pam_unix(sshd:session): session closed for user core Mar 12 02:12:47.488861 systemd[1]: sshd@41-10.0.0.28:22-10.0.0.1:47124.service: Deactivated successfully. Mar 12 02:12:47.528822 systemd[1]: session-43.scope: Deactivated successfully. Mar 12 02:12:47.551177 systemd-logind[1589]: Session 43 logged out. Waiting for processes to exit. Mar 12 02:12:47.592344 systemd-logind[1589]: Removed session 43. 
Mar 12 02:12:50.949368 kubelet[2981]: E0312 02:12:50.947329 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:12:50.949368 kubelet[2981]: E0312 02:12:50.948435 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:12:52.464441 systemd[1]: Started sshd@42-10.0.0.28:22-10.0.0.1:33142.service - OpenSSH per-connection server daemon (10.0.0.1:33142). Mar 12 02:12:52.719380 sshd[6451]: Accepted publickey for core from 10.0.0.1 port 33142 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:12:52.729301 sshd-session[6451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:12:52.782213 systemd-logind[1589]: New session 44 of user core. Mar 12 02:12:52.805425 systemd[1]: Started session-44.scope - Session 44 of User core. Mar 12 02:12:53.431289 sshd[6455]: Connection closed by 10.0.0.1 port 33142 Mar 12 02:12:53.432937 sshd-session[6451]: pam_unix(sshd:session): session closed for user core Mar 12 02:12:53.491904 systemd[1]: sshd@42-10.0.0.28:22-10.0.0.1:33142.service: Deactivated successfully. Mar 12 02:12:53.519503 systemd[1]: session-44.scope: Deactivated successfully. Mar 12 02:12:53.528954 systemd-logind[1589]: Session 44 logged out. Waiting for processes to exit. Mar 12 02:12:53.550133 systemd-logind[1589]: Removed session 44. Mar 12 02:12:53.944501 kubelet[2981]: E0312 02:12:53.942262 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:12:58.626826 systemd[1]: Started sshd@43-10.0.0.28:22-10.0.0.1:33158.service - OpenSSH per-connection server daemon (10.0.0.1:33158). 
Mar 12 02:12:59.216196 sshd[6490]: Accepted publickey for core from 10.0.0.1 port 33158 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:12:59.224254 sshd-session[6490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:12:59.285777 systemd-logind[1589]: New session 45 of user core. Mar 12 02:12:59.324793 systemd[1]: Started session-45.scope - Session 45 of User core. Mar 12 02:13:00.129553 sshd[6494]: Connection closed by 10.0.0.1 port 33158 Mar 12 02:13:00.127944 sshd-session[6490]: pam_unix(sshd:session): session closed for user core Mar 12 02:13:00.150226 systemd[1]: sshd@43-10.0.0.28:22-10.0.0.1:33158.service: Deactivated successfully. Mar 12 02:13:00.168930 systemd[1]: session-45.scope: Deactivated successfully. Mar 12 02:13:00.182389 systemd-logind[1589]: Session 45 logged out. Waiting for processes to exit. Mar 12 02:13:00.184867 systemd-logind[1589]: Removed session 45. Mar 12 02:13:05.205994 systemd[1]: Started sshd@44-10.0.0.28:22-10.0.0.1:38850.service - OpenSSH per-connection server daemon (10.0.0.1:38850). Mar 12 02:13:05.606205 sshd[6528]: Accepted publickey for core from 10.0.0.1 port 38850 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:13:05.617250 sshd-session[6528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:13:05.649391 systemd-logind[1589]: New session 46 of user core. Mar 12 02:13:05.679991 systemd[1]: Started session-46.scope - Session 46 of User core. Mar 12 02:13:06.466888 sshd[6532]: Connection closed by 10.0.0.1 port 38850 Mar 12 02:13:06.462988 sshd-session[6528]: pam_unix(sshd:session): session closed for user core Mar 12 02:13:06.571898 systemd[1]: sshd@44-10.0.0.28:22-10.0.0.1:38850.service: Deactivated successfully. Mar 12 02:13:06.613843 systemd[1]: session-46.scope: Deactivated successfully. Mar 12 02:13:06.641403 systemd-logind[1589]: Session 46 logged out. Waiting for processes to exit. 
Mar 12 02:13:06.664078 systemd[1]: Started sshd@45-10.0.0.28:22-10.0.0.1:38864.service - OpenSSH per-connection server daemon (10.0.0.1:38864). Mar 12 02:13:06.710392 systemd-logind[1589]: Removed session 46. Mar 12 02:13:07.303698 sshd[6545]: Accepted publickey for core from 10.0.0.1 port 38864 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:13:07.314679 sshd-session[6545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:13:07.366740 systemd-logind[1589]: New session 47 of user core. Mar 12 02:13:07.387913 systemd[1]: Started session-47.scope - Session 47 of User core. Mar 12 02:13:08.789326 sshd[6557]: Connection closed by 10.0.0.1 port 38864 Mar 12 02:13:08.792813 sshd-session[6545]: pam_unix(sshd:session): session closed for user core Mar 12 02:13:08.891369 systemd[1]: sshd@45-10.0.0.28:22-10.0.0.1:38864.service: Deactivated successfully. Mar 12 02:13:08.909514 systemd[1]: session-47.scope: Deactivated successfully. Mar 12 02:13:08.927730 systemd-logind[1589]: Session 47 logged out. Waiting for processes to exit. Mar 12 02:13:08.946807 systemd[1]: Started sshd@46-10.0.0.28:22-10.0.0.1:41002.service - OpenSSH per-connection server daemon (10.0.0.1:41002). Mar 12 02:13:08.957978 systemd-logind[1589]: Removed session 47. Mar 12 02:13:09.403262 sshd[6583]: Accepted publickey for core from 10.0.0.1 port 41002 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:13:09.414897 sshd-session[6583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:13:09.468963 systemd-logind[1589]: New session 48 of user core. Mar 12 02:13:09.525218 systemd[1]: Started session-48.scope - Session 48 of User core. 
Mar 12 02:13:12.963717 sshd[6587]: Connection closed by 10.0.0.1 port 41002 Mar 12 02:13:12.963312 sshd-session[6583]: pam_unix(sshd:session): session closed for user core Mar 12 02:13:13.039801 kubelet[2981]: E0312 02:13:13.012195 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:13:13.098552 systemd[1]: sshd@46-10.0.0.28:22-10.0.0.1:41002.service: Deactivated successfully. Mar 12 02:13:13.108003 systemd[1]: session-48.scope: Deactivated successfully. Mar 12 02:13:13.110526 systemd[1]: session-48.scope: Consumed 1.101s CPU time, 41.1M memory peak. Mar 12 02:13:13.145709 systemd-logind[1589]: Session 48 logged out. Waiting for processes to exit. Mar 12 02:13:13.167830 systemd[1]: Started sshd@47-10.0.0.28:22-10.0.0.1:41016.service - OpenSSH per-connection server daemon (10.0.0.1:41016). Mar 12 02:13:13.228004 systemd-logind[1589]: Removed session 48. Mar 12 02:13:13.639274 sshd[6621]: Accepted publickey for core from 10.0.0.1 port 41016 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:13:13.648328 sshd-session[6621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:13:13.730505 systemd-logind[1589]: New session 49 of user core. Mar 12 02:13:13.781010 systemd[1]: Started session-49.scope - Session 49 of User core. Mar 12 02:13:15.766059 sshd[6633]: Connection closed by 10.0.0.1 port 41016 Mar 12 02:13:15.797059 sshd-session[6621]: pam_unix(sshd:session): session closed for user core Mar 12 02:13:15.890395 systemd[1]: sshd@47-10.0.0.28:22-10.0.0.1:41016.service: Deactivated successfully. Mar 12 02:13:15.898726 systemd[1]: session-49.scope: Deactivated successfully. Mar 12 02:13:15.904407 systemd-logind[1589]: Session 49 logged out. Waiting for processes to exit. 
Mar 12 02:13:15.936094 systemd[1]: Started sshd@48-10.0.0.28:22-10.0.0.1:41018.service - OpenSSH per-connection server daemon (10.0.0.1:41018). Mar 12 02:13:15.975491 systemd-logind[1589]: Removed session 49. Mar 12 02:13:16.315221 sshd[6644]: Accepted publickey for core from 10.0.0.1 port 41018 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:13:16.319707 sshd-session[6644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:13:16.347994 systemd-logind[1589]: New session 50 of user core. Mar 12 02:13:16.364055 systemd[1]: Started session-50.scope - Session 50 of User core. Mar 12 02:13:16.904382 sshd[6648]: Connection closed by 10.0.0.1 port 41018 Mar 12 02:13:16.906473 sshd-session[6644]: pam_unix(sshd:session): session closed for user core Mar 12 02:13:16.931002 systemd[1]: sshd@48-10.0.0.28:22-10.0.0.1:41018.service: Deactivated successfully. Mar 12 02:13:16.946701 systemd[1]: session-50.scope: Deactivated successfully. Mar 12 02:13:16.953808 systemd-logind[1589]: Session 50 logged out. Waiting for processes to exit. Mar 12 02:13:16.992050 systemd-logind[1589]: Removed session 50. Mar 12 02:13:21.989526 systemd[1]: Started sshd@49-10.0.0.28:22-10.0.0.1:41738.service - OpenSSH per-connection server daemon (10.0.0.1:41738). Mar 12 02:13:22.309256 sshd[6681]: Accepted publickey for core from 10.0.0.1 port 41738 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:13:22.333305 sshd-session[6681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:13:22.365955 systemd-logind[1589]: New session 51 of user core. Mar 12 02:13:22.383937 systemd[1]: Started session-51.scope - Session 51 of User core. 
Mar 12 02:13:23.450053 sshd[6685]: Connection closed by 10.0.0.1 port 41738 Mar 12 02:13:23.456065 sshd-session[6681]: pam_unix(sshd:session): session closed for user core Mar 12 02:13:23.498295 systemd[1]: sshd@49-10.0.0.28:22-10.0.0.1:41738.service: Deactivated successfully. Mar 12 02:13:23.503561 systemd[1]: session-51.scope: Deactivated successfully. Mar 12 02:13:23.511988 systemd-logind[1589]: Session 51 logged out. Waiting for processes to exit. Mar 12 02:13:23.529271 systemd-logind[1589]: Removed session 51. Mar 12 02:13:28.524149 systemd[1]: Started sshd@50-10.0.0.28:22-10.0.0.1:41748.service - OpenSSH per-connection server daemon (10.0.0.1:41748). Mar 12 02:13:28.825754 sshd[6724]: Accepted publickey for core from 10.0.0.1 port 41748 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:13:28.845125 sshd-session[6724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:13:28.913018 systemd-logind[1589]: New session 52 of user core. Mar 12 02:13:28.929471 systemd[1]: Started session-52.scope - Session 52 of User core. Mar 12 02:13:29.549568 sshd[6733]: Connection closed by 10.0.0.1 port 41748 Mar 12 02:13:29.554009 sshd-session[6724]: pam_unix(sshd:session): session closed for user core Mar 12 02:13:29.610812 systemd[1]: sshd@50-10.0.0.28:22-10.0.0.1:41748.service: Deactivated successfully. Mar 12 02:13:29.633504 systemd[1]: session-52.scope: Deactivated successfully. Mar 12 02:13:29.670851 systemd-logind[1589]: Session 52 logged out. Waiting for processes to exit. Mar 12 02:13:29.681994 systemd-logind[1589]: Removed session 52. Mar 12 02:13:34.622335 systemd[1]: Started sshd@51-10.0.0.28:22-10.0.0.1:49126.service - OpenSSH per-connection server daemon (10.0.0.1:49126). 
Mar 12 02:13:34.999534 sshd[6776]: Accepted publickey for core from 10.0.0.1 port 49126 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:13:35.007472 sshd-session[6776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:13:35.057050 systemd-logind[1589]: New session 53 of user core. Mar 12 02:13:35.111881 systemd[1]: Started session-53.scope - Session 53 of User core. Mar 12 02:13:36.279807 sshd[6784]: Connection closed by 10.0.0.1 port 49126 Mar 12 02:13:36.284243 sshd-session[6776]: pam_unix(sshd:session): session closed for user core Mar 12 02:13:36.316168 systemd[1]: sshd@51-10.0.0.28:22-10.0.0.1:49126.service: Deactivated successfully. Mar 12 02:13:36.351947 systemd[1]: session-53.scope: Deactivated successfully. Mar 12 02:13:36.358451 systemd-logind[1589]: Session 53 logged out. Waiting for processes to exit. Mar 12 02:13:36.378479 systemd-logind[1589]: Removed session 53. Mar 12 02:13:37.967056 kubelet[2981]: E0312 02:13:37.962388 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:13:39.948425 kubelet[2981]: E0312 02:13:39.940724 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:13:41.333522 systemd[1]: Started sshd@52-10.0.0.28:22-10.0.0.1:45490.service - OpenSSH per-connection server daemon (10.0.0.1:45490). Mar 12 02:13:41.793051 sshd[6821]: Accepted publickey for core from 10.0.0.1 port 45490 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:13:41.832158 sshd-session[6821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:13:41.930696 systemd-logind[1589]: New session 54 of user core. 
Mar 12 02:13:41.949161 kubelet[2981]: E0312 02:13:41.949118 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:13:41.958497 systemd[1]: Started session-54.scope - Session 54 of User core. Mar 12 02:13:42.651549 sshd[6825]: Connection closed by 10.0.0.1 port 45490 Mar 12 02:13:42.656925 sshd-session[6821]: pam_unix(sshd:session): session closed for user core Mar 12 02:13:42.696824 systemd[1]: sshd@52-10.0.0.28:22-10.0.0.1:45490.service: Deactivated successfully. Mar 12 02:13:42.707398 systemd[1]: session-54.scope: Deactivated successfully. Mar 12 02:13:42.720075 systemd-logind[1589]: Session 54 logged out. Waiting for processes to exit. Mar 12 02:13:42.766032 systemd-logind[1589]: Removed session 54. Mar 12 02:13:47.756396 systemd[1]: Started sshd@53-10.0.0.28:22-10.0.0.1:45492.service - OpenSSH per-connection server daemon (10.0.0.1:45492). Mar 12 02:13:48.406470 sshd[6859]: Accepted publickey for core from 10.0.0.1 port 45492 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:13:48.459211 sshd-session[6859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:13:48.728911 systemd-logind[1589]: New session 55 of user core. Mar 12 02:13:48.850206 systemd[1]: Started session-55.scope - Session 55 of User core. Mar 12 02:13:49.700072 sshd[6869]: Connection closed by 10.0.0.1 port 45492 Mar 12 02:13:49.703001 sshd-session[6859]: pam_unix(sshd:session): session closed for user core Mar 12 02:13:49.732019 systemd[1]: sshd@53-10.0.0.28:22-10.0.0.1:45492.service: Deactivated successfully. Mar 12 02:13:49.745301 systemd[1]: session-55.scope: Deactivated successfully. Mar 12 02:13:49.764126 systemd-logind[1589]: Session 55 logged out. Waiting for processes to exit. Mar 12 02:13:49.801963 systemd-logind[1589]: Removed session 55. 
Mar 12 02:13:52.951045 kubelet[2981]: E0312 02:13:52.950942 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:13:54.804902 systemd[1]: Started sshd@54-10.0.0.28:22-10.0.0.1:33864.service - OpenSSH per-connection server daemon (10.0.0.1:33864). Mar 12 02:13:55.264097 sshd[6903]: Accepted publickey for core from 10.0.0.1 port 33864 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:13:55.271995 sshd-session[6903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:13:55.323827 systemd-logind[1589]: New session 56 of user core. Mar 12 02:13:55.350800 systemd[1]: Started session-56.scope - Session 56 of User core. Mar 12 02:13:55.985755 sshd[6908]: Connection closed by 10.0.0.1 port 33864 Mar 12 02:13:55.979842 sshd-session[6903]: pam_unix(sshd:session): session closed for user core Mar 12 02:13:56.007273 systemd[1]: sshd@54-10.0.0.28:22-10.0.0.1:33864.service: Deactivated successfully. Mar 12 02:13:56.034866 systemd[1]: session-56.scope: Deactivated successfully. Mar 12 02:13:56.042324 systemd-logind[1589]: Session 56 logged out. Waiting for processes to exit. Mar 12 02:13:56.049276 systemd-logind[1589]: Removed session 56. Mar 12 02:13:58.974059 kubelet[2981]: E0312 02:13:58.974000 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:13:59.945741 kubelet[2981]: E0312 02:13:59.945514 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:14:01.311170 systemd[1]: Started sshd@55-10.0.0.28:22-10.0.0.1:34186.service - OpenSSH per-connection server daemon (10.0.0.1:34186). 
Mar 12 02:14:02.748854 sshd[6953]: Accepted publickey for core from 10.0.0.1 port 34186 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:14:02.779158 sshd-session[6953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:14:03.046853 systemd-logind[1589]: New session 57 of user core. Mar 12 02:14:03.154160 systemd[1]: Started session-57.scope - Session 57 of User core. Mar 12 02:14:04.779711 sshd[6961]: Connection closed by 10.0.0.1 port 34186 Mar 12 02:14:04.783997 sshd-session[6953]: pam_unix(sshd:session): session closed for user core Mar 12 02:14:04.827856 systemd-logind[1589]: Session 57 logged out. Waiting for processes to exit. Mar 12 02:14:04.841236 systemd[1]: sshd@55-10.0.0.28:22-10.0.0.1:34186.service: Deactivated successfully. Mar 12 02:14:04.866163 systemd[1]: session-57.scope: Deactivated successfully. Mar 12 02:14:04.957758 systemd-logind[1589]: Removed session 57. Mar 12 02:14:10.116274 systemd[1]: Started sshd@56-10.0.0.28:22-10.0.0.1:35340.service - OpenSSH per-connection server daemon (10.0.0.1:35340). Mar 12 02:14:11.421854 sshd[7004]: Accepted publickey for core from 10.0.0.1 port 35340 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:14:11.463155 sshd-session[7004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:14:11.575360 systemd-logind[1589]: New session 58 of user core. Mar 12 02:14:11.697752 systemd[1]: Started session-58.scope - Session 58 of User core. Mar 12 02:14:13.906762 sshd[7008]: Connection closed by 10.0.0.1 port 35340 Mar 12 02:14:13.919204 sshd-session[7004]: pam_unix(sshd:session): session closed for user core Mar 12 02:14:14.010139 systemd-logind[1589]: Session 58 logged out. Waiting for processes to exit. Mar 12 02:14:14.026791 systemd[1]: sshd@56-10.0.0.28:22-10.0.0.1:35340.service: Deactivated successfully. Mar 12 02:14:14.056914 systemd[1]: session-58.scope: Deactivated successfully. 
Mar 12 02:14:14.112843 systemd-logind[1589]: Removed session 58. Mar 12 02:14:19.106273 systemd[1]: Started sshd@57-10.0.0.28:22-10.0.0.1:35712.service - OpenSSH per-connection server daemon (10.0.0.1:35712). Mar 12 02:14:20.232216 sshd[7051]: Accepted publickey for core from 10.0.0.1 port 35712 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:14:20.261333 sshd-session[7051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:14:20.322361 systemd-logind[1589]: New session 59 of user core. Mar 12 02:14:20.382292 systemd[1]: Started session-59.scope - Session 59 of User core. Mar 12 02:14:21.685249 sshd[7060]: Connection closed by 10.0.0.1 port 35712 Mar 12 02:14:21.697256 sshd-session[7051]: pam_unix(sshd:session): session closed for user core Mar 12 02:14:21.772908 systemd[1]: sshd@57-10.0.0.28:22-10.0.0.1:35712.service: Deactivated successfully. Mar 12 02:14:21.790425 systemd[1]: session-59.scope: Deactivated successfully. Mar 12 02:14:21.807374 systemd-logind[1589]: Session 59 logged out. Waiting for processes to exit. Mar 12 02:14:21.813138 systemd-logind[1589]: Removed session 59. Mar 12 02:14:26.892009 systemd[1]: Started sshd@58-10.0.0.28:22-10.0.0.1:35726.service - OpenSSH per-connection server daemon (10.0.0.1:35726). Mar 12 02:14:28.139333 sshd[7100]: Accepted publickey for core from 10.0.0.1 port 35726 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:14:28.250462 sshd-session[7100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:14:28.550996 systemd-logind[1589]: New session 60 of user core. Mar 12 02:14:28.636186 systemd[1]: Started session-60.scope - Session 60 of User core. 
Mar 12 02:14:32.267988 sshd[7104]: Connection closed by 10.0.0.1 port 35726 Mar 12 02:14:32.153392 sshd-session[7100]: pam_unix(sshd:session): session closed for user core Mar 12 02:14:32.365048 systemd[1]: sshd@58-10.0.0.28:22-10.0.0.1:35726.service: Deactivated successfully. Mar 12 02:14:32.419056 systemd[1]: session-60.scope: Deactivated successfully. Mar 12 02:14:32.635446 systemd-logind[1589]: Session 60 logged out. Waiting for processes to exit. Mar 12 02:14:32.749289 systemd-logind[1589]: Removed session 60. Mar 12 02:14:37.405490 systemd[1]: Started sshd@59-10.0.0.28:22-10.0.0.1:48554.service - OpenSSH per-connection server daemon (10.0.0.1:48554). Mar 12 02:14:38.570169 sshd[7139]: Accepted publickey for core from 10.0.0.1 port 48554 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:14:38.599347 sshd-session[7139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:14:38.949109 systemd-logind[1589]: New session 61 of user core. Mar 12 02:14:39.004330 systemd[1]: Started session-61.scope - Session 61 of User core. Mar 12 02:14:41.033984 sshd[7160]: Connection closed by 10.0.0.1 port 48554 Mar 12 02:14:40.988368 sshd-session[7139]: pam_unix(sshd:session): session closed for user core Mar 12 02:14:41.206282 systemd[1]: sshd@59-10.0.0.28:22-10.0.0.1:48554.service: Deactivated successfully. Mar 12 02:14:41.242323 systemd[1]: session-61.scope: Deactivated successfully. Mar 12 02:14:41.322015 systemd-logind[1589]: Session 61 logged out. Waiting for processes to exit. Mar 12 02:14:41.344276 systemd-logind[1589]: Removed session 61. Mar 12 02:14:43.074420 kubelet[2981]: E0312 02:14:43.074373 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:14:46.191350 systemd[1]: Started sshd@60-10.0.0.28:22-10.0.0.1:45682.service - OpenSSH per-connection server daemon (10.0.0.1:45682). 
Mar 12 02:14:47.036481 sshd[7196]: Accepted publickey for core from 10.0.0.1 port 45682 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:14:47.066278 sshd-session[7196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:14:47.133238 systemd-logind[1589]: New session 62 of user core. Mar 12 02:14:47.213489 systemd[1]: Started session-62.scope - Session 62 of User core. Mar 12 02:14:48.123343 kubelet[2981]: E0312 02:14:48.119988 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:14:52.042226 sshd[7200]: Connection closed by 10.0.0.1 port 45682 Mar 12 02:14:52.038117 sshd-session[7196]: pam_unix(sshd:session): session closed for user core Mar 12 02:14:52.236198 systemd[1]: sshd@60-10.0.0.28:22-10.0.0.1:45682.service: Deactivated successfully. Mar 12 02:14:52.245281 systemd-logind[1589]: Session 62 logged out. Waiting for processes to exit. Mar 12 02:14:52.338325 systemd[1]: session-62.scope: Deactivated successfully. Mar 12 02:14:52.687319 systemd-logind[1589]: Removed session 62. Mar 12 02:14:57.587157 systemd[1]: Started sshd@61-10.0.0.28:22-10.0.0.1:49914.service - OpenSSH per-connection server daemon (10.0.0.1:49914). Mar 12 02:14:59.539985 sshd[7243]: Accepted publickey for core from 10.0.0.1 port 49914 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:14:59.729168 sshd-session[7243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:14:59.930006 systemd-logind[1589]: New session 63 of user core. Mar 12 02:15:00.094197 systemd[1]: Started session-63.scope - Session 63 of User core. 
Mar 12 02:15:03.319112 sshd[7263]: Connection closed by 10.0.0.1 port 49914 Mar 12 02:15:03.321054 sshd-session[7243]: pam_unix(sshd:session): session closed for user core Mar 12 02:15:03.350303 systemd[1]: sshd@61-10.0.0.28:22-10.0.0.1:49914.service: Deactivated successfully. Mar 12 02:15:03.371081 systemd[1]: session-63.scope: Deactivated successfully. Mar 12 02:15:03.379181 systemd-logind[1589]: Session 63 logged out. Waiting for processes to exit. Mar 12 02:15:03.403196 systemd-logind[1589]: Removed session 63. Mar 12 02:15:03.971269 kubelet[2981]: E0312 02:15:03.971196 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:15:07.002848 kubelet[2981]: E0312 02:15:06.989825 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:15:07.945227 kubelet[2981]: E0312 02:15:07.945184 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:15:08.509415 systemd[1]: Started sshd@62-10.0.0.28:22-10.0.0.1:41378.service - OpenSSH per-connection server daemon (10.0.0.1:41378). Mar 12 02:15:10.026960 sshd[7302]: Accepted publickey for core from 10.0.0.1 port 41378 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:15:10.140227 sshd-session[7302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:15:10.424708 systemd-logind[1589]: New session 64 of user core. Mar 12 02:15:10.622187 systemd[1]: Started session-64.scope - Session 64 of User core. 
Mar 12 02:15:12.843717 sshd[7306]: Connection closed by 10.0.0.1 port 41378 Mar 12 02:15:12.869166 sshd-session[7302]: pam_unix(sshd:session): session closed for user core Mar 12 02:15:13.064871 systemd[1]: sshd@62-10.0.0.28:22-10.0.0.1:41378.service: Deactivated successfully. Mar 12 02:15:13.140850 systemd[1]: session-64.scope: Deactivated successfully. Mar 12 02:15:13.172867 systemd-logind[1589]: Session 64 logged out. Waiting for processes to exit. Mar 12 02:15:13.329121 systemd-logind[1589]: Removed session 64. Mar 12 02:15:14.953805 kubelet[2981]: E0312 02:15:14.953760 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:15:18.447009 systemd[1]: Started sshd@63-10.0.0.28:22-10.0.0.1:53412.service - OpenSSH per-connection server daemon (10.0.0.1:53412). Mar 12 02:15:19.490787 sshd[7347]: Accepted publickey for core from 10.0.0.1 port 53412 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:15:19.540001 sshd-session[7347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:15:19.689051 systemd-logind[1589]: New session 65 of user core. Mar 12 02:15:19.729542 systemd[1]: Started session-65.scope - Session 65 of User core. Mar 12 02:15:20.007803 kubelet[2981]: E0312 02:15:19.979429 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:15:23.610982 sshd[7365]: Connection closed by 10.0.0.1 port 53412 Mar 12 02:15:23.636518 sshd-session[7347]: pam_unix(sshd:session): session closed for user core Mar 12 02:15:24.006065 systemd[1]: sshd@63-10.0.0.28:22-10.0.0.1:53412.service: Deactivated successfully. Mar 12 02:15:24.091938 systemd[1]: session-65.scope: Deactivated successfully. Mar 12 02:15:24.123947 systemd-logind[1589]: Session 65 logged out. 
Waiting for processes to exit. Mar 12 02:15:24.135021 systemd-logind[1589]: Removed session 65. Mar 12 02:15:28.634870 systemd[1]: Started sshd@64-10.0.0.28:22-10.0.0.1:34826.service - OpenSSH per-connection server daemon (10.0.0.1:34826). Mar 12 02:15:28.932001 sshd[7399]: Accepted publickey for core from 10.0.0.1 port 34826 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:15:28.933392 sshd-session[7399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:15:28.959425 systemd-logind[1589]: New session 66 of user core. Mar 12 02:15:28.973316 systemd[1]: Started session-66.scope - Session 66 of User core. Mar 12 02:15:35.226255 kubelet[2981]: E0312 02:15:35.225270 2981 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Mar 12 02:15:37.600538 systemd[1]: cri-containerd-ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0.scope: Deactivated successfully. Mar 12 02:15:37.602251 systemd[1]: cri-containerd-ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0.scope: Consumed 28.660s CPU time, 51.4M memory peak, 68K read from disk. Mar 12 02:15:37.621172 containerd[1613]: time="2026-03-12T02:15:37.620281356Z" level=info msg="received container exit event container_id:\"ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0\" id:\"ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0\" pid:4549 exit_status:1 exited_at:{seconds:1773281737 nanos:610310026}" Mar 12 02:15:37.777306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0-rootfs.mount: Deactivated successfully. 
Mar 12 02:15:37.923470 kubelet[2981]: E0312 02:15:37.921850 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:15:40.679242 kubelet[2981]: E0312 02:15:40.672847 2981 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again" Mar 12 02:15:40.722272 sshd[7403]: Connection closed by 10.0.0.1 port 34826 Mar 12 02:15:40.724190 sshd-session[7399]: pam_unix(sshd:session): session closed for user core Mar 12 02:15:40.738859 systemd-logind[1589]: Session 66 logged out. Waiting for processes to exit. Mar 12 02:15:40.744277 systemd[1]: sshd@64-10.0.0.28:22-10.0.0.1:34826.service: Deactivated successfully. Mar 12 02:15:40.753274 systemd[1]: session-66.scope: Deactivated successfully. Mar 12 02:15:40.764269 systemd-logind[1589]: Removed session 66. Mar 12 02:15:40.780825 kubelet[2981]: E0312 02:15:40.780266 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:15:40.813186 systemd[1]: cri-containerd-51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f.scope: Deactivated successfully. Mar 12 02:15:40.833489 systemd[1]: cri-containerd-51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f.scope: Consumed 25.609s CPU time, 24.7M memory peak. 
Mar 12 02:15:40.865933 containerd[1613]: time="2026-03-12T02:15:40.860257482Z" level=info msg="received container exit event container_id:\"51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f\" id:\"51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f\" pid:4382 exit_status:1 exited_at:{seconds:1773281740 nanos:849492069}" Mar 12 02:15:41.234461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f-rootfs.mount: Deactivated successfully. Mar 12 02:15:41.695194 kubelet[2981]: I0312 02:15:41.659881 2981 scope.go:117] "RemoveContainer" containerID="f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2" Mar 12 02:15:41.695194 kubelet[2981]: I0312 02:15:41.660437 2981 scope.go:117] "RemoveContainer" containerID="ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0" Mar 12 02:15:41.695194 kubelet[2981]: E0312 02:15:41.660517 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:15:41.695194 kubelet[2981]: E0312 02:15:41.660849 2981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(8747e1f8a49a618fbc1324a8fe2d3754)\"" pod="kube-system/kube-controller-manager-localhost" podUID="8747e1f8a49a618fbc1324a8fe2d3754" Mar 12 02:15:41.729537 containerd[1613]: time="2026-03-12T02:15:41.727455969Z" level=info msg="RemoveContainer for \"f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2\"" Mar 12 02:15:41.739210 kubelet[2981]: I0312 02:15:41.738335 2981 scope.go:117] "RemoveContainer" containerID="51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f" Mar 12 02:15:41.739210 kubelet[2981]: E0312 
02:15:41.738425 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:15:41.739210 kubelet[2981]: E0312 02:15:41.738525 2981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(e944e4cb17af904786c3a2e01e298498)\"" pod="kube-system/kube-scheduler-localhost" podUID="e944e4cb17af904786c3a2e01e298498" Mar 12 02:15:41.787875 containerd[1613]: time="2026-03-12T02:15:41.786535879Z" level=info msg="RemoveContainer for \"f87a6a49695daed635251565a3afff8e60e63e0e8dcffc771e0725e4e2acc5e2\" returns successfully" Mar 12 02:15:41.790857 kubelet[2981]: I0312 02:15:41.789139 2981 scope.go:117] "RemoveContainer" containerID="f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8" Mar 12 02:15:41.830801 containerd[1613]: time="2026-03-12T02:15:41.829286697Z" level=info msg="RemoveContainer for \"f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8\"" Mar 12 02:15:41.926396 containerd[1613]: time="2026-03-12T02:15:41.924322031Z" level=info msg="RemoveContainer for \"f82158de8eb36ee452d05528985d9e13c7517aa0ad042c464d20b5d458f13bd8\" returns successfully" Mar 12 02:15:42.981534 kubelet[2981]: I0312 02:15:42.947930 2981 scope.go:117] "RemoveContainer" containerID="51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f" Mar 12 02:15:42.981534 kubelet[2981]: E0312 02:15:42.962899 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:15:42.981534 kubelet[2981]: E0312 02:15:42.963183 2981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: 
\"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(e944e4cb17af904786c3a2e01e298498)\"" pod="kube-system/kube-scheduler-localhost" podUID="e944e4cb17af904786c3a2e01e298498" Mar 12 02:15:43.717809 kubelet[2981]: I0312 02:15:43.713853 2981 scope.go:117] "RemoveContainer" containerID="ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0" Mar 12 02:15:43.722904 kubelet[2981]: E0312 02:15:43.718315 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:15:43.722904 kubelet[2981]: E0312 02:15:43.718439 2981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(8747e1f8a49a618fbc1324a8fe2d3754)\"" pod="kube-system/kube-controller-manager-localhost" podUID="8747e1f8a49a618fbc1324a8fe2d3754" Mar 12 02:15:45.957297 systemd[1]: Started sshd@65-10.0.0.28:22-10.0.0.1:44216.service - OpenSSH per-connection server daemon (10.0.0.1:44216). Mar 12 02:15:47.202293 sshd[7512]: Accepted publickey for core from 10.0.0.1 port 44216 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:15:47.256298 sshd-session[7512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:15:47.369275 systemd-logind[1589]: New session 67 of user core. Mar 12 02:15:47.392068 systemd[1]: Started session-67.scope - Session 67 of User core. 
Mar 12 02:15:48.588274 kubelet[2981]: I0312 02:15:48.587344 2981 scope.go:117] "RemoveContainer" containerID="51d462ab5e0450bee8902c7978bad528278ec3e45852451334c0749ef867cd2f" Mar 12 02:15:48.598470 kubelet[2981]: E0312 02:15:48.591040 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:15:48.630236 sshd[7530]: Connection closed by 10.0.0.1 port 44216 Mar 12 02:15:48.638184 sshd-session[7512]: pam_unix(sshd:session): session closed for user core Mar 12 02:15:48.664007 containerd[1613]: time="2026-03-12T02:15:48.660469895Z" level=info msg="CreateContainer within sandbox \"a6098b94f1a7eadea1395e97421f12c39acea61c965817b4f2366f7a7f926405\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Mar 12 02:15:48.747548 systemd[1]: sshd@65-10.0.0.28:22-10.0.0.1:44216.service: Deactivated successfully. Mar 12 02:15:48.765430 systemd[1]: session-67.scope: Deactivated successfully. Mar 12 02:15:48.786390 systemd-logind[1589]: Session 67 logged out. Waiting for processes to exit. Mar 12 02:15:48.791277 systemd-logind[1589]: Removed session 67. Mar 12 02:15:53.659392 systemd[1]: Started sshd@66-10.0.0.28:22-10.0.0.1:34906.service - OpenSSH per-connection server daemon (10.0.0.1:34906). Mar 12 02:15:53.823077 sshd-session[7563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:15:53.848177 systemd-logind[1589]: New session 68 of user core. Mar 12 02:15:54.550549 sshd[7563]: Accepted publickey for core from 10.0.0.1 port 34906 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:15:53.862416 systemd[1]: Started session-68.scope - Session 68 of User core. Mar 12 02:15:54.922556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1361244720.mount: Deactivated successfully. 
Mar 12 02:15:55.091368 containerd[1613]: time="2026-03-12T02:15:55.088237350Z" level=info msg="Container 3164c95b670051ee903b1d346385af5744d187d691e49b4616161f901b35ae1e: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:15:55.330509 containerd[1613]: time="2026-03-12T02:15:55.320455285Z" level=info msg="CreateContainer within sandbox \"a6098b94f1a7eadea1395e97421f12c39acea61c965817b4f2366f7a7f926405\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"3164c95b670051ee903b1d346385af5744d187d691e49b4616161f901b35ae1e\"" Mar 12 02:15:55.507309 containerd[1613]: time="2026-03-12T02:15:55.443348080Z" level=info msg="StartContainer for \"3164c95b670051ee903b1d346385af5744d187d691e49b4616161f901b35ae1e\"" Mar 12 02:15:55.507309 containerd[1613]: time="2026-03-12T02:15:55.482397704Z" level=info msg="connecting to shim 3164c95b670051ee903b1d346385af5744d187d691e49b4616161f901b35ae1e" address="unix:///run/containerd/s/d2fdb522566ad107655458427599cd4d50ce7f91008909a20bf9af0a29a4896c" protocol=ttrpc version=3 Mar 12 02:15:55.845289 systemd[1]: Started cri-containerd-3164c95b670051ee903b1d346385af5744d187d691e49b4616161f901b35ae1e.scope - libcontainer container 3164c95b670051ee903b1d346385af5744d187d691e49b4616161f901b35ae1e. 
Mar 12 02:15:55.950199 kubelet[2981]: I0312 02:15:55.950153 2981 scope.go:117] "RemoveContainer" containerID="ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0" Mar 12 02:15:56.031726 kubelet[2981]: E0312 02:15:56.025110 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:15:56.046235 kubelet[2981]: E0312 02:15:56.046195 2981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(8747e1f8a49a618fbc1324a8fe2d3754)\"" pod="kube-system/kube-controller-manager-localhost" podUID="8747e1f8a49a618fbc1324a8fe2d3754" Mar 12 02:15:56.108757 sshd[7567]: Connection closed by 10.0.0.1 port 34906 Mar 12 02:15:56.113147 sshd-session[7563]: pam_unix(sshd:session): session closed for user core Mar 12 02:15:56.148273 systemd-logind[1589]: Session 68 logged out. Waiting for processes to exit. Mar 12 02:15:56.161096 systemd[1]: sshd@66-10.0.0.28:22-10.0.0.1:34906.service: Deactivated successfully. Mar 12 02:15:56.201546 systemd[1]: session-68.scope: Deactivated successfully. Mar 12 02:15:56.226078 systemd-logind[1589]: Removed session 68. 
Mar 12 02:15:57.523510 containerd[1613]: time="2026-03-12T02:15:57.523462399Z" level=info msg="StartContainer for \"3164c95b670051ee903b1d346385af5744d187d691e49b4616161f901b35ae1e\" returns successfully" Mar 12 02:15:57.751390 kubelet[2981]: E0312 02:15:57.736875 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:15:58.859525 kubelet[2981]: E0312 02:15:58.859375 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:15:59.906127 kubelet[2981]: E0312 02:15:59.906071 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:16:00.904848 kubelet[2981]: E0312 02:16:00.904806 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:16:01.582812 systemd[1]: Started sshd@67-10.0.0.28:22-10.0.0.1:48684.service - OpenSSH per-connection server daemon (10.0.0.1:48684). Mar 12 02:16:02.267340 sshd[7642]: Accepted publickey for core from 10.0.0.1 port 48684 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:16:02.319187 sshd-session[7642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:16:02.418998 systemd-logind[1589]: New session 69 of user core. Mar 12 02:16:02.462482 systemd[1]: Started session-69.scope - Session 69 of User core. 
Mar 12 02:16:04.529491 sshd[7648]: Connection closed by 10.0.0.1 port 48684 Mar 12 02:16:04.532183 sshd-session[7642]: pam_unix(sshd:session): session closed for user core Mar 12 02:16:04.648502 systemd[1]: sshd@67-10.0.0.28:22-10.0.0.1:48684.service: Deactivated successfully. Mar 12 02:16:04.666269 systemd[1]: session-69.scope: Deactivated successfully. Mar 12 02:16:04.693802 systemd-logind[1589]: Session 69 logged out. Waiting for processes to exit. Mar 12 02:16:04.698205 systemd-logind[1589]: Removed session 69. Mar 12 02:16:09.972026 systemd[1]: Started sshd@68-10.0.0.28:22-10.0.0.1:36322.service - OpenSSH per-connection server daemon (10.0.0.1:36322). Mar 12 02:16:10.193189 kubelet[2981]: I0312 02:16:10.160049 2981 scope.go:117] "RemoveContainer" containerID="ebd4f5143f6b6bf0cf49f9a382a685229d76bd8f8b83159577006d8898f45ee0" Mar 12 02:16:10.193189 kubelet[2981]: E0312 02:16:10.160184 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:16:10.416269 kubelet[2981]: E0312 02:16:10.413233 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:16:10.426722 containerd[1613]: time="2026-03-12T02:16:10.422275201Z" level=info msg="CreateContainer within sandbox \"94e03b77954f3d63403e7018e79e6a9a9f56d01cf4e6df892629fcc53c3c34d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}" Mar 12 02:16:10.631441 kubelet[2981]: E0312 02:16:10.631301 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:16:10.861012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2321269128.mount: Deactivated successfully. 
Mar 12 02:16:10.926209 containerd[1613]: time="2026-03-12T02:16:10.924459566Z" level=info msg="Container 5fff2ccbca6795b57382e442a1df179bbc8e7292921b7e441bc60f506a36878c: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:16:11.232136 containerd[1613]: time="2026-03-12T02:16:11.189557255Z" level=info msg="CreateContainer within sandbox \"94e03b77954f3d63403e7018e79e6a9a9f56d01cf4e6df892629fcc53c3c34d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"5fff2ccbca6795b57382e442a1df179bbc8e7292921b7e441bc60f506a36878c\"" Mar 12 02:16:11.232136 containerd[1613]: time="2026-03-12T02:16:11.202469648Z" level=info msg="StartContainer for \"5fff2ccbca6795b57382e442a1df179bbc8e7292921b7e441bc60f506a36878c\"" Mar 12 02:16:11.232136 containerd[1613]: time="2026-03-12T02:16:11.210527752Z" level=info msg="connecting to shim 5fff2ccbca6795b57382e442a1df179bbc8e7292921b7e441bc60f506a36878c" address="unix:///run/containerd/s/9e0ec35891eed3399cf93a4554fac23dc348508b312db97a662cd42b8c614eac" protocol=ttrpc version=3 Mar 12 02:16:11.409302 sshd[7690]: Accepted publickey for core from 10.0.0.1 port 36322 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:16:11.413426 sshd-session[7690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:16:11.473170 systemd-logind[1589]: New session 70 of user core. Mar 12 02:16:11.509189 systemd[1]: Started session-70.scope - Session 70 of User core. Mar 12 02:16:11.691433 systemd[1]: Started cri-containerd-5fff2ccbca6795b57382e442a1df179bbc8e7292921b7e441bc60f506a36878c.scope - libcontainer container 5fff2ccbca6795b57382e442a1df179bbc8e7292921b7e441bc60f506a36878c. Mar 12 02:16:12.714149 sshd[7720]: Connection closed by 10.0.0.1 port 36322 Mar 12 02:16:12.719406 sshd-session[7690]: pam_unix(sshd:session): session closed for user core Mar 12 02:16:12.864478 systemd[1]: sshd@68-10.0.0.28:22-10.0.0.1:36322.service: Deactivated successfully. 
Mar 12 02:16:12.895190 systemd-logind[1589]: Session 70 logged out. Waiting for processes to exit. Mar 12 02:16:12.957394 containerd[1613]: time="2026-03-12T02:16:12.947360235Z" level=info msg="StartContainer for \"5fff2ccbca6795b57382e442a1df179bbc8e7292921b7e441bc60f506a36878c\" returns successfully" Mar 12 02:16:13.024480 systemd[1]: session-70.scope: Deactivated successfully. Mar 12 02:16:13.143973 systemd-logind[1589]: Removed session 70. Mar 12 02:16:14.035208 kubelet[2981]: E0312 02:16:14.034122 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:16:14.976193 kubelet[2981]: E0312 02:16:14.955558 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:16:15.051975 kubelet[2981]: E0312 02:16:15.051931 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:16:17.126187 kubelet[2981]: E0312 02:16:17.126136 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 02:16:17.827259 systemd[1]: Started sshd@69-10.0.0.28:22-10.0.0.1:36352.service - OpenSSH per-connection server daemon (10.0.0.1:36352). Mar 12 02:16:18.376263 sshd[7772]: Accepted publickey for core from 10.0.0.1 port 36352 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA Mar 12 02:16:18.408553 sshd-session[7772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:16:18.453189 systemd-logind[1589]: New session 71 of user core. Mar 12 02:16:18.507496 systemd[1]: Started session-71.scope - Session 71 of User core. 
Mar 12 02:16:19.886184 sshd[7776]: Connection closed by 10.0.0.1 port 36352
Mar 12 02:16:19.883102 sshd-session[7772]: pam_unix(sshd:session): session closed for user core
Mar 12 02:16:19.914568 systemd[1]: sshd@69-10.0.0.28:22-10.0.0.1:36352.service: Deactivated successfully.
Mar 12 02:16:19.934071 systemd[1]: session-71.scope: Deactivated successfully.
Mar 12 02:16:19.945426 systemd-logind[1589]: Session 71 logged out. Waiting for processes to exit.
Mar 12 02:16:19.953123 systemd-logind[1589]: Removed session 71.
Mar 12 02:16:24.657214 kubelet[2981]: E0312 02:16:24.652168 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 02:16:25.631549 systemd[1]: Started sshd@70-10.0.0.28:22-10.0.0.1:37418.service - OpenSSH per-connection server daemon (10.0.0.1:37418).
Mar 12 02:16:26.002186 kubelet[2981]: E0312 02:16:25.971394 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 02:16:28.262516 sshd[7813]: Accepted publickey for core from 10.0.0.1 port 37418 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA
Mar 12 02:16:28.727438 kubelet[2981]: E0312 02:16:28.395389 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 02:16:28.687269 sshd-session[7813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:16:29.345106 systemd-logind[1589]: New session 72 of user core.
Mar 12 02:16:29.414309 systemd[1]: Started session-72.scope - Session 72 of User core.
Mar 12 02:16:31.647925 sshd[7836]: Connection closed by 10.0.0.1 port 37418
Mar 12 02:16:31.662149 sshd-session[7813]: pam_unix(sshd:session): session closed for user core
Mar 12 02:16:31.694946 systemd[1]: sshd@70-10.0.0.28:22-10.0.0.1:37418.service: Deactivated successfully.
Mar 12 02:16:31.701908 systemd[1]: session-72.scope: Deactivated successfully.
Mar 12 02:16:31.710456 systemd-logind[1589]: Session 72 logged out. Waiting for processes to exit.
Mar 12 02:16:31.716190 systemd-logind[1589]: Removed session 72.
Mar 12 02:16:37.105315 systemd[1]: Started sshd@71-10.0.0.28:22-10.0.0.1:40026.service - OpenSSH per-connection server daemon (10.0.0.1:40026).
Mar 12 02:16:39.841411 sshd[7874]: Accepted publickey for core from 10.0.0.1 port 40026 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA
Mar 12 02:16:39.887198 sshd-session[7874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:16:40.023991 systemd-logind[1589]: New session 73 of user core.
Mar 12 02:16:40.065339 systemd[1]: Started session-73.scope - Session 73 of User core.
Mar 12 02:16:42.159359 sshd[7885]: Connection closed by 10.0.0.1 port 40026
Mar 12 02:16:42.159121 sshd-session[7874]: pam_unix(sshd:session): session closed for user core
Mar 12 02:16:42.239972 systemd[1]: sshd@71-10.0.0.28:22-10.0.0.1:40026.service: Deactivated successfully.
Mar 12 02:16:42.335877 systemd[1]: session-73.scope: Deactivated successfully.
Mar 12 02:16:42.396235 systemd-logind[1589]: Session 73 logged out. Waiting for processes to exit.
Mar 12 02:16:42.417161 systemd-logind[1589]: Removed session 73.
Mar 12 02:16:47.418203 systemd[1]: Started sshd@72-10.0.0.28:22-10.0.0.1:37588.service - OpenSSH per-connection server daemon (10.0.0.1:37588).
Mar 12 02:16:49.189270 sshd[7919]: Accepted publickey for core from 10.0.0.1 port 37588 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA
Mar 12 02:16:49.195058 sshd-session[7919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:16:49.460024 systemd-logind[1589]: New session 74 of user core.
Mar 12 02:16:49.554328 systemd[1]: Started session-74.scope - Session 74 of User core.
Mar 12 02:16:51.946126 sshd[7939]: Connection closed by 10.0.0.1 port 37588
Mar 12 02:16:51.947356 sshd-session[7919]: pam_unix(sshd:session): session closed for user core
Mar 12 02:16:52.028517 systemd[1]: sshd@72-10.0.0.28:22-10.0.0.1:37588.service: Deactivated successfully.
Mar 12 02:16:52.095550 systemd[1]: session-74.scope: Deactivated successfully.
Mar 12 02:16:52.120182 systemd-logind[1589]: Session 74 logged out. Waiting for processes to exit.
Mar 12 02:16:52.140170 systemd-logind[1589]: Removed session 74.
Mar 12 02:16:57.095169 systemd[1]: Started sshd@73-10.0.0.28:22-10.0.0.1:34978.service - OpenSSH per-connection server daemon (10.0.0.1:34978).
Mar 12 02:16:58.952048 sshd[7980]: Accepted publickey for core from 10.0.0.1 port 34978 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA
Mar 12 02:16:59.026968 sshd-session[7980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:16:59.287020 systemd-logind[1589]: New session 75 of user core.
Mar 12 02:16:59.343984 systemd[1]: Started session-75.scope - Session 75 of User core.
Mar 12 02:17:01.020515 kubelet[2981]: E0312 02:17:01.018258 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 02:17:02.047128 sshd[7986]: Connection closed by 10.0.0.1 port 34978
Mar 12 02:17:02.064059 sshd-session[7980]: pam_unix(sshd:session): session closed for user core
Mar 12 02:17:02.133173 systemd-logind[1589]: Session 75 logged out. Waiting for processes to exit.
Mar 12 02:17:02.155067 systemd[1]: sshd@73-10.0.0.28:22-10.0.0.1:34978.service: Deactivated successfully.
Mar 12 02:17:02.236008 systemd[1]: session-75.scope: Deactivated successfully.
Mar 12 02:17:02.335563 systemd-logind[1589]: Removed session 75.
Mar 12 02:17:07.131028 systemd[1]: Started sshd@74-10.0.0.28:22-10.0.0.1:36102.service - OpenSSH per-connection server daemon (10.0.0.1:36102).
Mar 12 02:17:07.968301 sshd[8042]: Accepted publickey for core from 10.0.0.1 port 36102 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA
Mar 12 02:17:07.998071 sshd-session[8042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:17:08.039089 systemd-logind[1589]: New session 76 of user core.
Mar 12 02:17:08.099091 systemd[1]: Started session-76.scope - Session 76 of User core.
Mar 12 02:17:08.804817 sshd[8046]: Connection closed by 10.0.0.1 port 36102
Mar 12 02:17:08.804288 sshd-session[8042]: pam_unix(sshd:session): session closed for user core
Mar 12 02:17:08.849367 systemd[1]: sshd@74-10.0.0.28:22-10.0.0.1:36102.service: Deactivated successfully.
Mar 12 02:17:08.901521 systemd[1]: session-76.scope: Deactivated successfully.
Mar 12 02:17:08.936297 systemd-logind[1589]: Session 76 logged out. Waiting for processes to exit.
Mar 12 02:17:08.955492 systemd-logind[1589]: Removed session 76.
Mar 12 02:17:14.097357 systemd[1]: Started sshd@75-10.0.0.28:22-10.0.0.1:50038.service - OpenSSH per-connection server daemon (10.0.0.1:50038).
Mar 12 02:17:14.732382 sshd[8080]: Accepted publickey for core from 10.0.0.1 port 50038 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA
Mar 12 02:17:14.757903 sshd-session[8080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:17:14.854370 systemd-logind[1589]: New session 77 of user core.
Mar 12 02:17:14.918316 systemd[1]: Started session-77.scope - Session 77 of User core.
Mar 12 02:17:15.847812 sshd[8084]: Connection closed by 10.0.0.1 port 50038
Mar 12 02:17:15.843877 sshd-session[8080]: pam_unix(sshd:session): session closed for user core
Mar 12 02:17:15.869374 systemd[1]: sshd@75-10.0.0.28:22-10.0.0.1:50038.service: Deactivated successfully.
Mar 12 02:17:15.899373 systemd[1]: session-77.scope: Deactivated successfully.
Mar 12 02:17:15.924000 systemd-logind[1589]: Session 77 logged out. Waiting for processes to exit.
Mar 12 02:17:15.933710 systemd-logind[1589]: Removed session 77.
Mar 12 02:17:20.920805 systemd[1]: Started sshd@76-10.0.0.28:22-10.0.0.1:42454.service - OpenSSH per-connection server daemon (10.0.0.1:42454).
Mar 12 02:17:21.570897 sshd[8118]: Accepted publickey for core from 10.0.0.1 port 42454 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA
Mar 12 02:17:21.596515 sshd-session[8118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:17:21.659505 systemd-logind[1589]: New session 78 of user core.
Mar 12 02:17:21.736011 systemd[1]: Started session-78.scope - Session 78 of User core.
Mar 12 02:17:22.701418 sshd[8122]: Connection closed by 10.0.0.1 port 42454
Mar 12 02:17:22.703860 sshd-session[8118]: pam_unix(sshd:session): session closed for user core
Mar 12 02:17:22.751333 systemd[1]: sshd@76-10.0.0.28:22-10.0.0.1:42454.service: Deactivated successfully.
Mar 12 02:17:22.766071 systemd[1]: session-78.scope: Deactivated successfully.
Mar 12 02:17:22.808425 systemd-logind[1589]: Session 78 logged out. Waiting for processes to exit.
Mar 12 02:17:22.820972 systemd-logind[1589]: Removed session 78.
Mar 12 02:17:25.941317 kubelet[2981]: E0312 02:17:25.940834 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 02:17:27.839212 systemd[1]: Started sshd@77-10.0.0.28:22-10.0.0.1:42480.service - OpenSSH per-connection server daemon (10.0.0.1:42480).
Mar 12 02:17:28.639859 sshd[8162]: Accepted publickey for core from 10.0.0.1 port 42480 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA
Mar 12 02:17:28.650842 sshd-session[8162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:17:28.709750 systemd-logind[1589]: New session 79 of user core.
Mar 12 02:17:28.735532 systemd[1]: Started session-79.scope - Session 79 of User core.
Mar 12 02:17:30.116207 sshd[8180]: Connection closed by 10.0.0.1 port 42480
Mar 12 02:17:30.117326 sshd-session[8162]: pam_unix(sshd:session): session closed for user core
Mar 12 02:17:30.150475 systemd[1]: sshd@77-10.0.0.28:22-10.0.0.1:42480.service: Deactivated successfully.
Mar 12 02:17:30.202019 systemd[1]: session-79.scope: Deactivated successfully.
Mar 12 02:17:30.239154 systemd-logind[1589]: Session 79 logged out. Waiting for processes to exit.
Mar 12 02:17:30.269892 systemd-logind[1589]: Removed session 79.
Mar 12 02:17:30.953231 kubelet[2981]: E0312 02:17:30.945098 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 02:17:30.953231 kubelet[2981]: E0312 02:17:30.945828 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 02:17:32.948485 kubelet[2981]: E0312 02:17:32.948439 2981 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 02:17:35.224841 systemd[1]: Started sshd@78-10.0.0.28:22-10.0.0.1:47250.service - OpenSSH per-connection server daemon (10.0.0.1:47250).
Mar 12 02:17:36.670638 sshd[8213]: Accepted publickey for core from 10.0.0.1 port 47250 ssh2: RSA SHA256:Xu2s3Vu7tmntBlpTn/p2/7O19DuIoT4RlKzDHlMhJsA
Mar 12 02:17:37.352502 sshd-session[8213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:17:38.550092 systemd-logind[1589]: New session 80 of user core.
Mar 12 02:17:38.821312 systemd[1]: Started session-80.scope - Session 80 of User core.
Mar 12 02:17:47.753671 sshd[8227]: Connection closed by 10.0.0.1 port 47250
Mar 12 02:17:47.755399 sshd-session[8213]: pam_unix(sshd:session): session closed for user core
Mar 12 02:17:47.767836 systemd[1]: sshd@78-10.0.0.28:22-10.0.0.1:47250.service: Deactivated successfully.
Mar 12 02:17:47.777866 kubelet[2981]: E0312 02:17:47.776747 2981 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.64s"
Mar 12 02:17:47.794503 systemd[1]: session-80.scope: Deactivated successfully.
Mar 12 02:17:47.795391 systemd[1]: session-80.scope: Consumed 1.330s CPU time, 17.7M memory peak.
Mar 12 02:17:47.807808 systemd-logind[1589]: Session 80 logged out. Waiting for processes to exit.
Mar 12 02:17:47.831776 systemd-logind[1589]: Removed session 80.