Mar 6 02:27:56.228540 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 5 23:16:40 -00 2026
Mar 6 02:27:56.228576 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53
Mar 6 02:27:56.228588 kernel: BIOS-provided physical RAM map:
Mar 6 02:27:56.228601 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 6 02:27:56.228611 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 6 02:27:56.228620 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 6 02:27:56.248768 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 6 02:27:56.248793 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 6 02:27:56.248804 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 6 02:27:56.248813 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 6 02:27:56.248930 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 6 02:27:56.248940 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 6 02:27:56.248956 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 6 02:27:56.248965 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 6 02:27:56.248975 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 6 02:27:56.248985 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 6 02:27:56.249081 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 6 02:27:56.249096 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 6 02:27:56.249107 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 6 02:27:56.249117 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 6 02:27:56.249128 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 6 02:27:56.249139 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 6 02:27:56.249149 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 6 02:27:56.249160 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 6 02:27:56.249171 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 6 02:27:56.249181 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 6 02:27:56.249192 kernel: NX (Execute Disable) protection: active
Mar 6 02:27:56.249201 kernel: APIC: Static calls initialized
Mar 6 02:27:56.249214 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Mar 6 02:27:56.249223 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Mar 6 02:27:56.249232 kernel: extended physical RAM map:
Mar 6 02:27:56.249240 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 6 02:27:56.249249 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 6 02:27:56.249444 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 6 02:27:56.249458 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 6 02:27:56.249469 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 6 02:27:56.249478 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 6 02:27:56.249486 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 6 02:27:56.249495 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Mar 6 02:27:56.249508 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Mar 6 02:27:56.249523 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Mar 6 02:27:56.249535 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Mar 6 02:27:56.249544 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Mar 6 02:27:56.249553 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 6 02:27:56.249565 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 6 02:27:56.249574 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 6 02:27:56.249583 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 6 02:27:56.249594 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 6 02:27:56.249604 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 6 02:27:56.249614 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 6 02:27:56.249625 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 6 02:27:56.249636 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 6 02:27:56.249647 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 6 02:27:56.249657 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 6 02:27:56.249668 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 6 02:27:56.249683 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 6 02:27:56.249694 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 6 02:27:56.249706 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 6 02:27:56.249716 kernel: efi: EFI v2.7 by EDK II
Mar 6 02:27:56.249727 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Mar 6 02:27:56.249737 kernel: random: crng init done
Mar 6 02:27:56.249748 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 6 02:27:56.249758 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 6 02:27:56.249769 kernel: secureboot: Secure boot disabled
Mar 6 02:27:56.249779 kernel: SMBIOS 2.8 present.
Mar 6 02:27:56.249789 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 6 02:27:56.249803 kernel: DMI: Memory slots populated: 1/1
Mar 6 02:27:56.249814 kernel: Hypervisor detected: KVM
Mar 6 02:27:56.250013 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 6 02:27:56.250023 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 6 02:27:56.250032 kernel: kvm-clock: using sched offset of 43977793405 cycles
Mar 6 02:27:56.250043 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 6 02:27:56.250052 kernel: tsc: Detected 2445.424 MHz processor
Mar 6 02:27:56.250062 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 6 02:27:56.250073 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 6 02:27:56.250083 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 6 02:27:56.250094 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 6 02:27:56.250109 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 6 02:27:56.250120 kernel: Using GB pages for direct mapping
Mar 6 02:27:56.250130 kernel: ACPI: Early table checksum verification disabled
Mar 6 02:27:56.250141 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 6 02:27:56.250152 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 6 02:27:56.250163 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:27:56.250175 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:27:56.250187 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 6 02:27:56.250202 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:27:56.250211 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:27:56.250220 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:27:56.250230 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:27:56.250239 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 6 02:27:56.250251 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 6 02:27:56.250419 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 6 02:27:56.250429 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 6 02:27:56.250439 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 6 02:27:56.250454 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 6 02:27:56.250465 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 6 02:27:56.250475 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 6 02:27:56.250486 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 6 02:27:56.250497 kernel: No NUMA configuration found
Mar 6 02:27:56.250508 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 6 02:27:56.250519 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Mar 6 02:27:56.250530 kernel: Zone ranges:
Mar 6 02:27:56.250541 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 6 02:27:56.250556 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 6 02:27:56.250567 kernel: Normal empty
Mar 6 02:27:56.250577 kernel: Device empty
Mar 6 02:27:56.250588 kernel: Movable zone start for each node
Mar 6 02:27:56.250599 kernel: Early memory node ranges
Mar 6 02:27:56.250609 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 6 02:27:56.250620 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 6 02:27:56.250631 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 6 02:27:56.250642 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 6 02:27:56.250656 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 6 02:27:56.250668 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 6 02:27:56.250678 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Mar 6 02:27:56.250690 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Mar 6 02:27:56.250700 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 6 02:27:56.250712 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 6 02:27:56.250733 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 6 02:27:56.250747 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 6 02:27:56.250758 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 6 02:27:56.250769 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 6 02:27:56.250780 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 6 02:27:56.250791 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 6 02:27:56.250806 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 6 02:27:56.250925 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 6 02:27:56.250939 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 6 02:27:56.250949 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 6 02:27:56.250958 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 6 02:27:56.250972 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 6 02:27:56.250982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 6 02:27:56.250994 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 6 02:27:56.251007 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 6 02:27:56.251017 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 6 02:27:56.251027 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 6 02:27:56.251036 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 6 02:27:56.251045 kernel: TSC deadline timer available
Mar 6 02:27:56.251055 kernel: CPU topo: Max. logical packages: 1
Mar 6 02:27:56.251070 kernel: CPU topo: Max. logical dies: 1
Mar 6 02:27:56.251081 kernel: CPU topo: Max. dies per package: 1
Mar 6 02:27:56.251092 kernel: CPU topo: Max. threads per core: 1
Mar 6 02:27:56.251104 kernel: CPU topo: Num. cores per package: 4
Mar 6 02:27:56.251115 kernel: CPU topo: Num. threads per package: 4
Mar 6 02:27:56.251126 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 6 02:27:56.251137 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 6 02:27:56.251149 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 6 02:27:56.251160 kernel: kvm-guest: setup PV sched yield
Mar 6 02:27:56.251171 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 6 02:27:56.251186 kernel: Booting paravirtualized kernel on KVM
Mar 6 02:27:56.251197 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 6 02:27:56.251209 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 6 02:27:56.251220 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 6 02:27:56.251232 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 6 02:27:56.251244 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 6 02:27:56.251255 kernel: kvm-guest: PV spinlocks enabled
Mar 6 02:27:56.251453 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 6 02:27:56.251467 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53
Mar 6 02:27:56.251484 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 6 02:27:56.251495 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 6 02:27:56.251507 kernel: Fallback order for Node 0: 0
Mar 6 02:27:56.251518 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Mar 6 02:27:56.251530 kernel: Policy zone: DMA32
Mar 6 02:27:56.251541 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 6 02:27:56.251553 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 6 02:27:56.251565 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 6 02:27:56.251581 kernel: ftrace: allocated 157 pages with 5 groups
Mar 6 02:27:56.251592 kernel: Dynamic Preempt: voluntary
Mar 6 02:27:56.251604 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 6 02:27:56.251617 kernel: rcu: RCU event tracing is enabled.
Mar 6 02:27:56.251629 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 6 02:27:56.251640 kernel: Trampoline variant of Tasks RCU enabled.
Mar 6 02:27:56.251651 kernel: Rude variant of Tasks RCU enabled.
Mar 6 02:27:56.251662 kernel: Tracing variant of Tasks RCU enabled.
Mar 6 02:27:56.251673 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 6 02:27:56.251688 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 6 02:27:56.251706 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 02:27:56.251718 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 02:27:56.251729 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 02:27:56.251741 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 6 02:27:56.251752 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 6 02:27:56.251763 kernel: Console: colour dummy device 80x25
Mar 6 02:27:56.251774 kernel: printk: legacy console [ttyS0] enabled
Mar 6 02:27:56.251786 kernel: ACPI: Core revision 20240827
Mar 6 02:27:56.251801 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 6 02:27:56.251812 kernel: APIC: Switch to symmetric I/O mode setup
Mar 6 02:27:56.251932 kernel: x2apic enabled
Mar 6 02:27:56.251942 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 6 02:27:56.251952 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 6 02:27:56.251962 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 6 02:27:56.251971 kernel: kvm-guest: setup PV IPIs
Mar 6 02:27:56.251981 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 6 02:27:56.251994 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Mar 6 02:27:56.252190 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Mar 6 02:27:56.252205 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 6 02:27:56.252218 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 6 02:27:56.252229 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 6 02:27:56.252241 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 6 02:27:56.252252 kernel: Spectre V2 : Mitigation: Retpolines
Mar 6 02:27:56.252506 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 6 02:27:56.252520 kernel: Speculative Store Bypass: Vulnerable
Mar 6 02:27:56.252532 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 6 02:27:56.252550 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 6 02:27:56.252561 kernel: active return thunk: srso_alias_return_thunk
Mar 6 02:27:56.252572 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 6 02:27:56.252584 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 6 02:27:56.252595 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 6 02:27:56.252606 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 6 02:27:56.252617 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 6 02:27:56.252628 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 6 02:27:56.252643 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 6 02:27:56.252654 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 6 02:27:56.252666 kernel: Freeing SMP alternatives memory: 32K
Mar 6 02:27:56.252677 kernel: pid_max: default: 32768 minimum: 301
Mar 6 02:27:56.252688 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 6 02:27:56.252699 kernel: landlock: Up and running.
Mar 6 02:27:56.252711 kernel: SELinux: Initializing.
Mar 6 02:27:56.252722 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 02:27:56.252734 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 02:27:56.252748 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 6 02:27:56.252759 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 6 02:27:56.252771 kernel: signal: max sigframe size: 1776
Mar 6 02:27:56.252782 kernel: rcu: Hierarchical SRCU implementation.
Mar 6 02:27:56.252794 kernel: rcu: Max phase no-delay instances is 400.
Mar 6 02:27:56.252806 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 6 02:27:56.252926 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 6 02:27:56.252939 kernel: smp: Bringing up secondary CPUs ...
Mar 6 02:27:56.252949 kernel: smpboot: x86: Booting SMP configuration:
Mar 6 02:27:56.252963 kernel: .... node #0, CPUs: #1 #2 #3
Mar 6 02:27:56.252973 kernel: smp: Brought up 1 node, 4 CPUs
Mar 6 02:27:56.252983 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Mar 6 02:27:56.252997 kernel: Memory: 2414472K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 145388K reserved, 0K cma-reserved)
Mar 6 02:27:56.253007 kernel: devtmpfs: initialized
Mar 6 02:27:56.253017 kernel: x86/mm: Memory block size: 128MB
Mar 6 02:27:56.253027 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 6 02:27:56.253036 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 6 02:27:56.253047 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 6 02:27:56.253064 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 6 02:27:56.253075 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Mar 6 02:27:56.253085 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 6 02:27:56.253094 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 6 02:27:56.253104 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 6 02:27:56.253114 kernel: pinctrl core: initialized pinctrl subsystem
Mar 6 02:27:56.253125 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 6 02:27:56.253137 kernel: audit: initializing netlink subsys (disabled)
Mar 6 02:27:56.253149 kernel: audit: type=2000 audit(1772764051.661:1): state=initialized audit_enabled=0 res=1
Mar 6 02:27:56.253167 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 6 02:27:56.253177 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 6 02:27:56.253186 kernel: cpuidle: using governor menu
Mar 6 02:27:56.253196 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 6 02:27:56.253205 kernel: dca service started, version 1.12.1
Mar 6 02:27:56.253215 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Mar 6 02:27:56.253228 kernel: PCI: Using configuration type 1 for base access
Mar 6 02:27:56.253239 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 6 02:27:56.253249 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 6 02:27:56.253442 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 6 02:27:56.253453 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 6 02:27:56.253463 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 6 02:27:56.253473 kernel: ACPI: Added _OSI(Module Device)
Mar 6 02:27:56.253482 kernel: ACPI: Added _OSI(Processor Device)
Mar 6 02:27:56.253491 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 6 02:27:56.253503 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 6 02:27:56.253514 kernel: ACPI: Interpreter enabled
Mar 6 02:27:56.253527 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 6 02:27:56.253542 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 6 02:27:56.253551 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 6 02:27:56.253561 kernel: PCI: Using E820 reservations for host bridge windows
Mar 6 02:27:56.253571 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 6 02:27:56.253581 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 6 02:27:56.255141 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 6 02:27:56.255503 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 6 02:27:56.255674 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 6 02:27:56.255689 kernel: PCI host bridge to bus 0000:00
Mar 6 02:27:56.256569 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 6 02:27:56.256719 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 6 02:27:56.256971 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 6 02:27:56.257125 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 6 02:27:56.257540 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 6 02:27:56.257706 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 6 02:27:56.257969 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 6 02:27:56.258940 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 6 02:27:56.259227 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 6 02:27:56.259586 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Mar 6 02:27:56.259767 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Mar 6 02:27:56.260046 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 6 02:27:56.260225 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 6 02:27:56.260567 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 39062 usecs
Mar 6 02:27:56.261004 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 6 02:27:56.261183 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Mar 6 02:27:56.261555 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Mar 6 02:27:56.261750 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 6 02:27:56.262476 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 6 02:27:56.262671 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Mar 6 02:27:56.262942 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Mar 6 02:27:56.263113 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 6 02:27:56.263648 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 6 02:27:56.263935 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Mar 6 02:27:56.264110 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Mar 6 02:27:56.264551 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 6 02:27:56.264724 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Mar 6 02:27:56.265181 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 6 02:27:56.265521 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 6 02:27:56.265690 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 39062 usecs
Mar 6 02:27:56.265975 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 6 02:27:56.266149 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Mar 6 02:27:56.266509 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Mar 6 02:27:56.267507 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 6 02:27:56.267699 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Mar 6 02:27:56.267717 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 6 02:27:56.267728 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 6 02:27:56.267738 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 6 02:27:56.267748 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 6 02:27:56.267761 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 6 02:27:56.267779 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 6 02:27:56.267788 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 6 02:27:56.267798 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 6 02:27:56.267808 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 6 02:27:56.267915 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 6 02:27:56.267927 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 6 02:27:56.267936 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 6 02:27:56.267946 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 6 02:27:56.267955 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 6 02:27:56.267970 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 6 02:27:56.267982 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 6 02:27:56.267993 kernel: iommu: Default domain type: Translated
Mar 6 02:27:56.268002 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 6 02:27:56.268012 kernel: efivars: Registered efivars operations
Mar 6 02:27:56.268022 kernel: PCI: Using ACPI for IRQ routing
Mar 6 02:27:56.268031 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 6 02:27:56.268043 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 6 02:27:56.268054 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 6 02:27:56.268068 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Mar 6 02:27:56.268077 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Mar 6 02:27:56.268087 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 6 02:27:56.268097 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 6 02:27:56.268108 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Mar 6 02:27:56.268118 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 6 02:27:56.268585 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 6 02:27:56.268772 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 6 02:27:56.269055 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 6 02:27:56.269075 kernel: vgaarb: loaded
Mar 6 02:27:56.269088 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 6 02:27:56.269099 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 6 02:27:56.269112 kernel: clocksource: Switched to clocksource kvm-clock
Mar 6 02:27:56.269123 kernel: VFS: Disk quotas dquot_6.6.0
Mar 6 02:27:56.269135 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 6 02:27:56.269148 kernel: pnp: PnP ACPI init
Mar 6 02:27:56.270243 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 6 02:27:56.270455 kernel: pnp: PnP ACPI: found 6 devices
Mar 6 02:27:56.270469 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 6 02:27:56.270481 kernel: NET: Registered PF_INET protocol family
Mar 6 02:27:56.270492 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 6 02:27:56.270505 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 6 02:27:56.270541 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 6 02:27:56.270556 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 6 02:27:56.270571 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 6 02:27:56.270586 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 6 02:27:56.270597 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 02:27:56.270609 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 02:27:56.270622 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 6 02:27:56.270635 kernel: NET: Registered PF_XDP protocol family
Mar 6 02:27:56.270944 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Mar 6 02:27:56.271243 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Mar 6 02:27:56.272073 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 6 02:27:56.272598 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 6 02:27:56.272756 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 6 02:27:56.273127 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 6 02:27:56.273504 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 6 02:27:56.273679 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 6 02:27:56.273698 kernel: PCI: CLS 0 bytes, default 64
Mar 6 02:27:56.273711 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Mar 6 02:27:56.273724 kernel: Initialise system trusted keyrings
Mar 6 02:27:56.273743 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 6 02:27:56.273754 kernel: Key type asymmetric registered
Mar 6 02:27:56.273766 kernel: Asymmetric key parser 'x509' registered
Mar 6 02:27:56.273778 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 6 02:27:56.273791 kernel: io scheduler mq-deadline registered
Mar 6 02:27:56.273803 kernel: io scheduler kyber registered
Mar 6 02:27:56.273815 kernel: io scheduler bfq registered
Mar 6 02:27:56.273948 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 6 02:27:56.273960 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 6 02:27:56.273975 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 6 02:27:56.273985 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 6 02:27:56.273995 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 6 02:27:56.274006 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 6 02:27:56.274019 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 6 02:27:56.274032 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 6 02:27:56.274046 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 6 02:27:56.274743 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 6 02:27:56.274763 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 6 02:27:56.275041 kernel: rtc_cmos 00:04: registered as rtc0
Mar 6 02:27:56.275208 kernel: rtc_cmos 00:04: setting system clock to 2026-03-06T02:27:53 UTC (1772764073)
Mar 6 02:27:56.275576 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 6 02:27:56.275595 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 6 02:27:56.275606 kernel: efifb: probing for efifb
Mar 6 02:27:56.275623 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 6 02:27:56.275634 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 6 02:27:56.275644 kernel: efifb: scrolling: redraw
Mar 6 02:27:56.275655 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 6 02:27:56.275666 kernel: Console: switching to colour frame buffer device 160x50
Mar 6 02:27:56.275678 kernel: fb0: EFI VGA frame buffer device
Mar 6 02:27:56.275691 kernel: pstore: Using crash dump compression: deflate
Mar 6 02:27:56.275701 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 6 02:27:56.275711 kernel: NET: Registered PF_INET6 protocol family
Mar 6 02:27:56.275725 kernel: Segment Routing with IPv6
Mar 6 02:27:56.275735 kernel: In-situ OAM (IOAM) with IPv6
Mar 6 02:27:56.275745 kernel: NET: Registered PF_PACKET protocol family
Mar 6 02:27:56.275757 kernel: Key type dns_resolver registered
Mar 6 02:27:56.275768 kernel: IPI shorthand broadcast: enabled
Mar 6 02:27:56.275780 kernel: sched_clock: Marking stable (20471089174, 3451992698)->(28638062612, -4714980740)
Mar 6 02:27:56.275793 kernel: registered taskstats version 1
Mar 6 02:27:56.275805 kernel: Loading compiled-in X.509 certificates
Mar 6 02:27:56.275929 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 30893fe9fd219d26109af079e6493e1c8b1c00af'
Mar 6 02:27:56.275949 kernel: Demotion targets for Node 0: null
Mar 6 02:27:56.275960 kernel: Key type .fscrypt registered
Mar 6 02:27:56.275970 kernel: Key type fscrypt-provisioning registered Mar 6 02:27:56.275980 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 6 02:27:56.275993 kernel: ima: Allocated hash algorithm: sha1 Mar 6 02:27:56.276006 kernel: ima: No architecture policies found Mar 6 02:27:56.276017 kernel: clk: Disabling unused clocks Mar 6 02:27:56.276028 kernel: Warning: unable to open an initial console. Mar 6 02:27:56.276038 kernel: Freeing unused kernel image (initmem) memory: 46196K Mar 6 02:27:56.276055 kernel: Write protecting the kernel read-only data: 40960k Mar 6 02:27:56.276065 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 6 02:27:56.276076 kernel: Run /init as init process Mar 6 02:27:56.276088 kernel: with arguments: Mar 6 02:27:56.276100 kernel: /init Mar 6 02:27:56.276112 kernel: with environment: Mar 6 02:27:56.276123 kernel: HOME=/ Mar 6 02:27:56.276135 kernel: TERM=linux Mar 6 02:27:56.276149 systemd[1]: Successfully made /usr/ read-only. Mar 6 02:27:56.276169 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 6 02:27:56.276182 systemd[1]: Detected virtualization kvm. Mar 6 02:27:56.276194 systemd[1]: Detected architecture x86-64. Mar 6 02:27:56.276206 systemd[1]: Running in initrd. Mar 6 02:27:56.276219 systemd[1]: No hostname configured, using default hostname. Mar 6 02:27:56.276232 systemd[1]: Hostname set to . Mar 6 02:27:56.276249 systemd[1]: Initializing machine ID from VM UUID. Mar 6 02:27:56.276478 systemd[1]: Queued start job for default target initrd.target. Mar 6 02:27:56.276493 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 6 02:27:56.276505 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 6 02:27:56.276519 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 6 02:27:56.276532 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 6 02:27:56.276545 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 6 02:27:56.276559 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 6 02:27:56.276579 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 6 02:27:56.276592 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 6 02:27:56.276603 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 6 02:27:56.276616 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 6 02:27:56.276628 systemd[1]: Reached target paths.target - Path Units. Mar 6 02:27:56.276641 systemd[1]: Reached target slices.target - Slice Units. Mar 6 02:27:56.276653 systemd[1]: Reached target swap.target - Swaps. Mar 6 02:27:56.276665 systemd[1]: Reached target timers.target - Timer Units. Mar 6 02:27:56.276681 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 6 02:27:56.276694 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 6 02:27:56.276706 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 6 02:27:56.276719 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 6 02:27:56.276731 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 6 02:27:56.276743 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Mar 6 02:27:56.276756 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 6 02:27:56.276768 systemd[1]: Reached target sockets.target - Socket Units. Mar 6 02:27:56.276781 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 6 02:27:56.276798 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 6 02:27:56.276810 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 6 02:27:56.276931 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 6 02:27:56.276943 systemd[1]: Starting systemd-fsck-usr.service... Mar 6 02:27:56.276953 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 6 02:27:56.276964 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 6 02:27:56.276974 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 02:27:56.276986 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 6 02:27:56.277044 systemd-journald[203]: Collecting audit messages is disabled. Mar 6 02:27:56.277079 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 6 02:27:56.277093 systemd-journald[203]: Journal started Mar 6 02:27:56.277119 systemd-journald[203]: Runtime Journal (/run/log/journal/32d8dcfebde64932ad0caa60f7c19c48) is 6M, max 48.1M, 42.1M free. Mar 6 02:27:56.189510 systemd-modules-load[204]: Inserted module 'overlay' Mar 6 02:27:56.339219 systemd[1]: Started systemd-journald.service - Journal Service. Mar 6 02:27:56.354218 systemd[1]: Finished systemd-fsck-usr.service. Mar 6 02:27:56.375724 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 02:27:56.393608 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 6 02:27:56.483244 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 6 02:27:56.595554 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 6 02:27:56.616909 kernel: Bridge firewalling registered Mar 6 02:27:56.616970 systemd-modules-load[204]: Inserted module 'br_netfilter' Mar 6 02:27:56.618166 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 6 02:27:56.647764 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 6 02:27:56.684814 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 6 02:27:56.736969 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 6 02:27:56.746190 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 6 02:27:56.753003 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 6 02:27:56.786630 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 02:27:56.822558 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 6 02:27:56.831614 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 6 02:27:56.877489 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 6 02:27:56.937445 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Mar 6 02:27:57.026160 dracut-cmdline[239]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53 Mar 6 02:27:57.048552 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 6 02:27:57.177924 systemd-resolved[240]: Positive Trust Anchors: Mar 6 02:27:57.178015 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 6 02:27:57.178061 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 6 02:27:57.188448 systemd-resolved[240]: Defaulting to hostname 'linux'. Mar 6 02:27:57.192471 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 6 02:27:57.216091 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 6 02:27:58.047408 kernel: SCSI subsystem initialized Mar 6 02:27:58.082396 kernel: Loading iSCSI transport class v2.0-870. 
Mar 6 02:27:58.161991 kernel: iscsi: registered transport (tcp) Mar 6 02:27:58.242506 kernel: iscsi: registered transport (qla4xxx) Mar 6 02:27:58.249447 kernel: QLogic iSCSI HBA Driver Mar 6 02:27:58.424764 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 6 02:27:58.502042 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 6 02:27:58.552068 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 6 02:27:59.563062 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 6 02:27:59.625961 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 6 02:28:00.058455 kernel: raid6: avx2x4 gen() 16346 MB/s Mar 6 02:28:00.081194 kernel: raid6: avx2x2 gen() 12424 MB/s Mar 6 02:28:00.120036 kernel: raid6: avx2x1 gen() 13309 MB/s Mar 6 02:28:00.120125 kernel: raid6: using algorithm avx2x4 gen() 16346 MB/s Mar 6 02:28:00.150660 kernel: raid6: .... xor() 4683 MB/s, rmw enabled Mar 6 02:28:00.150740 kernel: raid6: using avx2x2 recovery algorithm Mar 6 02:28:00.277018 kernel: xor: automatically using best checksumming function avx Mar 6 02:28:01.965122 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 6 02:28:02.034685 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 6 02:28:02.082808 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 02:28:02.197035 systemd-udevd[454]: Using default interface naming scheme 'v255'. Mar 6 02:28:02.221628 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 02:28:02.249809 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 6 02:28:02.415093 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation Mar 6 02:28:02.655151 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 6 02:28:02.684136 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 6 02:28:03.476227 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 6 02:28:03.532250 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 6 02:28:03.729017 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 6 02:28:03.793219 kernel: cryptd: max_cpu_qlen set to 1000 Mar 6 02:28:03.801977 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 6 02:28:03.829147 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 6 02:28:03.804130 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 02:28:03.901546 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 6 02:28:03.901612 kernel: GPT:9289727 != 19775487 Mar 6 02:28:03.901630 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 6 02:28:03.901640 kernel: GPT:9289727 != 19775487 Mar 6 02:28:03.901649 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 6 02:28:03.901663 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 6 02:28:03.868812 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 02:28:03.921646 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 02:28:03.950697 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 6 02:28:04.115596 kernel: libata version 3.00 loaded. Mar 6 02:28:04.117094 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 6 02:28:04.118594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 02:28:04.173549 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 6 02:28:04.253108 kernel: ahci 0000:00:1f.2: version 3.0 Mar 6 02:28:04.271013 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 6 02:28:04.340939 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 6 02:28:04.341435 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 6 02:28:04.341637 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 6 02:28:04.372983 kernel: AES CTR mode by8 optimization enabled Mar 6 02:28:04.465732 kernel: scsi host0: ahci Mar 6 02:28:04.471731 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 02:28:04.545789 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Mar 6 02:28:04.592656 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 6 02:28:04.659690 kernel: scsi host1: ahci Mar 6 02:28:04.660192 kernel: scsi host2: ahci Mar 6 02:28:04.672213 kernel: scsi host3: ahci Mar 6 02:28:04.681655 kernel: scsi host4: ahci Mar 6 02:28:04.686125 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 6 02:28:04.831442 kernel: scsi host5: ahci Mar 6 02:28:04.831824 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 lpm-pol 1 Mar 6 02:28:04.833521 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 lpm-pol 1 Mar 6 02:28:04.833540 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 lpm-pol 1 Mar 6 02:28:04.833555 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 lpm-pol 1 Mar 6 02:28:04.833568 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 lpm-pol 1 Mar 6 02:28:04.833580 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 lpm-pol 1 Mar 6 02:28:04.951214 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Mar 6 02:28:05.001604 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 6 02:28:05.018793 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 6 02:28:05.070023 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 6 02:28:05.183017 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 6 02:28:05.183064 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 6 02:28:05.183082 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 6 02:28:05.183099 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 6 02:28:05.195530 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 6 02:28:05.195618 kernel: ata3.00: LPM support broken, forcing max_power Mar 6 02:28:05.219817 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 6 02:28:05.227071 kernel: ata3.00: applying bridge limits Mar 6 02:28:05.244575 kernel: ata3.00: LPM support broken, forcing max_power Mar 6 02:28:05.244651 kernel: ata3.00: configured for UDMA/100 Mar 6 02:28:05.262198 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 6 02:28:05.265820 disk-uuid[623]: Primary Header is updated. Mar 6 02:28:05.265820 disk-uuid[623]: Secondary Entries is updated. Mar 6 02:28:05.265820 disk-uuid[623]: Secondary Header is updated. Mar 6 02:28:05.322555 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 6 02:28:05.322590 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 6 02:28:05.447599 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 6 02:28:05.449569 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 6 02:28:05.474614 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 6 02:28:06.199175 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Mar 6 02:28:06.232633 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 6 02:28:06.292215 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 6 02:28:06.326575 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 6 02:28:06.351944 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 6 02:28:06.398938 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 6 02:28:06.425218 disk-uuid[624]: The operation has completed successfully. Mar 6 02:28:06.455709 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 6 02:28:06.578612 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 6 02:28:06.579531 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 6 02:28:06.676555 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 6 02:28:06.751114 sh[652]: Success Mar 6 02:28:06.880995 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 6 02:28:06.881136 kernel: device-mapper: uevent: version 1.0.3 Mar 6 02:28:06.881157 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 6 02:28:06.989678 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 6 02:28:07.189781 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 6 02:28:07.228707 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 6 02:28:07.307038 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 6 02:28:07.345771 kernel: BTRFS: device fsid 1235dd15-5252-4928-9c6c-372370c6bfca devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (664) Mar 6 02:28:07.395445 kernel: BTRFS info (device dm-0): first mount of filesystem 1235dd15-5252-4928-9c6c-372370c6bfca Mar 6 02:28:07.395690 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 6 02:28:07.520224 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 6 02:28:07.520803 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 6 02:28:07.529104 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 6 02:28:07.544690 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 6 02:28:07.588541 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 6 02:28:07.623790 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 6 02:28:07.666163 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 6 02:28:07.899451 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (695) Mar 6 02:28:07.954526 kernel: BTRFS info (device vda6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b Mar 6 02:28:07.986197 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 6 02:28:08.075212 kernel: BTRFS info (device vda6): turning on async discard Mar 6 02:28:08.075525 kernel: BTRFS info (device vda6): enabling free space tree Mar 6 02:28:08.120246 kernel: BTRFS info (device vda6): last unmount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b Mar 6 02:28:08.147741 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 6 02:28:08.181560 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 6 02:28:09.257671 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 6 02:28:09.353695 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 6 02:28:10.162110 systemd-networkd[833]: lo: Link UP Mar 6 02:28:10.172704 systemd-networkd[833]: lo: Gained carrier Mar 6 02:28:10.227466 systemd-networkd[833]: Enumeration completed Mar 6 02:28:10.228783 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 6 02:28:10.242489 systemd-networkd[833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 02:28:10.242496 systemd-networkd[833]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 6 02:28:11.233105 systemd-networkd[833]: eth0: Link UP Mar 6 02:28:11.233775 systemd-networkd[833]: eth0: Gained carrier Mar 6 02:28:11.233953 systemd-networkd[833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 02:28:11.344031 systemd[1]: Reached target network.target - Network. 
Mar 6 02:28:11.384473 systemd-networkd[833]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 6 02:28:11.429687 ignition[758]: Ignition 2.22.0 Mar 6 02:28:11.431133 ignition[758]: Stage: fetch-offline Mar 6 02:28:11.431571 ignition[758]: no configs at "/usr/lib/ignition/base.d" Mar 6 02:28:11.431589 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 02:28:11.432788 ignition[758]: parsed url from cmdline: "" Mar 6 02:28:11.432794 ignition[758]: no config URL provided Mar 6 02:28:11.432988 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" Mar 6 02:28:11.433004 ignition[758]: no config at "/usr/lib/ignition/user.ign" Mar 6 02:28:11.433138 ignition[758]: op(1): [started] loading QEMU firmware config module Mar 6 02:28:11.433146 ignition[758]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 6 02:28:11.569677 ignition[758]: op(1): [finished] loading QEMU firmware config module Mar 6 02:28:12.439147 systemd-networkd[833]: eth0: Gained IPv6LL Mar 6 02:28:13.148064 ignition[758]: parsing config with SHA512: c6a6ad98dc097f32484823404196498d17513d2ad5f22e8afb28431df15271ff4507199cecc8859a22ef23e0ffd974e2683b10631750155a2198c314bb9cee0f Mar 6 02:28:13.189977 unknown[758]: fetched base config from "system" Mar 6 02:28:13.191058 ignition[758]: fetch-offline: fetch-offline passed Mar 6 02:28:13.189990 unknown[758]: fetched user config from "qemu" Mar 6 02:28:13.191219 ignition[758]: Ignition finished successfully Mar 6 02:28:13.200738 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 6 02:28:13.233466 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 6 02:28:13.242435 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 6 02:28:13.561588 ignition[846]: Ignition 2.22.0 Mar 6 02:28:13.561683 ignition[846]: Stage: kargs Mar 6 02:28:13.561969 ignition[846]: no configs at "/usr/lib/ignition/base.d" Mar 6 02:28:13.561983 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 02:28:13.587787 ignition[846]: kargs: kargs passed Mar 6 02:28:13.588083 ignition[846]: Ignition finished successfully Mar 6 02:28:13.620986 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 6 02:28:13.634994 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 6 02:28:13.771032 ignition[855]: Ignition 2.22.0 Mar 6 02:28:13.771139 ignition[855]: Stage: disks Mar 6 02:28:13.772739 ignition[855]: no configs at "/usr/lib/ignition/base.d" Mar 6 02:28:13.772766 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 02:28:13.802224 ignition[855]: disks: disks passed Mar 6 02:28:13.824983 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 6 02:28:13.817754 ignition[855]: Ignition finished successfully Mar 6 02:28:13.850084 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 6 02:28:13.856488 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 6 02:28:13.865787 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 6 02:28:13.867201 systemd[1]: Reached target sysinit.target - System Initialization. Mar 6 02:28:13.930509 systemd[1]: Reached target basic.target - Basic System. Mar 6 02:28:13.962531 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 6 02:28:14.131647 systemd-fsck[865]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 6 02:28:14.157560 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 6 02:28:14.185641 systemd[1]: Mounting sysroot.mount - /sysroot... 
Mar 6 02:28:15.323212 kernel: EXT4-fs (vda9): mounted filesystem 16ab7223-a8af-43d2-ad40-7e1bf0ff2a89 r/w with ordered data mode. Quota mode: none. Mar 6 02:28:15.332818 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 6 02:28:15.335790 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 6 02:28:15.346195 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 6 02:28:15.445102 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 6 02:28:15.463072 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 6 02:28:15.512830 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (874) Mar 6 02:28:15.463156 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 6 02:28:15.463196 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 6 02:28:15.532443 kernel: BTRFS info (device vda6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b Mar 6 02:28:15.543428 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 6 02:28:15.578956 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 6 02:28:15.594507 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 6 02:28:15.704075 kernel: BTRFS info (device vda6): turning on async discard Mar 6 02:28:15.704156 kernel: BTRFS info (device vda6): enabling free space tree Mar 6 02:28:15.716435 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 6 02:28:15.897476 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory Mar 6 02:28:15.951760 initrd-setup-root[905]: cut: /sysroot/etc/group: No such file or directory Mar 6 02:28:15.987814 initrd-setup-root[912]: cut: /sysroot/etc/shadow: No such file or directory Mar 6 02:28:16.045099 initrd-setup-root[919]: cut: /sysroot/etc/gshadow: No such file or directory Mar 6 02:28:17.373580 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 6 02:28:17.394453 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 6 02:28:17.478973 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 6 02:28:17.656167 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 6 02:28:17.684498 kernel: BTRFS info (device vda6): last unmount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b Mar 6 02:28:17.777490 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 6 02:28:17.972588 ignition[988]: INFO : Ignition 2.22.0 Mar 6 02:28:17.972588 ignition[988]: INFO : Stage: mount Mar 6 02:28:17.999182 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 6 02:28:17.999182 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 02:28:17.999182 ignition[988]: INFO : mount: mount passed Mar 6 02:28:17.999182 ignition[988]: INFO : Ignition finished successfully Mar 6 02:28:18.026993 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 6 02:28:18.048968 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 6 02:28:18.290688 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 6 02:28:18.447478 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1000)
Mar 6 02:28:18.470518 kernel: BTRFS info (device vda6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b
Mar 6 02:28:18.470571 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 02:28:18.590163 kernel: BTRFS info (device vda6): turning on async discard
Mar 6 02:28:18.590532 kernel: BTRFS info (device vda6): enabling free space tree
Mar 6 02:28:18.604994 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 02:28:18.887767 ignition[1017]: INFO : Ignition 2.22.0
Mar 6 02:28:18.887767 ignition[1017]: INFO : Stage: files
Mar 6 02:28:18.970808 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 02:28:18.970808 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 02:28:18.970808 ignition[1017]: DEBUG : files: compiled without relabeling support, skipping
Mar 6 02:28:18.970808 ignition[1017]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 6 02:28:18.970808 ignition[1017]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 6 02:28:19.233835 ignition[1017]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 6 02:28:19.283782 ignition[1017]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 6 02:28:19.283782 ignition[1017]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 6 02:28:19.248180 unknown[1017]: wrote ssh authorized keys file for user: core
Mar 6 02:28:19.403592 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 02:28:19.403592 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 6 02:28:19.667818 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 6 02:28:21.283412 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 02:28:21.337639 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 6 02:28:21.337639 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 6 02:28:22.251579 kernel: hrtimer: interrupt took 11966308 ns
Mar 6 02:28:23.726569 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1416666009 wd_nsec: 1416665761
Mar 6 02:28:23.945402 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 6 02:28:27.296822 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 6 02:28:27.296822 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 6 02:28:27.402520 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 6 02:28:27.402520 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 02:28:27.402520 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 02:28:27.402520 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 02:28:27.402520 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 02:28:27.402520 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 02:28:27.402520 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 02:28:27.402520 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 02:28:27.402520 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 02:28:27.402520 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 6 02:28:27.402520 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 6 02:28:27.402520 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 6 02:28:27.402520 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 6 02:28:28.148166 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 6 02:28:32.714528 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 6 02:28:32.753725 ignition[1017]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 6 02:28:32.753725 ignition[1017]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 02:28:32.753725 ignition[1017]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 02:28:32.753725 ignition[1017]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 6 02:28:32.753725 ignition[1017]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 6 02:28:32.753725 ignition[1017]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 6 02:28:32.753725 ignition[1017]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 6 02:28:32.753725 ignition[1017]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 6 02:28:32.753725 ignition[1017]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 6 02:28:33.042478 ignition[1017]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 6 02:28:33.075745 ignition[1017]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 6 02:28:33.075745 ignition[1017]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 6 02:28:33.075745 ignition[1017]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 6 02:28:33.075745 ignition[1017]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 6 02:28:33.075745 ignition[1017]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 02:28:33.075745 ignition[1017]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 02:28:33.075745 ignition[1017]: INFO : files: files passed
Mar 6 02:28:33.075745 ignition[1017]: INFO : Ignition finished successfully
Mar 6 02:28:33.104462 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 6 02:28:33.156995 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 6 02:28:33.238719 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 6 02:28:33.314974 initrd-setup-root-after-ignition[1046]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 6 02:28:33.332721 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 02:28:33.332721 initrd-setup-root-after-ignition[1048]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 02:28:33.355191 initrd-setup-root-after-ignition[1052]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 02:28:33.345152 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 6 02:28:33.345585 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 6 02:28:33.427972 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 02:28:33.464833 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 6 02:28:33.497836 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 6 02:28:33.681142 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 6 02:28:33.681717 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 6 02:28:33.705101 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 6 02:28:33.717050 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 6 02:28:33.746802 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 6 02:28:33.750564 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 6 02:28:33.935741 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 02:28:33.961552 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 6 02:28:34.042801 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 6 02:28:34.064061 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 02:28:34.101507 systemd[1]: Stopped target timers.target - Timer Units.
Mar 6 02:28:34.157510 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 6 02:28:34.163181 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 02:28:34.182547 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 6 02:28:34.182717 systemd[1]: Stopped target basic.target - Basic System.
Mar 6 02:28:34.182979 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 6 02:28:34.183116 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 02:28:34.183243 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 6 02:28:34.314603 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 6 02:28:34.348823 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 6 02:28:34.439850 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 02:28:34.537743 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 6 02:28:34.645544 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 6 02:28:34.690227 systemd[1]: Stopped target swap.target - Swaps.
Mar 6 02:28:34.744761 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 6 02:28:34.745992 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 02:28:34.786168 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 6 02:28:34.813711 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 02:28:34.842044 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 6 02:28:34.848544 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 02:28:35.061608 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 6 02:28:35.062080 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 6 02:28:35.129698 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 6 02:28:35.130707 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 02:28:35.174805 systemd[1]: Stopped target paths.target - Path Units.
Mar 6 02:28:35.204462 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 6 02:28:35.212695 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 02:28:35.223158 systemd[1]: Stopped target slices.target - Slice Units.
Mar 6 02:28:35.304150 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 6 02:28:35.332779 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 6 02:28:35.333077 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 02:28:35.374603 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 6 02:28:35.374852 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 02:28:35.446691 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 6 02:28:35.447500 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 02:28:35.481110 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 6 02:28:35.481620 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 6 02:28:35.544475 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 6 02:28:35.637475 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 6 02:28:35.649779 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 6 02:28:35.650463 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 02:28:35.718146 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 6 02:28:35.719037 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 02:28:35.764074 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 6 02:28:35.764740 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 6 02:28:35.940779 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 6 02:28:35.983084 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 6 02:28:35.983563 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 6 02:28:36.258707 ignition[1073]: INFO : Ignition 2.22.0
Mar 6 02:28:36.273216 ignition[1073]: INFO : Stage: umount
Mar 6 02:28:36.273216 ignition[1073]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 02:28:36.273216 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 02:28:36.319781 ignition[1073]: INFO : umount: umount passed
Mar 6 02:28:36.319781 ignition[1073]: INFO : Ignition finished successfully
Mar 6 02:28:36.357789 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 6 02:28:36.379220 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 6 02:28:36.435712 systemd[1]: Stopped target network.target - Network.
Mar 6 02:28:36.459515 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 6 02:28:36.460038 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 6 02:28:36.460176 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 6 02:28:36.460537 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 6 02:28:36.473673 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 6 02:28:36.474126 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 6 02:28:36.496255 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 6 02:28:36.496575 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 6 02:28:36.529621 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 6 02:28:36.530131 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 6 02:28:36.574837 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 6 02:28:36.601648 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 6 02:28:36.832068 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 6 02:28:36.833834 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 6 02:28:36.987587 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 6 02:28:37.000657 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 6 02:28:37.028648 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 6 02:28:37.072825 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 6 02:28:37.080710 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 6 02:28:37.135837 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 6 02:28:37.136086 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 02:28:37.192531 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 6 02:28:37.193064 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 6 02:28:37.193150 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 02:28:37.333233 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 6 02:28:37.333626 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 6 02:28:37.363818 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 6 02:28:37.364102 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 6 02:28:37.374253 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 6 02:28:37.374496 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 02:28:37.435470 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 02:28:37.451061 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 6 02:28:37.451543 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 6 02:28:37.581792 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 6 02:28:37.582776 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 02:28:37.635196 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 6 02:28:37.635977 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 6 02:28:37.688861 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 6 02:28:37.689237 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 6 02:28:37.702083 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 6 02:28:37.702168 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 02:28:37.702641 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 6 02:28:37.702735 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 02:28:37.791111 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 6 02:28:37.791447 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 6 02:28:37.829585 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 02:28:37.829692 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 02:28:37.892676 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 6 02:28:37.895826 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 6 02:28:37.896045 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 02:28:37.935603 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 6 02:28:37.935715 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 02:28:38.033234 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 6 02:28:38.033739 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 02:28:38.050741 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 6 02:28:38.050829 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 02:28:38.080630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 02:28:38.080824 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 02:28:38.167851 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 6 02:28:38.168086 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 6 02:28:38.168586 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 6 02:28:38.168786 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 6 02:28:38.169799 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 6 02:28:38.170172 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 6 02:28:38.173245 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 6 02:28:38.299600 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 6 02:28:38.391072 systemd[1]: Switching root.
Mar 6 02:28:38.494656 systemd-journald[203]: Journal stopped
Mar 6 02:28:44.177019 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Mar 6 02:28:44.177108 kernel: SELinux: policy capability network_peer_controls=1
Mar 6 02:28:44.177132 kernel: SELinux: policy capability open_perms=1
Mar 6 02:28:44.177149 kernel: SELinux: policy capability extended_socket_class=1
Mar 6 02:28:44.177163 kernel: SELinux: policy capability always_check_network=0
Mar 6 02:28:44.177178 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 6 02:28:44.177193 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 6 02:28:44.177208 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 6 02:28:44.177223 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 6 02:28:44.177255 kernel: SELinux: policy capability userspace_initial_context=0
Mar 6 02:28:44.177504 kernel: audit: type=1403 audit(1772764119.236:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 6 02:28:44.177528 systemd[1]: Successfully loaded SELinux policy in 331.502ms.
Mar 6 02:28:44.177555 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 34.541ms.
Mar 6 02:28:44.177573 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 02:28:44.177589 systemd[1]: Detected virtualization kvm.
Mar 6 02:28:44.177605 systemd[1]: Detected architecture x86-64.
Mar 6 02:28:44.177619 systemd[1]: Detected first boot.
Mar 6 02:28:44.177634 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 02:28:44.177649 zram_generator::config[1118]: No configuration found.
Mar 6 02:28:44.177667 kernel: Guest personality initialized and is inactive
Mar 6 02:28:44.177688 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 6 02:28:44.177703 kernel: Initialized host personality
Mar 6 02:28:44.177716 kernel: NET: Registered PF_VSOCK protocol family
Mar 6 02:28:44.177733 systemd[1]: Populated /etc with preset unit settings.
Mar 6 02:28:44.177753 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 6 02:28:44.177777 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 6 02:28:44.177793 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 6 02:28:44.177812 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 6 02:28:44.177831 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 6 02:28:44.177847 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 6 02:28:44.177864 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 6 02:28:44.178006 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 6 02:28:44.178025 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 6 02:28:44.178044 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 6 02:28:44.178063 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 6 02:28:44.178078 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 6 02:28:44.178093 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 02:28:44.178115 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 02:28:44.178135 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 6 02:28:44.178153 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 6 02:28:44.178170 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 6 02:28:44.178186 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 02:28:44.178203 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 6 02:28:44.178220 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 02:28:44.178236 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 02:28:44.178472 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 6 02:28:44.178496 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 6 02:28:44.178512 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 6 02:28:44.178530 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 6 02:28:44.178547 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 02:28:44.178565 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 02:28:44.178583 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 02:28:44.178602 systemd[1]: Reached target swap.target - Swaps.
Mar 6 02:28:44.178618 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 6 02:28:44.178642 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 6 02:28:44.178661 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 6 02:28:44.178678 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 02:28:44.178694 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 02:28:44.178709 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 02:28:44.178726 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 6 02:28:44.178746 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 6 02:28:44.178763 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 6 02:28:44.178779 systemd[1]: Mounting media.mount - External Media Directory...
Mar 6 02:28:44.178800 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 02:28:44.178819 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 6 02:28:44.178835 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 6 02:28:44.178850 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 6 02:28:44.178866 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 6 02:28:44.179001 systemd[1]: Reached target machines.target - Containers.
Mar 6 02:28:44.179020 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 6 02:28:44.179038 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 02:28:44.179061 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 02:28:44.179077 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 6 02:28:44.179094 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 02:28:44.179112 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 02:28:44.179130 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 02:28:44.179145 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 6 02:28:44.179160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 02:28:44.179178 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 6 02:28:44.179197 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 6 02:28:44.179213 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 6 02:28:44.179231 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 6 02:28:44.179246 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 6 02:28:44.179469 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 02:28:44.179488 kernel: ACPI: bus type drm_connector registered
Mar 6 02:28:44.179506 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 02:28:44.179521 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 02:28:44.179535 kernel: loop: module loaded
Mar 6 02:28:44.179556 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 02:28:44.179573 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 6 02:28:44.179588 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 6 02:28:44.179639 systemd-journald[1203]: Collecting audit messages is disabled.
Mar 6 02:28:44.179676 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 02:28:44.179698 systemd-journald[1203]: Journal started
Mar 6 02:28:44.179735 systemd-journald[1203]: Runtime Journal (/run/log/journal/32d8dcfebde64932ad0caa60f7c19c48) is 6M, max 48.1M, 42.1M free.
Mar 6 02:28:41.805651 systemd[1]: Queued start job for default target multi-user.target.
Mar 6 02:28:41.838080 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 6 02:28:41.842149 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 6 02:28:41.845462 systemd[1]: systemd-journald.service: Consumed 4.428s CPU time.
Mar 6 02:28:44.198622 kernel: fuse: init (API version 7.41)
Mar 6 02:28:44.230625 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 6 02:28:44.230702 systemd[1]: Stopped verity-setup.service.
Mar 6 02:28:44.295205 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 02:28:44.318478 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 02:28:44.337497 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 6 02:28:44.352231 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 6 02:28:44.374702 systemd[1]: Mounted media.mount - External Media Directory.
Mar 6 02:28:44.398855 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 6 02:28:44.415847 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 6 02:28:44.434627 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 6 02:28:44.453678 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 6 02:28:44.474863 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 02:28:44.499183 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 6 02:28:44.504799 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 6 02:28:44.527810 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 02:28:44.528657 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 02:28:44.556072 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 02:28:44.556775 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 02:28:44.576092 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 02:28:44.578670 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 02:28:44.598081 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 6 02:28:44.598969 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 6 02:28:44.616868 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 02:28:44.617589 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 02:28:44.633865 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 02:28:44.649859 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 02:28:44.667596 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 6 02:28:44.689059 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 6 02:28:44.709683 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 02:28:44.755221 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 02:28:44.779226 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 6 02:28:44.817476 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 6 02:28:44.839129 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 6 02:28:44.839189 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 02:28:44.857076 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 6 02:28:44.890105 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 6 02:28:44.906188 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 02:28:44.913448 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 6 02:28:44.934802 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 6 02:28:44.956146 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 02:28:44.985649 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 6 02:28:45.003584 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 02:28:45.032812 systemd-journald[1203]: Time spent on flushing to /var/log/journal/32d8dcfebde64932ad0caa60f7c19c48 is 151.079ms for 1071 entries.
Mar 6 02:28:45.032812 systemd-journald[1203]: System Journal (/var/log/journal/32d8dcfebde64932ad0caa60f7c19c48) is 8M, max 195.6M, 187.6M free.
Mar 6 02:28:45.242076 systemd-journald[1203]: Received client request to flush runtime journal.
Mar 6 02:28:45.242145 kernel: loop0: detected capacity change from 0 to 128560
Mar 6 02:28:45.009837 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 02:28:45.035596 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 6 02:28:45.051031 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 02:28:45.089614 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 6 02:28:45.120038 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 6 02:28:45.184844 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 02:28:45.232157 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Mar 6 02:28:45.241590 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Mar 6 02:28:45.259035 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 6 02:28:45.276236 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 6 02:28:45.295594 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 02:28:45.327990 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 6 02:28:45.349124 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 6 02:28:45.370842 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 6 02:28:45.406055 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 6 02:28:45.516833 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 6 02:28:45.519587 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 6 02:28:45.573692 kernel: loop1: detected capacity change from 0 to 228704
Mar 6 02:28:45.626860 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 6 02:28:45.653085 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 02:28:45.752712 kernel: loop2: detected capacity change from 0 to 110984
Mar 6 02:28:45.773208 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Mar 6 02:28:45.773587 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Mar 6 02:28:45.784009 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 02:28:46.165459 kernel: loop3: detected capacity change from 0 to 128560
Mar 6 02:28:48.263479 kernel: loop4: detected capacity change from 0 to 228704
Mar 6 02:28:48.448459 kernel: loop5: detected capacity change from 0 to 110984
Mar 6 02:28:48.624727 (sd-merge)[1265]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 6 02:28:48.633781 (sd-merge)[1265]: Merged extensions into '/usr'.
Mar 6 02:28:48.722777 systemd[1]: Reload requested from client PID 1238 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 6 02:28:48.722985 systemd[1]: Reloading...
Mar 6 02:28:53.355522 zram_generator::config[1298]: No configuration found.
Mar 6 02:28:54.862168 ldconfig[1233]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 6 02:28:55.193994 systemd[1]: Reloading finished in 6456 ms.
Mar 6 02:28:55.327233 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 6 02:28:55.360740 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 6 02:28:55.390143 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 6 02:28:55.534542 systemd[1]: Starting ensure-sysext.service...
Mar 6 02:28:55.557175 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 02:28:55.754024 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 02:28:55.840708 systemd[1]: Reload requested from client PID 1330 ('systemctl') (unit ensure-sysext.service)...
Mar 6 02:28:55.840877 systemd[1]: Reloading...
Mar 6 02:28:55.911075 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 6 02:28:55.911255 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 6 02:28:55.912765 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 6 02:28:55.916710 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 6 02:28:55.920254 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 6 02:28:55.921863 systemd-tmpfiles[1331]: ACLs are not supported, ignoring.
Mar 6 02:28:55.922176 systemd-tmpfiles[1331]: ACLs are not supported, ignoring.
Mar 6 02:28:55.943210 systemd-tmpfiles[1331]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 02:28:55.943612 systemd-tmpfiles[1331]: Skipping /boot
Mar 6 02:28:55.972173 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Mar 6 02:28:56.020812 systemd-tmpfiles[1331]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 02:28:56.021538 systemd-tmpfiles[1331]: Skipping /boot
Mar 6 02:28:56.163573 zram_generator::config[1371]: No configuration found.
Mar 6 02:28:56.977656 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 6 02:28:57.012710 kernel: ACPI: button: Power Button [PWRF]
Mar 6 02:28:57.031225 kernel: mousedev: PS/2 mouse device common for all mice
Mar 6 02:28:57.068631 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 6 02:28:57.070235 systemd[1]: Reloading finished in 1228 ms.
Mar 6 02:28:57.154626 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 02:28:57.180634 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 02:28:57.382819 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 6 02:28:57.474081 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 6 02:28:57.750649 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 6 02:28:57.799044 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 6 02:28:57.861106 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 6 02:28:57.936132 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 02:28:57.976497 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 02:28:58.161598 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 6 02:28:58.251244 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 02:28:58.252666 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 02:28:58.265572 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 02:28:58.299809 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 02:28:58.362794 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 02:28:58.396247 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 02:28:58.398759 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 02:28:58.399217 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 02:28:58.481478 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 02:28:58.484858 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 02:28:58.486695 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 02:28:58.486838 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 02:28:58.487093 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 02:28:58.554538 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 02:28:58.555104 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 02:28:58.571564 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 02:28:58.621025 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 6 02:28:58.644658 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 6 02:28:58.650630 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 6 02:28:58.578869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 02:28:58.579525 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 02:28:58.579707 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 02:28:58.632714 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 6 02:28:59.035745 systemd[1]: Finished ensure-sysext.service.
Mar 6 02:28:59.067075 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 6 02:28:59.088624 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 6 02:28:59.142008 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 6 02:28:59.180737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 02:28:59.181204 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 02:28:59.203575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 02:28:59.233807 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 02:28:59.288148 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 02:28:59.289636 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 02:28:59.315098 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 02:28:59.322231 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 02:28:59.343175 augenrules[1483]: No rules
Mar 6 02:28:59.381253 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 6 02:28:59.387853 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 6 02:28:59.486688 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 6 02:28:59.613784 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 02:28:59.616205 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 02:28:59.635741 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 6 02:28:59.703085 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 6 02:28:59.729223 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 6 02:28:59.770725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 02:28:59.881217 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 6 02:29:00.070187 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 02:29:00.074794 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 02:29:00.106738 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 02:29:00.135520 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 6 02:29:01.167643 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 02:29:01.211076 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 6 02:29:01.244787 systemd[1]: Reached target time-set.target - System Time Set.
Mar 6 02:29:01.260767 systemd-networkd[1451]: lo: Link UP
Mar 6 02:29:01.260780 systemd-networkd[1451]: lo: Gained carrier
Mar 6 02:29:01.308106 systemd-networkd[1451]: Enumeration completed
Mar 6 02:29:01.334213 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 02:29:01.346481 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 02:29:01.346489 systemd-networkd[1451]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 02:29:01.360631 systemd-networkd[1451]: eth0: Link UP
Mar 6 02:29:01.361062 systemd-networkd[1451]: eth0: Gained carrier
Mar 6 02:29:01.361098 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 02:29:01.384742 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 6 02:29:01.556659 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 6 02:29:02.153645 systemd-networkd[1451]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 6 02:29:02.156764 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection.
Mar 6 02:29:03.957459 systemd-timesyncd[1497]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 6 02:29:03.961215 systemd-timesyncd[1497]: Initial clock synchronization to Fri 2026-03-06 02:29:03.952187 UTC.
Mar 6 02:29:03.971290 systemd-resolved[1452]: Positive Trust Anchors:
Mar 6 02:29:03.972298 systemd-resolved[1452]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 02:29:03.972341 systemd-resolved[1452]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 02:29:04.044247 systemd-resolved[1452]: Defaulting to hostname 'linux'.
Mar 6 02:29:04.054223 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 02:29:04.076438 systemd[1]: Reached target network.target - Network.
Mar 6 02:29:04.089557 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 02:29:04.115922 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 02:29:04.131800 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 6 02:29:04.164751 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 6 02:29:04.213288 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 6 02:29:04.316752 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 6 02:29:04.336222 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 6 02:29:04.360826 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 6 02:29:04.384466 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 6 02:29:04.384765 systemd[1]: Reached target paths.target - Path Units.
Mar 6 02:29:04.403782 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 02:29:04.432380 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 6 02:29:04.479728 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 6 02:29:04.612370 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 6 02:29:04.656900 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 6 02:29:04.681770 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 6 02:29:04.755804 kernel: kvm_amd: TSC scaling supported
Mar 6 02:29:04.757330 kernel: kvm_amd: Nested Virtualization enabled
Mar 6 02:29:04.757368 kernel: kvm_amd: Nested Paging enabled
Mar 6 02:29:04.770920 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 6 02:29:04.772952 kernel: kvm_amd: PMU virtualization is disabled
Mar 6 02:29:04.916567 systemd-networkd[1451]: eth0: Gained IPv6LL
Mar 6 02:29:04.918872 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 6 02:29:04.941889 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 6 02:29:04.970811 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 6 02:29:04.999508 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 6 02:29:05.032346 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 6 02:29:05.074381 systemd[1]: Reached target network-online.target - Network is Online.
Mar 6 02:29:05.098513 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 02:29:05.127511 systemd[1]: Reached target basic.target - Basic System.
Mar 6 02:29:05.150345 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 6 02:29:05.150388 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 6 02:29:05.167942 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 6 02:29:05.273895 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 6 02:29:05.423609 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 6 02:29:05.453271 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 6 02:29:05.511925 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 6 02:29:05.540765 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 6 02:29:05.555201 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 6 02:29:05.559503 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 6 02:29:05.623802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:29:05.720955 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 6 02:29:05.763360 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 6 02:29:05.788520 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Refreshing passwd entry cache
Mar 6 02:29:05.789343 oslogin_cache_refresh[1532]: Refreshing passwd entry cache
Mar 6 02:29:05.822336 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 6 02:29:05.878386 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Failure getting users, quitting
Mar 6 02:29:05.878386 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 6 02:29:05.878386 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Refreshing group entry cache
Mar 6 02:29:05.875326 oslogin_cache_refresh[1532]: Failure getting users, quitting
Mar 6 02:29:05.875361 oslogin_cache_refresh[1532]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 6 02:29:05.875444 oslogin_cache_refresh[1532]: Refreshing group entry cache
Mar 6 02:29:05.919276 jq[1530]: false
Mar 6 02:29:05.967333 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 6 02:29:06.067584 oslogin_cache_refresh[1532]: Failure getting groups, quitting
Mar 6 02:29:06.072866 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 6 02:29:06.079865 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Failure getting groups, quitting
Mar 6 02:29:06.079865 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 6 02:29:06.067604 oslogin_cache_refresh[1532]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 6 02:29:06.210308 extend-filesystems[1531]: Found /dev/vda6
Mar 6 02:29:06.245594 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 6 02:29:06.262737 extend-filesystems[1531]: Found /dev/vda9
Mar 6 02:29:06.275435 extend-filesystems[1531]: Checking size of /dev/vda9
Mar 6 02:29:06.264358 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 6 02:29:06.309462 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 6 02:29:06.316237 systemd[1]: Starting update-engine.service - Update Engine...
Mar 6 02:29:06.343960 extend-filesystems[1531]: Resized partition /dev/vda9
Mar 6 02:29:06.360555 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 6 02:29:06.370201 extend-filesystems[1562]: resize2fs 1.47.3 (8-Jul-2025)
Mar 6 02:29:06.422852 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 6 02:29:06.465541 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 6 02:29:06.516356 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 6 02:29:06.530961 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 6 02:29:06.540927 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 6 02:29:06.541873 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 6 02:29:06.583411 jq[1561]: true
Mar 6 02:29:06.586783 systemd[1]: motdgen.service: Deactivated successfully.
Mar 6 02:29:06.590840 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 6 02:29:06.636438 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 6 02:29:06.909387 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 6 02:29:06.910816 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 6 02:29:07.045261 update_engine[1558]: I20260306 02:29:07.043361 1558 main.cc:92] Flatcar Update Engine starting
Mar 6 02:29:07.369430 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 6 02:29:07.579869 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 6 02:29:07.637347 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 6 02:29:07.660249 extend-filesystems[1562]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 6 02:29:07.660249 extend-filesystems[1562]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 6 02:29:07.660249 extend-filesystems[1562]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 6 02:29:07.754792 extend-filesystems[1531]: Resized filesystem in /dev/vda9
Mar 6 02:29:07.784590 tar[1569]: linux-amd64/LICENSE
Mar 6 02:29:07.784590 tar[1569]: linux-amd64/helm
Mar 6 02:29:07.661516 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 6 02:29:07.698453 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 6 02:29:07.738453 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 6 02:29:07.841496 (ntainerd)[1590]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 6 02:29:07.863735 jq[1571]: true
Mar 6 02:29:08.570605 dbus-daemon[1528]: [system] SELinux support is enabled
Mar 6 02:29:08.655590 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 6 02:29:08.782975 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 6 02:29:08.801340 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 6 02:29:08.841605 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 6 02:29:08.841824 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 6 02:29:08.972551 systemd[1]: Started update-engine.service - Update Engine.
Mar 6 02:29:08.975905 update_engine[1558]: I20260306 02:29:08.975428 1558 update_check_scheduler.cc:74] Next update check in 2m12s
Mar 6 02:29:09.012482 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 6 02:29:09.075598 systemd-logind[1552]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 6 02:29:09.075776 systemd-logind[1552]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 6 02:29:09.114927 systemd-logind[1552]: New seat seat0.
Mar 6 02:29:09.123211 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 6 02:29:09.771223 kernel: EDAC MC: Ver: 3.0.0
Mar 6 02:29:09.924908 bash[1608]: Updated "/home/core/.ssh/authorized_keys"
Mar 6 02:29:09.948863 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 6 02:29:09.972201 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 6 02:29:10.982432 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 6 02:29:11.699909 locksmithd[1594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 6 02:29:12.176879 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 6 02:29:12.318386 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 6 02:29:12.451225 systemd[1]: issuegen.service: Deactivated successfully.
Mar 6 02:29:12.453265 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 6 02:29:12.614559 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 6 02:29:15.149319 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 6 02:29:15.217482 systemd[1]: Started sshd@0-10.0.0.53:22-10.0.0.1:43470.service - OpenSSH per-connection server daemon (10.0.0.1:43470).
Mar 6 02:29:15.313547 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 6 02:29:15.349577 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 6 02:29:15.403819 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 6 02:29:15.437963 systemd[1]: Reached target getty.target - Login Prompts.
Mar 6 02:29:16.257620 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 43470 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:29:16.260816 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:29:16.337419 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 6 02:29:16.384832 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 6 02:29:16.956566 systemd-logind[1552]: New session 1 of user core.
Mar 6 02:29:17.164538 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 6 02:29:17.216551 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 6 02:29:17.767523 (systemd)[1643]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 6 02:29:17.780284 systemd-logind[1552]: New session c1 of user core.
Mar 6 02:29:18.039238 tar[1569]: linux-amd64/README.md
Mar 6 02:29:18.093460 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 6 02:29:18.219629 containerd[1590]: time="2026-03-06T02:29:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 6 02:29:18.224877 containerd[1590]: time="2026-03-06T02:29:18.224842073Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 6 02:29:18.255954 systemd[1643]: Queued start job for default target default.target.
Mar 6 02:29:18.261090 containerd[1590]: time="2026-03-06T02:29:18.260536932Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="116.016µs"
Mar 6 02:29:18.261090 containerd[1590]: time="2026-03-06T02:29:18.260803961Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 6 02:29:18.261757 containerd[1590]: time="2026-03-06T02:29:18.260969480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 6 02:29:18.262490 containerd[1590]: time="2026-03-06T02:29:18.261910447Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 6 02:29:18.262490 containerd[1590]: time="2026-03-06T02:29:18.262361610Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 6 02:29:18.262561 containerd[1590]: time="2026-03-06T02:29:18.262522029Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 6 02:29:18.262890 containerd[1590]: time="2026-03-06T02:29:18.262628999Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 6 02:29:18.262890 containerd[1590]: time="2026-03-06T02:29:18.262784159Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 6 02:29:18.266150 containerd[1590]: time="2026-03-06T02:29:18.263596575Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 6 02:29:18.270309 containerd[1590]: time="2026-03-06T02:29:18.266818863Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 6 02:29:18.270309 containerd[1590]: time="2026-03-06T02:29:18.268352286Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 6 02:29:18.274338 containerd[1590]: time="2026-03-06T02:29:18.272550265Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 6 02:29:18.274338 containerd[1590]: time="2026-03-06T02:29:18.272829487Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 6 02:29:18.274338 containerd[1590]: time="2026-03-06T02:29:18.273463290Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 6 02:29:18.274338 containerd[1590]: time="2026-03-06T02:29:18.273503605Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 6 02:29:18.274338 containerd[1590]: time="2026-03-06T02:29:18.273516530Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 6 02:29:18.274338 containerd[1590]: time="2026-03-06T02:29:18.273557866Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 6 02:29:18.277970 containerd[1590]: time="2026-03-06T02:29:18.277291218Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 6 02:29:18.277970 containerd[1590]: time="2026-03-06T02:29:18.277500449Z" level=info msg="metadata content store policy set" policy=shared
Mar 6 02:29:18.286951 systemd[1643]: Created slice app.slice - User Application Slice.
Mar 6 02:29:18.287497 systemd[1643]: Reached target paths.target - Paths.
Mar 6 02:29:18.287820 systemd[1643]: Reached target timers.target - Timers.
Mar 6 02:29:18.291860 systemd[1643]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 6 02:29:18.340250 containerd[1590]: time="2026-03-06T02:29:18.339522071Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 6 02:29:18.343230 containerd[1590]: time="2026-03-06T02:29:18.343197494Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 6 02:29:18.346429 containerd[1590]: time="2026-03-06T02:29:18.345603858Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 6 02:29:18.347366 containerd[1590]: time="2026-03-06T02:29:18.347337124Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 6 02:29:18.348192 containerd[1590]: time="2026-03-06T02:29:18.347957092Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 6 02:29:18.348410 containerd[1590]: time="2026-03-06T02:29:18.348379891Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 6 02:29:18.348523 containerd[1590]: time="2026-03-06T02:29:18.348500857Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 6 02:29:18.348595 containerd[1590]: time="2026-03-06T02:29:18.348578612Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 6 02:29:18.348790 containerd[1590]: time="2026-03-06T02:29:18.348771022Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 6 02:29:18.348860 containerd[1590]: time="2026-03-06T02:29:18.348840731Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 6 02:29:18.348920 containerd[1590]: time="2026-03-06T02:29:18.348905011Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 6 02:29:18.349194 containerd[1590]: time="2026-03-06T02:29:18.348970604Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 6 02:29:18.349827 containerd[1590]: time="2026-03-06T02:29:18.349804782Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 6 02:29:18.350274 containerd[1590]: time="2026-03-06T02:29:18.350252087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 6 02:29:18.350830 containerd[1590]: time="2026-03-06T02:29:18.350808997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 6 02:29:18.355220 containerd[1590]: time="2026-03-06T02:29:18.354522802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 6 02:29:18.355220 containerd[1590]: time="2026-03-06T02:29:18.354903223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 6 02:29:18.355220 containerd[1590]: time="2026-03-06T02:29:18.354933119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 6 02:29:18.355220 containerd[1590]: time="2026-03-06T02:29:18.354956231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 6 02:29:18.363463 containerd[1590]: time="2026-03-06T02:29:18.354974917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 6 02:29:18.363463 containerd[1590]: time="2026-03-06T02:29:18.362529208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 6 02:29:18.363463 containerd[1590]: time="2026-03-06T02:29:18.362566697Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 6 02:29:18.363463 containerd[1590]: time="2026-03-06T02:29:18.362592406Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 6 02:29:18.365757 containerd[1590]: time="2026-03-06T02:29:18.365493673Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 6 02:29:18.365757 containerd[1590]: time="2026-03-06T02:29:18.365612665Z" level=info msg="Start snapshots syncer"
Mar 6 02:29:18.365830 containerd[1590]: time="2026-03-06T02:29:18.365766913Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 6 02:29:18.365837 systemd[1643]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 6 02:29:18.370812 systemd[1643]: Reached target sockets.target - Sockets.
Mar 6 02:29:18.371357 systemd[1643]: Reached target basic.target - Basic System.
Mar 6 02:29:18.371621 containerd[1590]: time="2026-03-06T02:29:18.371449152Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 6 02:29:18.375311 containerd[1590]: time="2026-03-06T02:29:18.371613799Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 6 02:29:18.375311 containerd[1590]: time="2026-03-06T02:29:18.372212196Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 6 02:29:18.375311 containerd[1590]: time="2026-03-06T02:29:18.372413713Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 6 02:29:18.375311 containerd[1590]: time="2026-03-06T02:29:18.372443999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 6 02:29:18.375311 containerd[1590]: time="2026-03-06T02:29:18.372460089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 6 02:29:18.375311 containerd[1590]: time="2026-03-06T02:29:18.372473554Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 6 02:29:18.375311 containerd[1590]: time="2026-03-06T02:29:18.372492370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 6 02:29:18.372263 systemd[1643]: Reached target default.target - Main User Target.
Mar 6 02:29:18.372314 systemd[1643]: Startup finished in 533ms.
Mar 6 02:29:18.372408 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 6 02:29:18.390538 containerd[1590]: time="2026-03-06T02:29:18.372506847Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 6 02:29:18.390538 containerd[1590]: time="2026-03-06T02:29:18.389411444Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 6 02:29:18.390538 containerd[1590]: time="2026-03-06T02:29:18.389581861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 6 02:29:18.390538 containerd[1590]: time="2026-03-06T02:29:18.389609803Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 6 02:29:18.390538 containerd[1590]: time="2026-03-06T02:29:18.389626344Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 6 02:29:18.390538 containerd[1590]: time="2026-03-06T02:29:18.389846966Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 6 02:29:18.390538 containerd[1590]: time="2026-03-06T02:29:18.389873736Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 6 02:29:18.390538 containerd[1590]: time="2026-03-06T02:29:18.389887282Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 6 02:29:18.390538 containerd[1590]: time="2026-03-06T02:29:18.389900246Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 6 02:29:18.390538 containerd[1590]: time="2026-03-06T02:29:18.389912589Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 6 02:29:18.390538 containerd[1590]: time="2026-03-06T02:29:18.389926054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 6 02:29:18.390538 containerd[1590]: time="2026-03-06T02:29:18.390273433Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 6 02:29:18.390538 containerd[1590]: time="2026-03-06T02:29:18.390314550Z" level=info msg="runtime interface created"
Mar 6 02:29:18.390538 containerd[1590]: time="2026-03-06T02:29:18.390325460Z" level=info msg="created NRI interface"
Mar 6 02:29:18.390538 containerd[1590]: time="2026-03-06T02:29:18.390340669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 6 02:29:18.391783 containerd[1590]: time="2026-03-06T02:29:18.390363020Z" level=info msg="Connect containerd service"
Mar 6 02:29:18.391783 containerd[1590]: time="2026-03-06T02:29:18.390407092Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 6 02:29:18.397116 containerd[1590]: time="2026-03-06T02:29:18.395425053Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 6 02:29:18.403412 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 6 02:29:19.052363 systemd[1]: Started sshd@1-10.0.0.53:22-10.0.0.1:43472.service - OpenSSH per-connection server daemon (10.0.0.1:43472).
Mar 6 02:29:20.329375 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 43472 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:29:20.658615 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:29:20.695556 systemd-logind[1552]: New session 2 of user core.
Mar 6 02:29:20.724829 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 6 02:29:21.201894 sshd[1672]: Connection closed by 10.0.0.1 port 43472
Mar 6 02:29:21.205634 sshd-session[1663]: pam_unix(sshd:session): session closed for user core
Mar 6 02:29:21.221792 systemd[1]: Started sshd@2-10.0.0.53:22-10.0.0.1:43480.service - OpenSSH per-connection server daemon (10.0.0.1:43480).
Mar 6 02:29:21.236402 systemd[1]: sshd@1-10.0.0.53:22-10.0.0.1:43472.service: Deactivated successfully.
Mar 6 02:29:21.245853 systemd[1]: session-2.scope: Deactivated successfully.
Mar 6 02:29:21.257267 systemd-logind[1552]: Session 2 logged out. Waiting for processes to exit.
Mar 6 02:29:21.269283 systemd-logind[1552]: Removed session 2.
Mar 6 02:29:21.613453 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 43480 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:29:21.665234 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:29:21.692852 containerd[1590]: time="2026-03-06T02:29:21.691781635Z" level=info msg="Start subscribing containerd event"
Mar 6 02:29:21.707416 containerd[1590]: time="2026-03-06T02:29:21.705769004Z" level=info msg="Start recovering state"
Mar 6 02:29:21.719389 containerd[1590]: time="2026-03-06T02:29:21.716472200Z" level=info msg="Start event monitor"
Mar 6 02:29:21.733170 containerd[1590]: time="2026-03-06T02:29:21.732311368Z" level=info msg="Start cni network conf syncer for default"
Mar 6 02:29:21.733170 containerd[1590]: time="2026-03-06T02:29:21.732389203Z" level=info msg="Start streaming server"
Mar 6 02:29:21.733170 containerd[1590]: time="2026-03-06T02:29:21.732502534Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 6 02:29:21.733170 containerd[1590]: time="2026-03-06T02:29:21.732825828Z" level=info msg="runtime interface starting up..."
Mar 6 02:29:21.733170 containerd[1590]: time="2026-03-06T02:29:21.732940482Z" level=info msg="starting plugins..."
Mar 6 02:29:21.733626 containerd[1590]: time="2026-03-06T02:29:21.733604102Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 6 02:29:21.740271 containerd[1590]: time="2026-03-06T02:29:21.740245042Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 6 02:29:21.740428 containerd[1590]: time="2026-03-06T02:29:21.740408788Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 6 02:29:21.757611 containerd[1590]: time="2026-03-06T02:29:21.756929718Z" level=info msg="containerd successfully booted in 3.537295s"
Mar 6 02:29:21.785357 systemd[1]: Started containerd.service - containerd container runtime.
Mar 6 02:29:21.806303 systemd-logind[1552]: New session 3 of user core.
Mar 6 02:29:21.816248 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 6 02:29:21.965388 sshd[1690]: Connection closed by 10.0.0.1 port 43480
Mar 6 02:29:21.969572 sshd-session[1679]: pam_unix(sshd:session): session closed for user core
Mar 6 02:29:21.989867 systemd[1]: sshd@2-10.0.0.53:22-10.0.0.1:43480.service: Deactivated successfully.
Mar 6 02:29:22.004924 systemd[1]: session-3.scope: Deactivated successfully.
Mar 6 02:29:22.018534 systemd-logind[1552]: Session 3 logged out. Waiting for processes to exit.
Mar 6 02:29:22.146971 systemd-logind[1552]: Removed session 3.
Mar 6 02:29:25.305642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:29:25.308591 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 6 02:29:25.319889 systemd[1]: Startup finished in 20.879s (kernel) + 44.679s (initrd) + 44.604s (userspace) = 1min 50.162s.
Mar 6 02:29:25.346565 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 02:29:32.263588 systemd[1]: Started sshd@3-10.0.0.53:22-10.0.0.1:56076.service - OpenSSH per-connection server daemon (10.0.0.1:56076).
Mar 6 02:29:32.742548 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 56076 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:29:32.753528 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:29:32.793386 systemd-logind[1552]: New session 4 of user core.
Mar 6 02:29:32.815452 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 6 02:29:32.891843 sshd[1711]: Connection closed by 10.0.0.1 port 56076
Mar 6 02:29:32.890954 sshd-session[1708]: pam_unix(sshd:session): session closed for user core
Mar 6 02:29:32.906578 systemd[1]: sshd@3-10.0.0.53:22-10.0.0.1:56076.service: Deactivated successfully.
Mar 6 02:29:32.911310 systemd[1]: session-4.scope: Deactivated successfully.
Mar 6 02:29:32.920344 systemd-logind[1552]: Session 4 logged out. Waiting for processes to exit.
Mar 6 02:29:32.923482 systemd[1]: Started sshd@4-10.0.0.53:22-10.0.0.1:39648.service - OpenSSH per-connection server daemon (10.0.0.1:39648).
Mar 6 02:29:32.933516 systemd-logind[1552]: Removed session 4.
Mar 6 02:29:33.087882 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 39648 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:29:33.092941 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:29:33.505599 systemd-logind[1552]: New session 5 of user core.
Mar 6 02:29:33.529419 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 6 02:29:33.661950 sshd[1720]: Connection closed by 10.0.0.1 port 39648
Mar 6 02:29:33.673135 sshd-session[1717]: pam_unix(sshd:session): session closed for user core
Mar 6 02:29:33.735536 systemd[1]: Started sshd@5-10.0.0.53:22-10.0.0.1:39656.service - OpenSSH per-connection server daemon (10.0.0.1:39656).
Mar 6 02:29:33.763520 systemd[1]: sshd@4-10.0.0.53:22-10.0.0.1:39648.service: Deactivated successfully.
Mar 6 02:29:33.772822 systemd[1]: session-5.scope: Deactivated successfully.
Mar 6 02:29:33.810884 systemd-logind[1552]: Session 5 logged out. Waiting for processes to exit.
Mar 6 02:29:33.815782 systemd-logind[1552]: Removed session 5.
Mar 6 02:29:34.084853 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 39656 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:29:34.091345 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:29:34.195813 systemd-logind[1552]: New session 6 of user core.
Mar 6 02:29:34.215835 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 6 02:29:34.491843 sshd[1730]: Connection closed by 10.0.0.1 port 39656
Mar 6 02:29:34.510389 sshd-session[1723]: pam_unix(sshd:session): session closed for user core
Mar 6 02:29:34.575207 systemd[1]: sshd@5-10.0.0.53:22-10.0.0.1:39656.service: Deactivated successfully.
Mar 6 02:29:34.593954 systemd[1]: session-6.scope: Deactivated successfully.
Mar 6 02:29:34.604192 systemd-logind[1552]: Session 6 logged out. Waiting for processes to exit.
Mar 6 02:29:34.618963 systemd[1]: Started sshd@6-10.0.0.53:22-10.0.0.1:39668.service - OpenSSH per-connection server daemon (10.0.0.1:39668).
Mar 6 02:29:34.629594 systemd-logind[1552]: Removed session 6.
Mar 6 02:29:34.660308 kubelet[1700]: E0306 02:29:34.659577 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 02:29:34.671576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 02:29:34.674492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 02:29:34.678350 systemd[1]: kubelet.service: Consumed 16.986s CPU time, 269.3M memory peak.
Mar 6 02:29:34.846969 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 39668 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:29:34.852435 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:29:34.892892 systemd-logind[1552]: New session 7 of user core.
Mar 6 02:29:34.911605 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 6 02:29:35.121800 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 6 02:29:35.122603 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 6 02:29:35.188856 sudo[1741]: pam_unix(sudo:session): session closed for user root
Mar 6 02:29:35.205875 sshd[1740]: Connection closed by 10.0.0.1 port 39668
Mar 6 02:29:35.205598 sshd-session[1736]: pam_unix(sshd:session): session closed for user core
Mar 6 02:29:35.239460 systemd[1]: sshd@6-10.0.0.53:22-10.0.0.1:39668.service: Deactivated successfully.
Mar 6 02:29:35.244835 systemd[1]: session-7.scope: Deactivated successfully.
Mar 6 02:29:35.250265 systemd-logind[1552]: Session 7 logged out. Waiting for processes to exit.
Mar 6 02:29:35.258240 systemd[1]: Started sshd@7-10.0.0.53:22-10.0.0.1:39680.service - OpenSSH per-connection server daemon (10.0.0.1:39680).
Mar 6 02:29:35.279527 systemd-logind[1552]: Removed session 7.
Mar 6 02:29:35.580555 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 39680 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:29:35.595473 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:29:35.644329 systemd-logind[1552]: New session 8 of user core.
Mar 6 02:29:35.672481 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 6 02:29:35.884462 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 6 02:29:35.885820 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 6 02:29:35.968355 sudo[1752]: pam_unix(sudo:session): session closed for user root
Mar 6 02:29:36.010373 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 6 02:29:36.011540 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 6 02:29:36.068613 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 6 02:29:36.490758 augenrules[1774]: No rules
Mar 6 02:29:36.493541 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 6 02:29:36.495258 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 6 02:29:36.508457 sudo[1751]: pam_unix(sudo:session): session closed for user root
Mar 6 02:29:36.516489 sshd[1750]: Connection closed by 10.0.0.1 port 39680
Mar 6 02:29:36.519414 sshd-session[1747]: pam_unix(sshd:session): session closed for user core
Mar 6 02:29:36.549797 systemd[1]: sshd@7-10.0.0.53:22-10.0.0.1:39680.service: Deactivated successfully.
Mar 6 02:29:36.557295 systemd[1]: session-8.scope: Deactivated successfully.
Mar 6 02:29:36.565914 systemd-logind[1552]: Session 8 logged out. Waiting for processes to exit.
Mar 6 02:29:36.576514 systemd[1]: Started sshd@8-10.0.0.53:22-10.0.0.1:39686.service - OpenSSH per-connection server daemon (10.0.0.1:39686).
Mar 6 02:29:36.598212 systemd-logind[1552]: Removed session 8.
Mar 6 02:29:36.783772 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 39686 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:29:36.799618 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:29:36.853331 systemd-logind[1552]: New session 9 of user core.
Mar 6 02:29:36.877899 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 6 02:29:36.986978 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 6 02:29:36.988870 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 6 02:29:42.367818 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 6 02:29:42.411937 (dockerd)[1808]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 6 02:29:43.786742 dockerd[1808]: time="2026-03-06T02:29:43.785513782Z" level=info msg="Starting up"
Mar 6 02:29:43.800166 dockerd[1808]: time="2026-03-06T02:29:43.798723630Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 6 02:29:43.914715 dockerd[1808]: time="2026-03-06T02:29:43.913958930Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 6 02:29:44.715518 dockerd[1808]: time="2026-03-06T02:29:44.698949492Z" level=info msg="Loading containers: start."
Mar 6 02:29:44.750924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 6 02:29:44.763499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:29:44.786488 kernel: Initializing XFRM netlink socket
Mar 6 02:29:47.470548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:29:47.506202 (kubelet)[1971]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 02:29:47.752886 systemd-networkd[1451]: docker0: Link UP
Mar 6 02:29:47.789931 dockerd[1808]: time="2026-03-06T02:29:47.789527541Z" level=info msg="Loading containers: done."
Mar 6 02:29:47.979654 dockerd[1808]: time="2026-03-06T02:29:47.975282885Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 6 02:29:47.998425 dockerd[1808]: time="2026-03-06T02:29:47.986175243Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 6 02:29:47.998425 dockerd[1808]: time="2026-03-06T02:29:47.986912277Z" level=info msg="Initializing buildkit"
Mar 6 02:29:48.149353 kubelet[1971]: E0306 02:29:48.140806 1971 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 02:29:48.151562 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 02:29:48.151893 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 02:29:48.152953 systemd[1]: kubelet.service: Consumed 2.325s CPU time, 109.2M memory peak.
Mar 6 02:29:48.246736 dockerd[1808]: time="2026-03-06T02:29:48.244662558Z" level=info msg="Completed buildkit initialization"
Mar 6 02:29:48.261550 dockerd[1808]: time="2026-03-06T02:29:48.258553564Z" level=info msg="Daemon has completed initialization"
Mar 6 02:29:48.261550 dockerd[1808]: time="2026-03-06T02:29:48.259114337Z" level=info msg="API listen on /run/docker.sock"
Mar 6 02:29:48.259265 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 6 02:29:54.175859 update_engine[1558]: I20260306 02:29:54.170831 1558 update_attempter.cc:509] Updating boot flags...
Mar 6 02:29:54.840534 containerd[1590]: time="2026-03-06T02:29:54.839941999Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 6 02:29:56.062504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3518843601.mount: Deactivated successfully.
Mar 6 02:29:58.248758 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 6 02:29:58.274749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:30:00.005844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:30:00.087131 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 02:30:00.467838 kubelet[2123]: E0306 02:30:00.465484 2123 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 02:30:00.474735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 02:30:00.475430 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 02:30:00.476908 systemd[1]: kubelet.service: Consumed 1.324s CPU time, 108.7M memory peak.
Mar 6 02:30:05.152509 containerd[1590]: time="2026-03-06T02:30:05.151670271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:30:05.158903 containerd[1590]: time="2026-03-06T02:30:05.155292063Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186"
Mar 6 02:30:05.173454 containerd[1590]: time="2026-03-06T02:30:05.172785810Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:30:05.242489 containerd[1590]: time="2026-03-06T02:30:05.241809622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:30:05.245880 containerd[1590]: time="2026-03-06T02:30:05.244889592Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 10.40474723s"
Mar 6 02:30:05.247484 containerd[1590]: time="2026-03-06T02:30:05.246267123Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 6 02:30:05.261954 containerd[1590]: time="2026-03-06T02:30:05.260686260Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 6 02:30:10.512881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 6 02:30:10.526459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:30:11.983780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:30:12.010893 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 02:30:12.267849 kubelet[2144]: E0306 02:30:12.267422 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 02:30:12.274196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 02:30:12.274394 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 02:30:12.275966 systemd[1]: kubelet.service: Consumed 1.151s CPU time, 109.4M memory peak.
Mar 6 02:30:17.060264 containerd[1590]: time="2026-03-06T02:30:17.058290153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:30:17.063341 containerd[1590]: time="2026-03-06T02:30:17.063273002Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810"
Mar 6 02:30:17.067521 containerd[1590]: time="2026-03-06T02:30:17.067383596Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:30:17.080296 containerd[1590]: time="2026-03-06T02:30:17.080156812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:30:17.081967 containerd[1590]: time="2026-03-06T02:30:17.081760202Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 11.820971552s"
Mar 6 02:30:17.081967 containerd[1590]: time="2026-03-06T02:30:17.081799255Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 6 02:30:17.091726 containerd[1590]: time="2026-03-06T02:30:17.091313278Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 6 02:30:22.518878 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 6 02:30:22.543818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:30:24.526210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:30:24.560907 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 02:30:25.364181 containerd[1590]: time="2026-03-06T02:30:25.363228799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:30:25.417417 containerd[1590]: time="2026-03-06T02:30:25.378496021Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746"
Mar 6 02:30:25.417417 containerd[1590]: time="2026-03-06T02:30:25.383896578Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:30:25.697909 containerd[1590]: time="2026-03-06T02:30:25.674175037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:30:25.697909 containerd[1590]: time="2026-03-06T02:30:25.678198259Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 8.58684611s"
Mar 6 02:30:25.697909 containerd[1590]: time="2026-03-06T02:30:25.678238474Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 6 02:30:25.697909 containerd[1590]: time="2026-03-06T02:30:25.684349710Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 6 02:30:25.753784 kubelet[2163]: E0306 02:30:25.752961 2163 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 02:30:25.765497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 02:30:25.765945 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 02:30:25.768438 systemd[1]: kubelet.service: Consumed 2.112s CPU time, 108.6M memory peak.
Mar 6 02:30:33.437763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2588796056.mount: Deactivated successfully.
Mar 6 02:30:36.049418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 6 02:30:36.085846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:30:39.185350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:30:39.385569 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 02:30:41.761495 kubelet[2189]: E0306 02:30:41.760211 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 02:30:41.773807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 02:30:41.774432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 02:30:41.779445 systemd[1]: kubelet.service: Consumed 4.270s CPU time, 110.2M memory peak.
Mar 6 02:30:45.175507 containerd[1590]: time="2026-03-06T02:30:45.173359144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:30:45.175507 containerd[1590]: time="2026-03-06T02:30:45.175958866Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647"
Mar 6 02:30:45.181630 containerd[1590]: time="2026-03-06T02:30:45.181600350Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:30:45.197523 containerd[1590]: time="2026-03-06T02:30:45.197470993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:30:45.219013 containerd[1590]: time="2026-03-06T02:30:45.218842689Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 19.534453687s"
Mar 6 02:30:45.220511 containerd[1590]: time="2026-03-06T02:30:45.219276753Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 6 02:30:45.235355 containerd[1590]: time="2026-03-06T02:30:45.233472243Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 6 02:30:47.578418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount535722617.mount: Deactivated successfully.
Mar 6 02:30:52.010813 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 6 02:30:52.045456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:30:54.352906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:30:54.414902 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 02:30:55.341669 kubelet[2257]: E0306 02:30:55.341288 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 02:30:55.452236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 02:30:55.484872 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 02:30:55.493390 systemd[1]: kubelet.service: Consumed 1.878s CPU time, 107.7M memory peak.
Mar 6 02:31:00.437458 containerd[1590]: time="2026-03-06T02:31:00.435894608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:31:00.449687 containerd[1590]: time="2026-03-06T02:31:00.448351713Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Mar 6 02:31:00.458403 containerd[1590]: time="2026-03-06T02:31:00.458329836Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:31:00.476651 containerd[1590]: time="2026-03-06T02:31:00.473219654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:31:00.485581 containerd[1590]: time="2026-03-06T02:31:00.482373262Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 15.248745878s"
Mar 6 02:31:00.485581 containerd[1590]: time="2026-03-06T02:31:00.482862787Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 6 02:31:00.502446 containerd[1590]: time="2026-03-06T02:31:00.496723357Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 6 02:31:03.961445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount862986571.mount: Deactivated successfully.
Mar 6 02:31:04.118491 containerd[1590]: time="2026-03-06T02:31:04.117496314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 02:31:04.127561 containerd[1590]: time="2026-03-06T02:31:04.126632549Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 6 02:31:04.150882 containerd[1590]: time="2026-03-06T02:31:04.150457728Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 02:31:04.167923 containerd[1590]: time="2026-03-06T02:31:04.167403353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 02:31:04.193554 containerd[1590]: time="2026-03-06T02:31:04.192415694Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 3.689738715s"
Mar 6 02:31:04.193554 containerd[1590]: time="2026-03-06T02:31:04.192640054Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 6 02:31:04.228910 containerd[1590]: time="2026-03-06T02:31:04.226640296Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 6 02:31:05.508419 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 6 02:31:05.559547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:31:06.043696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1749539149.mount: Deactivated successfully.
Mar 6 02:31:08.627767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:31:08.926613 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 02:31:10.374545 kubelet[2290]: E0306 02:31:10.372386 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 02:31:10.489962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 02:31:10.509410 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 02:31:10.511966 systemd[1]: kubelet.service: Consumed 3.032s CPU time, 110.7M memory peak.
Mar 6 02:31:20.287955 containerd[1590]: time="2026-03-06T02:31:20.286404043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:31:20.294355 containerd[1590]: time="2026-03-06T02:31:20.292338962Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840"
Mar 6 02:31:20.315199 containerd[1590]: time="2026-03-06T02:31:20.313487466Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:31:20.328600 containerd[1590]: time="2026-03-06T02:31:20.327624758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:31:20.335735 containerd[1590]: time="2026-03-06T02:31:20.332447283Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 16.105495053s"
Mar 6 02:31:20.335735 containerd[1590]: time="2026-03-06T02:31:20.332600791Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 6 02:31:20.527480 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Mar 6 02:31:20.545388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:31:21.246337 update_engine[1558]: I20260306 02:31:21.245745 1558 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 6 02:31:21.246337 update_engine[1558]: I20260306 02:31:21.245810 1558 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 6 02:31:21.250603 update_engine[1558]: I20260306 02:31:21.250370 1558 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 6 02:31:21.253767 update_engine[1558]: I20260306 02:31:21.251420 1558 omaha_request_params.cc:62] Current group set to stable
Mar 6 02:31:21.259302 update_engine[1558]: I20260306 02:31:21.256789 1558 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 6 02:31:21.259302 update_engine[1558]: I20260306 02:31:21.256819 1558 update_attempter.cc:643] Scheduling an action processor start.
Mar 6 02:31:21.259302 update_engine[1558]: I20260306 02:31:21.257220 1558 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 6 02:31:21.259302 update_engine[1558]: I20260306 02:31:21.257515 1558 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 6 02:31:21.261483 update_engine[1558]: I20260306 02:31:21.259963 1558 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 6 02:31:21.261567 update_engine[1558]: I20260306 02:31:21.261543 1558 omaha_request_action.cc:272] Request:
Mar 6 02:31:21.261567 update_engine[1558]:
Mar 6 02:31:21.261567 update_engine[1558]:
Mar 6 02:31:21.261567 update_engine[1558]:
Mar 6 02:31:21.261567 update_engine[1558]:
Mar 6 02:31:21.261567 update_engine[1558]:
Mar 6 02:31:21.261567 update_engine[1558]:
Mar 6 02:31:21.261567 update_engine[1558]:
Mar 6 02:31:21.261567 update_engine[1558]:
Mar 6 02:31:21.262543 update_engine[1558]: I20260306 02:31:21.262511 1558 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 6 02:31:21.296315 update_engine[1558]: I20260306 02:31:21.295537 1558 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 6 02:31:21.303596 update_engine[1558]: I20260306 02:31:21.302338 1558 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 6 02:31:21.322721 update_engine[1558]: E20260306 02:31:21.322559 1558 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 6 02:31:21.322721 update_engine[1558]: I20260306 02:31:21.322678 1558 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 6 02:31:21.350492 locksmithd[1594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 6 02:31:21.680540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:31:21.846270 (kubelet)[2381]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 02:31:22.729522 kubelet[2381]: E0306 02:31:22.728590 2381 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 02:31:22.739493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 02:31:22.740253 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 02:31:22.740642 systemd[1]: kubelet.service: Consumed 1.389s CPU time, 110.7M memory peak.
Mar 6 02:31:29.124528 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:31:29.127132 systemd[1]: kubelet.service: Consumed 1.389s CPU time, 110.7M memory peak.
Mar 6 02:31:29.146411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:31:29.411400 systemd[1]: Reload requested from client PID 2400 ('systemctl') (unit session-9.scope)...
Mar 6 02:31:29.411558 systemd[1]: Reloading...
Mar 6 02:31:29.895645 zram_generator::config[2446]: No configuration found.
Mar 6 02:31:30.908789 systemd[1]: Reloading finished in 1496 ms.
Mar 6 02:31:31.134579 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:31:31.152611 systemd[1]: kubelet.service: Deactivated successfully.
Mar 6 02:31:31.156786 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:31:31.157606 systemd[1]: kubelet.service: Consumed 359ms CPU time, 98.3M memory peak.
Mar 6 02:31:31.165485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:31:31.170672 update_engine[1558]: I20260306 02:31:31.167541 1558 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 6 02:31:31.170672 update_engine[1558]: I20260306 02:31:31.168807 1558 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 6 02:31:31.170672 update_engine[1558]: I20260306 02:31:31.170408 1558 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 6 02:31:31.191129 update_engine[1558]: E20260306 02:31:31.190629 1558 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 6 02:31:31.191129 update_engine[1558]: I20260306 02:31:31.190820 1558 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 6 02:31:32.446806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:31:32.486663 (kubelet)[2493]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 6 02:31:33.755748 kubelet[2493]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 6 02:31:33.755748 kubelet[2493]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 6 02:31:33.755748 kubelet[2493]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 6 02:31:33.755748 kubelet[2493]: I0306 02:31:33.754728 2493 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 6 02:31:36.027649 kubelet[2493]: I0306 02:31:36.026560 2493 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 6 02:31:36.034528 kubelet[2493]: I0306 02:31:36.028682 2493 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 6 02:31:36.034528 kubelet[2493]: I0306 02:31:36.033543 2493 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 6 02:31:36.539861 kubelet[2493]: E0306 02:31:36.536867 2493 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 6 02:31:36.551327 kubelet[2493]: I0306 02:31:36.550849 2493 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 6 02:31:36.658475 kubelet[2493]: I0306 02:31:36.656846 2493 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 6 02:31:36.746393 kubelet[2493]: I0306 02:31:36.743672 2493 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 6 02:31:36.750521 kubelet[2493]: I0306 02:31:36.750258 2493 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 6 02:31:36.770381 kubelet[2493]: I0306 02:31:36.750304 2493 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 6 02:31:36.770381 kubelet[2493]: I0306 02:31:36.769744 2493 topology_manager.go:138] "Creating topology manager with none policy"
Mar 6 02:31:36.770381 kubelet[2493]: I0306 02:31:36.769771 2493 container_manager_linux.go:303] "Creating device plugin manager"
Mar 6 02:31:36.773501 kubelet[2493]: I0306 02:31:36.772434 2493 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 02:31:36.859617 kubelet[2493]: I0306 02:31:36.858751 2493 kubelet.go:480] "Attempting to sync node with API server"
Mar 6 02:31:36.873701 kubelet[2493]: I0306 02:31:36.869363 2493 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 6 02:31:36.873701 kubelet[2493]: I0306 02:31:36.870491 2493 kubelet.go:386] "Adding apiserver pod source"
Mar 6 02:31:36.873701 kubelet[2493]: I0306 02:31:36.870701 2493 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 6 02:31:36.899718 kubelet[2493]: E0306 02:31:36.893730 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 6 02:31:36.899718 kubelet[2493]: E0306 02:31:36.897775 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 6 02:31:36.916447 kubelet[2493]: I0306 02:31:36.913158 2493 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 6 02:31:36.919373 kubelet[2493]: I0306 02:31:36.918667 2493 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 6 02:31:36.930414 kubelet[2493]: W0306 02:31:36.927848 2493 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 6 02:31:36.987473 kubelet[2493]: I0306 02:31:36.985400 2493 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 6 02:31:36.987473 kubelet[2493]: I0306 02:31:36.986386 2493 server.go:1289] "Started kubelet"
Mar 6 02:31:36.988859 kubelet[2493]: I0306 02:31:36.988693 2493 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 6 02:31:37.010823 kubelet[2493]: I0306 02:31:37.010273 2493 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 6 02:31:37.014464 kubelet[2493]: I0306 02:31:37.012647 2493 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 6 02:31:37.028840 kubelet[2493]: I0306 02:31:37.028805 2493 server.go:317] "Adding debug handlers to kubelet server"
Mar 6 02:31:37.045751 kubelet[2493]: E0306 02:31:37.042397 2493 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.53:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.53:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a1fbc505325f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 02:31:36.985650681 +0000 UTC m=+4.185190043,LastTimestamp:2026-03-06 02:31:36.985650681 +0000 UTC m=+4.185190043,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 6 02:31:37.054589 kubelet[2493]: I0306 02:31:37.054563 2493 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 6 02:31:37.077404 kubelet[2493]: E0306 02:31:37.077363 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 6 02:31:37.078603 kubelet[2493]: I0306 02:31:37.078582 2493 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 6 02:31:37.088600 kubelet[2493]: E0306 02:31:37.088564 2493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="200ms"
Mar 6 02:31:37.090608 kubelet[2493]: E0306 02:31:37.090580 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 6 02:31:37.094336 kubelet[2493]: I0306 02:31:37.091520 2493 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 6 02:31:37.099608 kubelet[2493]: I0306 02:31:37.099583 2493 reconciler.go:26] "Reconciler: start to sync state"
Mar 6 02:31:37.099747 kubelet[2493]: E0306 02:31:37.099729 2493 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 6 02:31:37.186422 kubelet[2493]: I0306 02:31:37.185635 2493 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 6 02:31:37.186618 kubelet[2493]: I0306 02:31:37.185733 2493 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 6 02:31:37.189264 kubelet[2493]: E0306 02:31:37.186516 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 6 02:31:37.205297 kubelet[2493]: I0306 02:31:37.200296 2493 factory.go:223] Registration of the containerd container factory successfully
Mar 6 02:31:37.205297 kubelet[2493]: I0306 02:31:37.200596 2493 factory.go:223] Registration of the systemd container factory successfully
Mar 6 02:31:37.297344 kubelet[2493]: E0306 02:31:37.294587 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 6 02:31:37.298590 kubelet[2493]: E0306 02:31:37.298547 2493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="400ms"
Mar 6 02:31:37.369276 kubelet[2493]: I0306 02:31:37.369243 2493 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 6 02:31:37.369597 kubelet[2493]: I0306 02:31:37.369579 2493 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 6 02:31:37.369690 kubelet[2493]: I0306 02:31:37.369678 2493 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 02:31:37.385338 kubelet[2493]: I0306 02:31:37.385319 2493 policy_none.go:49] "None policy: Start"
Mar 6 02:31:37.385854 kubelet[2493]: I0306 02:31:37.385681 2493
memory_manager.go:186] "Starting memorymanager" policy="None" Mar 6 02:31:37.385854 kubelet[2493]: I0306 02:31:37.385833 2493 state_mem.go:35] "Initializing new in-memory state store" Mar 6 02:31:37.395386 kubelet[2493]: E0306 02:31:37.394856 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:31:37.419825 kubelet[2493]: E0306 02:31:37.418630 2493 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.53:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.53:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a1fbc505325f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 02:31:36.985650681 +0000 UTC m=+4.185190043,LastTimestamp:2026-03-06 02:31:36.985650681 +0000 UTC m=+4.185190043,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 6 02:31:37.424572 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 6 02:31:37.486473 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 6 02:31:37.495774 kubelet[2493]: E0306 02:31:37.495740 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:31:37.506614 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 6 02:31:37.518638 kubelet[2493]: I0306 02:31:37.518293 2493 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Mar 6 02:31:37.526258 kubelet[2493]: E0306 02:31:37.525616 2493 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 02:31:37.529354 kubelet[2493]: I0306 02:31:37.529335 2493 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 02:31:37.529636 kubelet[2493]: I0306 02:31:37.529593 2493 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 02:31:37.532557 kubelet[2493]: I0306 02:31:37.532538 2493 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 02:31:37.542357 kubelet[2493]: E0306 02:31:37.541799 2493 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 02:31:37.542457 kubelet[2493]: E0306 02:31:37.542436 2493 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 6 02:31:37.542591 kubelet[2493]: I0306 02:31:37.542573 2493 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 6 02:31:37.543246 kubelet[2493]: I0306 02:31:37.543229 2493 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 6 02:31:37.543466 kubelet[2493]: I0306 02:31:37.543448 2493 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 6 02:31:37.543877 kubelet[2493]: I0306 02:31:37.543861 2493 kubelet.go:2436] "Starting kubelet main sync loop" Mar 6 02:31:37.544497 kubelet[2493]: E0306 02:31:37.544478 2493 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 6 02:31:37.548794 kubelet[2493]: E0306 02:31:37.548769 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 6 02:31:37.641552 kubelet[2493]: I0306 02:31:37.641365 2493 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 02:31:37.642530 kubelet[2493]: E0306 02:31:37.642497 2493 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Mar 6 02:31:37.706864 kubelet[2493]: I0306 02:31:37.706685 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:31:37.708519 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice. 
Mar 6 02:31:37.711747 kubelet[2493]: I0306 02:31:37.710684 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:31:37.711747 kubelet[2493]: I0306 02:31:37.710727 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:31:37.711747 kubelet[2493]: I0306 02:31:37.710749 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:31:37.713251 kubelet[2493]: I0306 02:31:37.710778 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 6 02:31:37.715333 kubelet[2493]: I0306 02:31:37.713485 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 6 02:31:37.715333 kubelet[2493]: E0306 02:31:37.708846 2493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="800ms" Mar 6 02:31:37.754482 kubelet[2493]: E0306 02:31:37.753856 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:31:37.798471 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice. Mar 6 02:31:37.810184 kubelet[2493]: E0306 02:31:37.809779 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:31:37.815802 kubelet[2493]: I0306 02:31:37.815775 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed75b82b1df7a82dd6b442dc51cc23f6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ed75b82b1df7a82dd6b442dc51cc23f6\") " pod="kube-system/kube-apiserver-localhost" Mar 6 02:31:37.816754 kubelet[2493]: I0306 02:31:37.816450 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed75b82b1df7a82dd6b442dc51cc23f6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ed75b82b1df7a82dd6b442dc51cc23f6\") " pod="kube-system/kube-apiserver-localhost" Mar 6 02:31:37.816754 kubelet[2493]: I0306 02:31:37.816740 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/ed75b82b1df7a82dd6b442dc51cc23f6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ed75b82b1df7a82dd6b442dc51cc23f6\") " pod="kube-system/kube-apiserver-localhost" Mar 6 02:31:37.816752 systemd[1]: Created slice kubepods-burstable-poded75b82b1df7a82dd6b442dc51cc23f6.slice - libcontainer container kubepods-burstable-poded75b82b1df7a82dd6b442dc51cc23f6.slice. Mar 6 02:31:37.837799 kubelet[2493]: E0306 02:31:37.836563 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:31:37.860877 kubelet[2493]: I0306 02:31:37.858682 2493 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 02:31:37.873730 kubelet[2493]: E0306 02:31:37.872258 2493 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Mar 6 02:31:37.881722 kubelet[2493]: E0306 02:31:37.880491 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 6 02:31:37.895398 kubelet[2493]: E0306 02:31:37.893479 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 02:31:38.061230 kubelet[2493]: E0306 02:31:38.059355 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:38.072287 containerd[1590]: time="2026-03-06T02:31:38.070460945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 6 02:31:38.124619 kubelet[2493]: E0306 02:31:38.114397 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:38.125337 containerd[1590]: time="2026-03-06T02:31:38.122383624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 6 02:31:38.139683 kubelet[2493]: E0306 02:31:38.139565 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:38.149291 containerd[1590]: time="2026-03-06T02:31:38.147616888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ed75b82b1df7a82dd6b442dc51cc23f6,Namespace:kube-system,Attempt:0,}" Mar 6 02:31:38.281804 kubelet[2493]: I0306 02:31:38.281432 2493 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 02:31:38.286534 kubelet[2493]: E0306 02:31:38.286499 2493 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Mar 6 02:31:38.314646 kubelet[2493]: E0306 02:31:38.314361 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 6 
02:31:38.321284 containerd[1590]: time="2026-03-06T02:31:38.320879374Z" level=info msg="connecting to shim baff8d281730be23a85c4940952e5c82b5c79aa23ba7f2a410e42b2c32cd8488" address="unix:///run/containerd/s/f2e015486ca8d9df7d2c387bd9683057cbd052dd9adaff83369b55597340c0c3" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:31:38.327874 containerd[1590]: time="2026-03-06T02:31:38.327640418Z" level=info msg="connecting to shim e50972ddbe618dbeaccfeac2c4c1415a7e83209e6f31d723b12d9bee026ce3e6" address="unix:///run/containerd/s/807e0aeeaba1b56b9510dc9d87fe292ce408266aeda5c9d1b58395b14246dace" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:31:38.339410 containerd[1590]: time="2026-03-06T02:31:38.338817030Z" level=info msg="connecting to shim e69cbf88eedf244ff44e822278a453cbad7a9027e67556a6f1b6ad1f845c2b30" address="unix:///run/containerd/s/9419390342ba2150be2832ff7c5377fbd0212ab6d37ac5b18668d5b5a33a08bd" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:31:38.520637 kubelet[2493]: E0306 02:31:38.520350 2493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="1.6s" Mar 6 02:31:38.602599 kubelet[2493]: E0306 02:31:38.591670 2493 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 6 02:31:38.675710 kubelet[2493]: E0306 02:31:38.675350 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 6 02:31:39.036300 systemd[1]: Started cri-containerd-e69cbf88eedf244ff44e822278a453cbad7a9027e67556a6f1b6ad1f845c2b30.scope - libcontainer container e69cbf88eedf244ff44e822278a453cbad7a9027e67556a6f1b6ad1f845c2b30. Mar 6 02:31:39.068561 systemd[1]: Started cri-containerd-baff8d281730be23a85c4940952e5c82b5c79aa23ba7f2a410e42b2c32cd8488.scope - libcontainer container baff8d281730be23a85c4940952e5c82b5c79aa23ba7f2a410e42b2c32cd8488. Mar 6 02:31:39.125500 systemd[1]: Started cri-containerd-e50972ddbe618dbeaccfeac2c4c1415a7e83209e6f31d723b12d9bee026ce3e6.scope - libcontainer container e50972ddbe618dbeaccfeac2c4c1415a7e83209e6f31d723b12d9bee026ce3e6. Mar 6 02:31:39.133392 kubelet[2493]: I0306 02:31:39.132710 2493 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 02:31:39.136464 kubelet[2493]: E0306 02:31:39.136429 2493 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Mar 6 02:31:39.512337 containerd[1590]: time="2026-03-06T02:31:39.511725204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"baff8d281730be23a85c4940952e5c82b5c79aa23ba7f2a410e42b2c32cd8488\"" Mar 6 02:31:39.544492 kubelet[2493]: E0306 02:31:39.543328 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:39.557597 containerd[1590]: time="2026-03-06T02:31:39.557564555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"e50972ddbe618dbeaccfeac2c4c1415a7e83209e6f31d723b12d9bee026ce3e6\"" Mar 6 02:31:39.565785 containerd[1590]: time="2026-03-06T02:31:39.565752248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ed75b82b1df7a82dd6b442dc51cc23f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e69cbf88eedf244ff44e822278a453cbad7a9027e67556a6f1b6ad1f845c2b30\"" Mar 6 02:31:39.576775 kubelet[2493]: E0306 02:31:39.576631 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:39.590255 kubelet[2493]: E0306 02:31:39.587683 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:39.591831 containerd[1590]: time="2026-03-06T02:31:39.591797751Z" level=info msg="CreateContainer within sandbox \"baff8d281730be23a85c4940952e5c82b5c79aa23ba7f2a410e42b2c32cd8488\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 6 02:31:39.623632 containerd[1590]: time="2026-03-06T02:31:39.622510063Z" level=info msg="CreateContainer within sandbox \"e50972ddbe618dbeaccfeac2c4c1415a7e83209e6f31d723b12d9bee026ce3e6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 6 02:31:39.638632 containerd[1590]: time="2026-03-06T02:31:39.636649843Z" level=info msg="CreateContainer within sandbox \"e69cbf88eedf244ff44e822278a453cbad7a9027e67556a6f1b6ad1f845c2b30\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 6 02:31:39.685435 kubelet[2493]: E0306 02:31:39.683382 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 6 02:31:39.683745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1925369314.mount: Deactivated successfully. Mar 6 02:31:39.704302 containerd[1590]: time="2026-03-06T02:31:39.702562073Z" level=info msg="Container 69611c1829f3c3915580d539bbc65699cdf33b65d6fc2e58fe4e1f61b5b97511: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:31:39.705829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573539962.mount: Deactivated successfully. Mar 6 02:31:39.734564 containerd[1590]: time="2026-03-06T02:31:39.733785263Z" level=info msg="Container de3ed7d6e1c536219011ddd7efe5f7b4f0376858dfbd42ae34f70bfae690016c: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:31:39.792862 containerd[1590]: time="2026-03-06T02:31:39.787532539Z" level=info msg="Container f3435396b6c5c11cc8092a225d85cfa330b73ff19ad9bd48d090534e788c83b4: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:31:39.811649 containerd[1590]: time="2026-03-06T02:31:39.810721078Z" level=info msg="CreateContainer within sandbox \"baff8d281730be23a85c4940952e5c82b5c79aa23ba7f2a410e42b2c32cd8488\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"69611c1829f3c3915580d539bbc65699cdf33b65d6fc2e58fe4e1f61b5b97511\"" Mar 6 02:31:39.818659 containerd[1590]: time="2026-03-06T02:31:39.818629052Z" level=info msg="StartContainer for \"69611c1829f3c3915580d539bbc65699cdf33b65d6fc2e58fe4e1f61b5b97511\"" Mar 6 02:31:39.833473 containerd[1590]: time="2026-03-06T02:31:39.833436278Z" level=info msg="connecting to shim 69611c1829f3c3915580d539bbc65699cdf33b65d6fc2e58fe4e1f61b5b97511" address="unix:///run/containerd/s/f2e015486ca8d9df7d2c387bd9683057cbd052dd9adaff83369b55597340c0c3" protocol=ttrpc version=3 Mar 6 02:31:39.859608 containerd[1590]: time="2026-03-06T02:31:39.858665259Z" level=info msg="CreateContainer within sandbox \"e50972ddbe618dbeaccfeac2c4c1415a7e83209e6f31d723b12d9bee026ce3e6\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"de3ed7d6e1c536219011ddd7efe5f7b4f0376858dfbd42ae34f70bfae690016c\"" Mar 6 02:31:39.863267 containerd[1590]: time="2026-03-06T02:31:39.862610152Z" level=info msg="StartContainer for \"de3ed7d6e1c536219011ddd7efe5f7b4f0376858dfbd42ae34f70bfae690016c\"" Mar 6 02:31:39.867206 containerd[1590]: time="2026-03-06T02:31:39.865670829Z" level=info msg="CreateContainer within sandbox \"e69cbf88eedf244ff44e822278a453cbad7a9027e67556a6f1b6ad1f845c2b30\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f3435396b6c5c11cc8092a225d85cfa330b73ff19ad9bd48d090534e788c83b4\"" Mar 6 02:31:39.870659 containerd[1590]: time="2026-03-06T02:31:39.868736461Z" level=info msg="connecting to shim de3ed7d6e1c536219011ddd7efe5f7b4f0376858dfbd42ae34f70bfae690016c" address="unix:///run/containerd/s/807e0aeeaba1b56b9510dc9d87fe292ce408266aeda5c9d1b58395b14246dace" protocol=ttrpc version=3 Mar 6 02:31:39.875349 containerd[1590]: time="2026-03-06T02:31:39.872611634Z" level=info msg="StartContainer for \"f3435396b6c5c11cc8092a225d85cfa330b73ff19ad9bd48d090534e788c83b4\"" Mar 6 02:31:39.879341 containerd[1590]: time="2026-03-06T02:31:39.877182668Z" level=info msg="connecting to shim f3435396b6c5c11cc8092a225d85cfa330b73ff19ad9bd48d090534e788c83b4" address="unix:///run/containerd/s/9419390342ba2150be2832ff7c5377fbd0212ab6d37ac5b18668d5b5a33a08bd" protocol=ttrpc version=3 Mar 6 02:31:40.127693 kubelet[2493]: E0306 02:31:40.124678 2493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="3.2s" Mar 6 02:31:40.634814 kubelet[2493]: E0306 02:31:40.633416 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 02:31:40.640651 kubelet[2493]: E0306 02:31:40.636532 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 6 02:31:40.760607 kubelet[2493]: I0306 02:31:40.760319 2493 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 02:31:40.766652 kubelet[2493]: E0306 02:31:40.766461 2493 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Mar 6 02:31:41.185745 update_engine[1558]: I20260306 02:31:41.168679 1558 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 6 02:31:41.247649 update_engine[1558]: I20260306 02:31:41.186723 1558 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 6 02:31:41.247649 update_engine[1558]: I20260306 02:31:41.237720 1558 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 6 02:31:41.287249 update_engine[1558]: E20260306 02:31:41.283628 1558 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 6 02:31:41.302250 update_engine[1558]: I20260306 02:31:41.288489 1558 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 6 02:31:41.423885 systemd[1]: Started cri-containerd-69611c1829f3c3915580d539bbc65699cdf33b65d6fc2e58fe4e1f61b5b97511.scope - libcontainer container 69611c1829f3c3915580d539bbc65699cdf33b65d6fc2e58fe4e1f61b5b97511. 
Mar 6 02:31:41.486557 systemd[1]: Started cri-containerd-de3ed7d6e1c536219011ddd7efe5f7b4f0376858dfbd42ae34f70bfae690016c.scope - libcontainer container de3ed7d6e1c536219011ddd7efe5f7b4f0376858dfbd42ae34f70bfae690016c. Mar 6 02:31:41.623689 kubelet[2493]: E0306 02:31:41.621327 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 6 02:31:41.876435 systemd[1]: Started cri-containerd-f3435396b6c5c11cc8092a225d85cfa330b73ff19ad9bd48d090534e788c83b4.scope - libcontainer container f3435396b6c5c11cc8092a225d85cfa330b73ff19ad9bd48d090534e788c83b4. Mar 6 02:31:41.902730 containerd[1590]: time="2026-03-06T02:31:41.899763497Z" level=info msg="StartContainer for \"de3ed7d6e1c536219011ddd7efe5f7b4f0376858dfbd42ae34f70bfae690016c\" returns successfully" Mar 6 02:31:41.940835 containerd[1590]: time="2026-03-06T02:31:41.940482783Z" level=info msg="StartContainer for \"69611c1829f3c3915580d539bbc65699cdf33b65d6fc2e58fe4e1f61b5b97511\" returns successfully" Mar 6 02:31:42.188350 kubelet[2493]: E0306 02:31:42.183873 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:31:42.200283 kubelet[2493]: E0306 02:31:42.199401 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:42.220870 kubelet[2493]: E0306 02:31:42.218728 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:31:42.222300 kubelet[2493]: E0306 02:31:42.221616 2493 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:42.517369 containerd[1590]: time="2026-03-06T02:31:42.516877596Z" level=info msg="StartContainer for \"f3435396b6c5c11cc8092a225d85cfa330b73ff19ad9bd48d090534e788c83b4\" returns successfully" Mar 6 02:31:42.729489 kubelet[2493]: E0306 02:31:42.728851 2493 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 6 02:31:43.531516 kubelet[2493]: E0306 02:31:43.530848 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:31:43.545852 kubelet[2493]: E0306 02:31:43.533459 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:31:43.545852 kubelet[2493]: E0306 02:31:43.542509 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:31:43.545852 kubelet[2493]: E0306 02:31:43.543837 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:43.556508 kubelet[2493]: E0306 02:31:43.554666 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:43.563268 kubelet[2493]: E0306 02:31:43.560840 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:44.047698 kubelet[2493]: I0306 02:31:44.042854 2493 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 02:31:44.618880 kubelet[2493]: E0306 02:31:44.618596 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:31:44.674396 kubelet[2493]: E0306 02:31:44.672700 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:31:44.674396 kubelet[2493]: E0306 02:31:44.670782 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:44.678622 kubelet[2493]: E0306 02:31:44.676669 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:44.687575 kubelet[2493]: E0306 02:31:44.684537 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:31:44.725822 kubelet[2493]: E0306 02:31:44.724717 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:45.647700 kubelet[2493]: E0306 02:31:45.647584 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:31:45.651277 kubelet[2493]: E0306 02:31:45.651257 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:46.399644 kubelet[2493]: E0306 02:31:46.396418 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:31:46.399644 kubelet[2493]: E0306 02:31:46.418499 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:46.782422 kubelet[2493]: E0306 02:31:46.780899 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:31:46.782422 kubelet[2493]: E0306 02:31:46.782235 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:47.577408 kubelet[2493]: E0306 02:31:47.576407 2493 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 6 02:31:51.212330 update_engine[1558]: I20260306 02:31:51.207851 1558 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 6 02:31:51.212330 update_engine[1558]: I20260306 02:31:51.210568 1558 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 6 02:31:51.220625 update_engine[1558]: I20260306 02:31:51.218539 1558 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 6 02:31:51.236747 update_engine[1558]: E20260306 02:31:51.236474 1558 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 6 02:31:51.236747 update_engine[1558]: I20260306 02:31:51.236681 1558 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 6 02:31:51.236747 update_engine[1558]: I20260306 02:31:51.236703 1558 omaha_request_action.cc:617] Omaha request response: Mar 6 02:31:51.241645 update_engine[1558]: E20260306 02:31:51.238557 1558 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 6 02:31:51.241645 update_engine[1558]: I20260306 02:31:51.240602 1558 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 6 02:31:51.241645 update_engine[1558]: I20260306 02:31:51.240621 1558 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 6 02:31:51.241645 update_engine[1558]: I20260306 02:31:51.240633 1558 update_attempter.cc:306] Processing Done. Mar 6 02:31:51.241645 update_engine[1558]: E20260306 02:31:51.241199 1558 update_attempter.cc:619] Update failed. Mar 6 02:31:51.241645 update_engine[1558]: I20260306 02:31:51.241220 1558 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 6 02:31:51.241645 update_engine[1558]: I20260306 02:31:51.241230 1558 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 6 02:31:51.241645 update_engine[1558]: I20260306 02:31:51.241242 1558 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Mar 6 02:31:51.246451 update_engine[1558]: I20260306 02:31:51.242439 1558 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 6 02:31:51.246451 update_engine[1558]: I20260306 02:31:51.243460 1558 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 6 02:31:51.246451 update_engine[1558]: I20260306 02:31:51.243479 1558 omaha_request_action.cc:272] Request: Mar 6 02:31:51.246451 update_engine[1558]: Mar 6 02:31:51.246451 update_engine[1558]: Mar 6 02:31:51.246451 update_engine[1558]: Mar 6 02:31:51.246451 update_engine[1558]: Mar 6 02:31:51.246451 update_engine[1558]: Mar 6 02:31:51.246451 update_engine[1558]: Mar 6 02:31:51.246451 update_engine[1558]: I20260306 02:31:51.243491 1558 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 6 02:31:51.246451 update_engine[1558]: I20260306 02:31:51.243525 1558 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 6 02:31:51.246451 update_engine[1558]: I20260306 02:31:51.246397 1558 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 6 02:31:51.246809 locksmithd[1594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 6 02:31:51.270498 update_engine[1558]: E20260306 02:31:51.270438 1558 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 6 02:31:51.270695 update_engine[1558]: I20260306 02:31:51.270668 1558 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 6 02:31:51.270799 update_engine[1558]: I20260306 02:31:51.270776 1558 omaha_request_action.cc:617] Omaha request response: Mar 6 02:31:51.270871 update_engine[1558]: I20260306 02:31:51.270851 1558 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 6 02:31:51.271258 update_engine[1558]: I20260306 02:31:51.271232 1558 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 6 02:31:51.276438 update_engine[1558]: I20260306 02:31:51.271332 1558 update_attempter.cc:306] Processing Done. Mar 6 02:31:51.276438 update_engine[1558]: I20260306 02:31:51.271358 1558 update_attempter.cc:310] Error event sent. 
Mar 6 02:31:51.276438 update_engine[1558]: I20260306 02:31:51.271373 1558 update_check_scheduler.cc:74] Next update check in 48m59s Mar 6 02:31:51.278727 locksmithd[1594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 6 02:31:52.777570 kubelet[2493]: E0306 02:31:52.776902 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:31:52.795848 kubelet[2493]: E0306 02:31:52.787256 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:53.353284 kubelet[2493]: E0306 02:31:53.351707 2493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 6 02:31:53.988774 kubelet[2493]: E0306 02:31:53.987817 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 6 02:31:54.176392 kubelet[2493]: E0306 02:31:54.174710 2493 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Mar 6 02:31:56.242397 kubelet[2493]: E0306 02:31:56.241364 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 02:31:56.256761 kubelet[2493]: E0306 02:31:56.256700 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 6 02:31:56.761550 kubelet[2493]: E0306 02:31:56.761483 2493 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 6 02:31:57.454521 kubelet[2493]: E0306 02:31:57.454379 2493 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.53:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.189a1fbc505325f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 02:31:36.985650681 +0000 UTC m=+4.185190043,LastTimestamp:2026-03-06 02:31:36.985650681 +0000 UTC m=+4.185190043,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 6 02:31:57.578277 kubelet[2493]: E0306 02:31:57.578218 2493 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 6 02:31:57.935806 kubelet[2493]: E0306 02:31:57.934787 2493 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info 
from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:31:57.935806 kubelet[2493]: E0306 02:31:57.935258 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:31:58.442455 kubelet[2493]: E0306 02:31:58.440769 2493 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 6 02:31:59.044903 kubelet[2493]: E0306 02:31:59.044862 2493 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 6 02:31:59.786909 kubelet[2493]: E0306 02:31:59.786827 2493 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 6 02:31:59.846361 kubelet[2493]: E0306 02:31:59.841586 2493 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 6 02:32:00.588451 kubelet[2493]: I0306 02:32:00.587329 2493 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 02:32:00.781871 kubelet[2493]: I0306 02:32:00.780301 2493 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 02:32:00.781871 kubelet[2493]: E0306 02:32:00.780672 2493 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 6 02:32:01.264522 kubelet[2493]: E0306 02:32:01.261556 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:01.368251 kubelet[2493]: E0306 02:32:01.364539 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not 
found" Mar 6 02:32:01.542169 kubelet[2493]: E0306 02:32:01.529642 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:01.647573 kubelet[2493]: E0306 02:32:01.635482 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:01.877396 kubelet[2493]: E0306 02:32:01.836191 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:01.940709 kubelet[2493]: E0306 02:32:01.937842 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:02.064462 kubelet[2493]: E0306 02:32:02.058863 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:02.199659 kubelet[2493]: E0306 02:32:02.184743 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:02.316445 kubelet[2493]: E0306 02:32:02.310189 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:02.425628 kubelet[2493]: E0306 02:32:02.412421 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:02.553603 kubelet[2493]: E0306 02:32:02.551240 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:02.667270 kubelet[2493]: E0306 02:32:02.660549 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:02.778397 kubelet[2493]: E0306 02:32:02.763383 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:02.922370 kubelet[2493]: E0306 02:32:02.891478 2493 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:03.031370 kubelet[2493]: E0306 02:32:03.030860 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:03.142813 kubelet[2493]: E0306 02:32:03.135606 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:03.254478 kubelet[2493]: E0306 02:32:03.246789 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:03.356259 kubelet[2493]: E0306 02:32:03.354755 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:03.460529 kubelet[2493]: E0306 02:32:03.457499 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:03.638552 kubelet[2493]: E0306 02:32:03.590257 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:03.714223 kubelet[2493]: E0306 02:32:03.712893 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:03.817629 kubelet[2493]: E0306 02:32:03.814533 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:03.936910 kubelet[2493]: E0306 02:32:03.926466 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:04.032484 kubelet[2493]: E0306 02:32:04.031141 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:04.219794 kubelet[2493]: E0306 02:32:04.189859 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:04.427457 
kubelet[2493]: E0306 02:32:04.419583 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:04.845921 kubelet[2493]: E0306 02:32:04.835708 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:04.995327 kubelet[2493]: E0306 02:32:04.994624 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:05.192235 kubelet[2493]: E0306 02:32:05.175409 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:05.335551 kubelet[2493]: E0306 02:32:05.334464 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:32:05.431468 kubelet[2493]: I0306 02:32:05.426864 2493 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 02:32:06.345469 kubelet[2493]: I0306 02:32:06.334435 2493 apiserver.go:52] "Watching apiserver" Mar 6 02:32:06.481858 kubelet[2493]: I0306 02:32:06.480865 2493 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 02:32:06.736421 kubelet[2493]: I0306 02:32:06.725857 2493 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 02:32:06.736421 kubelet[2493]: E0306 02:32:06.734641 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:06.803743 kubelet[2493]: I0306 02:32:06.792418 2493 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 02:32:06.834685 kubelet[2493]: E0306 02:32:06.832770 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:07.029285 kubelet[2493]: E0306 02:32:06.989780 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:08.684596 kubelet[2493]: I0306 02:32:08.672934 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.658834875 podStartE2EDuration="2.658834875s" podCreationTimestamp="2026-03-06 02:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:32:08.586170959 +0000 UTC m=+35.785710141" watchObservedRunningTime="2026-03-06 02:32:08.658834875 +0000 UTC m=+35.858374058" Mar 6 02:32:11.758898 kubelet[2493]: I0306 02:32:11.758649 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.7585410360000004 podStartE2EDuration="5.758541036s" podCreationTimestamp="2026-03-06 02:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:32:09.558268883 +0000 UTC m=+36.757808076" watchObservedRunningTime="2026-03-06 02:32:11.758541036 +0000 UTC m=+38.958080199" Mar 6 02:32:15.733285 kubelet[2493]: E0306 02:32:15.729539 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:15.840916 kubelet[2493]: I0306 02:32:15.840714 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=9.840691437 podStartE2EDuration="9.840691437s" podCreationTimestamp="2026-03-06 02:32:06 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:32:11.779185248 +0000 UTC m=+38.978724430" watchObservedRunningTime="2026-03-06 02:32:15.840691437 +0000 UTC m=+43.040230599" Mar 6 02:32:16.494496 systemd[1]: Reload requested from client PID 2790 ('systemctl') (unit session-9.scope)... Mar 6 02:32:16.494520 systemd[1]: Reloading... Mar 6 02:32:16.734300 kubelet[2493]: E0306 02:32:16.716367 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:16.934677 zram_generator::config[2833]: No configuration found. Mar 6 02:32:17.890955 systemd[1]: Reloading finished in 1393 ms. Mar 6 02:32:18.027206 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:32:18.079948 systemd[1]: kubelet.service: Deactivated successfully. Mar 6 02:32:18.080823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:32:18.082959 systemd[1]: kubelet.service: Consumed 13.379s CPU time, 136.6M memory peak. Mar 6 02:32:18.088636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:32:19.044723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:32:19.066737 (kubelet)[2878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 02:32:19.460348 kubelet[2878]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 02:32:19.460348 kubelet[2878]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Mar 6 02:32:19.460348 kubelet[2878]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 02:32:19.460348 kubelet[2878]: I0306 02:32:19.457558 2878 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 02:32:19.549399 kubelet[2878]: I0306 02:32:19.548514 2878 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 6 02:32:19.549399 kubelet[2878]: I0306 02:32:19.548563 2878 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 02:32:19.554216 kubelet[2878]: I0306 02:32:19.553307 2878 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 02:32:19.568831 kubelet[2878]: I0306 02:32:19.568564 2878 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 6 02:32:19.583863 kubelet[2878]: I0306 02:32:19.583635 2878 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 02:32:19.643453 kubelet[2878]: I0306 02:32:19.643417 2878 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 6 02:32:19.765882 kubelet[2878]: I0306 02:32:19.765838 2878 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 6 02:32:19.770507 kubelet[2878]: I0306 02:32:19.769639 2878 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 02:32:19.776749 kubelet[2878]: I0306 02:32:19.773706 2878 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 02:32:19.778184 kubelet[2878]: I0306 02:32:19.777315 2878 topology_manager.go:138] "Creating topology manager with none policy" Mar 6 02:32:19.778184 
kubelet[2878]: I0306 02:32:19.777350 2878 container_manager_linux.go:303] "Creating device plugin manager" Mar 6 02:32:19.778184 kubelet[2878]: I0306 02:32:19.777429 2878 state_mem.go:36] "Initialized new in-memory state store" Mar 6 02:32:19.778184 kubelet[2878]: I0306 02:32:19.777878 2878 kubelet.go:480] "Attempting to sync node with API server" Mar 6 02:32:19.778184 kubelet[2878]: I0306 02:32:19.777897 2878 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 02:32:19.778184 kubelet[2878]: I0306 02:32:19.777935 2878 kubelet.go:386] "Adding apiserver pod source" Mar 6 02:32:19.778184 kubelet[2878]: I0306 02:32:19.777956 2878 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 02:32:19.807861 kubelet[2878]: I0306 02:32:19.806499 2878 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 6 02:32:19.812402 kubelet[2878]: I0306 02:32:19.812307 2878 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 02:32:19.845323 kubelet[2878]: I0306 02:32:19.844669 2878 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 6 02:32:19.845323 kubelet[2878]: I0306 02:32:19.844719 2878 server.go:1289] "Started kubelet" Mar 6 02:32:19.846694 kubelet[2878]: I0306 02:32:19.846471 2878 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 02:32:19.851186 kubelet[2878]: I0306 02:32:19.849224 2878 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 02:32:19.853442 sudo[2895]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 6 02:32:19.854811 sudo[2895]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 6 02:32:19.856744 kubelet[2878]: I0306 02:32:19.856527 2878 server.go:255] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 02:32:19.860670 kubelet[2878]: I0306 02:32:19.860604 2878 server.go:317] "Adding debug handlers to kubelet server" Mar 6 02:32:19.884286 kubelet[2878]: I0306 02:32:19.883483 2878 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 6 02:32:19.886263 kubelet[2878]: I0306 02:32:19.884741 2878 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 02:32:19.895268 kubelet[2878]: E0306 02:32:19.893699 2878 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 02:32:19.895268 kubelet[2878]: I0306 02:32:19.894495 2878 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 6 02:32:19.899650 kubelet[2878]: I0306 02:32:19.899266 2878 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 6 02:32:19.899650 kubelet[2878]: I0306 02:32:19.899575 2878 reconciler.go:26] "Reconciler: start to sync state" Mar 6 02:32:19.906295 kubelet[2878]: I0306 02:32:19.904791 2878 factory.go:223] Registration of the systemd container factory successfully Mar 6 02:32:19.906295 kubelet[2878]: I0306 02:32:19.905415 2878 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 02:32:19.916556 kubelet[2878]: I0306 02:32:19.915930 2878 factory.go:223] Registration of the containerd container factory successfully Mar 6 02:32:20.080762 kubelet[2878]: I0306 02:32:20.079905 2878 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 6 02:32:20.126326 kubelet[2878]: I0306 02:32:20.125885 2878 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 6 02:32:20.126326 kubelet[2878]: I0306 02:32:20.126277 2878 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 6 02:32:20.126326 kubelet[2878]: I0306 02:32:20.126306 2878 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 6 02:32:20.126326 kubelet[2878]: I0306 02:32:20.126317 2878 kubelet.go:2436] "Starting kubelet main sync loop" Mar 6 02:32:20.126836 kubelet[2878]: E0306 02:32:20.126372 2878 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 02:32:20.181586 kubelet[2878]: I0306 02:32:20.180757 2878 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 02:32:20.181586 kubelet[2878]: I0306 02:32:20.180781 2878 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 02:32:20.181586 kubelet[2878]: I0306 02:32:20.180804 2878 state_mem.go:36] "Initialized new in-memory state store" Mar 6 02:32:20.181586 kubelet[2878]: I0306 02:32:20.180970 2878 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 6 02:32:20.181586 kubelet[2878]: I0306 02:32:20.181205 2878 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 6 02:32:20.181586 kubelet[2878]: I0306 02:32:20.181234 2878 policy_none.go:49] "None policy: Start" Mar 6 02:32:20.181586 kubelet[2878]: I0306 02:32:20.181251 2878 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 6 02:32:20.181586 kubelet[2878]: I0306 02:32:20.181266 2878 state_mem.go:35] "Initializing new in-memory state store" Mar 6 02:32:20.181586 kubelet[2878]: I0306 02:32:20.181384 2878 state_mem.go:75] "Updated machine memory state" Mar 6 02:32:20.213270 kubelet[2878]: E0306 02:32:20.212923 2878 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 02:32:20.213399 kubelet[2878]: I0306 02:32:20.213358 
2878 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 02:32:20.213399 kubelet[2878]: I0306 02:32:20.213373 2878 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 02:32:20.215913 kubelet[2878]: I0306 02:32:20.214385 2878 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 02:32:20.223257 kubelet[2878]: E0306 02:32:20.221861 2878 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 02:32:20.228316 kubelet[2878]: I0306 02:32:20.228287 2878 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 02:32:20.230889 kubelet[2878]: I0306 02:32:20.230671 2878 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 02:32:20.234718 kubelet[2878]: I0306 02:32:20.234579 2878 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 02:32:20.269835 kubelet[2878]: E0306 02:32:20.269643 2878 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 6 02:32:20.274480 kubelet[2878]: E0306 02:32:20.274321 2878 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 6 02:32:20.274595 kubelet[2878]: E0306 02:32:20.274491 2878 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 6 02:32:20.392559 kubelet[2878]: I0306 02:32:20.391684 2878 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 02:32:20.410180 kubelet[2878]: I0306 02:32:20.408923 2878 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed75b82b1df7a82dd6b442dc51cc23f6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ed75b82b1df7a82dd6b442dc51cc23f6\") " pod="kube-system/kube-apiserver-localhost" Mar 6 02:32:20.410508 kubelet[2878]: I0306 02:32:20.408968 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed75b82b1df7a82dd6b442dc51cc23f6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ed75b82b1df7a82dd6b442dc51cc23f6\") " pod="kube-system/kube-apiserver-localhost" Mar 6 02:32:20.412315 kubelet[2878]: I0306 02:32:20.411220 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:32:20.412315 kubelet[2878]: I0306 02:32:20.411265 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:32:20.412315 kubelet[2878]: I0306 02:32:20.411287 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:32:20.412315 kubelet[2878]: I0306 02:32:20.411305 2878 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:32:20.412315 kubelet[2878]: I0306 02:32:20.411320 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:32:20.412538 kubelet[2878]: I0306 02:32:20.411334 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 6 02:32:20.412538 kubelet[2878]: I0306 02:32:20.411346 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed75b82b1df7a82dd6b442dc51cc23f6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ed75b82b1df7a82dd6b442dc51cc23f6\") " pod="kube-system/kube-apiserver-localhost" Mar 6 02:32:20.448197 kubelet[2878]: I0306 02:32:20.447822 2878 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 6 02:32:20.448197 kubelet[2878]: I0306 02:32:20.447938 2878 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 02:32:20.573538 kubelet[2878]: E0306 02:32:20.573500 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Mar 6 02:32:20.577462 kubelet[2878]: E0306 02:32:20.575451 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:20.577462 kubelet[2878]: E0306 02:32:20.575730 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:20.754710 kubelet[2878]: I0306 02:32:20.754508 2878 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 6 02:32:20.756421 containerd[1590]: time="2026-03-06T02:32:20.756367455Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 6 02:32:20.760381 kubelet[2878]: I0306 02:32:20.759927 2878 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 6 02:32:20.783663 kubelet[2878]: I0306 02:32:20.783477 2878 apiserver.go:52] "Watching apiserver" Mar 6 02:32:20.802841 kubelet[2878]: I0306 02:32:20.802640 2878 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 02:32:20.930933 kubelet[2878]: I0306 02:32:20.929844 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/954e0f1e-e371-450e-b93d-ba3087aa057e-kube-proxy\") pod \"kube-proxy-jjg4g\" (UID: \"954e0f1e-e371-450e-b93d-ba3087aa057e\") " pod="kube-system/kube-proxy-jjg4g" Mar 6 02:32:20.930933 kubelet[2878]: I0306 02:32:20.929896 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/954e0f1e-e371-450e-b93d-ba3087aa057e-xtables-lock\") pod \"kube-proxy-jjg4g\" (UID: \"954e0f1e-e371-450e-b93d-ba3087aa057e\") " 
pod="kube-system/kube-proxy-jjg4g" Mar 6 02:32:20.930933 kubelet[2878]: I0306 02:32:20.929918 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/954e0f1e-e371-450e-b93d-ba3087aa057e-lib-modules\") pod \"kube-proxy-jjg4g\" (UID: \"954e0f1e-e371-450e-b93d-ba3087aa057e\") " pod="kube-system/kube-proxy-jjg4g" Mar 6 02:32:20.930933 kubelet[2878]: I0306 02:32:20.929946 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thcdx\" (UniqueName: \"kubernetes.io/projected/954e0f1e-e371-450e-b93d-ba3087aa057e-kube-api-access-thcdx\") pod \"kube-proxy-jjg4g\" (UID: \"954e0f1e-e371-450e-b93d-ba3087aa057e\") " pod="kube-system/kube-proxy-jjg4g" Mar 6 02:32:20.933512 systemd[1]: Created slice kubepods-besteffort-pod954e0f1e_e371_450e_b93d_ba3087aa057e.slice - libcontainer container kubepods-besteffort-pod954e0f1e_e371_450e_b93d_ba3087aa057e.slice. Mar 6 02:32:21.078377 kubelet[2878]: E0306 02:32:21.077542 2878 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 6 02:32:21.078377 kubelet[2878]: E0306 02:32:21.077746 2878 projected.go:194] Error preparing data for projected volume kube-api-access-thcdx for pod kube-system/kube-proxy-jjg4g: configmap "kube-root-ca.crt" not found Mar 6 02:32:21.078377 kubelet[2878]: E0306 02:32:21.078183 2878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/954e0f1e-e371-450e-b93d-ba3087aa057e-kube-api-access-thcdx podName:954e0f1e-e371-450e-b93d-ba3087aa057e nodeName:}" failed. No retries permitted until 2026-03-06 02:32:21.577960584 +0000 UTC m=+2.414378639 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-thcdx" (UniqueName: "kubernetes.io/projected/954e0f1e-e371-450e-b93d-ba3087aa057e-kube-api-access-thcdx") pod "kube-proxy-jjg4g" (UID: "954e0f1e-e371-450e-b93d-ba3087aa057e") : configmap "kube-root-ca.crt" not found Mar 6 02:32:21.158624 sudo[2895]: pam_unix(sudo:session): session closed for user root Mar 6 02:32:21.173714 kubelet[2878]: E0306 02:32:21.173577 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:21.174841 kubelet[2878]: E0306 02:32:21.174697 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:21.184368 kubelet[2878]: E0306 02:32:21.184225 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:21.646851 kubelet[2878]: E0306 02:32:21.646223 2878 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 6 02:32:21.646851 kubelet[2878]: E0306 02:32:21.646266 2878 projected.go:194] Error preparing data for projected volume kube-api-access-thcdx for pod kube-system/kube-proxy-jjg4g: configmap "kube-root-ca.crt" not found Mar 6 02:32:21.646851 kubelet[2878]: E0306 02:32:21.646328 2878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/954e0f1e-e371-450e-b93d-ba3087aa057e-kube-api-access-thcdx podName:954e0f1e-e371-450e-b93d-ba3087aa057e nodeName:}" failed. No retries permitted until 2026-03-06 02:32:22.646307745 +0000 UTC m=+3.482725790 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-thcdx" (UniqueName: "kubernetes.io/projected/954e0f1e-e371-450e-b93d-ba3087aa057e-kube-api-access-thcdx") pod "kube-proxy-jjg4g" (UID: "954e0f1e-e371-450e-b93d-ba3087aa057e") : configmap "kube-root-ca.crt" not found Mar 6 02:32:22.177306 kubelet[2878]: E0306 02:32:22.176666 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:22.180184 kubelet[2878]: E0306 02:32:22.179875 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:22.181826 kubelet[2878]: E0306 02:32:22.181364 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:23.093581 kubelet[2878]: E0306 02:32:23.092473 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:23.269805 containerd[1590]: time="2026-03-06T02:32:23.268679130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jjg4g,Uid:954e0f1e-e371-450e-b93d-ba3087aa057e,Namespace:kube-system,Attempt:0,}" Mar 6 02:32:23.329347 kubelet[2878]: E0306 02:32:23.328307 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:23.686269 kubelet[2878]: I0306 02:32:23.683229 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-bpf-maps\") pod \"cilium-sg7m4\" (UID: 
\"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") " pod="kube-system/cilium-sg7m4" Mar 6 02:32:23.686269 kubelet[2878]: I0306 02:32:23.683483 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-cilium-config-path\") pod \"cilium-sg7m4\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") " pod="kube-system/cilium-sg7m4" Mar 6 02:32:23.686269 kubelet[2878]: I0306 02:32:23.683515 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-cilium-cgroup\") pod \"cilium-sg7m4\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") " pod="kube-system/cilium-sg7m4" Mar 6 02:32:23.686269 kubelet[2878]: I0306 02:32:23.683632 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-xtables-lock\") pod \"cilium-sg7m4\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") " pod="kube-system/cilium-sg7m4" Mar 6 02:32:23.686269 kubelet[2878]: I0306 02:32:23.683654 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-lib-modules\") pod \"cilium-sg7m4\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") " pod="kube-system/cilium-sg7m4" Mar 6 02:32:23.686269 kubelet[2878]: I0306 02:32:23.683675 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-clustermesh-secrets\") pod \"cilium-sg7m4\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") " pod="kube-system/cilium-sg7m4" Mar 6 02:32:23.688324 kubelet[2878]: I0306 
02:32:23.683695 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-hostproc\") pod \"cilium-sg7m4\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") " pod="kube-system/cilium-sg7m4" Mar 6 02:32:23.688324 kubelet[2878]: I0306 02:32:23.683719 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-host-proc-sys-net\") pod \"cilium-sg7m4\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") " pod="kube-system/cilium-sg7m4" Mar 6 02:32:23.688324 kubelet[2878]: I0306 02:32:23.683741 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6tbn\" (UniqueName: \"kubernetes.io/projected/946d5c4c-9c33-47b1-ba8f-6e5cce6555e3-kube-api-access-r6tbn\") pod \"cilium-operator-6c4d7847fc-6f4px\" (UID: \"946d5c4c-9c33-47b1-ba8f-6e5cce6555e3\") " pod="kube-system/cilium-operator-6c4d7847fc-6f4px" Mar 6 02:32:23.688324 kubelet[2878]: I0306 02:32:23.683764 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/946d5c4c-9c33-47b1-ba8f-6e5cce6555e3-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6f4px\" (UID: \"946d5c4c-9c33-47b1-ba8f-6e5cce6555e3\") " pod="kube-system/cilium-operator-6c4d7847fc-6f4px" Mar 6 02:32:23.688324 kubelet[2878]: I0306 02:32:23.683784 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rchl2\" (UniqueName: \"kubernetes.io/projected/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-kube-api-access-rchl2\") pod \"cilium-sg7m4\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") " pod="kube-system/cilium-sg7m4" Mar 6 02:32:23.688514 kubelet[2878]: I0306 02:32:23.683818 
2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-cilium-run\") pod \"cilium-sg7m4\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") " pod="kube-system/cilium-sg7m4" Mar 6 02:32:23.688514 kubelet[2878]: I0306 02:32:23.683845 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-etc-cni-netd\") pod \"cilium-sg7m4\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") " pod="kube-system/cilium-sg7m4" Mar 6 02:32:23.688514 kubelet[2878]: I0306 02:32:23.683873 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-host-proc-sys-kernel\") pod \"cilium-sg7m4\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") " pod="kube-system/cilium-sg7m4" Mar 6 02:32:23.688514 kubelet[2878]: I0306 02:32:23.683897 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-hubble-tls\") pod \"cilium-sg7m4\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") " pod="kube-system/cilium-sg7m4" Mar 6 02:32:23.688514 kubelet[2878]: I0306 02:32:23.683915 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-cni-path\") pod \"cilium-sg7m4\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") " pod="kube-system/cilium-sg7m4" Mar 6 02:32:23.723564 systemd[1]: Created slice kubepods-burstable-podc00e6568_9ff9_41e6_94ab_9c2c36d856bc.slice - libcontainer container kubepods-burstable-podc00e6568_9ff9_41e6_94ab_9c2c36d856bc.slice. 
Mar 6 02:32:23.753412 systemd[1]: Created slice kubepods-besteffort-pod946d5c4c_9c33_47b1_ba8f_6e5cce6555e3.slice - libcontainer container kubepods-besteffort-pod946d5c4c_9c33_47b1_ba8f_6e5cce6555e3.slice. Mar 6 02:32:23.992219 containerd[1590]: time="2026-03-06T02:32:23.988960877Z" level=info msg="connecting to shim 1672df90b62f128e47db03c4fda6af6afbce6de11366e586ed1e7bc9a861d2d9" address="unix:///run/containerd/s/be07a023c52649a37688cd4220dc4269b898cd4100f9d5516bee1426b9edca0a" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:32:24.640255 kubelet[2878]: E0306 02:32:24.639427 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:24.653925 containerd[1590]: time="2026-03-06T02:32:24.652972451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sg7m4,Uid:c00e6568-9ff9-41e6-94ab-9c2c36d856bc,Namespace:kube-system,Attempt:0,}" Mar 6 02:32:24.761629 systemd[1]: Started cri-containerd-1672df90b62f128e47db03c4fda6af6afbce6de11366e586ed1e7bc9a861d2d9.scope - libcontainer container 1672df90b62f128e47db03c4fda6af6afbce6de11366e586ed1e7bc9a861d2d9. 
Mar 6 02:32:24.982386 kubelet[2878]: E0306 02:32:24.981764 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:25.051959 containerd[1590]: time="2026-03-06T02:32:25.050559622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6f4px,Uid:946d5c4c-9c33-47b1-ba8f-6e5cce6555e3,Namespace:kube-system,Attempt:0,}" Mar 6 02:32:25.318455 containerd[1590]: time="2026-03-06T02:32:25.318401402Z" level=info msg="connecting to shim bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1" address="unix:///run/containerd/s/f867bbc025dccbe912655a3882b5292f3e8760accfe05efdc18bd0275a909098" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:32:25.442638 containerd[1590]: time="2026-03-06T02:32:25.441851720Z" level=info msg="connecting to shim 082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88" address="unix:///run/containerd/s/6477e7b9a365f6737da6cdeda5571f320a313f09c4365b4100273e581cc8f4e8" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:32:25.801621 containerd[1590]: time="2026-03-06T02:32:25.801546655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jjg4g,Uid:954e0f1e-e371-450e-b93d-ba3087aa057e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1672df90b62f128e47db03c4fda6af6afbce6de11366e586ed1e7bc9a861d2d9\"" Mar 6 02:32:25.825577 kubelet[2878]: E0306 02:32:25.814835 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:25.864391 containerd[1590]: time="2026-03-06T02:32:25.863494486Z" level=info msg="CreateContainer within sandbox \"1672df90b62f128e47db03c4fda6af6afbce6de11366e586ed1e7bc9a861d2d9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 6 02:32:25.961859 containerd[1590]: 
time="2026-03-06T02:32:25.961199751Z" level=info msg="Container 2d9e541e16cc1812c9fcc50f5e8619e9a33cb5c2556451c0e60344d937e82c76: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:32:26.115673 systemd[1]: Started cri-containerd-bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1.scope - libcontainer container bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1. Mar 6 02:32:26.185515 containerd[1590]: time="2026-03-06T02:32:26.185301540Z" level=info msg="CreateContainer within sandbox \"1672df90b62f128e47db03c4fda6af6afbce6de11366e586ed1e7bc9a861d2d9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d9e541e16cc1812c9fcc50f5e8619e9a33cb5c2556451c0e60344d937e82c76\"" Mar 6 02:32:26.186324 systemd[1]: Started cri-containerd-082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88.scope - libcontainer container 082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88. Mar 6 02:32:26.197597 containerd[1590]: time="2026-03-06T02:32:26.197557739Z" level=info msg="StartContainer for \"2d9e541e16cc1812c9fcc50f5e8619e9a33cb5c2556451c0e60344d937e82c76\"" Mar 6 02:32:26.238774 containerd[1590]: time="2026-03-06T02:32:26.238716709Z" level=info msg="connecting to shim 2d9e541e16cc1812c9fcc50f5e8619e9a33cb5c2556451c0e60344d937e82c76" address="unix:///run/containerd/s/be07a023c52649a37688cd4220dc4269b898cd4100f9d5516bee1426b9edca0a" protocol=ttrpc version=3 Mar 6 02:32:26.497499 systemd[1]: Started cri-containerd-2d9e541e16cc1812c9fcc50f5e8619e9a33cb5c2556451c0e60344d937e82c76.scope - libcontainer container 2d9e541e16cc1812c9fcc50f5e8619e9a33cb5c2556451c0e60344d937e82c76. 
Mar 6 02:32:27.190839 containerd[1590]: time="2026-03-06T02:32:27.190754247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sg7m4,Uid:c00e6568-9ff9-41e6-94ab-9c2c36d856bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\"" Mar 6 02:32:27.254361 kubelet[2878]: E0306 02:32:27.253355 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:27.370466 containerd[1590]: time="2026-03-06T02:32:27.368627078Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 6 02:32:27.878438 containerd[1590]: time="2026-03-06T02:32:27.877259710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6f4px,Uid:946d5c4c-9c33-47b1-ba8f-6e5cce6555e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88\"" Mar 6 02:32:27.996259 kubelet[2878]: E0306 02:32:27.994371 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:28.532665 containerd[1590]: time="2026-03-06T02:32:28.532423654Z" level=info msg="StartContainer for \"2d9e541e16cc1812c9fcc50f5e8619e9a33cb5c2556451c0e60344d937e82c76\" returns successfully" Mar 6 02:32:29.281603 kubelet[2878]: E0306 02:32:29.281561 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:29.375819 kubelet[2878]: I0306 02:32:29.372753 2878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jjg4g" podStartSLOduration=9.37272928 
podStartE2EDuration="9.37272928s" podCreationTimestamp="2026-03-06 02:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:32:29.372486978 +0000 UTC m=+10.208905033" watchObservedRunningTime="2026-03-06 02:32:29.37272928 +0000 UTC m=+10.209147326" Mar 6 02:32:29.646378 kubelet[2878]: E0306 02:32:29.644558 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:29.944845 kubelet[2878]: E0306 02:32:29.943799 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:30.325509 kubelet[2878]: E0306 02:32:30.325248 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:30.332298 kubelet[2878]: E0306 02:32:30.327667 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:30.333418 kubelet[2878]: E0306 02:32:30.329355 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:31.330539 kubelet[2878]: E0306 02:32:31.329386 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:32:55.374832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1053202925.mount: Deactivated successfully. 
Mar 6 02:33:13.936504 kubelet[2878]: E0306 02:33:13.932688 2878 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.661s" Mar 6 02:33:21.689903 containerd[1590]: time="2026-03-06T02:33:21.688918207Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:33:21.699932 containerd[1590]: time="2026-03-06T02:33:21.695615578Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 6 02:33:21.702487 containerd[1590]: time="2026-03-06T02:33:21.701720975Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:33:21.710909 containerd[1590]: time="2026-03-06T02:33:21.709960499Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 54.319728916s" Mar 6 02:33:21.710909 containerd[1590]: time="2026-03-06T02:33:21.710408736Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 6 02:33:21.725682 containerd[1590]: time="2026-03-06T02:33:21.723578044Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 6 02:33:21.825755 
containerd[1590]: time="2026-03-06T02:33:21.825683813Z" level=info msg="CreateContainer within sandbox \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 6 02:33:21.962975 containerd[1590]: time="2026-03-06T02:33:21.960639878Z" level=info msg="Container be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:33:22.081448 containerd[1590]: time="2026-03-06T02:33:22.081391492Z" level=info msg="CreateContainer within sandbox \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6\"" Mar 6 02:33:22.097394 containerd[1590]: time="2026-03-06T02:33:22.094864261Z" level=info msg="StartContainer for \"be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6\"" Mar 6 02:33:22.132664 containerd[1590]: time="2026-03-06T02:33:22.132612521Z" level=info msg="connecting to shim be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6" address="unix:///run/containerd/s/f867bbc025dccbe912655a3882b5292f3e8760accfe05efdc18bd0275a909098" protocol=ttrpc version=3 Mar 6 02:33:22.601595 systemd[1]: Started cri-containerd-be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6.scope - libcontainer container be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6. Mar 6 02:33:23.154940 containerd[1590]: time="2026-03-06T02:33:23.152652632Z" level=info msg="StartContainer for \"be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6\" returns successfully" Mar 6 02:33:23.239782 systemd[1]: cri-containerd-be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6.scope: Deactivated successfully. 
Mar 6 02:33:23.364506 containerd[1590]: time="2026-03-06T02:33:23.357898702Z" level=info msg="received container exit event container_id:\"be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6\" id:\"be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6\" pid:3288 exited_at:{seconds:1772764403 nanos:339665221}" Mar 6 02:33:23.359859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount452632918.mount: Deactivated successfully. Mar 6 02:33:23.548650 kubelet[2878]: E0306 02:33:23.545966 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:23.807576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6-rootfs.mount: Deactivated successfully. Mar 6 02:33:24.596323 kubelet[2878]: E0306 02:33:24.595895 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:24.678587 containerd[1590]: time="2026-03-06T02:33:24.677960263Z" level=info msg="CreateContainer within sandbox \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 6 02:33:24.776607 containerd[1590]: time="2026-03-06T02:33:24.774729894Z" level=info msg="Container c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:33:24.785802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1775327185.mount: Deactivated successfully. 
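The recurring kubelet `dns.go:153` error above stems from the glibc resolver honoring at most 3 `nameserver` lines (MAXNS) in resolv.conf: kubelet truncates the list and warns, applying `1.1.1.1 1.0.0.1 8.8.8.8`. A minimal sketch of that truncation; the fourth server below is hypothetical, since the log does not show which entry was dropped:

```python
MAXNS = 3  # glibc resolver honors at most 3 nameserver lines

def apply_nameserver_limit(nameservers):
    # Mirrors the kubelet warning: servers past MAXNS are omitted.
    return nameservers[:MAXNS], nameservers[MAXNS:]

applied, omitted = apply_nameserver_limit(
    ["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]  # 4th entry is hypothetical
)
```

The warning repeats for the lifetime of the node because the underlying resolv.conf never changes; fixing it means trimming the node's (or the pod's) nameserver list to three entries.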
Mar 6 02:33:24.831769 containerd[1590]: time="2026-03-06T02:33:24.831513052Z" level=info msg="CreateContainer within sandbox \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74\"" Mar 6 02:33:24.837850 containerd[1590]: time="2026-03-06T02:33:24.834803401Z" level=info msg="StartContainer for \"c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74\"" Mar 6 02:33:24.867823 containerd[1590]: time="2026-03-06T02:33:24.865847548Z" level=info msg="connecting to shim c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74" address="unix:///run/containerd/s/f867bbc025dccbe912655a3882b5292f3e8760accfe05efdc18bd0275a909098" protocol=ttrpc version=3 Mar 6 02:33:25.185973 systemd[1]: Started cri-containerd-c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74.scope - libcontainer container c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74. Mar 6 02:33:25.771624 containerd[1590]: time="2026-03-06T02:33:25.771579901Z" level=info msg="StartContainer for \"c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74\" returns successfully" Mar 6 02:33:25.945837 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 6 02:33:25.957683 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 6 02:33:25.963566 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 6 02:33:25.969785 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 6 02:33:25.977867 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 6 02:33:26.005745 systemd[1]: cri-containerd-c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74.scope: Deactivated successfully. 
Mar 6 02:33:26.016974 systemd[1]: cri-containerd-c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74.scope: Consumed 164ms CPU time, 4.5M memory peak, 56K read from disk, 2.2M written to disk. Mar 6 02:33:26.035484 containerd[1590]: time="2026-03-06T02:33:26.029742242Z" level=info msg="received container exit event container_id:\"c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74\" id:\"c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74\" pid:3346 exited_at:{seconds:1772764406 nanos:22904616}" Mar 6 02:33:26.238619 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 6 02:33:26.387751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74-rootfs.mount: Deactivated successfully. Mar 6 02:33:26.735507 kubelet[2878]: E0306 02:33:26.733769 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:26.760497 containerd[1590]: time="2026-03-06T02:33:26.759767002Z" level=info msg="CreateContainer within sandbox \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 6 02:33:26.886448 containerd[1590]: time="2026-03-06T02:33:26.885816074Z" level=info msg="Container ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:33:26.999852 containerd[1590]: time="2026-03-06T02:33:26.996594564Z" level=info msg="CreateContainer within sandbox \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc\"" Mar 6 02:33:27.014565 containerd[1590]: time="2026-03-06T02:33:27.012911036Z" level=info msg="StartContainer for 
\"ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc\"" Mar 6 02:33:27.021853 containerd[1590]: time="2026-03-06T02:33:27.017885128Z" level=info msg="connecting to shim ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc" address="unix:///run/containerd/s/f867bbc025dccbe912655a3882b5292f3e8760accfe05efdc18bd0275a909098" protocol=ttrpc version=3 Mar 6 02:33:27.466881 systemd[1]: Started cri-containerd-ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc.scope - libcontainer container ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc. Mar 6 02:33:28.771678 containerd[1590]: time="2026-03-06T02:33:28.770915785Z" level=info msg="StartContainer for \"ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc\" returns successfully" Mar 6 02:33:28.778890 systemd[1]: cri-containerd-ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc.scope: Deactivated successfully. Mar 6 02:33:28.826886 containerd[1590]: time="2026-03-06T02:33:28.826630539Z" level=info msg="received container exit event container_id:\"ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc\" id:\"ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc\" pid:3395 exited_at:{seconds:1772764408 nanos:816462777}" Mar 6 02:33:29.007587 kubelet[2878]: E0306 02:33:28.997919 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:29.696949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc-rootfs.mount: Deactivated successfully. 
Mar 6 02:33:30.039490 kubelet[2878]: E0306 02:33:30.038608 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:30.153431 containerd[1590]: time="2026-03-06T02:33:30.153383673Z" level=info msg="CreateContainer within sandbox \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 6 02:33:30.297604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1497485851.mount: Deactivated successfully. Mar 6 02:33:30.320824 containerd[1590]: time="2026-03-06T02:33:30.320486593Z" level=info msg="Container 80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:33:30.366876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314668196.mount: Deactivated successfully. Mar 6 02:33:30.480815 containerd[1590]: time="2026-03-06T02:33:30.479961579Z" level=info msg="CreateContainer within sandbox \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c\"" Mar 6 02:33:30.488738 containerd[1590]: time="2026-03-06T02:33:30.488698076Z" level=info msg="StartContainer for \"80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c\"" Mar 6 02:33:30.563874 containerd[1590]: time="2026-03-06T02:33:30.563819875Z" level=info msg="connecting to shim 80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c" address="unix:///run/containerd/s/f867bbc025dccbe912655a3882b5292f3e8760accfe05efdc18bd0275a909098" protocol=ttrpc version=3 Mar 6 02:33:30.856755 systemd[1]: Started cri-containerd-80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c.scope - libcontainer container 
80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c. Mar 6 02:33:31.138742 kubelet[2878]: E0306 02:33:31.136618 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:31.349579 systemd[1]: cri-containerd-80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c.scope: Deactivated successfully. Mar 6 02:33:31.391789 containerd[1590]: time="2026-03-06T02:33:31.383966295Z" level=info msg="received container exit event container_id:\"80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c\" id:\"80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c\" pid:3437 exited_at:{seconds:1772764411 nanos:369590475}" Mar 6 02:33:31.421967 containerd[1590]: time="2026-03-06T02:33:31.421916911Z" level=info msg="StartContainer for \"80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c\" returns successfully" Mar 6 02:33:31.933676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c-rootfs.mount: Deactivated successfully. Mar 6 02:33:32.226719 kubelet[2878]: E0306 02:33:32.226574 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:32.258728 containerd[1590]: time="2026-03-06T02:33:32.255942135Z" level=info msg="CreateContainer within sandbox \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 6 02:33:32.515853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2937738547.mount: Deactivated successfully. 
Mar 6 02:33:32.534852 containerd[1590]: time="2026-03-06T02:33:32.533688309Z" level=info msg="Container 2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:33:32.626708 containerd[1590]: time="2026-03-06T02:33:32.626438195Z" level=info msg="CreateContainer within sandbox \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641\"" Mar 6 02:33:32.635524 containerd[1590]: time="2026-03-06T02:33:32.634959918Z" level=info msg="StartContainer for \"2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641\"" Mar 6 02:33:32.643566 containerd[1590]: time="2026-03-06T02:33:32.642846211Z" level=info msg="connecting to shim 2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641" address="unix:///run/containerd/s/f867bbc025dccbe912655a3882b5292f3e8760accfe05efdc18bd0275a909098" protocol=ttrpc version=3 Mar 6 02:33:32.953696 systemd[1]: Started cri-containerd-2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641.scope - libcontainer container 2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641. 
Mar 6 02:33:33.402717 containerd[1590]: time="2026-03-06T02:33:33.402555270Z" level=info msg="StartContainer for \"2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641\" returns successfully" Mar 6 02:33:33.769972 containerd[1590]: time="2026-03-06T02:33:33.768726189Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:33:33.773529 containerd[1590]: time="2026-03-06T02:33:33.773492754Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 6 02:33:33.778861 containerd[1590]: time="2026-03-06T02:33:33.778821043Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:33:33.785930 containerd[1590]: time="2026-03-06T02:33:33.785883673Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 12.062265003s" Mar 6 02:33:33.792796 containerd[1590]: time="2026-03-06T02:33:33.789885640Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 6 02:33:33.922495 containerd[1590]: time="2026-03-06T02:33:33.921721240Z" level=info msg="CreateContainer within sandbox 
\"082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 6 02:33:34.073642 containerd[1590]: time="2026-03-06T02:33:34.072376241Z" level=info msg="Container d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:33:34.076644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount409918285.mount: Deactivated successfully. Mar 6 02:33:34.231758 containerd[1590]: time="2026-03-06T02:33:34.230932394Z" level=info msg="CreateContainer within sandbox \"082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0\"" Mar 6 02:33:34.237359 containerd[1590]: time="2026-03-06T02:33:34.236311363Z" level=info msg="StartContainer for \"d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0\"" Mar 6 02:33:34.254233 containerd[1590]: time="2026-03-06T02:33:34.252724154Z" level=info msg="connecting to shim d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0" address="unix:///run/containerd/s/6477e7b9a365f6737da6cdeda5571f320a313f09c4365b4100273e581cc8f4e8" protocol=ttrpc version=3 Mar 6 02:33:34.528976 kubelet[2878]: I0306 02:33:34.526505 2878 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 6 02:33:34.527866 systemd[1]: Started cri-containerd-d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0.scope - libcontainer container d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0. Mar 6 02:33:34.938582 systemd[1]: Created slice kubepods-burstable-pod9a25e354_1107_43b4_a151_0cabdd699918.slice - libcontainer container kubepods-burstable-pod9a25e354_1107_43b4_a151_0cabdd699918.slice. 
Mar 6 02:33:34.952695 kubelet[2878]: I0306 02:33:34.952521 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8dnb\" (UniqueName: \"kubernetes.io/projected/9a25e354-1107-43b4-a151-0cabdd699918-kube-api-access-d8dnb\") pod \"coredns-674b8bbfcf-v5ljd\" (UID: \"9a25e354-1107-43b4-a151-0cabdd699918\") " pod="kube-system/coredns-674b8bbfcf-v5ljd" Mar 6 02:33:34.952695 kubelet[2878]: I0306 02:33:34.952680 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/708bc38d-6465-4264-bad9-a88c129d7496-config-volume\") pod \"coredns-674b8bbfcf-rrpzl\" (UID: \"708bc38d-6465-4264-bad9-a88c129d7496\") " pod="kube-system/coredns-674b8bbfcf-rrpzl" Mar 6 02:33:34.952888 kubelet[2878]: I0306 02:33:34.952716 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frmrt\" (UniqueName: \"kubernetes.io/projected/708bc38d-6465-4264-bad9-a88c129d7496-kube-api-access-frmrt\") pod \"coredns-674b8bbfcf-rrpzl\" (UID: \"708bc38d-6465-4264-bad9-a88c129d7496\") " pod="kube-system/coredns-674b8bbfcf-rrpzl" Mar 6 02:33:34.952888 kubelet[2878]: I0306 02:33:34.952744 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a25e354-1107-43b4-a151-0cabdd699918-config-volume\") pod \"coredns-674b8bbfcf-v5ljd\" (UID: \"9a25e354-1107-43b4-a151-0cabdd699918\") " pod="kube-system/coredns-674b8bbfcf-v5ljd" Mar 6 02:33:34.996916 systemd[1]: Created slice kubepods-burstable-pod708bc38d_6465_4264_bad9_a88c129d7496.slice - libcontainer container kubepods-burstable-pod708bc38d_6465_4264_bad9_a88c129d7496.slice. 
Mar 6 02:33:35.283677 kubelet[2878]: E0306 02:33:35.282779 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:35.320595 kubelet[2878]: E0306 02:33:35.320546 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:35.330586 containerd[1590]: time="2026-03-06T02:33:35.329890212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rrpzl,Uid:708bc38d-6465-4264-bad9-a88c129d7496,Namespace:kube-system,Attempt:0,}" Mar 6 02:33:35.332845 containerd[1590]: time="2026-03-06T02:33:35.331650443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v5ljd,Uid:9a25e354-1107-43b4-a151-0cabdd699918,Namespace:kube-system,Attempt:0,}" Mar 6 02:33:35.379826 containerd[1590]: time="2026-03-06T02:33:35.379232647Z" level=info msg="StartContainer for \"d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0\" returns successfully" Mar 6 02:33:35.772511 kubelet[2878]: E0306 02:33:35.769861 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:35.776517 kubelet[2878]: E0306 02:33:35.775680 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:35.859880 kubelet[2878]: I0306 02:33:35.859666 2878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6f4px" podStartSLOduration=8.068864488 podStartE2EDuration="1m13.859642952s" podCreationTimestamp="2026-03-06 02:32:22 +0000 UTC" firstStartedPulling="2026-03-06 02:32:28.033778796 +0000 UTC 
m=+8.870196842" lastFinishedPulling="2026-03-06 02:33:33.82455726 +0000 UTC m=+74.660975306" observedRunningTime="2026-03-06 02:33:35.852522993 +0000 UTC m=+76.688941058" watchObservedRunningTime="2026-03-06 02:33:35.859642952 +0000 UTC m=+76.696060997" Mar 6 02:33:36.804510 kubelet[2878]: E0306 02:33:36.803808 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:36.808557 kubelet[2878]: E0306 02:33:36.806594 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:37.829708 kubelet[2878]: E0306 02:33:37.829532 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:42.515380 systemd-networkd[1451]: cilium_host: Link UP Mar 6 02:33:42.515602 systemd-networkd[1451]: cilium_net: Link UP Mar 6 02:33:42.515841 systemd-networkd[1451]: cilium_host: Gained carrier Mar 6 02:33:42.518573 systemd-networkd[1451]: cilium_net: Gained carrier Mar 6 02:33:43.302658 systemd-networkd[1451]: cilium_host: Gained IPv6LL Mar 6 02:33:43.487817 systemd-networkd[1451]: cilium_net: Gained IPv6LL Mar 6 02:33:43.775380 systemd-networkd[1451]: cilium_vxlan: Link UP Mar 6 02:33:43.775392 systemd-networkd[1451]: cilium_vxlan: Gained carrier Mar 6 02:33:45.116785 kernel: NET: Registered PF_ALG protocol family Mar 6 02:33:45.665585 systemd-networkd[1451]: cilium_vxlan: Gained IPv6LL Mar 6 02:33:47.340786 systemd-networkd[1451]: lxc_health: Link UP Mar 6 02:33:47.344417 systemd-networkd[1451]: lxc_health: Gained carrier Mar 6 02:33:47.416933 kubelet[2878]: E0306 02:33:47.416221 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:47.855820 systemd-networkd[1451]: lxc9392dc7ad654: Link UP Mar 6 02:33:47.856310 kernel: eth0: renamed from tmp58c53 Mar 6 02:33:47.912776 systemd-networkd[1451]: lxc9392dc7ad654: Gained carrier Mar 6 02:33:47.948663 systemd-networkd[1451]: lxc78287aab5917: Link UP Mar 6 02:33:47.968378 kernel: eth0: renamed from tmp8ba4e Mar 6 02:33:47.981242 systemd-networkd[1451]: lxc78287aab5917: Gained carrier Mar 6 02:33:48.654845 kubelet[2878]: E0306 02:33:48.654676 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:48.726220 kubelet[2878]: I0306 02:33:48.725708 2878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sg7m4" podStartSLOduration=32.351580023 podStartE2EDuration="1m26.725688007s" podCreationTimestamp="2026-03-06 02:32:22 +0000 UTC" firstStartedPulling="2026-03-06 02:32:27.346672704 +0000 UTC m=+8.183090749" lastFinishedPulling="2026-03-06 02:33:21.720780688 +0000 UTC m=+62.557198733" observedRunningTime="2026-03-06 02:33:36.015746291 +0000 UTC m=+76.852164336" watchObservedRunningTime="2026-03-06 02:33:48.725688007 +0000 UTC m=+89.562106083" Mar 6 02:33:48.927291 systemd-networkd[1451]: lxc_health: Gained IPv6LL Mar 6 02:33:49.142226 kubelet[2878]: E0306 02:33:49.141465 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:49.247257 systemd-networkd[1451]: lxc9392dc7ad654: Gained IPv6LL Mar 6 02:33:49.821588 systemd-networkd[1451]: lxc78287aab5917: Gained IPv6LL Mar 6 02:33:50.130748 kubelet[2878]: E0306 02:33:50.129621 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Mar 6 02:33:53.132207 kubelet[2878]: E0306 02:33:53.131314 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:56.161247 kubelet[2878]: E0306 02:33:56.160926 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:58.407304 containerd[1590]: time="2026-03-06T02:33:58.406336375Z" level=info msg="connecting to shim 8ba4e0af0c3b0a00d89f45d19609fa6b3c1bed1280663d0867416acfe5357f03" address="unix:///run/containerd/s/a180ec18711dfe04eba7543494083aa794f6ad79d3a1d2155bad289c9c6c9908" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:33:58.423445 containerd[1590]: time="2026-03-06T02:33:58.423393912Z" level=info msg="connecting to shim 58c5365e513444d9be941cce4082c31777a45698818b33e2decdc04e4427d532" address="unix:///run/containerd/s/3f307fb9b1a5c28fe471a7abdbee88ad75dc739157ad90ab486a944edbe1da3a" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:33:58.479503 sudo[1787]: pam_unix(sudo:session): session closed for user root Mar 6 02:33:58.495449 sshd[1786]: Connection closed by 10.0.0.1 port 39686 Mar 6 02:33:58.496418 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Mar 6 02:33:58.525794 systemd-logind[1552]: Session 9 logged out. Waiting for processes to exit. Mar 6 02:33:58.529337 systemd[1]: sshd@8-10.0.0.53:22-10.0.0.1:39686.service: Deactivated successfully. Mar 6 02:33:58.536371 systemd[1]: session-9.scope: Deactivated successfully. Mar 6 02:33:58.537569 systemd[1]: session-9.scope: Consumed 21.414s CPU time, 240.1M memory peak. Mar 6 02:33:58.548478 systemd-logind[1552]: Removed session 9. 
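The `pod_startup_latency_tracker` entries earlier in the log report durations as Go `time.Duration` strings (e.g. `podStartE2EDuration="1m13.859642952s"`). A minimal sketch that parses the hours/minutes/seconds subset of that format back into seconds, which is enough for the values appearing in this log:

```python
import re

def parse_go_duration(s):
    # Parses a subset of Go time.Duration strings such as "1m13.859642952s".
    # Assumes only h/m/s components, which covers the values in this log.
    m = re.fullmatch(r'(?:(\d+)h)?(?:(\d+)m)?(?:(\d+(?:\.\d+)?)s)?', s)
    if m is None or not any(m.groups()):
        raise ValueError(f"unsupported duration: {s!r}")
    h, mins, secs = m.groups()
    return int(h or 0) * 3600 + int(mins or 0) * 60 + float(secs or 0)
```

For example, the cilium-operator pod's `podStartE2EDuration="1m13.859642952s"` comes out as about 73.86 seconds, matching the gap between its `podCreationTimestamp` (02:32:22) and `observedRunningTime` (02:33:35.85) in the same entry.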
Mar 6 02:33:58.570610 systemd[1]: Started cri-containerd-8ba4e0af0c3b0a00d89f45d19609fa6b3c1bed1280663d0867416acfe5357f03.scope - libcontainer container 8ba4e0af0c3b0a00d89f45d19609fa6b3c1bed1280663d0867416acfe5357f03. Mar 6 02:33:58.582843 systemd[1]: Started cri-containerd-58c5365e513444d9be941cce4082c31777a45698818b33e2decdc04e4427d532.scope - libcontainer container 58c5365e513444d9be941cce4082c31777a45698818b33e2decdc04e4427d532. Mar 6 02:33:58.656688 systemd-resolved[1452]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 02:33:58.725615 systemd-resolved[1452]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 02:33:58.882384 containerd[1590]: time="2026-03-06T02:33:58.882281349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rrpzl,Uid:708bc38d-6465-4264-bad9-a88c129d7496,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ba4e0af0c3b0a00d89f45d19609fa6b3c1bed1280663d0867416acfe5357f03\"" Mar 6 02:33:58.887288 kubelet[2878]: E0306 02:33:58.886564 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:58.932611 containerd[1590]: time="2026-03-06T02:33:58.932330756Z" level=info msg="CreateContainer within sandbox \"8ba4e0af0c3b0a00d89f45d19609fa6b3c1bed1280663d0867416acfe5357f03\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 02:33:58.948506 containerd[1590]: time="2026-03-06T02:33:58.947699472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v5ljd,Uid:9a25e354-1107-43b4-a151-0cabdd699918,Namespace:kube-system,Attempt:0,} returns sandbox id \"58c5365e513444d9be941cce4082c31777a45698818b33e2decdc04e4427d532\"" Mar 6 02:33:58.955791 kubelet[2878]: E0306 02:33:58.955434 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:33:58.994937 containerd[1590]: time="2026-03-06T02:33:58.991723814Z" level=info msg="CreateContainer within sandbox \"58c5365e513444d9be941cce4082c31777a45698818b33e2decdc04e4427d532\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 02:33:59.144728 containerd[1590]: time="2026-03-06T02:33:59.102313084Z" level=info msg="Container 45be5e464e760d7eed05eadde0c58f4a859e19ed4f8b3aa09cc8ea3c69e552de: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:33:59.257336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount980436917.mount: Deactivated successfully. Mar 6 02:33:59.268330 containerd[1590]: time="2026-03-06T02:33:59.268240592Z" level=info msg="CreateContainer within sandbox \"8ba4e0af0c3b0a00d89f45d19609fa6b3c1bed1280663d0867416acfe5357f03\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45be5e464e760d7eed05eadde0c58f4a859e19ed4f8b3aa09cc8ea3c69e552de\"" Mar 6 02:33:59.291239 containerd[1590]: time="2026-03-06T02:33:59.286417399Z" level=info msg="Container 1c96d0b92330a9a152d5fd982b31303f683beb07058c4b16019b88ad624f9e20: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:33:59.291239 containerd[1590]: time="2026-03-06T02:33:59.287430674Z" level=info msg="StartContainer for \"45be5e464e760d7eed05eadde0c58f4a859e19ed4f8b3aa09cc8ea3c69e552de\"" Mar 6 02:33:59.330672 containerd[1590]: time="2026-03-06T02:33:59.330626270Z" level=info msg="connecting to shim 45be5e464e760d7eed05eadde0c58f4a859e19ed4f8b3aa09cc8ea3c69e552de" address="unix:///run/containerd/s/a180ec18711dfe04eba7543494083aa794f6ad79d3a1d2155bad289c9c6c9908" protocol=ttrpc version=3 Mar 6 02:33:59.373293 containerd[1590]: time="2026-03-06T02:33:59.372722230Z" level=info msg="CreateContainer within sandbox \"58c5365e513444d9be941cce4082c31777a45698818b33e2decdc04e4427d532\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"1c96d0b92330a9a152d5fd982b31303f683beb07058c4b16019b88ad624f9e20\"" Mar 6 02:33:59.377941 containerd[1590]: time="2026-03-06T02:33:59.377901836Z" level=info msg="StartContainer for \"1c96d0b92330a9a152d5fd982b31303f683beb07058c4b16019b88ad624f9e20\"" Mar 6 02:33:59.385798 containerd[1590]: time="2026-03-06T02:33:59.385761289Z" level=info msg="connecting to shim 1c96d0b92330a9a152d5fd982b31303f683beb07058c4b16019b88ad624f9e20" address="unix:///run/containerd/s/3f307fb9b1a5c28fe471a7abdbee88ad75dc739157ad90ab486a944edbe1da3a" protocol=ttrpc version=3 Mar 6 02:33:59.429474 systemd[1]: Started cri-containerd-45be5e464e760d7eed05eadde0c58f4a859e19ed4f8b3aa09cc8ea3c69e552de.scope - libcontainer container 45be5e464e760d7eed05eadde0c58f4a859e19ed4f8b3aa09cc8ea3c69e552de. Mar 6 02:33:59.551525 systemd[1]: Started cri-containerd-1c96d0b92330a9a152d5fd982b31303f683beb07058c4b16019b88ad624f9e20.scope - libcontainer container 1c96d0b92330a9a152d5fd982b31303f683beb07058c4b16019b88ad624f9e20. Mar 6 02:33:59.674787 containerd[1590]: time="2026-03-06T02:33:59.672476263Z" level=info msg="StartContainer for \"45be5e464e760d7eed05eadde0c58f4a859e19ed4f8b3aa09cc8ea3c69e552de\" returns successfully" Mar 6 02:33:59.745642 containerd[1590]: time="2026-03-06T02:33:59.745428917Z" level=info msg="StartContainer for \"1c96d0b92330a9a152d5fd982b31303f683beb07058c4b16019b88ad624f9e20\" returns successfully" Mar 6 02:34:00.575529 kubelet[2878]: E0306 02:34:00.574924 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:34:00.583232 kubelet[2878]: E0306 02:34:00.582785 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:34:00.649595 kubelet[2878]: I0306 02:34:00.645576 2878 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/coredns-674b8bbfcf-v5ljd" podStartSLOduration=99.64555792 podStartE2EDuration="1m39.64555792s" podCreationTimestamp="2026-03-06 02:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:34:00.639656423 +0000 UTC m=+101.476074468" watchObservedRunningTime="2026-03-06 02:34:00.64555792 +0000 UTC m=+101.481975965" Mar 6 02:34:01.603955 kubelet[2878]: E0306 02:34:01.592867 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:34:01.603955 kubelet[2878]: E0306 02:34:01.596653 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:34:01.672469 kubelet[2878]: I0306 02:34:01.671924 2878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rrpzl" podStartSLOduration=100.671899914 podStartE2EDuration="1m40.671899914s" podCreationTimestamp="2026-03-06 02:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:34:00.736867542 +0000 UTC m=+101.573285578" watchObservedRunningTime="2026-03-06 02:34:01.671899914 +0000 UTC m=+102.508317959" Mar 6 02:34:02.632403 kubelet[2878]: E0306 02:34:02.628445 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:34:02.632403 kubelet[2878]: E0306 02:34:02.628882 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:34:03.639542 kubelet[2878]: E0306 
02:34:03.638904 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:34:03.645539 kubelet[2878]: E0306 02:34:03.642791 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:34:43.128860 kubelet[2878]: E0306 02:34:43.128345 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:34:49.130401 kubelet[2878]: E0306 02:34:49.128592 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:34:53.139941 kubelet[2878]: E0306 02:34:53.132395 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:34:55.132570 kubelet[2878]: E0306 02:34:55.128911 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:35:06.142609 kubelet[2878]: E0306 02:35:06.140713 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:35:07.130230 kubelet[2878]: E0306 02:35:07.129709 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:35:08.161622 kubelet[2878]: E0306 02:35:08.160821 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:35:19.134589 kubelet[2878]: E0306 02:35:19.132939 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:36:01.134078 kubelet[2878]: E0306 02:36:01.133857 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:36:03.138267 kubelet[2878]: E0306 02:36:03.134553 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:36:08.521707 systemd[1]: Started sshd@9-10.0.0.53:22-10.0.0.1:41428.service - OpenSSH per-connection server daemon (10.0.0.1:41428). Mar 6 02:36:08.763928 sshd[4445]: Accepted publickey for core from 10.0.0.1 port 41428 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:36:08.768890 sshd-session[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:36:08.796204 systemd-logind[1552]: New session 10 of user core. Mar 6 02:36:08.808729 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 6 02:36:09.130247 sshd[4448]: Connection closed by 10.0.0.1 port 41428 Mar 6 02:36:09.131318 sshd-session[4445]: pam_unix(sshd:session): session closed for user core Mar 6 02:36:09.143717 systemd-logind[1552]: Session 10 logged out. Waiting for processes to exit. Mar 6 02:36:09.144484 systemd[1]: sshd@9-10.0.0.53:22-10.0.0.1:41428.service: Deactivated successfully. Mar 6 02:36:09.148895 systemd[1]: session-10.scope: Deactivated successfully. Mar 6 02:36:09.153329 systemd-logind[1552]: Removed session 10. 
Mar 6 02:36:10.140659 kubelet[2878]: E0306 02:36:10.135744 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:36:14.156722 systemd[1]: Started sshd@10-10.0.0.53:22-10.0.0.1:54366.service - OpenSSH per-connection server daemon (10.0.0.1:54366). Mar 6 02:36:14.332770 sshd[4466]: Accepted publickey for core from 10.0.0.1 port 54366 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:36:14.334868 sshd-session[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:36:14.357782 systemd-logind[1552]: New session 11 of user core. Mar 6 02:36:14.367690 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 6 02:36:14.825945 sshd[4469]: Connection closed by 10.0.0.1 port 54366 Mar 6 02:36:14.826537 sshd-session[4466]: pam_unix(sshd:session): session closed for user core Mar 6 02:36:14.844846 systemd-logind[1552]: Session 11 logged out. Waiting for processes to exit. Mar 6 02:36:14.846194 systemd[1]: sshd@10-10.0.0.53:22-10.0.0.1:54366.service: Deactivated successfully. Mar 6 02:36:14.854325 systemd[1]: session-11.scope: Deactivated successfully. Mar 6 02:36:14.873891 systemd-logind[1552]: Removed session 11. Mar 6 02:36:17.131233 kubelet[2878]: E0306 02:36:17.130694 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:36:19.867535 systemd[1]: Started sshd@11-10.0.0.53:22-10.0.0.1:54376.service - OpenSSH per-connection server daemon (10.0.0.1:54376). 
Mar 6 02:36:20.149575 sshd[4484]: Accepted publickey for core from 10.0.0.1 port 54376 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:36:20.156236 sshd-session[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:36:20.193574 systemd-logind[1552]: New session 12 of user core. Mar 6 02:36:20.216279 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 6 02:36:20.826740 sshd[4489]: Connection closed by 10.0.0.1 port 54376 Mar 6 02:36:20.829673 sshd-session[4484]: pam_unix(sshd:session): session closed for user core Mar 6 02:36:20.866304 systemd[1]: sshd@11-10.0.0.53:22-10.0.0.1:54376.service: Deactivated successfully. Mar 6 02:36:20.873872 systemd[1]: session-12.scope: Deactivated successfully. Mar 6 02:36:20.888538 systemd-logind[1552]: Session 12 logged out. Waiting for processes to exit. Mar 6 02:36:20.894767 systemd-logind[1552]: Removed session 12. Mar 6 02:36:21.145299 kubelet[2878]: E0306 02:36:21.144734 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:36:25.863951 systemd[1]: Started sshd@12-10.0.0.53:22-10.0.0.1:35670.service - OpenSSH per-connection server daemon (10.0.0.1:35670). Mar 6 02:36:26.065794 sshd[4505]: Accepted publickey for core from 10.0.0.1 port 35670 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:36:26.072673 sshd-session[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:36:26.108346 systemd-logind[1552]: New session 13 of user core. Mar 6 02:36:26.133705 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 6 02:36:26.736967 sshd[4508]: Connection closed by 10.0.0.1 port 35670 Mar 6 02:36:26.741381 sshd-session[4505]: pam_unix(sshd:session): session closed for user core Mar 6 02:36:26.751371 systemd[1]: sshd@12-10.0.0.53:22-10.0.0.1:35670.service: Deactivated successfully. Mar 6 02:36:26.758626 systemd[1]: session-13.scope: Deactivated successfully. Mar 6 02:36:26.768654 systemd-logind[1552]: Session 13 logged out. Waiting for processes to exit. Mar 6 02:36:26.782660 systemd-logind[1552]: Removed session 13. Mar 6 02:36:31.139945 kubelet[2878]: E0306 02:36:31.139712 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:36:31.793849 systemd[1]: Started sshd@13-10.0.0.53:22-10.0.0.1:35680.service - OpenSSH per-connection server daemon (10.0.0.1:35680). Mar 6 02:36:32.037811 sshd[4525]: Accepted publickey for core from 10.0.0.1 port 35680 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:36:32.046336 sshd-session[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:36:32.094222 systemd-logind[1552]: New session 14 of user core. Mar 6 02:36:32.138583 systemd[1]: Started session-14.scope - Session 14 of User core. 
Mar 6 02:36:32.162717 kubelet[2878]: E0306 02:36:32.157747 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:36:32.162717 kubelet[2878]: E0306 02:36:32.162605 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:36:32.884241 sshd[4528]: Connection closed by 10.0.0.1 port 35680 Mar 6 02:36:32.886255 sshd-session[4525]: pam_unix(sshd:session): session closed for user core Mar 6 02:36:32.905961 systemd[1]: sshd@13-10.0.0.53:22-10.0.0.1:35680.service: Deactivated successfully. Mar 6 02:36:32.918650 systemd[1]: session-14.scope: Deactivated successfully. Mar 6 02:36:32.924388 systemd-logind[1552]: Session 14 logged out. Waiting for processes to exit. Mar 6 02:36:32.932696 systemd-logind[1552]: Removed session 14. Mar 6 02:36:37.958592 systemd[1]: Started sshd@14-10.0.0.53:22-10.0.0.1:35846.service - OpenSSH per-connection server daemon (10.0.0.1:35846). Mar 6 02:36:38.269462 sshd[4544]: Accepted publickey for core from 10.0.0.1 port 35846 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:36:38.277657 sshd-session[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:36:38.342903 systemd-logind[1552]: New session 15 of user core. Mar 6 02:36:38.360722 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 6 02:36:38.831852 sshd[4547]: Connection closed by 10.0.0.1 port 35846 Mar 6 02:36:38.835870 sshd-session[4544]: pam_unix(sshd:session): session closed for user core Mar 6 02:36:38.851222 systemd[1]: sshd@14-10.0.0.53:22-10.0.0.1:35846.service: Deactivated successfully. Mar 6 02:36:38.859724 systemd[1]: session-15.scope: Deactivated successfully. Mar 6 02:36:38.866887 systemd-logind[1552]: Session 15 logged out. 
Waiting for processes to exit. Mar 6 02:36:38.870955 systemd-logind[1552]: Removed session 15. Mar 6 02:36:39.616260 containerd[1590]: time="2026-03-06T02:36:39.497970120Z" level=warning msg="container event discarded" container=baff8d281730be23a85c4940952e5c82b5c79aa23ba7f2a410e42b2c32cd8488 type=CONTAINER_CREATED_EVENT Mar 6 02:36:39.641328 containerd[1590]: time="2026-03-06T02:36:39.632954100Z" level=warning msg="container event discarded" container=baff8d281730be23a85c4940952e5c82b5c79aa23ba7f2a410e42b2c32cd8488 type=CONTAINER_STARTED_EVENT Mar 6 02:36:39.729401 containerd[1590]: time="2026-03-06T02:36:39.728135511Z" level=warning msg="container event discarded" container=e50972ddbe618dbeaccfeac2c4c1415a7e83209e6f31d723b12d9bee026ce3e6 type=CONTAINER_CREATED_EVENT Mar 6 02:36:39.729401 containerd[1590]: time="2026-03-06T02:36:39.728302273Z" level=warning msg="container event discarded" container=e50972ddbe618dbeaccfeac2c4c1415a7e83209e6f31d723b12d9bee026ce3e6 type=CONTAINER_STARTED_EVENT Mar 6 02:36:39.729401 containerd[1590]: time="2026-03-06T02:36:39.728324715Z" level=warning msg="container event discarded" container=e69cbf88eedf244ff44e822278a453cbad7a9027e67556a6f1b6ad1f845c2b30 type=CONTAINER_CREATED_EVENT Mar 6 02:36:39.729401 containerd[1590]: time="2026-03-06T02:36:39.728335444Z" level=warning msg="container event discarded" container=e69cbf88eedf244ff44e822278a453cbad7a9027e67556a6f1b6ad1f845c2b30 type=CONTAINER_STARTED_EVENT Mar 6 02:36:39.819606 containerd[1590]: time="2026-03-06T02:36:39.819413518Z" level=warning msg="container event discarded" container=69611c1829f3c3915580d539bbc65699cdf33b65d6fc2e58fe4e1f61b5b97511 type=CONTAINER_CREATED_EVENT Mar 6 02:36:39.866229 containerd[1590]: time="2026-03-06T02:36:39.865311388Z" level=warning msg="container event discarded" container=de3ed7d6e1c536219011ddd7efe5f7b4f0376858dfbd42ae34f70bfae690016c type=CONTAINER_CREATED_EVENT Mar 6 02:36:39.866229 containerd[1590]: time="2026-03-06T02:36:39.865470135Z" 
level=warning msg="container event discarded" container=f3435396b6c5c11cc8092a225d85cfa330b73ff19ad9bd48d090534e788c83b4 type=CONTAINER_CREATED_EVENT Mar 6 02:36:41.936657 containerd[1590]: time="2026-03-06T02:36:41.934950494Z" level=warning msg="container event discarded" container=de3ed7d6e1c536219011ddd7efe5f7b4f0376858dfbd42ae34f70bfae690016c type=CONTAINER_STARTED_EVENT Mar 6 02:36:41.951934 containerd[1590]: time="2026-03-06T02:36:41.951877746Z" level=warning msg="container event discarded" container=69611c1829f3c3915580d539bbc65699cdf33b65d6fc2e58fe4e1f61b5b97511 type=CONTAINER_STARTED_EVENT Mar 6 02:36:42.511191 containerd[1590]: time="2026-03-06T02:36:42.510769746Z" level=warning msg="container event discarded" container=f3435396b6c5c11cc8092a225d85cfa330b73ff19ad9bd48d090534e788c83b4 type=CONTAINER_STARTED_EVENT Mar 6 02:36:43.861297 systemd[1]: Started sshd@15-10.0.0.53:22-10.0.0.1:53862.service - OpenSSH per-connection server daemon (10.0.0.1:53862). Mar 6 02:36:44.059675 sshd[4564]: Accepted publickey for core from 10.0.0.1 port 53862 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:36:44.063279 sshd-session[4564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:36:44.092459 systemd-logind[1552]: New session 16 of user core. Mar 6 02:36:44.125845 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 6 02:36:44.572447 sshd[4567]: Connection closed by 10.0.0.1 port 53862 Mar 6 02:36:44.574445 sshd-session[4564]: pam_unix(sshd:session): session closed for user core Mar 6 02:36:44.587392 systemd[1]: sshd@15-10.0.0.53:22-10.0.0.1:53862.service: Deactivated successfully. Mar 6 02:36:44.617713 systemd[1]: session-16.scope: Deactivated successfully. Mar 6 02:36:44.631387 systemd-logind[1552]: Session 16 logged out. Waiting for processes to exit. Mar 6 02:36:44.653640 systemd-logind[1552]: Removed session 16. 
Mar 6 02:36:49.664685 systemd[1]: Started sshd@16-10.0.0.53:22-10.0.0.1:53872.service - OpenSSH per-connection server daemon (10.0.0.1:53872). Mar 6 02:36:50.138216 sshd[4585]: Accepted publickey for core from 10.0.0.1 port 53872 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:36:50.142452 sshd-session[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:36:50.171461 systemd-logind[1552]: New session 17 of user core. Mar 6 02:36:50.185870 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 6 02:36:50.647463 sshd[4588]: Connection closed by 10.0.0.1 port 53872 Mar 6 02:36:50.648398 sshd-session[4585]: pam_unix(sshd:session): session closed for user core Mar 6 02:36:50.661397 systemd[1]: sshd@16-10.0.0.53:22-10.0.0.1:53872.service: Deactivated successfully. Mar 6 02:36:50.668506 systemd[1]: session-17.scope: Deactivated successfully. Mar 6 02:36:50.682961 systemd-logind[1552]: Session 17 logged out. Waiting for processes to exit. Mar 6 02:36:50.690390 systemd-logind[1552]: Removed session 17. Mar 6 02:36:55.682195 systemd[1]: Started sshd@17-10.0.0.53:22-10.0.0.1:60700.service - OpenSSH per-connection server daemon (10.0.0.1:60700). Mar 6 02:36:55.890909 sshd[4602]: Accepted publickey for core from 10.0.0.1 port 60700 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:36:55.895439 sshd-session[4602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:36:55.977456 systemd-logind[1552]: New session 18 of user core. Mar 6 02:36:55.988972 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 6 02:36:56.520609 sshd[4605]: Connection closed by 10.0.0.1 port 60700 Mar 6 02:36:56.519748 sshd-session[4602]: pam_unix(sshd:session): session closed for user core Mar 6 02:36:56.566376 systemd[1]: sshd@17-10.0.0.53:22-10.0.0.1:60700.service: Deactivated successfully. 
Mar 6 02:36:56.575642 systemd[1]: session-18.scope: Deactivated successfully. Mar 6 02:36:56.584334 systemd-logind[1552]: Session 18 logged out. Waiting for processes to exit. Mar 6 02:36:56.603865 systemd[1]: Started sshd@18-10.0.0.53:22-10.0.0.1:60706.service - OpenSSH per-connection server daemon (10.0.0.1:60706). Mar 6 02:36:56.616768 systemd-logind[1552]: Removed session 18. Mar 6 02:36:56.868713 sshd[4620]: Accepted publickey for core from 10.0.0.1 port 60706 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:36:56.879604 sshd-session[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:36:56.915402 systemd-logind[1552]: New session 19 of user core. Mar 6 02:36:56.938381 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 6 02:36:57.529136 sshd[4623]: Connection closed by 10.0.0.1 port 60706 Mar 6 02:36:57.529400 sshd-session[4620]: pam_unix(sshd:session): session closed for user core Mar 6 02:36:57.561871 systemd[1]: sshd@18-10.0.0.53:22-10.0.0.1:60706.service: Deactivated successfully. Mar 6 02:36:57.581916 systemd[1]: session-19.scope: Deactivated successfully. Mar 6 02:36:57.596794 systemd-logind[1552]: Session 19 logged out. Waiting for processes to exit. Mar 6 02:36:57.613954 systemd[1]: Started sshd@19-10.0.0.53:22-10.0.0.1:60720.service - OpenSSH per-connection server daemon (10.0.0.1:60720). Mar 6 02:36:57.624867 systemd-logind[1552]: Removed session 19. Mar 6 02:36:57.759208 sshd[4635]: Accepted publickey for core from 10.0.0.1 port 60720 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:36:57.761290 sshd-session[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:36:57.790858 systemd-logind[1552]: New session 20 of user core. Mar 6 02:36:57.803429 systemd[1]: Started session-20.scope - Session 20 of User core. 
Mar 6 02:36:58.244608 sshd[4638]: Connection closed by 10.0.0.1 port 60720 Mar 6 02:36:58.246445 sshd-session[4635]: pam_unix(sshd:session): session closed for user core Mar 6 02:36:58.273850 systemd[1]: sshd@19-10.0.0.53:22-10.0.0.1:60720.service: Deactivated successfully. Mar 6 02:36:58.285894 systemd[1]: session-20.scope: Deactivated successfully. Mar 6 02:36:58.294383 systemd-logind[1552]: Session 20 logged out. Waiting for processes to exit. Mar 6 02:36:58.320963 systemd-logind[1552]: Removed session 20. Mar 6 02:37:03.327810 systemd[1]: Started sshd@20-10.0.0.53:22-10.0.0.1:51504.service - OpenSSH per-connection server daemon (10.0.0.1:51504). Mar 6 02:37:03.857836 sshd[4654]: Accepted publickey for core from 10.0.0.1 port 51504 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:37:03.865684 sshd-session[4654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:03.984929 systemd-logind[1552]: New session 21 of user core. Mar 6 02:37:03.998719 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 6 02:37:04.769352 sshd[4657]: Connection closed by 10.0.0.1 port 51504 Mar 6 02:37:04.760613 sshd-session[4654]: pam_unix(sshd:session): session closed for user core Mar 6 02:37:04.812881 systemd[1]: sshd@20-10.0.0.53:22-10.0.0.1:51504.service: Deactivated successfully. Mar 6 02:37:04.841637 systemd[1]: session-21.scope: Deactivated successfully. Mar 6 02:37:04.856255 systemd-logind[1552]: Session 21 logged out. Waiting for processes to exit. Mar 6 02:37:04.866739 systemd-logind[1552]: Removed session 21. Mar 6 02:37:09.805725 systemd[1]: Started sshd@21-10.0.0.53:22-10.0.0.1:51512.service - OpenSSH per-connection server daemon (10.0.0.1:51512). 
Mar 6 02:37:09.990754 sshd[4670]: Accepted publickey for core from 10.0.0.1 port 51512 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:37:09.993385 sshd-session[4670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:10.032705 systemd-logind[1552]: New session 22 of user core. Mar 6 02:37:10.054741 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 6 02:37:10.585354 sshd[4673]: Connection closed by 10.0.0.1 port 51512 Mar 6 02:37:10.588656 sshd-session[4670]: pam_unix(sshd:session): session closed for user core Mar 6 02:37:10.618249 systemd[1]: sshd@21-10.0.0.53:22-10.0.0.1:51512.service: Deactivated successfully. Mar 6 02:37:10.624242 systemd[1]: session-22.scope: Deactivated successfully. Mar 6 02:37:10.635359 systemd-logind[1552]: Session 22 logged out. Waiting for processes to exit. Mar 6 02:37:10.650695 systemd-logind[1552]: Removed session 22. Mar 6 02:37:15.629814 systemd[1]: Started sshd@22-10.0.0.53:22-10.0.0.1:48382.service - OpenSSH per-connection server daemon (10.0.0.1:48382). Mar 6 02:37:15.830358 sshd[4687]: Accepted publickey for core from 10.0.0.1 port 48382 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:37:15.832965 sshd-session[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:15.867931 systemd-logind[1552]: New session 23 of user core. Mar 6 02:37:15.878790 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 6 02:37:16.364695 sshd[4690]: Connection closed by 10.0.0.1 port 48382 Mar 6 02:37:16.364922 sshd-session[4687]: pam_unix(sshd:session): session closed for user core Mar 6 02:37:16.379889 systemd[1]: sshd@22-10.0.0.53:22-10.0.0.1:48382.service: Deactivated successfully. Mar 6 02:37:16.384244 systemd[1]: session-23.scope: Deactivated successfully. Mar 6 02:37:16.388727 systemd-logind[1552]: Session 23 logged out. Waiting for processes to exit. 
Mar 6 02:37:16.399292 systemd-logind[1552]: Removed session 23. Mar 6 02:37:19.642422 kubelet[2878]: E0306 02:37:19.638839 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:37:22.451799 systemd[1]: Started sshd@23-10.0.0.53:22-10.0.0.1:48396.service - OpenSSH per-connection server daemon (10.0.0.1:48396). Mar 6 02:37:22.872240 kubelet[2878]: E0306 02:37:22.871318 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:37:23.056141 sshd[4705]: Accepted publickey for core from 10.0.0.1 port 48396 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:37:23.066834 sshd-session[4705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:23.135424 systemd-logind[1552]: New session 24 of user core. Mar 6 02:37:23.165724 systemd[1]: Started session-24.scope - Session 24 of User core. 
Mar 6 02:37:25.832976 containerd[1590]: time="2026-03-06T02:37:25.813425566Z" level=warning msg="container event discarded" container=1672df90b62f128e47db03c4fda6af6afbce6de11366e586ed1e7bc9a861d2d9 type=CONTAINER_CREATED_EVENT Mar 6 02:37:25.832976 containerd[1590]: time="2026-03-06T02:37:25.813977287Z" level=warning msg="container event discarded" container=1672df90b62f128e47db03c4fda6af6afbce6de11366e586ed1e7bc9a861d2d9 type=CONTAINER_STARTED_EVENT Mar 6 02:37:26.141330 kubelet[2878]: E0306 02:37:26.135909 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:37:26.153423 containerd[1590]: time="2026-03-06T02:37:26.146215989Z" level=warning msg="container event discarded" container=2d9e541e16cc1812c9fcc50f5e8619e9a33cb5c2556451c0e60344d937e82c76 type=CONTAINER_CREATED_EVENT Mar 6 02:37:27.349726 kubelet[2878]: E0306 02:37:27.341923 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:37:27.351717 containerd[1590]: time="2026-03-06T02:37:27.351259030Z" level=warning msg="container event discarded" container=bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1 type=CONTAINER_CREATED_EVENT Mar 6 02:37:27.354869 containerd[1590]: time="2026-03-06T02:37:27.354831755Z" level=warning msg="container event discarded" container=bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1 type=CONTAINER_STARTED_EVENT Mar 6 02:37:27.453328 sshd[4708]: Connection closed by 10.0.0.1 port 48396 Mar 6 02:37:27.458957 sshd-session[4705]: pam_unix(sshd:session): session closed for user core Mar 6 02:37:27.494265 systemd[1]: sshd@23-10.0.0.53:22-10.0.0.1:48396.service: Deactivated successfully. Mar 6 02:37:27.509372 systemd[1]: session-24.scope: Deactivated successfully. 
Mar 6 02:37:27.538390 systemd-logind[1552]: Session 24 logged out. Waiting for processes to exit. Mar 6 02:37:27.549425 systemd-logind[1552]: Removed session 24. Mar 6 02:37:27.748211 containerd[1590]: time="2026-03-06T02:37:27.746944028Z" level=warning msg="container event discarded" container=082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88 type=CONTAINER_CREATED_EVENT Mar 6 02:37:27.748211 containerd[1590]: time="2026-03-06T02:37:27.747971377Z" level=warning msg="container event discarded" container=082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88 type=CONTAINER_STARTED_EVENT Mar 6 02:37:28.532376 containerd[1590]: time="2026-03-06T02:37:28.532266144Z" level=warning msg="container event discarded" container=2d9e541e16cc1812c9fcc50f5e8619e9a33cb5c2556451c0e60344d937e82c76 type=CONTAINER_STARTED_EVENT Mar 6 02:37:32.530466 systemd[1]: Started sshd@24-10.0.0.53:22-10.0.0.1:46370.service - OpenSSH per-connection server daemon (10.0.0.1:46370). Mar 6 02:37:32.855723 sshd[4723]: Accepted publickey for core from 10.0.0.1 port 46370 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:37:32.868539 sshd-session[4723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:32.955366 systemd-logind[1552]: New session 25 of user core. Mar 6 02:37:32.976229 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 6 02:37:34.670479 sshd[4726]: Connection closed by 10.0.0.1 port 46370 Mar 6 02:37:34.678433 sshd-session[4723]: pam_unix(sshd:session): session closed for user core Mar 6 02:37:34.700535 systemd[1]: sshd@24-10.0.0.53:22-10.0.0.1:46370.service: Deactivated successfully. Mar 6 02:37:34.732374 systemd[1]: session-25.scope: Deactivated successfully. Mar 6 02:37:34.748527 systemd-logind[1552]: Session 25 logged out. Waiting for processes to exit. Mar 6 02:37:34.774700 systemd-logind[1552]: Removed session 25. 
Mar 6 02:37:38.164964 kubelet[2878]: E0306 02:37:38.162750 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:37:39.759348 systemd[1]: Started sshd@25-10.0.0.53:22-10.0.0.1:46382.service - OpenSSH per-connection server daemon (10.0.0.1:46382). Mar 6 02:37:40.017295 sshd[4740]: Accepted publickey for core from 10.0.0.1 port 46382 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:37:40.020933 sshd-session[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:40.065293 systemd-logind[1552]: New session 26 of user core. Mar 6 02:37:40.108796 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 6 02:37:41.041235 sshd[4743]: Connection closed by 10.0.0.1 port 46382 Mar 6 02:37:41.043813 sshd-session[4740]: pam_unix(sshd:session): session closed for user core Mar 6 02:37:41.062509 systemd[1]: sshd@25-10.0.0.53:22-10.0.0.1:46382.service: Deactivated successfully. Mar 6 02:37:41.073944 systemd[1]: session-26.scope: Deactivated successfully. Mar 6 02:37:41.099521 systemd-logind[1552]: Session 26 logged out. Waiting for processes to exit. Mar 6 02:37:41.126414 systemd-logind[1552]: Removed session 26. Mar 6 02:37:44.177888 kubelet[2878]: E0306 02:37:44.171327 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:37:46.094899 systemd[1]: Started sshd@26-10.0.0.53:22-10.0.0.1:40426.service - OpenSSH per-connection server daemon (10.0.0.1:40426). 
Mar 6 02:37:46.334811 sshd[4757]: Accepted publickey for core from 10.0.0.1 port 40426 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:37:46.338888 sshd-session[4757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:46.370147 systemd-logind[1552]: New session 27 of user core. Mar 6 02:37:46.403543 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 6 02:37:46.778774 sshd[4760]: Connection closed by 10.0.0.1 port 40426 Mar 6 02:37:46.781202 sshd-session[4757]: pam_unix(sshd:session): session closed for user core Mar 6 02:37:46.797934 systemd[1]: sshd@26-10.0.0.53:22-10.0.0.1:40426.service: Deactivated successfully. Mar 6 02:37:46.806922 systemd-logind[1552]: Session 27 logged out. Waiting for processes to exit. Mar 6 02:37:46.823439 systemd[1]: session-27.scope: Deactivated successfully. Mar 6 02:37:46.833792 systemd-logind[1552]: Removed session 27. Mar 6 02:37:48.132899 kubelet[2878]: E0306 02:37:48.129534 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:37:51.853300 systemd[1]: Started sshd@27-10.0.0.53:22-10.0.0.1:40428.service - OpenSSH per-connection server daemon (10.0.0.1:40428). Mar 6 02:37:52.212851 sshd[4773]: Accepted publickey for core from 10.0.0.1 port 40428 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:37:52.228417 sshd-session[4773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:52.256829 systemd-logind[1552]: New session 28 of user core. Mar 6 02:37:52.287478 systemd[1]: Started session-28.scope - Session 28 of User core. 
Mar 6 02:37:52.780161 sshd[4776]: Connection closed by 10.0.0.1 port 40428 Mar 6 02:37:52.783163 sshd-session[4773]: pam_unix(sshd:session): session closed for user core Mar 6 02:37:52.814713 systemd[1]: sshd@27-10.0.0.53:22-10.0.0.1:40428.service: Deactivated successfully. Mar 6 02:37:52.819145 systemd[1]: session-28.scope: Deactivated successfully. Mar 6 02:37:52.822905 systemd-logind[1552]: Session 28 logged out. Waiting for processes to exit. Mar 6 02:37:52.834133 systemd-logind[1552]: Removed session 28. Mar 6 02:37:57.825327 systemd[1]: Started sshd@28-10.0.0.53:22-10.0.0.1:51172.service - OpenSSH per-connection server daemon (10.0.0.1:51172). Mar 6 02:37:57.996660 sshd[4789]: Accepted publickey for core from 10.0.0.1 port 51172 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:37:58.023344 sshd-session[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:58.042416 systemd-logind[1552]: New session 29 of user core. Mar 6 02:37:58.046797 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 6 02:37:58.427814 sshd[4792]: Connection closed by 10.0.0.1 port 51172 Mar 6 02:37:58.428525 sshd-session[4789]: pam_unix(sshd:session): session closed for user core Mar 6 02:37:58.439508 systemd[1]: sshd@28-10.0.0.53:22-10.0.0.1:51172.service: Deactivated successfully. Mar 6 02:37:58.450208 systemd[1]: session-29.scope: Deactivated successfully. Mar 6 02:37:58.455115 systemd-logind[1552]: Session 29 logged out. Waiting for processes to exit. Mar 6 02:37:58.464736 systemd-logind[1552]: Removed session 29. Mar 6 02:37:59.128409 kubelet[2878]: E0306 02:37:59.127857 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:03.482373 systemd[1]: Started sshd@29-10.0.0.53:22-10.0.0.1:34578.service - OpenSSH per-connection server daemon (10.0.0.1:34578). 
Mar 6 02:38:03.630970 sshd[4808]: Accepted publickey for core from 10.0.0.1 port 34578 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:38:03.633699 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:38:03.650129 systemd-logind[1552]: New session 30 of user core.
Mar 6 02:38:03.661766 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 6 02:38:03.931466 sshd[4811]: Connection closed by 10.0.0.1 port 34578
Mar 6 02:38:03.933164 sshd-session[4808]: pam_unix(sshd:session): session closed for user core
Mar 6 02:38:03.945256 systemd[1]: sshd@29-10.0.0.53:22-10.0.0.1:34578.service: Deactivated successfully.
Mar 6 02:38:03.952297 systemd[1]: session-30.scope: Deactivated successfully.
Mar 6 02:38:03.960194 systemd-logind[1552]: Session 30 logged out. Waiting for processes to exit.
Mar 6 02:38:03.967769 systemd-logind[1552]: Removed session 30.
Mar 6 02:38:08.958800 systemd[1]: Started sshd@30-10.0.0.53:22-10.0.0.1:34580.service - OpenSSH per-connection server daemon (10.0.0.1:34580).
Mar 6 02:38:09.143244 sshd[4825]: Accepted publickey for core from 10.0.0.1 port 34580 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:38:09.145809 sshd-session[4825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:38:09.174239 systemd-logind[1552]: New session 31 of user core.
Mar 6 02:38:09.199209 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 6 02:38:09.543212 sshd[4828]: Connection closed by 10.0.0.1 port 34580
Mar 6 02:38:09.543775 sshd-session[4825]: pam_unix(sshd:session): session closed for user core
Mar 6 02:38:09.552314 systemd[1]: sshd@30-10.0.0.53:22-10.0.0.1:34580.service: Deactivated successfully.
Mar 6 02:38:09.557758 systemd[1]: session-31.scope: Deactivated successfully.
Mar 6 02:38:09.564407 systemd-logind[1552]: Session 31 logged out. Waiting for processes to exit.
Mar 6 02:38:09.567819 systemd-logind[1552]: Removed session 31.
Mar 6 02:38:14.591226 systemd[1]: Started sshd@31-10.0.0.53:22-10.0.0.1:43712.service - OpenSSH per-connection server daemon (10.0.0.1:43712).
Mar 6 02:38:14.827324 sshd[4842]: Accepted publickey for core from 10.0.0.1 port 43712 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:38:14.829339 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:38:14.857778 systemd-logind[1552]: New session 32 of user core.
Mar 6 02:38:14.872954 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 6 02:38:15.164178 sshd[4845]: Connection closed by 10.0.0.1 port 43712
Mar 6 02:38:15.164538 sshd-session[4842]: pam_unix(sshd:session): session closed for user core
Mar 6 02:38:15.186794 systemd[1]: sshd@31-10.0.0.53:22-10.0.0.1:43712.service: Deactivated successfully.
Mar 6 02:38:15.194245 systemd[1]: session-32.scope: Deactivated successfully.
Mar 6 02:38:15.201509 systemd-logind[1552]: Session 32 logged out. Waiting for processes to exit.
Mar 6 02:38:15.208251 systemd[1]: Started sshd@32-10.0.0.53:22-10.0.0.1:43724.service - OpenSSH per-connection server daemon (10.0.0.1:43724).
Mar 6 02:38:15.214783 systemd-logind[1552]: Removed session 32.
Mar 6 02:38:15.335197 sshd[4858]: Accepted publickey for core from 10.0.0.1 port 43724 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:38:15.338242 sshd-session[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:38:15.364561 systemd-logind[1552]: New session 33 of user core.
Mar 6 02:38:15.374825 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 6 02:38:16.783828 sshd[4861]: Connection closed by 10.0.0.1 port 43724
Mar 6 02:38:16.783928 sshd-session[4858]: pam_unix(sshd:session): session closed for user core
Mar 6 02:38:16.836476 systemd[1]: sshd@32-10.0.0.53:22-10.0.0.1:43724.service: Deactivated successfully.
Mar 6 02:38:16.846466 systemd[1]: session-33.scope: Deactivated successfully.
Mar 6 02:38:16.852221 systemd-logind[1552]: Session 33 logged out. Waiting for processes to exit.
Mar 6 02:38:16.862384 systemd[1]: Started sshd@33-10.0.0.53:22-10.0.0.1:43732.service - OpenSSH per-connection server daemon (10.0.0.1:43732).
Mar 6 02:38:16.866384 systemd-logind[1552]: Removed session 33.
Mar 6 02:38:17.091359 sshd[4873]: Accepted publickey for core from 10.0.0.1 port 43732 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:38:17.113496 sshd-session[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:38:17.147928 systemd-logind[1552]: New session 34 of user core.
Mar 6 02:38:17.174911 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 6 02:38:19.574172 sshd[4876]: Connection closed by 10.0.0.1 port 43732
Mar 6 02:38:19.574236 sshd-session[4873]: pam_unix(sshd:session): session closed for user core
Mar 6 02:38:19.599938 systemd[1]: sshd@33-10.0.0.53:22-10.0.0.1:43732.service: Deactivated successfully.
Mar 6 02:38:19.617907 systemd[1]: session-34.scope: Deactivated successfully.
Mar 6 02:38:19.618477 systemd[1]: session-34.scope: Consumed 1.005s CPU time, 40M memory peak.
Mar 6 02:38:19.619774 systemd-logind[1552]: Session 34 logged out. Waiting for processes to exit.
Mar 6 02:38:19.630548 systemd[1]: Started sshd@34-10.0.0.53:22-10.0.0.1:43748.service - OpenSSH per-connection server daemon (10.0.0.1:43748).
Mar 6 02:38:19.633921 systemd-logind[1552]: Removed session 34.
Mar 6 02:38:19.850429 sshd[4896]: Accepted publickey for core from 10.0.0.1 port 43748 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:38:19.854593 sshd-session[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:38:19.884462 systemd-logind[1552]: New session 35 of user core.
Mar 6 02:38:19.907592 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 6 02:38:20.633603 sshd[4899]: Connection closed by 10.0.0.1 port 43748
Mar 6 02:38:20.636564 sshd-session[4896]: pam_unix(sshd:session): session closed for user core
Mar 6 02:38:20.664575 systemd[1]: sshd@34-10.0.0.53:22-10.0.0.1:43748.service: Deactivated successfully.
Mar 6 02:38:20.675250 systemd[1]: session-35.scope: Deactivated successfully.
Mar 6 02:38:20.691947 systemd-logind[1552]: Session 35 logged out. Waiting for processes to exit.
Mar 6 02:38:20.739383 systemd[1]: Started sshd@35-10.0.0.53:22-10.0.0.1:43754.service - OpenSSH per-connection server daemon (10.0.0.1:43754).
Mar 6 02:38:20.760390 systemd-logind[1552]: Removed session 35.
Mar 6 02:38:20.918877 sshd[4913]: Accepted publickey for core from 10.0.0.1 port 43754 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:38:20.926203 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:38:20.951825 systemd-logind[1552]: New session 36 of user core.
Mar 6 02:38:20.966511 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 6 02:38:21.388933 sshd[4916]: Connection closed by 10.0.0.1 port 43754
Mar 6 02:38:21.389554 sshd-session[4913]: pam_unix(sshd:session): session closed for user core
Mar 6 02:38:21.396805 systemd[1]: sshd@35-10.0.0.53:22-10.0.0.1:43754.service: Deactivated successfully.
Mar 6 02:38:21.411355 systemd[1]: session-36.scope: Deactivated successfully.
Mar 6 02:38:21.440896 systemd-logind[1552]: Session 36 logged out. Waiting for processes to exit.
Mar 6 02:38:21.445417 systemd-logind[1552]: Removed session 36.
Mar 6 02:38:22.095464 containerd[1590]: time="2026-03-06T02:38:22.095339115Z" level=warning msg="container event discarded" container=be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6 type=CONTAINER_CREATED_EVENT
Mar 6 02:38:23.156377 containerd[1590]: time="2026-03-06T02:38:23.155555774Z" level=warning msg="container event discarded" container=be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6 type=CONTAINER_STARTED_EVENT
Mar 6 02:38:24.039452 containerd[1590]: time="2026-03-06T02:38:24.037394662Z" level=warning msg="container event discarded" container=be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6 type=CONTAINER_STOPPED_EVENT
Mar 6 02:38:24.842261 containerd[1590]: time="2026-03-06T02:38:24.841827408Z" level=warning msg="container event discarded" container=c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74 type=CONTAINER_CREATED_EVENT
Mar 6 02:38:25.778196 containerd[1590]: time="2026-03-06T02:38:25.777622808Z" level=warning msg="container event discarded" container=c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74 type=CONTAINER_STARTED_EVENT
Mar 6 02:38:26.136917 kubelet[2878]: E0306 02:38:26.135259 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:38:26.451525 systemd[1]: Started sshd@36-10.0.0.53:22-10.0.0.1:39404.service - OpenSSH per-connection server daemon (10.0.0.1:39404).
Mar 6 02:38:26.483202 containerd[1590]: time="2026-03-06T02:38:26.483130637Z" level=warning msg="container event discarded" container=c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74 type=CONTAINER_STOPPED_EVENT
Mar 6 02:38:26.637275 sshd[4930]: Accepted publickey for core from 10.0.0.1 port 39404 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:38:26.643444 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:38:26.681138 systemd-logind[1552]: New session 37 of user core.
Mar 6 02:38:26.699886 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 6 02:38:27.001511 containerd[1590]: time="2026-03-06T02:38:27.001425610Z" level=warning msg="container event discarded" container=ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc type=CONTAINER_CREATED_EVENT
Mar 6 02:38:27.169771 sshd[4933]: Connection closed by 10.0.0.1 port 39404
Mar 6 02:38:27.169591 sshd-session[4930]: pam_unix(sshd:session): session closed for user core
Mar 6 02:38:27.179420 systemd[1]: sshd@36-10.0.0.53:22-10.0.0.1:39404.service: Deactivated successfully.
Mar 6 02:38:27.189296 systemd[1]: session-37.scope: Deactivated successfully.
Mar 6 02:38:27.198325 systemd-logind[1552]: Session 37 logged out. Waiting for processes to exit.
Mar 6 02:38:27.214243 systemd-logind[1552]: Removed session 37.
Mar 6 02:38:28.765264 containerd[1590]: time="2026-03-06T02:38:28.764343232Z" level=warning msg="container event discarded" container=ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc type=CONTAINER_STARTED_EVENT
Mar 6 02:38:29.974339 containerd[1590]: time="2026-03-06T02:38:29.974262812Z" level=warning msg="container event discarded" container=ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc type=CONTAINER_STOPPED_EVENT
Mar 6 02:38:30.483259 containerd[1590]: time="2026-03-06T02:38:30.480227285Z" level=warning msg="container event discarded" container=80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c type=CONTAINER_CREATED_EVENT
Mar 6 02:38:31.404557 containerd[1590]: time="2026-03-06T02:38:31.400971184Z" level=warning msg="container event discarded" container=80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c type=CONTAINER_STARTED_EVENT
Mar 6 02:38:32.092153 containerd[1590]: time="2026-03-06T02:38:32.091906821Z" level=warning msg="container event discarded" container=80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c type=CONTAINER_STOPPED_EVENT
Mar 6 02:38:32.200327 systemd[1]: Started sshd@37-10.0.0.53:22-10.0.0.1:47316.service - OpenSSH per-connection server daemon (10.0.0.1:47316).
Mar 6 02:38:32.471304 sshd[4949]: Accepted publickey for core from 10.0.0.1 port 47316 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:38:32.485397 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:38:32.575370 systemd-logind[1552]: New session 38 of user core.
Mar 6 02:38:32.588162 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 6 02:38:32.633892 containerd[1590]: time="2026-03-06T02:38:32.633446876Z" level=warning msg="container event discarded" container=2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641 type=CONTAINER_CREATED_EVENT
Mar 6 02:38:33.115237 sshd[4952]: Connection closed by 10.0.0.1 port 47316
Mar 6 02:38:33.117388 sshd-session[4949]: pam_unix(sshd:session): session closed for user core
Mar 6 02:38:33.126753 systemd[1]: sshd@37-10.0.0.53:22-10.0.0.1:47316.service: Deactivated successfully.
Mar 6 02:38:33.131476 systemd[1]: session-38.scope: Deactivated successfully.
Mar 6 02:38:33.137800 systemd-logind[1552]: Session 38 logged out. Waiting for processes to exit.
Mar 6 02:38:33.142311 systemd-logind[1552]: Removed session 38.
Mar 6 02:38:33.410619 containerd[1590]: time="2026-03-06T02:38:33.409634651Z" level=warning msg="container event discarded" container=2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641 type=CONTAINER_STARTED_EVENT
Mar 6 02:38:34.224485 containerd[1590]: time="2026-03-06T02:38:34.224372418Z" level=warning msg="container event discarded" container=d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0 type=CONTAINER_CREATED_EVENT
Mar 6 02:38:35.362916 containerd[1590]: time="2026-03-06T02:38:35.361545607Z" level=warning msg="container event discarded" container=d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0 type=CONTAINER_STARTED_EVENT
Mar 6 02:38:38.131269 kubelet[2878]: E0306 02:38:38.129185 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:38:38.131269 kubelet[2878]: E0306 02:38:38.129436 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:38:38.140572 systemd[1]: Started sshd@38-10.0.0.53:22-10.0.0.1:47318.service - OpenSSH per-connection server daemon (10.0.0.1:47318).
Mar 6 02:38:38.321608 sshd[4966]: Accepted publickey for core from 10.0.0.1 port 47318 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:38:38.329293 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:38:38.352225 systemd-logind[1552]: New session 39 of user core.
Mar 6 02:38:38.360466 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 6 02:38:38.699496 sshd[4969]: Connection closed by 10.0.0.1 port 47318
Mar 6 02:38:38.701391 sshd-session[4966]: pam_unix(sshd:session): session closed for user core
Mar 6 02:38:38.723772 systemd[1]: sshd@38-10.0.0.53:22-10.0.0.1:47318.service: Deactivated successfully.
Mar 6 02:38:38.729278 systemd[1]: session-39.scope: Deactivated successfully.
Mar 6 02:38:38.731482 systemd-logind[1552]: Session 39 logged out. Waiting for processes to exit.
Mar 6 02:38:38.745367 systemd-logind[1552]: Removed session 39.
Mar 6 02:38:42.141761 kubelet[2878]: E0306 02:38:42.141532 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:38:43.729279 systemd[1]: Started sshd@39-10.0.0.53:22-10.0.0.1:48958.service - OpenSSH per-connection server daemon (10.0.0.1:48958).
Mar 6 02:38:43.869790 sshd[4986]: Accepted publickey for core from 10.0.0.1 port 48958 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:38:43.873240 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:38:43.894549 systemd-logind[1552]: New session 40 of user core.
Mar 6 02:38:43.907347 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 6 02:38:44.183269 sshd[4990]: Connection closed by 10.0.0.1 port 48958
Mar 6 02:38:44.183960 sshd-session[4986]: pam_unix(sshd:session): session closed for user core
Mar 6 02:38:44.195276 systemd[1]: sshd@39-10.0.0.53:22-10.0.0.1:48958.service: Deactivated successfully.
Mar 6 02:38:44.201612 systemd[1]: session-40.scope: Deactivated successfully.
Mar 6 02:38:44.219346 systemd-logind[1552]: Session 40 logged out. Waiting for processes to exit.
Mar 6 02:38:44.225843 systemd-logind[1552]: Removed session 40.
Mar 6 02:38:49.219941 systemd[1]: Started sshd@40-10.0.0.53:22-10.0.0.1:48968.service - OpenSSH per-connection server daemon (10.0.0.1:48968).
Mar 6 02:38:49.369676 sshd[5006]: Accepted publickey for core from 10.0.0.1 port 48968 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:38:49.373925 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:38:49.396373 systemd-logind[1552]: New session 41 of user core.
Mar 6 02:38:49.408490 systemd[1]: Started session-41.scope - Session 41 of User core.
Mar 6 02:38:49.817824 sshd[5009]: Connection closed by 10.0.0.1 port 48968
Mar 6 02:38:49.822401 sshd-session[5006]: pam_unix(sshd:session): session closed for user core
Mar 6 02:38:49.834304 systemd[1]: sshd@40-10.0.0.53:22-10.0.0.1:48968.service: Deactivated successfully.
Mar 6 02:38:49.840218 systemd[1]: session-41.scope: Deactivated successfully.
Mar 6 02:38:49.845192 systemd-logind[1552]: Session 41 logged out. Waiting for processes to exit.
Mar 6 02:38:49.863551 systemd-logind[1552]: Removed session 41.
Mar 6 02:38:52.131310 kubelet[2878]: E0306 02:38:52.130659 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:38:54.129882 kubelet[2878]: E0306 02:38:54.128397 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:38:54.846463 systemd[1]: Started sshd@41-10.0.0.53:22-10.0.0.1:49708.service - OpenSSH per-connection server daemon (10.0.0.1:49708).
Mar 6 02:38:54.988159 sshd[5022]: Accepted publickey for core from 10.0.0.1 port 49708 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:38:54.991201 sshd-session[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:38:55.011938 systemd-logind[1552]: New session 42 of user core.
Mar 6 02:38:55.029323 systemd[1]: Started session-42.scope - Session 42 of User core.
Mar 6 02:38:55.319461 sshd[5025]: Connection closed by 10.0.0.1 port 49708
Mar 6 02:38:55.318487 sshd-session[5022]: pam_unix(sshd:session): session closed for user core
Mar 6 02:38:55.347818 systemd[1]: sshd@41-10.0.0.53:22-10.0.0.1:49708.service: Deactivated successfully.
Mar 6 02:38:55.352851 systemd[1]: session-42.scope: Deactivated successfully.
Mar 6 02:38:55.358448 systemd-logind[1552]: Session 42 logged out. Waiting for processes to exit.
Mar 6 02:38:55.362552 systemd[1]: Started sshd@42-10.0.0.53:22-10.0.0.1:49710.service - OpenSSH per-connection server daemon (10.0.0.1:49710).
Mar 6 02:38:55.373381 systemd-logind[1552]: Removed session 42.
Mar 6 02:38:55.518236 sshd[5039]: Accepted publickey for core from 10.0.0.1 port 49710 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU
Mar 6 02:38:55.530455 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:38:55.553824 systemd-logind[1552]: New session 43 of user core.
Mar 6 02:38:55.568836 systemd[1]: Started session-43.scope - Session 43 of User core.
Mar 6 02:38:58.066292 containerd[1590]: time="2026-03-06T02:38:58.064492209Z" level=info msg="StopContainer for \"d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0\" with timeout 30 (s)"
Mar 6 02:38:58.106232 containerd[1590]: time="2026-03-06T02:38:58.106163768Z" level=info msg="Stop container \"d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0\" with signal terminated"
Mar 6 02:38:58.222378 systemd[1]: cri-containerd-d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0.scope: Deactivated successfully.
Mar 6 02:38:58.222977 systemd[1]: cri-containerd-d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0.scope: Consumed 4.008s CPU time, 28.7M memory peak, 4K written to disk.
Mar 6 02:38:58.232515 containerd[1590]: time="2026-03-06T02:38:58.231656984Z" level=info msg="received container exit event container_id:\"d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0\" id:\"d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0\" pid:3552 exited_at:{seconds:1772764738 nanos:230539441}"
Mar 6 02:38:58.268642 containerd[1590]: time="2026-03-06T02:38:58.268271621Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 6 02:38:58.293149 containerd[1590]: time="2026-03-06T02:38:58.292874423Z" level=info msg="StopContainer for \"2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641\" with timeout 2 (s)"
Mar 6 02:38:58.296405 containerd[1590]: time="2026-03-06T02:38:58.296369131Z" level=info msg="Stop container \"2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641\" with signal terminated"
Mar 6 02:38:58.340640 systemd-networkd[1451]: lxc_health: Link DOWN
Mar 6 02:38:58.340654 systemd-networkd[1451]: lxc_health: Lost carrier
Mar 6 02:38:58.407585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0-rootfs.mount: Deactivated successfully.
Mar 6 02:38:58.444282 systemd[1]: cri-containerd-2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641.scope: Deactivated successfully.
Mar 6 02:38:58.445186 systemd[1]: cri-containerd-2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641.scope: Consumed 25.858s CPU time, 141.6M memory peak, 652K read from disk, 13.3M written to disk.
Mar 6 02:38:58.464907 containerd[1590]: time="2026-03-06T02:38:58.464409184Z" level=info msg="received container exit event container_id:\"2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641\" id:\"2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641\" pid:3475 exited_at:{seconds:1772764738 nanos:463589520}"
Mar 6 02:38:58.515850 containerd[1590]: time="2026-03-06T02:38:58.515652462Z" level=info msg="StopContainer for \"d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0\" returns successfully"
Mar 6 02:38:58.522949 containerd[1590]: time="2026-03-06T02:38:58.522704530Z" level=info msg="StopPodSandbox for \"082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88\""
Mar 6 02:38:58.546456 containerd[1590]: time="2026-03-06T02:38:58.546388722Z" level=info msg="Container to stop \"d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 02:38:58.579631 systemd[1]: cri-containerd-082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88.scope: Deactivated successfully.
Mar 6 02:38:58.588974 containerd[1590]: time="2026-03-06T02:38:58.588835735Z" level=info msg="received sandbox exit event container_id:\"082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88\" id:\"082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88\" exit_status:137 exited_at:{seconds:1772764738 nanos:587144637}" monitor_name=podsandbox
Mar 6 02:38:58.649895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641-rootfs.mount: Deactivated successfully.
Mar 6 02:38:58.715343 containerd[1590]: time="2026-03-06T02:38:58.712671106Z" level=info msg="StopContainer for \"2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641\" returns successfully"
Mar 6 02:38:58.715343 containerd[1590]: time="2026-03-06T02:38:58.714699319Z" level=info msg="StopPodSandbox for \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\""
Mar 6 02:38:58.715343 containerd[1590]: time="2026-03-06T02:38:58.714881610Z" level=info msg="Container to stop \"ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 02:38:58.715343 containerd[1590]: time="2026-03-06T02:38:58.714903841Z" level=info msg="Container to stop \"80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 02:38:58.715343 containerd[1590]: time="2026-03-06T02:38:58.714917867Z" level=info msg="Container to stop \"be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 02:38:58.715343 containerd[1590]: time="2026-03-06T02:38:58.714929680Z" level=info msg="Container to stop \"c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 02:38:58.715343 containerd[1590]: time="2026-03-06T02:38:58.714942193Z" level=info msg="Container to stop \"2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 02:38:58.785840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88-rootfs.mount: Deactivated successfully.
Mar 6 02:38:58.791434 containerd[1590]: time="2026-03-06T02:38:58.791319158Z" level=info msg="received sandbox exit event container_id:\"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" id:\"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" exit_status:137 exited_at:{seconds:1772764738 nanos:790547886}" monitor_name=podsandbox
Mar 6 02:38:58.791472 systemd[1]: cri-containerd-bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1.scope: Deactivated successfully.
Mar 6 02:38:58.818231 containerd[1590]: time="2026-03-06T02:38:58.815278116Z" level=info msg="shim disconnected" id=082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88 namespace=k8s.io
Mar 6 02:38:58.818231 containerd[1590]: time="2026-03-06T02:38:58.815411576Z" level=warning msg="cleaning up after shim disconnected" id=082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88 namespace=k8s.io
Mar 6 02:38:58.835588 containerd[1590]: time="2026-03-06T02:38:58.815429038Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 6 02:38:58.893240 containerd[1590]: time="2026-03-06T02:38:58.889951518Z" level=warning msg="container event discarded" container=8ba4e0af0c3b0a00d89f45d19609fa6b3c1bed1280663d0867416acfe5357f03 type=CONTAINER_CREATED_EVENT
Mar 6 02:38:58.898293 containerd[1590]: time="2026-03-06T02:38:58.895164246Z" level=warning msg="container event discarded" container=8ba4e0af0c3b0a00d89f45d19609fa6b3c1bed1280663d0867416acfe5357f03 type=CONTAINER_STARTED_EVENT
Mar 6 02:38:58.961629 containerd[1590]: time="2026-03-06T02:38:58.961487075Z" level=warning msg="container event discarded" container=58c5365e513444d9be941cce4082c31777a45698818b33e2decdc04e4427d532 type=CONTAINER_CREATED_EVENT
Mar 6 02:38:58.967664 containerd[1590]: time="2026-03-06T02:38:58.967622209Z" level=warning msg="container event discarded" container=58c5365e513444d9be941cce4082c31777a45698818b33e2decdc04e4427d532 type=CONTAINER_STARTED_EVENT
Mar 6 02:38:58.999893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1-rootfs.mount: Deactivated successfully.
Mar 6 02:38:59.032377 containerd[1590]: time="2026-03-06T02:38:59.032329451Z" level=info msg="shim disconnected" id=bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1 namespace=k8s.io
Mar 6 02:38:59.032691 containerd[1590]: time="2026-03-06T02:38:59.032662061Z" level=warning msg="cleaning up after shim disconnected" id=bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1 namespace=k8s.io
Mar 6 02:38:59.033199 containerd[1590]: time="2026-03-06T02:38:59.032967162Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 6 02:38:59.049952 containerd[1590]: time="2026-03-06T02:38:59.049667199Z" level=info msg="received sandbox container exit event sandbox_id:\"082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88\" exit_status:137 exited_at:{seconds:1772764738 nanos:587144637}" monitor_name=criService
Mar 6 02:38:59.056214 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88-shm.mount: Deactivated successfully.
Mar 6 02:38:59.056526 containerd[1590]: time="2026-03-06T02:38:59.056216508Z" level=info msg="TearDown network for sandbox \"082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88\" successfully"
Mar 6 02:38:59.056526 containerd[1590]: time="2026-03-06T02:38:59.056319210Z" level=info msg="StopPodSandbox for \"082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88\" returns successfully"
Mar 6 02:38:59.116147 containerd[1590]: time="2026-03-06T02:38:59.112482438Z" level=info msg="received sandbox container exit event sandbox_id:\"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" exit_status:137 exited_at:{seconds:1772764738 nanos:790547886}" monitor_name=criService
Mar 6 02:38:59.147638 containerd[1590]: time="2026-03-06T02:38:59.147344621Z" level=info msg="TearDown network for sandbox \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" successfully"
Mar 6 02:38:59.147638 containerd[1590]: time="2026-03-06T02:38:59.147480726Z" level=info msg="StopPodSandbox for \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" returns successfully"
Mar 6 02:38:59.200126 kubelet[2878]: I0306 02:38:59.199364 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6tbn\" (UniqueName: \"kubernetes.io/projected/946d5c4c-9c33-47b1-ba8f-6e5cce6555e3-kube-api-access-r6tbn\") pod \"946d5c4c-9c33-47b1-ba8f-6e5cce6555e3\" (UID: \"946d5c4c-9c33-47b1-ba8f-6e5cce6555e3\") "
Mar 6 02:38:59.200126 kubelet[2878]: I0306 02:38:59.199544 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/946d5c4c-9c33-47b1-ba8f-6e5cce6555e3-cilium-config-path\") pod \"946d5c4c-9c33-47b1-ba8f-6e5cce6555e3\" (UID: \"946d5c4c-9c33-47b1-ba8f-6e5cce6555e3\") "
Mar 6 02:38:59.211843 kubelet[2878]: I0306 02:38:59.211442 2878 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/946d5c4c-9c33-47b1-ba8f-6e5cce6555e3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "946d5c4c-9c33-47b1-ba8f-6e5cce6555e3" (UID: "946d5c4c-9c33-47b1-ba8f-6e5cce6555e3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 6 02:38:59.228458 kubelet[2878]: I0306 02:38:59.228402 2878 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946d5c4c-9c33-47b1-ba8f-6e5cce6555e3-kube-api-access-r6tbn" (OuterVolumeSpecName: "kube-api-access-r6tbn") pod "946d5c4c-9c33-47b1-ba8f-6e5cce6555e3" (UID: "946d5c4c-9c33-47b1-ba8f-6e5cce6555e3"). InnerVolumeSpecName "kube-api-access-r6tbn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 6 02:38:59.278906 containerd[1590]: time="2026-03-06T02:38:59.278471897Z" level=warning msg="container event discarded" container=45be5e464e760d7eed05eadde0c58f4a859e19ed4f8b3aa09cc8ea3c69e552de type=CONTAINER_CREATED_EVENT
Mar 6 02:38:59.305226 kubelet[2878]: I0306 02:38:59.304460 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rchl2\" (UniqueName: \"kubernetes.io/projected/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-kube-api-access-rchl2\") pod \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") "
Mar 6 02:38:59.305226 kubelet[2878]: I0306 02:38:59.304527 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-hostproc\") pod \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") "
Mar 6 02:38:59.305226 kubelet[2878]: I0306 02:38:59.304552 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-host-proc-sys-kernel\") pod \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") "
Mar 6 02:38:59.305226 kubelet[2878]: I0306 02:38:59.304580 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-clustermesh-secrets\") pod \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") "
Mar 6 02:38:59.305226 kubelet[2878]: I0306 02:38:59.304606 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-bpf-maps\") pod \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") "
Mar 6 02:38:59.305226 kubelet[2878]: I0306 02:38:59.304633 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-hubble-tls\") pod \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") "
Mar 6 02:38:59.305584 kubelet[2878]: I0306 02:38:59.304658 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-cilium-config-path\") pod \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") "
Mar 6 02:38:59.305584 kubelet[2878]: I0306 02:38:59.304678 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-host-proc-sys-net\") pod \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") "
Mar 6 02:38:59.305584 kubelet[2878]: I0306 02:38:59.304702 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-etc-cni-netd\") pod \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") "
Mar 6 02:38:59.305584 kubelet[2878]: I0306 02:38:59.304852 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-cni-path\") pod \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") "
Mar 6 02:38:59.305584 kubelet[2878]: I0306 02:38:59.304873 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-xtables-lock\") pod \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") "
Mar 6 02:38:59.305584 kubelet[2878]: I0306 02:38:59.304893 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-cilium-cgroup\") pod \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") "
Mar 6 02:38:59.308898 kubelet[2878]: I0306 02:38:59.304920 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-lib-modules\") pod \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") "
Mar 6 02:38:59.308898 kubelet[2878]: I0306 02:38:59.304941 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-cilium-run\") pod \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\" (UID: \"c00e6568-9ff9-41e6-94ab-9c2c36d856bc\") "
Mar 6 02:38:59.308898 kubelet[2878]: I0306 02:38:59.305172 2878
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r6tbn\" (UniqueName: \"kubernetes.io/projected/946d5c4c-9c33-47b1-ba8f-6e5cce6555e3-kube-api-access-r6tbn\") on node \"localhost\" DevicePath \"\"" Mar 6 02:38:59.308898 kubelet[2878]: I0306 02:38:59.305192 2878 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/946d5c4c-9c33-47b1-ba8f-6e5cce6555e3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 6 02:38:59.308898 kubelet[2878]: I0306 02:38:59.305264 2878 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c00e6568-9ff9-41e6-94ab-9c2c36d856bc" (UID: "c00e6568-9ff9-41e6-94ab-9c2c36d856bc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:38:59.308898 kubelet[2878]: I0306 02:38:59.307476 2878 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c00e6568-9ff9-41e6-94ab-9c2c36d856bc" (UID: "c00e6568-9ff9-41e6-94ab-9c2c36d856bc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:38:59.309374 kubelet[2878]: I0306 02:38:59.307513 2878 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c00e6568-9ff9-41e6-94ab-9c2c36d856bc" (UID: "c00e6568-9ff9-41e6-94ab-9c2c36d856bc"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:38:59.309374 kubelet[2878]: I0306 02:38:59.307534 2878 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c00e6568-9ff9-41e6-94ab-9c2c36d856bc" (UID: "c00e6568-9ff9-41e6-94ab-9c2c36d856bc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:38:59.309374 kubelet[2878]: I0306 02:38:59.309148 2878 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-cni-path" (OuterVolumeSpecName: "cni-path") pod "c00e6568-9ff9-41e6-94ab-9c2c36d856bc" (UID: "c00e6568-9ff9-41e6-94ab-9c2c36d856bc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:38:59.309374 kubelet[2878]: I0306 02:38:59.309184 2878 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c00e6568-9ff9-41e6-94ab-9c2c36d856bc" (UID: "c00e6568-9ff9-41e6-94ab-9c2c36d856bc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:38:59.309374 kubelet[2878]: I0306 02:38:59.309203 2878 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c00e6568-9ff9-41e6-94ab-9c2c36d856bc" (UID: "c00e6568-9ff9-41e6-94ab-9c2c36d856bc"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:38:59.309521 kubelet[2878]: I0306 02:38:59.309224 2878 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-hostproc" (OuterVolumeSpecName: "hostproc") pod "c00e6568-9ff9-41e6-94ab-9c2c36d856bc" (UID: "c00e6568-9ff9-41e6-94ab-9c2c36d856bc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:38:59.309521 kubelet[2878]: I0306 02:38:59.309243 2878 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c00e6568-9ff9-41e6-94ab-9c2c36d856bc" (UID: "c00e6568-9ff9-41e6-94ab-9c2c36d856bc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:38:59.309521 kubelet[2878]: I0306 02:38:59.309261 2878 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c00e6568-9ff9-41e6-94ab-9c2c36d856bc" (UID: "c00e6568-9ff9-41e6-94ab-9c2c36d856bc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:38:59.322179 kubelet[2878]: I0306 02:38:59.321380 2878 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c00e6568-9ff9-41e6-94ab-9c2c36d856bc" (UID: "c00e6568-9ff9-41e6-94ab-9c2c36d856bc"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 02:38:59.346315 kubelet[2878]: I0306 02:38:59.346232 2878 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c00e6568-9ff9-41e6-94ab-9c2c36d856bc" (UID: "c00e6568-9ff9-41e6-94ab-9c2c36d856bc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 6 02:38:59.349163 kubelet[2878]: I0306 02:38:59.348272 2878 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-kube-api-access-rchl2" (OuterVolumeSpecName: "kube-api-access-rchl2") pod "c00e6568-9ff9-41e6-94ab-9c2c36d856bc" (UID: "c00e6568-9ff9-41e6-94ab-9c2c36d856bc"). InnerVolumeSpecName "kube-api-access-rchl2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 6 02:38:59.367424 kubelet[2878]: I0306 02:38:59.366909 2878 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c00e6568-9ff9-41e6-94ab-9c2c36d856bc" (UID: "c00e6568-9ff9-41e6-94ab-9c2c36d856bc"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 6 02:38:59.406250 kubelet[2878]: I0306 02:38:59.405956 2878 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rchl2\" (UniqueName: \"kubernetes.io/projected/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-kube-api-access-rchl2\") on node \"localhost\" DevicePath \"\"" Mar 6 02:38:59.406548 kubelet[2878]: I0306 02:38:59.406431 2878 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 6 02:38:59.406851 kubelet[2878]: I0306 02:38:59.406553 2878 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 6 02:38:59.406851 kubelet[2878]: I0306 02:38:59.406570 2878 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 6 02:38:59.406851 kubelet[2878]: I0306 02:38:59.406583 2878 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 6 02:38:59.406851 kubelet[2878]: I0306 02:38:59.406597 2878 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 6 02:38:59.406851 kubelet[2878]: I0306 02:38:59.406611 2878 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 6 02:38:59.406851 kubelet[2878]: I0306 
02:38:59.406625 2878 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 6 02:38:59.406851 kubelet[2878]: I0306 02:38:59.406636 2878 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 6 02:38:59.406851 kubelet[2878]: I0306 02:38:59.406647 2878 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 6 02:38:59.407569 kubelet[2878]: I0306 02:38:59.406659 2878 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 6 02:38:59.407569 kubelet[2878]: I0306 02:38:59.406669 2878 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 6 02:38:59.407569 kubelet[2878]: I0306 02:38:59.406680 2878 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 6 02:38:59.407569 kubelet[2878]: I0306 02:38:59.406696 2878 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c00e6568-9ff9-41e6-94ab-9c2c36d856bc-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 6 02:38:59.409253 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1-shm.mount: Deactivated 
successfully. Mar 6 02:38:59.409508 systemd[1]: var-lib-kubelet-pods-946d5c4c\x2d9c33\x2d47b1\x2dba8f\x2d6e5cce6555e3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr6tbn.mount: Deactivated successfully. Mar 6 02:38:59.409641 systemd[1]: var-lib-kubelet-pods-c00e6568\x2d9ff9\x2d41e6\x2d94ab\x2d9c2c36d856bc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drchl2.mount: Deactivated successfully. Mar 6 02:38:59.409871 systemd[1]: var-lib-kubelet-pods-c00e6568\x2d9ff9\x2d41e6\x2d94ab\x2d9c2c36d856bc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 6 02:38:59.410189 systemd[1]: var-lib-kubelet-pods-c00e6568\x2d9ff9\x2d41e6\x2d94ab\x2d9c2c36d856bc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 6 02:38:59.437198 containerd[1590]: time="2026-03-06T02:38:59.435261864Z" level=warning msg="container event discarded" container=1c96d0b92330a9a152d5fd982b31303f683beb07058c4b16019b88ad624f9e20 type=CONTAINER_CREATED_EVENT Mar 6 02:38:59.689967 containerd[1590]: time="2026-03-06T02:38:59.687374505Z" level=warning msg="container event discarded" container=45be5e464e760d7eed05eadde0c58f4a859e19ed4f8b3aa09cc8ea3c69e552de type=CONTAINER_STARTED_EVENT Mar 6 02:38:59.740299 sshd[5042]: Connection closed by 10.0.0.1 port 49710 Mar 6 02:38:59.744814 sshd-session[5039]: pam_unix(sshd:session): session closed for user core Mar 6 02:38:59.753452 containerd[1590]: time="2026-03-06T02:38:59.753283648Z" level=warning msg="container event discarded" container=1c96d0b92330a9a152d5fd982b31303f683beb07058c4b16019b88ad624f9e20 type=CONTAINER_STARTED_EVENT Mar 6 02:38:59.781439 systemd[1]: sshd@42-10.0.0.53:22-10.0.0.1:49710.service: Deactivated successfully. Mar 6 02:38:59.792955 systemd[1]: session-43.scope: Deactivated successfully. Mar 6 02:38:59.794844 systemd[1]: session-43.scope: Consumed 1.363s CPU time, 25.3M memory peak. Mar 6 02:38:59.799340 systemd-logind[1552]: Session 43 logged out. 
Waiting for processes to exit. Mar 6 02:38:59.811471 systemd[1]: Started sshd@43-10.0.0.53:22-10.0.0.1:49716.service - OpenSSH per-connection server daemon (10.0.0.1:49716). Mar 6 02:38:59.824159 systemd-logind[1552]: Removed session 43. Mar 6 02:38:59.947327 kubelet[2878]: I0306 02:38:59.946437 2878 scope.go:117] "RemoveContainer" containerID="d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0" Mar 6 02:38:59.990234 systemd[1]: Removed slice kubepods-besteffort-pod946d5c4c_9c33_47b1_ba8f_6e5cce6555e3.slice - libcontainer container kubepods-besteffort-pod946d5c4c_9c33_47b1_ba8f_6e5cce6555e3.slice. Mar 6 02:38:59.990565 systemd[1]: kubepods-besteffort-pod946d5c4c_9c33_47b1_ba8f_6e5cce6555e3.slice: Consumed 4.278s CPU time, 29M memory peak, 4K written to disk. Mar 6 02:39:00.016435 containerd[1590]: time="2026-03-06T02:39:00.015467768Z" level=info msg="RemoveContainer for \"d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0\"" Mar 6 02:39:00.120467 containerd[1590]: time="2026-03-06T02:39:00.120173519Z" level=info msg="RemoveContainer for \"d7c0192342d763df454390c05084dfbeb56d089259d3b31e821264793bbe44a0\" returns successfully" Mar 6 02:39:00.124542 systemd[1]: Removed slice kubepods-burstable-podc00e6568_9ff9_41e6_94ab_9c2c36d856bc.slice - libcontainer container kubepods-burstable-podc00e6568_9ff9_41e6_94ab_9c2c36d856bc.slice. Mar 6 02:39:00.124836 systemd[1]: kubepods-burstable-podc00e6568_9ff9_41e6_94ab_9c2c36d856bc.slice: Consumed 26.692s CPU time, 141.9M memory peak, 716K read from disk, 15.6M written to disk. 
Mar 6 02:39:00.137679 kubelet[2878]: I0306 02:39:00.137386 2878 scope.go:117] "RemoveContainer" containerID="2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641" Mar 6 02:39:00.148932 sshd[5189]: Accepted publickey for core from 10.0.0.1 port 49716 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:39:00.154709 kubelet[2878]: I0306 02:39:00.154561 2878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="946d5c4c-9c33-47b1-ba8f-6e5cce6555e3" path="/var/lib/kubelet/pods/946d5c4c-9c33-47b1-ba8f-6e5cce6555e3/volumes" Mar 6 02:39:00.156436 sshd-session[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:39:00.157449 containerd[1590]: time="2026-03-06T02:39:00.157319511Z" level=info msg="RemoveContainer for \"2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641\"" Mar 6 02:39:00.190273 containerd[1590]: time="2026-03-06T02:39:00.190226960Z" level=info msg="RemoveContainer for \"2611a96af1c2130b06466f0e0a56cddcce2e4ed50893ce1d19bef977a8d15641\" returns successfully" Mar 6 02:39:00.198290 kubelet[2878]: I0306 02:39:00.193386 2878 scope.go:117] "RemoveContainer" containerID="80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c" Mar 6 02:39:00.195861 systemd-logind[1552]: New session 44 of user core. Mar 6 02:39:00.214264 containerd[1590]: time="2026-03-06T02:39:00.214207990Z" level=info msg="RemoveContainer for \"80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c\"" Mar 6 02:39:00.214687 systemd[1]: Started session-44.scope - Session 44 of User core. 
Mar 6 02:39:00.242536 containerd[1590]: time="2026-03-06T02:39:00.242492590Z" level=info msg="RemoveContainer for \"80c4996e4e1b05a96c5a51e73e79212a0f142d3eb605396a615a66514c38799c\" returns successfully" Mar 6 02:39:00.247277 kubelet[2878]: I0306 02:39:00.246842 2878 scope.go:117] "RemoveContainer" containerID="ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc" Mar 6 02:39:00.254526 containerd[1590]: time="2026-03-06T02:39:00.252533415Z" level=info msg="RemoveContainer for \"ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc\"" Mar 6 02:39:00.279341 containerd[1590]: time="2026-03-06T02:39:00.277706486Z" level=info msg="RemoveContainer for \"ac15e1ce45e5056e9efc9ba06b9e765d75dec0704ddfe93d23f6dba6641e9fdc\" returns successfully" Mar 6 02:39:00.284695 kubelet[2878]: I0306 02:39:00.283692 2878 scope.go:117] "RemoveContainer" containerID="c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74" Mar 6 02:39:00.289704 containerd[1590]: time="2026-03-06T02:39:00.289333527Z" level=info msg="RemoveContainer for \"c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74\"" Mar 6 02:39:00.308957 containerd[1590]: time="2026-03-06T02:39:00.308842804Z" level=info msg="RemoveContainer for \"c1bb2c52889fc35d56ceb44ebff80d45ef602db358c1040389f8827619008c74\" returns successfully" Mar 6 02:39:00.311517 kubelet[2878]: I0306 02:39:00.311383 2878 scope.go:117] "RemoveContainer" containerID="be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6" Mar 6 02:39:00.321920 containerd[1590]: time="2026-03-06T02:39:00.321883573Z" level=info msg="RemoveContainer for \"be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6\"" Mar 6 02:39:00.340419 containerd[1590]: time="2026-03-06T02:39:00.340370135Z" level=info msg="RemoveContainer for \"be9765a150aadb80484b01381d78a82c1517c3be4c2983c90078328a698f07f6\" returns successfully" Mar 6 02:39:01.130080 kubelet[2878]: E0306 02:39:01.129939 2878 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:01.759713 sshd[5192]: Connection closed by 10.0.0.1 port 49716 Mar 6 02:39:01.761599 sshd-session[5189]: pam_unix(sshd:session): session closed for user core Mar 6 02:39:01.787334 systemd[1]: sshd@43-10.0.0.53:22-10.0.0.1:49716.service: Deactivated successfully. Mar 6 02:39:01.791469 systemd[1]: session-44.scope: Deactivated successfully. Mar 6 02:39:01.793224 systemd[1]: session-44.scope: Consumed 1.093s CPU time, 25M memory peak. Mar 6 02:39:01.796929 systemd-logind[1552]: Session 44 logged out. Waiting for processes to exit. Mar 6 02:39:01.804196 systemd[1]: Started sshd@44-10.0.0.53:22-10.0.0.1:49724.service - OpenSSH per-connection server daemon (10.0.0.1:49724). Mar 6 02:39:01.814901 systemd-logind[1552]: Removed session 44. Mar 6 02:39:01.965352 sshd[5207]: Accepted publickey for core from 10.0.0.1 port 49724 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:39:01.967939 sshd-session[5207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:39:02.003934 systemd-logind[1552]: New session 45 of user core. Mar 6 02:39:02.014217 systemd[1]: Started session-45.scope - Session 45 of User core. 
Mar 6 02:39:02.057474 kubelet[2878]: I0306 02:39:02.057327 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40-cilium-config-path\") pod \"cilium-gg5xr\" (UID: \"e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40\") " pod="kube-system/cilium-gg5xr" Mar 6 02:39:02.057474 kubelet[2878]: I0306 02:39:02.057458 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40-hostproc\") pod \"cilium-gg5xr\" (UID: \"e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40\") " pod="kube-system/cilium-gg5xr" Mar 6 02:39:02.057474 kubelet[2878]: I0306 02:39:02.057482 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40-hubble-tls\") pod \"cilium-gg5xr\" (UID: \"e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40\") " pod="kube-system/cilium-gg5xr" Mar 6 02:39:02.058637 kubelet[2878]: I0306 02:39:02.057501 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch7br\" (UniqueName: \"kubernetes.io/projected/e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40-kube-api-access-ch7br\") pod \"cilium-gg5xr\" (UID: \"e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40\") " pod="kube-system/cilium-gg5xr" Mar 6 02:39:02.058637 kubelet[2878]: I0306 02:39:02.057524 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40-lib-modules\") pod \"cilium-gg5xr\" (UID: \"e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40\") " pod="kube-system/cilium-gg5xr" Mar 6 02:39:02.058637 kubelet[2878]: I0306 02:39:02.057543 2878 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40-cilium-ipsec-secrets\") pod \"cilium-gg5xr\" (UID: \"e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40\") " pod="kube-system/cilium-gg5xr" Mar 6 02:39:02.058637 kubelet[2878]: I0306 02:39:02.057561 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40-cni-path\") pod \"cilium-gg5xr\" (UID: \"e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40\") " pod="kube-system/cilium-gg5xr" Mar 6 02:39:02.058637 kubelet[2878]: I0306 02:39:02.057582 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40-xtables-lock\") pod \"cilium-gg5xr\" (UID: \"e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40\") " pod="kube-system/cilium-gg5xr" Mar 6 02:39:02.058637 kubelet[2878]: I0306 02:39:02.057605 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40-host-proc-sys-net\") pod \"cilium-gg5xr\" (UID: \"e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40\") " pod="kube-system/cilium-gg5xr" Mar 6 02:39:02.065608 kubelet[2878]: I0306 02:39:02.057624 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40-host-proc-sys-kernel\") pod \"cilium-gg5xr\" (UID: \"e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40\") " pod="kube-system/cilium-gg5xr" Mar 6 02:39:02.065608 kubelet[2878]: I0306 02:39:02.057642 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40-bpf-maps\") pod \"cilium-gg5xr\" (UID: \"e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40\") " pod="kube-system/cilium-gg5xr" Mar 6 02:39:02.065608 kubelet[2878]: I0306 02:39:02.057673 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40-cilium-cgroup\") pod \"cilium-gg5xr\" (UID: \"e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40\") " pod="kube-system/cilium-gg5xr" Mar 6 02:39:02.065608 kubelet[2878]: I0306 02:39:02.057691 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40-etc-cni-netd\") pod \"cilium-gg5xr\" (UID: \"e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40\") " pod="kube-system/cilium-gg5xr" Mar 6 02:39:02.065608 kubelet[2878]: I0306 02:39:02.057710 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40-clustermesh-secrets\") pod \"cilium-gg5xr\" (UID: \"e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40\") " pod="kube-system/cilium-gg5xr" Mar 6 02:39:02.065608 kubelet[2878]: I0306 02:39:02.057834 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40-cilium-run\") pod \"cilium-gg5xr\" (UID: \"e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40\") " pod="kube-system/cilium-gg5xr" Mar 6 02:39:02.070821 systemd[1]: Created slice kubepods-burstable-pode3a96d41_8d5f_43d8_b1f2_32d7e4d45b40.slice - libcontainer container kubepods-burstable-pode3a96d41_8d5f_43d8_b1f2_32d7e4d45b40.slice. 
Mar 6 02:39:02.080393 sshd-session[5207]: pam_unix(sshd:session): session closed for user core Mar 6 02:39:02.080602 sshd[5210]: Connection closed by 10.0.0.1 port 49724 Mar 6 02:39:02.113517 systemd[1]: sshd@44-10.0.0.53:22-10.0.0.1:49724.service: Deactivated successfully. Mar 6 02:39:02.122423 systemd[1]: session-45.scope: Deactivated successfully. Mar 6 02:39:02.131229 systemd-logind[1552]: Session 45 logged out. Waiting for processes to exit. Mar 6 02:39:02.148975 systemd[1]: Started sshd@45-10.0.0.53:22-10.0.0.1:43602.service - OpenSSH per-connection server daemon (10.0.0.1:43602). Mar 6 02:39:02.158256 kubelet[2878]: I0306 02:39:02.157536 2878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c00e6568-9ff9-41e6-94ab-9c2c36d856bc" path="/var/lib/kubelet/pods/c00e6568-9ff9-41e6-94ab-9c2c36d856bc/volumes" Mar 6 02:39:02.161297 systemd-logind[1552]: Removed session 45. Mar 6 02:39:02.389341 kubelet[2878]: E0306 02:39:02.381625 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:02.389479 containerd[1590]: time="2026-03-06T02:39:02.384510686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gg5xr,Uid:e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40,Namespace:kube-system,Attempt:0,}" Mar 6 02:39:02.412264 sshd[5217]: Accepted publickey for core from 10.0.0.1 port 43602 ssh2: RSA SHA256:ScMF4t+sRFLe42Axw5QjqGy4QurXMGM75Y6m1mn+/uU Mar 6 02:39:02.416641 sshd-session[5217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:39:02.458617 systemd-logind[1552]: New session 46 of user core. Mar 6 02:39:02.466608 systemd[1]: Started session-46.scope - Session 46 of User core. 
Mar 6 02:39:02.599927 containerd[1590]: time="2026-03-06T02:39:02.597451788Z" level=info msg="connecting to shim b4880b6f966f84cace6685c0ab6042eaee0cf73b1848c9d41d58e53c207cf923" address="unix:///run/containerd/s/1bcaf2ae4515627f0851a70200a450cfaa6338f8096df0196078062c68d0ec49" namespace=k8s.io protocol=ttrpc version=3
Mar 6 02:39:02.718375 systemd[1]: Started cri-containerd-b4880b6f966f84cace6685c0ab6042eaee0cf73b1848c9d41d58e53c207cf923.scope - libcontainer container b4880b6f966f84cace6685c0ab6042eaee0cf73b1848c9d41d58e53c207cf923.
Mar 6 02:39:02.929562 kubelet[2878]: E0306 02:39:02.929326 2878 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 6 02:39:02.949226 containerd[1590]: time="2026-03-06T02:39:02.944341058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gg5xr,Uid:e3a96d41-8d5f-43d8-b1f2-32d7e4d45b40,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4880b6f966f84cace6685c0ab6042eaee0cf73b1848c9d41d58e53c207cf923\""
Mar 6 02:39:02.961305 kubelet[2878]: E0306 02:39:02.958385 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:39:02.996215 containerd[1590]: time="2026-03-06T02:39:02.995535562Z" level=info msg="CreateContainer within sandbox \"b4880b6f966f84cace6685c0ab6042eaee0cf73b1848c9d41d58e53c207cf923\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 6 02:39:03.037911 containerd[1590]: time="2026-03-06T02:39:03.037607374Z" level=info msg="Container 319bacbeef9d416d2456a43c2a80a7a10806f421b4628fc0fccaf8e0348b3f05: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:39:03.058260 containerd[1590]: time="2026-03-06T02:39:03.057511194Z" level=info msg="CreateContainer within sandbox \"b4880b6f966f84cace6685c0ab6042eaee0cf73b1848c9d41d58e53c207cf923\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"319bacbeef9d416d2456a43c2a80a7a10806f421b4628fc0fccaf8e0348b3f05\""
Mar 6 02:39:03.062279 containerd[1590]: time="2026-03-06T02:39:03.061841897Z" level=info msg="StartContainer for \"319bacbeef9d416d2456a43c2a80a7a10806f421b4628fc0fccaf8e0348b3f05\""
Mar 6 02:39:03.064117 containerd[1590]: time="2026-03-06T02:39:03.063544788Z" level=info msg="connecting to shim 319bacbeef9d416d2456a43c2a80a7a10806f421b4628fc0fccaf8e0348b3f05" address="unix:///run/containerd/s/1bcaf2ae4515627f0851a70200a450cfaa6338f8096df0196078062c68d0ec49" protocol=ttrpc version=3
Mar 6 02:39:03.152497 systemd[1]: Started cri-containerd-319bacbeef9d416d2456a43c2a80a7a10806f421b4628fc0fccaf8e0348b3f05.scope - libcontainer container 319bacbeef9d416d2456a43c2a80a7a10806f421b4628fc0fccaf8e0348b3f05.
Mar 6 02:39:03.352296 containerd[1590]: time="2026-03-06T02:39:03.351333135Z" level=info msg="StartContainer for \"319bacbeef9d416d2456a43c2a80a7a10806f421b4628fc0fccaf8e0348b3f05\" returns successfully"
Mar 6 02:39:03.444844 systemd[1]: cri-containerd-319bacbeef9d416d2456a43c2a80a7a10806f421b4628fc0fccaf8e0348b3f05.scope: Deactivated successfully.
Mar 6 02:39:03.453651 containerd[1590]: time="2026-03-06T02:39:03.449679920Z" level=info msg="received container exit event container_id:\"319bacbeef9d416d2456a43c2a80a7a10806f421b4628fc0fccaf8e0348b3f05\" id:\"319bacbeef9d416d2456a43c2a80a7a10806f421b4628fc0fccaf8e0348b3f05\" pid:5291 exited_at:{seconds:1772764743 nanos:448461710}"
Mar 6 02:39:03.618234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-319bacbeef9d416d2456a43c2a80a7a10806f421b4628fc0fccaf8e0348b3f05-rootfs.mount: Deactivated successfully.
Mar 6 02:39:04.209560 kubelet[2878]: E0306 02:39:04.207453 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:39:04.283464 containerd[1590]: time="2026-03-06T02:39:04.282578915Z" level=info msg="CreateContainer within sandbox \"b4880b6f966f84cace6685c0ab6042eaee0cf73b1848c9d41d58e53c207cf923\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 6 02:39:04.362557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1742088421.mount: Deactivated successfully.
Mar 6 02:39:04.383346 containerd[1590]: time="2026-03-06T02:39:04.380454911Z" level=info msg="Container 8ee969f7a61f5da14a3237aea4d532ed3162ba3c9bc36dc592787227eb9a1c33: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:39:04.425114 containerd[1590]: time="2026-03-06T02:39:04.423202480Z" level=info msg="CreateContainer within sandbox \"b4880b6f966f84cace6685c0ab6042eaee0cf73b1848c9d41d58e53c207cf923\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8ee969f7a61f5da14a3237aea4d532ed3162ba3c9bc36dc592787227eb9a1c33\""
Mar 6 02:39:04.433330 containerd[1590]: time="2026-03-06T02:39:04.432628009Z" level=info msg="StartContainer for \"8ee969f7a61f5da14a3237aea4d532ed3162ba3c9bc36dc592787227eb9a1c33\""
Mar 6 02:39:04.438607 containerd[1590]: time="2026-03-06T02:39:04.438475282Z" level=info msg="connecting to shim 8ee969f7a61f5da14a3237aea4d532ed3162ba3c9bc36dc592787227eb9a1c33" address="unix:///run/containerd/s/1bcaf2ae4515627f0851a70200a450cfaa6338f8096df0196078062c68d0ec49" protocol=ttrpc version=3
Mar 6 02:39:04.594703 systemd[1]: Started cri-containerd-8ee969f7a61f5da14a3237aea4d532ed3162ba3c9bc36dc592787227eb9a1c33.scope - libcontainer container 8ee969f7a61f5da14a3237aea4d532ed3162ba3c9bc36dc592787227eb9a1c33.
Mar 6 02:39:04.745499 containerd[1590]: time="2026-03-06T02:39:04.745330869Z" level=info msg="StartContainer for \"8ee969f7a61f5da14a3237aea4d532ed3162ba3c9bc36dc592787227eb9a1c33\" returns successfully"
Mar 6 02:39:04.800580 systemd[1]: cri-containerd-8ee969f7a61f5da14a3237aea4d532ed3162ba3c9bc36dc592787227eb9a1c33.scope: Deactivated successfully.
Mar 6 02:39:04.819417 containerd[1590]: time="2026-03-06T02:39:04.818921458Z" level=info msg="received container exit event container_id:\"8ee969f7a61f5da14a3237aea4d532ed3162ba3c9bc36dc592787227eb9a1c33\" id:\"8ee969f7a61f5da14a3237aea4d532ed3162ba3c9bc36dc592787227eb9a1c33\" pid:5335 exited_at:{seconds:1772764744 nanos:813306388}"
Mar 6 02:39:05.100325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ee969f7a61f5da14a3237aea4d532ed3162ba3c9bc36dc592787227eb9a1c33-rootfs.mount: Deactivated successfully.
Mar 6 02:39:05.267281 kubelet[2878]: E0306 02:39:05.265849 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:39:05.314179 containerd[1590]: time="2026-03-06T02:39:05.311657815Z" level=info msg="CreateContainer within sandbox \"b4880b6f966f84cace6685c0ab6042eaee0cf73b1848c9d41d58e53c207cf923\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 6 02:39:05.414701 containerd[1590]: time="2026-03-06T02:39:05.413688059Z" level=info msg="Container aa12056af73f118b06fcbccb6aad0272fa4cf50b90534afdde541c0e191548f6: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:39:05.425417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070626796.mount: Deactivated successfully.
Mar 6 02:39:05.465342 containerd[1590]: time="2026-03-06T02:39:05.464881891Z" level=info msg="CreateContainer within sandbox \"b4880b6f966f84cace6685c0ab6042eaee0cf73b1848c9d41d58e53c207cf923\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aa12056af73f118b06fcbccb6aad0272fa4cf50b90534afdde541c0e191548f6\""
Mar 6 02:39:05.469670 containerd[1590]: time="2026-03-06T02:39:05.469439504Z" level=info msg="StartContainer for \"aa12056af73f118b06fcbccb6aad0272fa4cf50b90534afdde541c0e191548f6\""
Mar 6 02:39:05.477335 containerd[1590]: time="2026-03-06T02:39:05.475381765Z" level=info msg="connecting to shim aa12056af73f118b06fcbccb6aad0272fa4cf50b90534afdde541c0e191548f6" address="unix:///run/containerd/s/1bcaf2ae4515627f0851a70200a450cfaa6338f8096df0196078062c68d0ec49" protocol=ttrpc version=3
Mar 6 02:39:05.684837 systemd[1]: Started cri-containerd-aa12056af73f118b06fcbccb6aad0272fa4cf50b90534afdde541c0e191548f6.scope - libcontainer container aa12056af73f118b06fcbccb6aad0272fa4cf50b90534afdde541c0e191548f6.
Mar 6 02:39:06.132695 containerd[1590]: time="2026-03-06T02:39:06.126277792Z" level=info msg="StartContainer for \"aa12056af73f118b06fcbccb6aad0272fa4cf50b90534afdde541c0e191548f6\" returns successfully"
Mar 6 02:39:06.166384 systemd[1]: cri-containerd-aa12056af73f118b06fcbccb6aad0272fa4cf50b90534afdde541c0e191548f6.scope: Deactivated successfully.
Mar 6 02:39:06.172580 containerd[1590]: time="2026-03-06T02:39:06.172292960Z" level=info msg="received container exit event container_id:\"aa12056af73f118b06fcbccb6aad0272fa4cf50b90534afdde541c0e191548f6\" id:\"aa12056af73f118b06fcbccb6aad0272fa4cf50b90534afdde541c0e191548f6\" pid:5380 exited_at:{seconds:1772764746 nanos:171551926}"
Mar 6 02:39:06.353250 kubelet[2878]: E0306 02:39:06.344504 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:39:06.444598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa12056af73f118b06fcbccb6aad0272fa4cf50b90534afdde541c0e191548f6-rootfs.mount: Deactivated successfully.
Mar 6 02:39:07.447169 kubelet[2878]: E0306 02:39:07.446697 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:39:07.526154 containerd[1590]: time="2026-03-06T02:39:07.524868976Z" level=info msg="CreateContainer within sandbox \"b4880b6f966f84cace6685c0ab6042eaee0cf73b1848c9d41d58e53c207cf923\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 6 02:39:07.623487 containerd[1590]: time="2026-03-06T02:39:07.623431266Z" level=info msg="Container 7590df609735b8879fa498e9ac889740fb16fe3da35aa3db3890f49e8f051358: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:39:07.633410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2448019602.mount: Deactivated successfully.
Mar 6 02:39:07.671895 containerd[1590]: time="2026-03-06T02:39:07.671564674Z" level=info msg="CreateContainer within sandbox \"b4880b6f966f84cace6685c0ab6042eaee0cf73b1848c9d41d58e53c207cf923\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7590df609735b8879fa498e9ac889740fb16fe3da35aa3db3890f49e8f051358\""
Mar 6 02:39:07.678338 containerd[1590]: time="2026-03-06T02:39:07.676568976Z" level=info msg="StartContainer for \"7590df609735b8879fa498e9ac889740fb16fe3da35aa3db3890f49e8f051358\""
Mar 6 02:39:07.685347 containerd[1590]: time="2026-03-06T02:39:07.684390915Z" level=info msg="connecting to shim 7590df609735b8879fa498e9ac889740fb16fe3da35aa3db3890f49e8f051358" address="unix:///run/containerd/s/1bcaf2ae4515627f0851a70200a450cfaa6338f8096df0196078062c68d0ec49" protocol=ttrpc version=3
Mar 6 02:39:07.803956 systemd[1]: Started cri-containerd-7590df609735b8879fa498e9ac889740fb16fe3da35aa3db3890f49e8f051358.scope - libcontainer container 7590df609735b8879fa498e9ac889740fb16fe3da35aa3db3890f49e8f051358.
Mar 6 02:39:07.939569 kubelet[2878]: E0306 02:39:07.937978 2878 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 6 02:39:08.039940 systemd[1]: cri-containerd-7590df609735b8879fa498e9ac889740fb16fe3da35aa3db3890f49e8f051358.scope: Deactivated successfully.
Mar 6 02:39:08.054301 containerd[1590]: time="2026-03-06T02:39:08.053975517Z" level=info msg="received container exit event container_id:\"7590df609735b8879fa498e9ac889740fb16fe3da35aa3db3890f49e8f051358\" id:\"7590df609735b8879fa498e9ac889740fb16fe3da35aa3db3890f49e8f051358\" pid:5419 exited_at:{seconds:1772764748 nanos:45634696}"
Mar 6 02:39:08.121271 containerd[1590]: time="2026-03-06T02:39:08.120474919Z" level=info msg="StartContainer for \"7590df609735b8879fa498e9ac889740fb16fe3da35aa3db3890f49e8f051358\" returns successfully"
Mar 6 02:39:08.233522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7590df609735b8879fa498e9ac889740fb16fe3da35aa3db3890f49e8f051358-rootfs.mount: Deactivated successfully.
Mar 6 02:39:08.497165 kubelet[2878]: E0306 02:39:08.494665 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:39:08.534701 containerd[1590]: time="2026-03-06T02:39:08.534519442Z" level=info msg="CreateContainer within sandbox \"b4880b6f966f84cace6685c0ab6042eaee0cf73b1848c9d41d58e53c207cf923\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 6 02:39:08.632189 containerd[1590]: time="2026-03-06T02:39:08.628247739Z" level=info msg="Container 22d7b636ed71aa177020eae26f7a9de3dd17691199215df4b62756fab6d9540c: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:39:08.631158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067421622.mount: Deactivated successfully.
Mar 6 02:39:08.665940 containerd[1590]: time="2026-03-06T02:39:08.665700938Z" level=info msg="CreateContainer within sandbox \"b4880b6f966f84cace6685c0ab6042eaee0cf73b1848c9d41d58e53c207cf923\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"22d7b636ed71aa177020eae26f7a9de3dd17691199215df4b62756fab6d9540c\""
Mar 6 02:39:08.678159 containerd[1590]: time="2026-03-06T02:39:08.677411265Z" level=info msg="StartContainer for \"22d7b636ed71aa177020eae26f7a9de3dd17691199215df4b62756fab6d9540c\""
Mar 6 02:39:08.683410 containerd[1590]: time="2026-03-06T02:39:08.683295805Z" level=info msg="connecting to shim 22d7b636ed71aa177020eae26f7a9de3dd17691199215df4b62756fab6d9540c" address="unix:///run/containerd/s/1bcaf2ae4515627f0851a70200a450cfaa6338f8096df0196078062c68d0ec49" protocol=ttrpc version=3
Mar 6 02:39:08.810717 systemd[1]: Started cri-containerd-22d7b636ed71aa177020eae26f7a9de3dd17691199215df4b62756fab6d9540c.scope - libcontainer container 22d7b636ed71aa177020eae26f7a9de3dd17691199215df4b62756fab6d9540c.
Mar 6 02:39:09.138233 kubelet[2878]: E0306 02:39:09.128501 2878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-v5ljd" podUID="9a25e354-1107-43b4-a151-0cabdd699918"
Mar 6 02:39:09.187258 containerd[1590]: time="2026-03-06T02:39:09.184585548Z" level=info msg="StartContainer for \"22d7b636ed71aa177020eae26f7a9de3dd17691199215df4b62756fab6d9540c\" returns successfully"
Mar 6 02:39:10.607643 kubelet[2878]: E0306 02:39:10.607402 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:39:10.691720 kubelet[2878]: I0306 02:39:10.690895 2878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gg5xr" podStartSLOduration=9.690873612 podStartE2EDuration="9.690873612s" podCreationTimestamp="2026-03-06 02:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:39:10.67962984 +0000 UTC m=+411.516047906" watchObservedRunningTime="2026-03-06 02:39:10.690873612 +0000 UTC m=+411.527291677"
Mar 6 02:39:11.074354 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Mar 6 02:39:11.132248 kubelet[2878]: E0306 02:39:11.128581 2878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-v5ljd" podUID="9a25e354-1107-43b4-a151-0cabdd699918"
Mar 6 02:39:11.873426 kubelet[2878]: I0306 02:39:11.873171 2878 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-06T02:39:11Z","lastTransitionTime":"2026-03-06T02:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 6 02:39:12.385208 kubelet[2878]: E0306 02:39:12.384928 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:39:13.151237 kubelet[2878]: E0306 02:39:13.137931 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:39:20.041215 containerd[1590]: time="2026-03-06T02:39:20.040901773Z" level=info msg="StopPodSandbox for \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\""
Mar 6 02:39:20.043494 containerd[1590]: time="2026-03-06T02:39:20.043363752Z" level=info msg="TearDown network for sandbox \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" successfully"
Mar 6 02:39:20.043494 containerd[1590]: time="2026-03-06T02:39:20.043487293Z" level=info msg="StopPodSandbox for \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" returns successfully"
Mar 6 02:39:20.045432 containerd[1590]: time="2026-03-06T02:39:20.045307153Z" level=info msg="RemovePodSandbox for \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\""
Mar 6 02:39:20.045432 containerd[1590]: time="2026-03-06T02:39:20.045423160Z" level=info msg="Forcibly stopping sandbox \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\""
Mar 6 02:39:20.045530 containerd[1590]: time="2026-03-06T02:39:20.045515433Z" level=info msg="TearDown network for sandbox \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" successfully"
Mar 6 02:39:20.051727 containerd[1590]: time="2026-03-06T02:39:20.050312114Z" level=info msg="Ensure that sandbox bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1 in task-service has been cleanup successfully"
Mar 6 02:39:20.087167 containerd[1590]: time="2026-03-06T02:39:20.086884250Z" level=info msg="RemovePodSandbox \"bd3f3cd0946526937d54246542cd48c9acba4b57072e01285e270cc1c6a7a0a1\" returns successfully"
Mar 6 02:39:20.090383 containerd[1590]: time="2026-03-06T02:39:20.090328325Z" level=info msg="StopPodSandbox for \"082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88\""
Mar 6 02:39:20.092871 containerd[1590]: time="2026-03-06T02:39:20.092476078Z" level=info msg="TearDown network for sandbox \"082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88\" successfully"
Mar 6 02:39:20.092871 containerd[1590]: time="2026-03-06T02:39:20.092596232Z" level=info msg="StopPodSandbox for \"082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88\" returns successfully"
Mar 6 02:39:20.098689 containerd[1590]: time="2026-03-06T02:39:20.095409998Z" level=info msg="RemovePodSandbox for \"082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88\""
Mar 6 02:39:20.098689 containerd[1590]: time="2026-03-06T02:39:20.096363149Z" level=info msg="Forcibly stopping sandbox \"082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88\""
Mar 6 02:39:20.098689 containerd[1590]: time="2026-03-06T02:39:20.096458166Z" level=info msg="TearDown network for sandbox \"082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88\" successfully"
Mar 6 02:39:20.100173 containerd[1590]: time="2026-03-06T02:39:20.098954289Z" level=info msg="Ensure that sandbox 082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88 in task-service has been cleanup successfully"
Mar 6 02:39:20.129953 containerd[1590]: time="2026-03-06T02:39:20.129344022Z" level=info msg="RemovePodSandbox \"082a7825c1cc2af8d5c4cbd77860b954913d890512f3a069bdb81b50ce95af88\" returns successfully"
Mar 6 02:39:22.576267 systemd-networkd[1451]: lxc_health: Link UP
Mar 6 02:39:22.591498 systemd-networkd[1451]: lxc_health: Gained carrier
Mar 6 02:39:24.425571 kubelet[2878]: E0306 02:39:24.424554 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:39:24.610174 systemd-networkd[1451]: lxc_health: Gained IPv6LL
Mar 6 02:39:24.818165 kubelet[2878]: E0306 02:39:24.811949 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:39:25.822338 kubelet[2878]: E0306 02:39:25.821206 2878 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:39:29.241465 sshd[5224]: Connection closed by 10.0.0.1 port 43602
Mar 6 02:39:29.242599 sshd-session[5217]: pam_unix(sshd:session): session closed for user core
Mar 6 02:39:29.255719 systemd-logind[1552]: Session 46 logged out. Waiting for processes to exit.
Mar 6 02:39:29.256954 systemd[1]: sshd@45-10.0.0.53:22-10.0.0.1:43602.service: Deactivated successfully.
Mar 6 02:39:29.272412 systemd[1]: session-46.scope: Deactivated successfully.
Mar 6 02:39:29.272893 systemd[1]: session-46.scope: Consumed 1.284s CPU time, 27M memory peak.
Mar 6 02:39:29.284473 systemd-logind[1552]: Removed session 46.