May 16 05:29:28.808150 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 03:57:41 -00 2025
May 16 05:29:28.808170 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3d34a100811c7d0b0788e18c6e7d09762e715c8b4f61568827372c0312d26be3
May 16 05:29:28.808181 kernel: BIOS-provided physical RAM map:
May 16 05:29:28.808188 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 16 05:29:28.808194 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 16 05:29:28.808200 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 16 05:29:28.808208 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 16 05:29:28.808214 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 16 05:29:28.808222 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 16 05:29:28.808229 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 16 05:29:28.808235 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 16 05:29:28.808248 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 16 05:29:28.808254 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 16 05:29:28.808261 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 16 05:29:28.808271 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 16 05:29:28.808278 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 16 05:29:28.808285 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 16 05:29:28.808292 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 16 05:29:28.808299 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 16 05:29:28.808305 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 16 05:29:28.808312 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 16 05:29:28.808319 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 16 05:29:28.808326 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 16 05:29:28.808332 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 05:29:28.808339 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 16 05:29:28.808347 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 05:29:28.808366 kernel: NX (Execute Disable) protection: active
May 16 05:29:28.808372 kernel: APIC: Static calls initialized
May 16 05:29:28.808379 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
May 16 05:29:28.808386 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
May 16 05:29:28.808393 kernel: extended physical RAM map:
May 16 05:29:28.808400 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 16 05:29:28.808406 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 16 05:29:28.808413 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 16 05:29:28.808420 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 16 05:29:28.808427 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 16 05:29:28.808436 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 16 05:29:28.808443 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 16 05:29:28.808450 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
May 16 05:29:28.808457 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
May 16 05:29:28.808467 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
May 16 05:29:28.808474 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
May 16 05:29:28.808482 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
May 16 05:29:28.808490 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 16 05:29:28.808497 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 16 05:29:28.808504 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 16 05:29:28.808511 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 16 05:29:28.808518 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 16 05:29:28.808525 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 16 05:29:28.808533 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 16 05:29:28.808540 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 16 05:29:28.808549 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 16 05:29:28.808556 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 16 05:29:28.808564 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 16 05:29:28.808571 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 16 05:29:28.808578 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 05:29:28.808585 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 16 05:29:28.808592 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 05:29:28.808599 kernel: efi: EFI v2.7 by EDK II
May 16 05:29:28.808606 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
May 16 05:29:28.808613 kernel: random: crng init done
May 16 05:29:28.808621 kernel: efi: Remove mem149: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 16 05:29:28.808628 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 16 05:29:28.808637 kernel: secureboot: Secure boot disabled
May 16 05:29:28.808644 kernel: SMBIOS 2.8 present.
May 16 05:29:28.808651 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 16 05:29:28.808658 kernel: DMI: Memory slots populated: 1/1
May 16 05:29:28.808665 kernel: Hypervisor detected: KVM
May 16 05:29:28.808672 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 16 05:29:28.808679 kernel: kvm-clock: using sched offset of 3586666241 cycles
May 16 05:29:28.808686 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 16 05:29:28.808694 kernel: tsc: Detected 2794.748 MHz processor
May 16 05:29:28.808702 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 16 05:29:28.808709 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 16 05:29:28.808718 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 16 05:29:28.808725 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 16 05:29:28.808733 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 16 05:29:28.808740 kernel: Using GB pages for direct mapping
May 16 05:29:28.808747 kernel: ACPI: Early table checksum verification disabled
May 16 05:29:28.808754 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 16 05:29:28.808762 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 16 05:29:28.808769 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 05:29:28.808776 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 05:29:28.808785 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 16 05:29:28.808792 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 05:29:28.808800 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 05:29:28.808807 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 05:29:28.808814 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 05:29:28.808821 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 16 05:29:28.808828 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 16 05:29:28.808836 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 16 05:29:28.808845 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 16 05:29:28.808852 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 16 05:29:28.808859 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 16 05:29:28.808866 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 16 05:29:28.808873 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 16 05:29:28.808881 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 16 05:29:28.808888 kernel: No NUMA configuration found
May 16 05:29:28.808895 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 16 05:29:28.808903 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
May 16 05:29:28.808916 kernel: Zone ranges:
May 16 05:29:28.808930 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 16 05:29:28.808944 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 16 05:29:28.808955 kernel: Normal empty
May 16 05:29:28.808972 kernel: Device empty
May 16 05:29:28.808979 kernel: Movable zone start for each node
May 16 05:29:28.808987 kernel: Early memory node ranges
May 16 05:29:28.808994 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 16 05:29:28.809001 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 16 05:29:28.809008 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 16 05:29:28.809017 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 16 05:29:28.809025 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 16 05:29:28.809032 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 16 05:29:28.809039 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
May 16 05:29:28.809046 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
May 16 05:29:28.809053 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 16 05:29:28.809060 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 05:29:28.809068 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 16 05:29:28.809083 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 16 05:29:28.809091 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 05:29:28.809098 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 16 05:29:28.809106 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 16 05:29:28.809115 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 16 05:29:28.809123 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 16 05:29:28.809130 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 16 05:29:28.809138 kernel: ACPI: PM-Timer IO Port: 0x608
May 16 05:29:28.809145 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 16 05:29:28.809154 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 16 05:29:28.809162 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 16 05:29:28.809170 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 16 05:29:28.809177 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 16 05:29:28.809193 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 16 05:29:28.809209 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 16 05:29:28.809217 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 16 05:29:28.809224 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 16 05:29:28.809232 kernel: TSC deadline timer available
May 16 05:29:28.809247 kernel: CPU topo: Max. logical packages: 1
May 16 05:29:28.809254 kernel: CPU topo: Max. logical dies: 1
May 16 05:29:28.809262 kernel: CPU topo: Max. dies per package: 1
May 16 05:29:28.809269 kernel: CPU topo: Max. threads per core: 1
May 16 05:29:28.809277 kernel: CPU topo: Num. cores per package: 4
May 16 05:29:28.809285 kernel: CPU topo: Num. threads per package: 4
May 16 05:29:28.809292 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 16 05:29:28.809300 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 16 05:29:28.809307 kernel: kvm-guest: KVM setup pv remote TLB flush
May 16 05:29:28.809315 kernel: kvm-guest: setup PV sched yield
May 16 05:29:28.809324 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 16 05:29:28.809332 kernel: Booting paravirtualized kernel on KVM
May 16 05:29:28.809340 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 16 05:29:28.809347 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 16 05:29:28.809366 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 16 05:29:28.809374 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 16 05:29:28.809381 kernel: pcpu-alloc: [0] 0 1 2 3
May 16 05:29:28.809389 kernel: kvm-guest: PV spinlocks enabled
May 16 05:29:28.809396 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 16 05:29:28.809407 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3d34a100811c7d0b0788e18c6e7d09762e715c8b4f61568827372c0312d26be3
May 16 05:29:28.809416 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 05:29:28.809423 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 05:29:28.809431 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 05:29:28.809438 kernel: Fallback order for Node 0: 0
May 16 05:29:28.809446 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
May 16 05:29:28.809453 kernel: Policy zone: DMA32
May 16 05:29:28.809461 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 05:29:28.809471 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 05:29:28.809479 kernel: ftrace: allocating 40065 entries in 157 pages
May 16 05:29:28.809486 kernel: ftrace: allocated 157 pages with 5 groups
May 16 05:29:28.809494 kernel: Dynamic Preempt: voluntary
May 16 05:29:28.809501 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 05:29:28.809510 kernel: rcu: RCU event tracing is enabled.
May 16 05:29:28.809517 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 05:29:28.809525 kernel: Trampoline variant of Tasks RCU enabled.
May 16 05:29:28.809533 kernel: Rude variant of Tasks RCU enabled.
May 16 05:29:28.809543 kernel: Tracing variant of Tasks RCU enabled.
May 16 05:29:28.809551 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 05:29:28.809559 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 05:29:28.809566 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 05:29:28.809574 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 05:29:28.809582 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 05:29:28.809589 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 16 05:29:28.809597 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 05:29:28.809604 kernel: Console: colour dummy device 80x25
May 16 05:29:28.809614 kernel: printk: legacy console [ttyS0] enabled
May 16 05:29:28.809622 kernel: ACPI: Core revision 20240827
May 16 05:29:28.809629 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 16 05:29:28.809637 kernel: APIC: Switch to symmetric I/O mode setup
May 16 05:29:28.809644 kernel: x2apic enabled
May 16 05:29:28.809652 kernel: APIC: Switched APIC routing to: physical x2apic
May 16 05:29:28.809660 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 16 05:29:28.809668 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 16 05:29:28.809675 kernel: kvm-guest: setup PV IPIs
May 16 05:29:28.809684 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 16 05:29:28.809692 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 16 05:29:28.809700 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 16 05:29:28.809708 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 16 05:29:28.809715 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 16 05:29:28.809723 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 16 05:29:28.809730 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 16 05:29:28.809738 kernel: Spectre V2 : Mitigation: Retpolines
May 16 05:29:28.809745 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 16 05:29:28.809755 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 16 05:29:28.809763 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 16 05:29:28.809770 kernel: RETBleed: Mitigation: untrained return thunk
May 16 05:29:28.809778 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 16 05:29:28.809785 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 16 05:29:28.809793 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 16 05:29:28.809801 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 16 05:29:28.809809 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 16 05:29:28.809818 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 16 05:29:28.809826 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 16 05:29:28.809833 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 16 05:29:28.809841 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 16 05:29:28.809848 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 16 05:29:28.809856 kernel: Freeing SMP alternatives memory: 32K
May 16 05:29:28.809863 kernel: pid_max: default: 32768 minimum: 301
May 16 05:29:28.809871 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 16 05:29:28.809878 kernel: landlock: Up and running.
May 16 05:29:28.809887 kernel: SELinux: Initializing.
May 16 05:29:28.809895 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 05:29:28.809902 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 05:29:28.809910 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 16 05:29:28.809918 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 16 05:29:28.809925 kernel: ... version: 0
May 16 05:29:28.809933 kernel: ... bit width: 48
May 16 05:29:28.809940 kernel: ... generic registers: 6
May 16 05:29:28.809948 kernel: ... value mask: 0000ffffffffffff
May 16 05:29:28.809957 kernel: ... max period: 00007fffffffffff
May 16 05:29:28.809964 kernel: ... fixed-purpose events: 0
May 16 05:29:28.809972 kernel: ... event mask: 000000000000003f
May 16 05:29:28.809979 kernel: signal: max sigframe size: 1776
May 16 05:29:28.809987 kernel: rcu: Hierarchical SRCU implementation.
May 16 05:29:28.809994 kernel: rcu: Max phase no-delay instances is 400.
May 16 05:29:28.810002 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 16 05:29:28.810009 kernel: smp: Bringing up secondary CPUs ...
May 16 05:29:28.810017 kernel: smpboot: x86: Booting SMP configuration:
May 16 05:29:28.810025 kernel: .... node #0, CPUs: #1 #2 #3
May 16 05:29:28.810034 kernel: smp: Brought up 1 node, 4 CPUs
May 16 05:29:28.810042 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 16 05:29:28.810049 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 137196K reserved, 0K cma-reserved)
May 16 05:29:28.810057 kernel: devtmpfs: initialized
May 16 05:29:28.810064 kernel: x86/mm: Memory block size: 128MB
May 16 05:29:28.810072 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 16 05:29:28.810080 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 16 05:29:28.810087 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 16 05:29:28.810097 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 16 05:29:28.810104 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
May 16 05:29:28.810112 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 16 05:29:28.810120 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 05:29:28.810127 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 05:29:28.810135 kernel: pinctrl core: initialized pinctrl subsystem
May 16 05:29:28.810143 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 05:29:28.810150 kernel: audit: initializing netlink subsys (disabled)
May 16 05:29:28.810158 kernel: audit: type=2000 audit(1747373367.088:1): state=initialized audit_enabled=0 res=1
May 16 05:29:28.810167 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 05:29:28.810175 kernel: thermal_sys: Registered thermal governor 'user_space'
May 16 05:29:28.810182 kernel: cpuidle: using governor menu
May 16 05:29:28.810189 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 05:29:28.810197 kernel: dca service started, version 1.12.1
May 16 05:29:28.810205 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 16 05:29:28.810212 kernel: PCI: Using configuration type 1 for base access
May 16 05:29:28.810220 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 16 05:29:28.810227 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 05:29:28.810243 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 16 05:29:28.810250 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 05:29:28.810258 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 16 05:29:28.810265 kernel: ACPI: Added _OSI(Module Device)
May 16 05:29:28.810273 kernel: ACPI: Added _OSI(Processor Device)
May 16 05:29:28.810280 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 05:29:28.810288 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 05:29:28.810295 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 05:29:28.810311 kernel: ACPI: Interpreter enabled
May 16 05:29:28.810321 kernel: ACPI: PM: (supports S0 S3 S5)
May 16 05:29:28.810335 kernel: ACPI: Using IOAPIC for interrupt routing
May 16 05:29:28.810369 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 16 05:29:28.810378 kernel: PCI: Using E820 reservations for host bridge windows
May 16 05:29:28.810386 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 16 05:29:28.810393 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 05:29:28.810565 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 05:29:28.810683 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 16 05:29:28.810798 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 16 05:29:28.810808 kernel: PCI host bridge to bus 0000:00
May 16 05:29:28.810923 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 16 05:29:28.811027 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 16 05:29:28.811132 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 16 05:29:28.811294 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 16 05:29:28.811421 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 16 05:29:28.811529 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 16 05:29:28.811631 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 05:29:28.811761 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 16 05:29:28.811891 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 16 05:29:28.812004 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 16 05:29:28.812115 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 16 05:29:28.812230 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 16 05:29:28.812363 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 16 05:29:28.812514 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 16 05:29:28.812629 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 16 05:29:28.812743 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 16 05:29:28.812857 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 16 05:29:28.812987 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 16 05:29:28.813109 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 16 05:29:28.813221 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 16 05:29:28.813345 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 16 05:29:28.813483 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 16 05:29:28.813600 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 16 05:29:28.813712 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 16 05:29:28.813826 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 16 05:29:28.813943 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 16 05:29:28.814063 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 16 05:29:28.814175 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 16 05:29:28.814304 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 16 05:29:28.814432 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 16 05:29:28.814549 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 16 05:29:28.814673 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 16 05:29:28.814785 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 16 05:29:28.814796 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 16 05:29:28.814804 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 16 05:29:28.814811 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 16 05:29:28.814819 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 16 05:29:28.814826 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 16 05:29:28.814834 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 16 05:29:28.814842 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 16 05:29:28.814852 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 16 05:29:28.814860 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 16 05:29:28.814867 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 16 05:29:28.814875 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 16 05:29:28.814882 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 16 05:29:28.814890 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 16 05:29:28.814897 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 16 05:29:28.814905 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 16 05:29:28.814915 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 16 05:29:28.814922 kernel: iommu: Default domain type: Translated
May 16 05:29:28.814930 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 16 05:29:28.814937 kernel: efivars: Registered efivars operations
May 16 05:29:28.814945 kernel: PCI: Using ACPI for IRQ routing
May 16 05:29:28.814952 kernel: PCI: pci_cache_line_size set to 64 bytes
May 16 05:29:28.814960 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 16 05:29:28.814967 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 16 05:29:28.814975 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
May 16 05:29:28.814982 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
May 16 05:29:28.814991 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 16 05:29:28.814999 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 16 05:29:28.815007 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
May 16 05:29:28.815014 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 16 05:29:28.815126 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 16 05:29:28.815244 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 16 05:29:28.815369 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 16 05:29:28.815382 kernel: vgaarb: loaded
May 16 05:29:28.815390 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 16 05:29:28.815398 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 16 05:29:28.815405 kernel: clocksource: Switched to clocksource kvm-clock
May 16 05:29:28.815413 kernel: VFS: Disk quotas dquot_6.6.0
May 16 05:29:28.815421 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 05:29:28.815428 kernel: pnp: PnP ACPI init
May 16 05:29:28.815574 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 16 05:29:28.815591 kernel: pnp: PnP ACPI: found 6 devices
May 16 05:29:28.815599 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 16 05:29:28.815607 kernel: NET: Registered PF_INET protocol family
May 16 05:29:28.815615 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 05:29:28.815622 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 05:29:28.815630 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 05:29:28.815638 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 05:29:28.815646 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 05:29:28.815654 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 05:29:28.815664 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 05:29:28.815672 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 05:29:28.815679 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 05:29:28.815687 kernel: NET: Registered PF_XDP protocol family
May 16 05:29:28.815801 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 16 05:29:28.815914 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 16 05:29:28.816017 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 16 05:29:28.816119 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 16 05:29:28.816225 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 16 05:29:28.816336 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 16 05:29:28.816452 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 16 05:29:28.816556 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 16 05:29:28.816566 kernel: PCI: CLS 0 bytes, default 64
May 16 05:29:28.816574 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 16 05:29:28.816583 kernel: Initialise system trusted keyrings
May 16 05:29:28.816594 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 05:29:28.816602 kernel: Key type asymmetric registered
May 16 05:29:28.816610 kernel: Asymmetric key parser 'x509' registered
May 16 05:29:28.816618 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 16 05:29:28.816626 kernel: io scheduler mq-deadline registered
May 16 05:29:28.816633 kernel: io scheduler kyber registered
May 16 05:29:28.816641 kernel: io scheduler bfq registered
May 16 05:29:28.816649 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 16 05:29:28.816660 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 16 05:29:28.816668 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 16 05:29:28.816676 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 16 05:29:28.816685 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 05:29:28.816694 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 16 05:29:28.816701 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 16 05:29:28.816710 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 16 05:29:28.816717 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 16 05:29:28.816832 kernel: rtc_cmos 00:04: RTC can wake from S4
May 16 05:29:28.816846 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 16 05:29:28.816953 kernel: rtc_cmos 00:04: registered as rtc0
May 16 05:29:28.817059 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T05:29:28 UTC (1747373368)
May 16 05:29:28.817164 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 16 05:29:28.817175 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 16 05:29:28.817183 kernel: efifb: probing for efifb
May 16 05:29:28.817191 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 16 05:29:28.817202 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 16 05:29:28.817210 kernel: efifb: scrolling: redraw
May 16 05:29:28.817217 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 16 05:29:28.817225 kernel: Console: switching to colour frame buffer device 160x50
May 16 05:29:28.817233 kernel: fb0: EFI VGA frame buffer device
May 16 05:29:28.817248 kernel: pstore: Using crash dump compression: deflate
May 16 05:29:28.817256 kernel: pstore: Registered efi_pstore as persistent store backend
May 16 05:29:28.817263 kernel: NET: Registered PF_INET6 protocol family
May 16 05:29:28.817271 kernel: Segment Routing with IPv6
May 16 05:29:28.817279 kernel: In-situ OAM (IOAM) with IPv6
May 16 05:29:28.817289 kernel: NET: Registered PF_PACKET protocol family
May 16 05:29:28.817297 kernel: Key type dns_resolver registered
May 16 05:29:28.817305 kernel: IPI shorthand broadcast: enabled
May 16 05:29:28.817313 kernel: sched_clock: Marking stable (2747002158, 157560102)->(2919939712, -15377452)
May 16 05:29:28.817321 kernel: registered taskstats version 1
May 16 05:29:28.817329 kernel: Loading compiled-in X.509 certificates
May 16 05:29:28.817337 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: bdcb483d7df54fb18e2c40c35e34935cfa928c44'
May 16 05:29:28.817344 kernel: Demotion targets for Node 0: null
May 16 05:29:28.817366 kernel: Key
type .fscrypt registered May 16 05:29:28.817385 kernel: Key type fscrypt-provisioning registered May 16 05:29:28.817400 kernel: ima: No TPM chip found, activating TPM-bypass! May 16 05:29:28.817416 kernel: ima: Allocated hash algorithm: sha1 May 16 05:29:28.817424 kernel: ima: No architecture policies found May 16 05:29:28.817432 kernel: clk: Disabling unused clocks May 16 05:29:28.817440 kernel: Warning: unable to open an initial console. May 16 05:29:28.817448 kernel: Freeing unused kernel image (initmem) memory: 54416K May 16 05:29:28.817456 kernel: Write protecting the kernel read-only data: 24576k May 16 05:29:28.817466 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 16 05:29:28.817474 kernel: Run /init as init process May 16 05:29:28.817482 kernel: with arguments: May 16 05:29:28.817490 kernel: /init May 16 05:29:28.817498 kernel: with environment: May 16 05:29:28.817506 kernel: HOME=/ May 16 05:29:28.817513 kernel: TERM=linux May 16 05:29:28.817521 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 05:29:28.817530 systemd[1]: Successfully made /usr/ read-only. May 16 05:29:28.817544 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 05:29:28.817553 systemd[1]: Detected virtualization kvm. May 16 05:29:28.817561 systemd[1]: Detected architecture x86-64. May 16 05:29:28.817569 systemd[1]: Running in initrd. May 16 05:29:28.817577 systemd[1]: No hostname configured, using default hostname. May 16 05:29:28.817586 systemd[1]: Hostname set to . May 16 05:29:28.817594 systemd[1]: Initializing machine ID from VM UUID. May 16 05:29:28.817604 systemd[1]: Queued start job for default target initrd.target. 
May 16 05:29:28.817613 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 05:29:28.817621 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 05:29:28.817630 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 16 05:29:28.817639 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 05:29:28.817647 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 16 05:29:28.817656 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 16 05:29:28.817669 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 16 05:29:28.817677 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 16 05:29:28.817686 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 05:29:28.817694 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 05:29:28.817702 systemd[1]: Reached target paths.target - Path Units. May 16 05:29:28.817711 systemd[1]: Reached target slices.target - Slice Units. May 16 05:29:28.817719 systemd[1]: Reached target swap.target - Swaps. May 16 05:29:28.817727 systemd[1]: Reached target timers.target - Timer Units. May 16 05:29:28.817736 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 16 05:29:28.817746 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 05:29:28.817754 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 16 05:29:28.817762 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
May 16 05:29:28.817771 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 05:29:28.817779 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 05:29:28.817787 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 05:29:28.817796 systemd[1]: Reached target sockets.target - Socket Units. May 16 05:29:28.817804 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 16 05:29:28.817814 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 05:29:28.817823 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 16 05:29:28.817832 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 16 05:29:28.817840 systemd[1]: Starting systemd-fsck-usr.service... May 16 05:29:28.817849 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 05:29:28.817857 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 05:29:28.817865 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 05:29:28.817874 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 16 05:29:28.817884 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 05:29:28.817893 systemd[1]: Finished systemd-fsck-usr.service. May 16 05:29:28.817901 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 05:29:28.817932 systemd-journald[220]: Collecting audit messages is disabled. May 16 05:29:28.817953 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 16 05:29:28.817963 systemd-journald[220]: Journal started May 16 05:29:28.817981 systemd-journald[220]: Runtime Journal (/run/log/journal/2e9c3a0170394a7eb2207c59d3622621) is 6M, max 48.5M, 42.4M free. May 16 05:29:28.809226 systemd-modules-load[222]: Inserted module 'overlay' May 16 05:29:28.820383 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 05:29:28.822373 systemd[1]: Started systemd-journald.service - Journal Service. May 16 05:29:28.823304 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 05:29:28.835388 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 05:29:28.836918 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 05:29:28.839831 kernel: Bridge firewalling registered May 16 05:29:28.839799 systemd-modules-load[222]: Inserted module 'br_netfilter' May 16 05:29:28.841465 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 05:29:28.843561 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 05:29:28.847191 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 05:29:28.849893 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 05:29:28.852089 systemd-tmpfiles[246]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 16 05:29:28.852883 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 05:29:28.857042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 05:29:28.859291 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 16 05:29:28.865027 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 05:29:28.866929 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 05:29:28.879859 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3d34a100811c7d0b0788e18c6e7d09762e715c8b4f61568827372c0312d26be3 May 16 05:29:28.916204 systemd-resolved[264]: Positive Trust Anchors: May 16 05:29:28.916219 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 05:29:28.916259 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 05:29:28.918707 systemd-resolved[264]: Defaulting to hostname 'linux'. May 16 05:29:28.919709 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 05:29:28.925156 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 05:29:28.988389 kernel: SCSI subsystem initialized May 16 05:29:28.997377 kernel: Loading iSCSI transport class v2.0-870. 
May 16 05:29:29.008380 kernel: iscsi: registered transport (tcp) May 16 05:29:29.029375 kernel: iscsi: registered transport (qla4xxx) May 16 05:29:29.029394 kernel: QLogic iSCSI HBA Driver May 16 05:29:29.049398 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 05:29:29.070643 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 05:29:29.072045 systemd[1]: Reached target network-pre.target - Preparation for Network. May 16 05:29:29.127872 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 05:29:29.130454 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 16 05:29:29.191385 kernel: raid6: avx2x4 gen() 30202 MB/s May 16 05:29:29.208374 kernel: raid6: avx2x2 gen() 30805 MB/s May 16 05:29:29.225462 kernel: raid6: avx2x1 gen() 25899 MB/s May 16 05:29:29.225498 kernel: raid6: using algorithm avx2x2 gen() 30805 MB/s May 16 05:29:29.243470 kernel: raid6: .... xor() 19935 MB/s, rmw enabled May 16 05:29:29.243498 kernel: raid6: using avx2x2 recovery algorithm May 16 05:29:29.263424 kernel: xor: automatically using best checksumming function avx May 16 05:29:29.425393 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 05:29:29.433761 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 05:29:29.435828 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 05:29:29.471091 systemd-udevd[472]: Using default interface naming scheme 'v255'. May 16 05:29:29.476546 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 05:29:29.478496 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 05:29:29.505202 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation May 16 05:29:29.534793 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 16 05:29:29.536617 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 05:29:29.605815 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 05:29:29.609205 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 05:29:29.644745 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 16 05:29:29.672224 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 05:29:29.673380 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 05:29:29.673394 kernel: GPT:9289727 != 19775487 May 16 05:29:29.673404 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 05:29:29.673414 kernel: GPT:9289727 != 19775487 May 16 05:29:29.673423 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 05:29:29.673435 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 05:29:29.668922 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 05:29:29.670756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 05:29:29.672183 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 05:29:29.676905 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 05:29:29.681374 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 16 05:29:29.682388 kernel: cryptd: max_cpu_qlen set to 1000 May 16 05:29:29.686392 kernel: libata version 3.00 loaded. May 16 05:29:29.693393 kernel: AES CTR mode by8 optimization enabled May 16 05:29:29.696262 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 16 05:29:29.702012 kernel: ahci 0000:00:1f.2: version 3.0 May 16 05:29:29.736962 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 16 05:29:29.736978 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 16 05:29:29.737122 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 16 05:29:29.737269 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 16 05:29:29.737414 kernel: scsi host0: ahci May 16 05:29:29.737557 kernel: scsi host1: ahci May 16 05:29:29.737692 kernel: scsi host2: ahci May 16 05:29:29.738713 kernel: scsi host3: ahci May 16 05:29:29.738865 kernel: scsi host4: ahci May 16 05:29:29.739004 kernel: scsi host5: ahci May 16 05:29:29.739136 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 May 16 05:29:29.739147 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 May 16 05:29:29.739157 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 May 16 05:29:29.739167 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 May 16 05:29:29.739181 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 May 16 05:29:29.739191 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 May 16 05:29:29.696412 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 05:29:29.702318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 05:29:29.733590 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 16 05:29:29.738964 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 05:29:29.752835 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
May 16 05:29:29.756065 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 16 05:29:29.765861 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 16 05:29:29.774517 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 05:29:29.775621 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 05:29:29.808542 disk-uuid[637]: Primary Header is updated. May 16 05:29:29.808542 disk-uuid[637]: Secondary Entries is updated. May 16 05:29:29.808542 disk-uuid[637]: Secondary Header is updated. May 16 05:29:29.812388 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 05:29:29.816379 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 05:29:30.041392 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 16 05:29:30.041472 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 16 05:29:30.042384 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 16 05:29:30.043725 kernel: ata3.00: applying bridge limits May 16 05:29:30.043739 kernel: ata3.00: configured for UDMA/100 May 16 05:29:30.049378 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 16 05:29:30.049408 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 16 05:29:30.050381 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 16 05:29:30.051387 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 16 05:29:30.052387 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 16 05:29:30.108023 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 16 05:29:30.128003 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 16 05:29:30.128015 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 16 05:29:30.450815 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
May 16 05:29:30.452654 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 05:29:30.454454 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 05:29:30.454894 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 05:29:30.456061 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 05:29:30.487732 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 05:29:30.838859 disk-uuid[638]: The operation has completed successfully. May 16 05:29:30.840075 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 05:29:30.869364 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 05:29:30.869482 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 05:29:30.903114 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 05:29:30.931524 sh[666]: Success May 16 05:29:30.956478 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 16 05:29:30.956504 kernel: device-mapper: uevent: version 1.0.3 May 16 05:29:30.957565 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 16 05:29:30.966383 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 16 05:29:30.995099 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 05:29:30.997324 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 16 05:29:31.019544 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 16 05:29:31.032865 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 16 05:29:31.032918 kernel: BTRFS: device fsid 902d5020-5ef8-4867-9c12-521b17a28d91 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (678) May 16 05:29:31.034154 kernel: BTRFS info (device dm-0): first mount of filesystem 902d5020-5ef8-4867-9c12-521b17a28d91 May 16 05:29:31.034192 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 16 05:29:31.035649 kernel: BTRFS info (device dm-0): using free-space-tree May 16 05:29:31.039649 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 05:29:31.040420 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 16 05:29:31.041813 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 05:29:31.042639 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 05:29:31.046252 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 05:29:31.081389 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (711) May 16 05:29:31.081439 kernel: BTRFS info (device vda6): first mount of filesystem 537c4928-e0e3-497e-abf9-5cf4aa6e5693 May 16 05:29:31.082374 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 05:29:31.083830 kernel: BTRFS info (device vda6): using free-space-tree May 16 05:29:31.090370 kernel: BTRFS info (device vda6): last unmount of filesystem 537c4928-e0e3-497e-abf9-5cf4aa6e5693 May 16 05:29:31.090766 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 05:29:31.092329 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 16 05:29:31.172591 ignition[762]: Ignition 2.21.0 May 16 05:29:31.172610 ignition[762]: Stage: fetch-offline May 16 05:29:31.172640 ignition[762]: no configs at "/usr/lib/ignition/base.d" May 16 05:29:31.172649 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:29:31.172739 ignition[762]: parsed url from cmdline: "" May 16 05:29:31.172742 ignition[762]: no config URL provided May 16 05:29:31.172747 ignition[762]: reading system config file "/usr/lib/ignition/user.ign" May 16 05:29:31.172756 ignition[762]: no config at "/usr/lib/ignition/user.ign" May 16 05:29:31.172778 ignition[762]: op(1): [started] loading QEMU firmware config module May 16 05:29:31.172783 ignition[762]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 05:29:31.182073 ignition[762]: op(1): [finished] loading QEMU firmware config module May 16 05:29:31.184379 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 05:29:31.189175 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 05:29:31.225014 ignition[762]: parsing config with SHA512: 28d29a97144c8a891c6ec8240ad4a6600cb35ebdd92d0cc260423b4d9bc7dac0a49df60452450caf3414dfcbd67aa1aa0e6ef66dc7c870404ebc3cbbf2daf34e May 16 05:29:31.231569 unknown[762]: fetched base config from "system" May 16 05:29:31.232424 unknown[762]: fetched user config from "qemu" May 16 05:29:31.232842 ignition[762]: fetch-offline: fetch-offline passed May 16 05:29:31.232907 ignition[762]: Ignition finished successfully May 16 05:29:31.238043 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 16 05:29:31.245527 systemd-networkd[857]: lo: Link UP May 16 05:29:31.245540 systemd-networkd[857]: lo: Gained carrier May 16 05:29:31.247211 systemd-networkd[857]: Enumeration completed May 16 05:29:31.247608 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 05:29:31.247612 systemd-networkd[857]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 05:29:31.248265 systemd-networkd[857]: eth0: Link UP May 16 05:29:31.248269 systemd-networkd[857]: eth0: Gained carrier May 16 05:29:31.248277 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 05:29:31.248624 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 05:29:31.257082 systemd[1]: Reached target network.target - Network. May 16 05:29:31.257376 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 05:29:31.260832 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 16 05:29:31.271424 systemd-networkd[857]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 05:29:31.291329 ignition[861]: Ignition 2.21.0 May 16 05:29:31.291343 ignition[861]: Stage: kargs May 16 05:29:31.291533 ignition[861]: no configs at "/usr/lib/ignition/base.d" May 16 05:29:31.291545 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:29:31.294058 ignition[861]: kargs: kargs passed May 16 05:29:31.294600 ignition[861]: Ignition finished successfully May 16 05:29:31.299611 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 05:29:31.300966 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 16 05:29:31.334451 ignition[870]: Ignition 2.21.0 May 16 05:29:31.334468 ignition[870]: Stage: disks May 16 05:29:31.334633 ignition[870]: no configs at "/usr/lib/ignition/base.d" May 16 05:29:31.334648 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:29:31.337879 ignition[870]: disks: disks passed May 16 05:29:31.338457 ignition[870]: Ignition finished successfully May 16 05:29:31.342393 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 05:29:31.344626 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 05:29:31.345123 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 05:29:31.345638 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 05:29:31.345968 systemd[1]: Reached target sysinit.target - System Initialization. May 16 05:29:31.346723 systemd[1]: Reached target basic.target - Basic System. May 16 05:29:31.354568 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 05:29:31.378403 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 16 05:29:31.386851 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 05:29:31.390854 systemd[1]: Mounting sysroot.mount - /sysroot... May 16 05:29:31.502377 kernel: EXT4-fs (vda9): mounted filesystem c6031f04-b45d-4ec8-a78e-9b0eb2cfd779 r/w with ordered data mode. Quota mode: none. May 16 05:29:31.502606 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 05:29:31.504022 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 05:29:31.506663 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 05:29:31.508435 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 05:29:31.509721 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
May 16 05:29:31.509759 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 05:29:31.509780 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 05:29:31.521286 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 05:29:31.522791 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 05:29:31.527380 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (888) May 16 05:29:31.530091 kernel: BTRFS info (device vda6): first mount of filesystem 537c4928-e0e3-497e-abf9-5cf4aa6e5693 May 16 05:29:31.530117 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 05:29:31.530128 kernel: BTRFS info (device vda6): using free-space-tree May 16 05:29:31.534588 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 05:29:31.558084 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory May 16 05:29:31.563107 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory May 16 05:29:31.567885 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory May 16 05:29:31.572547 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory May 16 05:29:31.653768 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 05:29:31.656931 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 05:29:31.657866 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 05:29:31.677389 kernel: BTRFS info (device vda6): last unmount of filesystem 537c4928-e0e3-497e-abf9-5cf4aa6e5693 May 16 05:29:31.688522 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 16 05:29:31.702062 ignition[1002]: INFO : Ignition 2.21.0 May 16 05:29:31.702062 ignition[1002]: INFO : Stage: mount May 16 05:29:31.703798 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 05:29:31.703798 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:29:31.707000 ignition[1002]: INFO : mount: mount passed May 16 05:29:31.707000 ignition[1002]: INFO : Ignition finished successfully May 16 05:29:31.709664 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 05:29:31.712622 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 05:29:32.032110 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 05:29:32.033901 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 05:29:32.065930 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1014) May 16 05:29:32.065960 kernel: BTRFS info (device vda6): first mount of filesystem 537c4928-e0e3-497e-abf9-5cf4aa6e5693 May 16 05:29:32.065972 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 05:29:32.067442 kernel: BTRFS info (device vda6): using free-space-tree May 16 05:29:32.070994 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 05:29:32.107376 ignition[1031]: INFO : Ignition 2.21.0 May 16 05:29:32.108574 ignition[1031]: INFO : Stage: files May 16 05:29:32.108574 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 05:29:32.108574 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:29:32.111569 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping May 16 05:29:32.113051 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 05:29:32.113051 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 05:29:32.116029 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 05:29:32.116029 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 05:29:32.116029 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 05:29:32.115799 unknown[1031]: wrote ssh authorized keys file for user: core May 16 05:29:32.121296 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 16 05:29:32.121296 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 May 16 05:29:32.194637 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 05:29:32.327160 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 16 05:29:32.327160 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 05:29:32.331230 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 16 05:29:32.763243 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 16 05:29:32.857549 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 05:29:32.857549 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 16 05:29:32.861477 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 16 05:29:32.861477 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 05:29:32.861477 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 05:29:32.861477 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 05:29:32.861477 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 05:29:32.861477 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 05:29:32.861477 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 05:29:32.874474 systemd-networkd[857]: eth0: Gained IPv6LL May 16 05:29:32.908410 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 05:29:32.910395 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 05:29:32.910395 
ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 16 05:29:32.983394 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 16 05:29:32.983394 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 16 05:29:32.988151 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 May 16 05:29:33.793297 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 16 05:29:34.233888 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 16 05:29:34.233888 ignition[1031]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 16 05:29:34.237772 ignition[1031]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 05:29:34.253314 ignition[1031]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 05:29:34.253314 ignition[1031]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 16 05:29:34.253314 ignition[1031]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 16 05:29:34.258808 ignition[1031]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 05:29:34.258808 ignition[1031]: INFO : files: op(e): 
op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 05:29:34.258808 ignition[1031]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 16 05:29:34.258808 ignition[1031]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 16 05:29:34.272682 ignition[1031]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 05:29:34.276167 ignition[1031]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 05:29:34.278134 ignition[1031]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 16 05:29:34.278134 ignition[1031]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 16 05:29:34.278134 ignition[1031]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 16 05:29:34.278134 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 05:29:34.278134 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 05:29:34.278134 ignition[1031]: INFO : files: files passed May 16 05:29:34.278134 ignition[1031]: INFO : Ignition finished successfully May 16 05:29:34.280163 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 05:29:34.288267 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 05:29:34.292853 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 05:29:34.307284 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 05:29:34.307441 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 16 05:29:34.311087 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory May 16 05:29:34.313218 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 05:29:34.315254 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 05:29:34.315254 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 16 05:29:34.318273 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 05:29:34.319983 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 16 05:29:34.323493 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 16 05:29:34.378158 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 05:29:34.378290 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 16 05:29:34.379253 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 16 05:29:34.383846 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 05:29:34.384382 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 05:29:34.386201 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 05:29:34.424079 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 05:29:34.426608 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 05:29:34.445265 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 16 05:29:34.445626 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 05:29:34.445978 systemd[1]: Stopped target timers.target - Timer Units. 
May 16 05:29:34.446308 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 05:29:34.446431 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 05:29:34.447159 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 05:29:34.447660 systemd[1]: Stopped target basic.target - Basic System. May 16 05:29:34.447988 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 05:29:34.448331 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 05:29:34.448832 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 16 05:29:34.449170 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 16 05:29:34.449670 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 16 05:29:34.449995 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 16 05:29:34.450348 systemd[1]: Stopped target sysinit.target - System Initialization. May 16 05:29:34.450844 systemd[1]: Stopped target local-fs.target - Local File Systems. May 16 05:29:34.451176 systemd[1]: Stopped target swap.target - Swaps. May 16 05:29:34.451642 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 05:29:34.451747 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 16 05:29:34.476114 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 16 05:29:34.476817 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 05:29:34.477119 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 16 05:29:34.477216 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 05:29:34.477629 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 05:29:34.477728 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
May 16 05:29:34.478299 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 05:29:34.478411 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 16 05:29:34.478865 systemd[1]: Stopped target paths.target - Path Units. May 16 05:29:34.479127 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 05:29:34.482397 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 05:29:34.490220 systemd[1]: Stopped target slices.target - Slice Units. May 16 05:29:34.490716 systemd[1]: Stopped target sockets.target - Socket Units. May 16 05:29:34.491056 systemd[1]: iscsid.socket: Deactivated successfully. May 16 05:29:34.491143 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 16 05:29:34.491614 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 05:29:34.491685 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 05:29:34.498282 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 05:29:34.498403 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 05:29:34.501122 systemd[1]: ignition-files.service: Deactivated successfully. May 16 05:29:34.501228 systemd[1]: Stopped ignition-files.service - Ignition (files). May 16 05:29:34.506257 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 16 05:29:34.506716 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 05:29:34.506824 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 16 05:29:34.508192 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 16 05:29:34.510756 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 05:29:34.510926 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 16 05:29:34.512921 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 05:29:34.513027 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 16 05:29:34.519928 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 05:29:34.520037 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 16 05:29:34.537761 ignition[1086]: INFO : Ignition 2.21.0 May 16 05:29:34.537761 ignition[1086]: INFO : Stage: umount May 16 05:29:34.539521 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 05:29:34.539521 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:29:34.541780 ignition[1086]: INFO : umount: umount passed May 16 05:29:34.541780 ignition[1086]: INFO : Ignition finished successfully May 16 05:29:34.542223 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 05:29:34.544282 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 05:29:34.544412 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 16 05:29:34.545539 systemd[1]: Stopped target network.target - Network. May 16 05:29:34.545878 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 05:29:34.545925 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 16 05:29:34.546234 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 05:29:34.546276 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 16 05:29:34.551345 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 05:29:34.551431 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 16 05:29:34.551925 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 16 05:29:34.551963 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 16 05:29:34.552406 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
May 16 05:29:34.556985 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 16 05:29:34.563805 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 05:29:34.563948 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 16 05:29:34.568605 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 16 05:29:34.568919 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 16 05:29:34.568970 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 05:29:34.572989 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 16 05:29:34.574663 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 05:29:34.574787 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 16 05:29:34.578660 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 16 05:29:34.578821 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 16 05:29:34.580751 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 05:29:34.580792 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 16 05:29:34.584347 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 16 05:29:34.584830 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 05:29:34.584877 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 05:29:34.585204 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 05:29:34.585243 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 05:29:34.591479 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 05:29:34.591526 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
May 16 05:29:34.592051 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 05:29:34.593185 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 05:29:34.615208 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 05:29:34.615398 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 05:29:34.615901 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 05:29:34.615944 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 16 05:29:34.619011 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 05:29:34.619049 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 16 05:29:34.619331 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 05:29:34.619396 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 16 05:29:34.620526 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 05:29:34.620573 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 16 05:29:34.627783 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 05:29:34.627838 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 05:29:34.631398 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 16 05:29:34.632116 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 16 05:29:34.632168 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 16 05:29:34.636219 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 05:29:34.636265 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 05:29:34.639662 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 16 05:29:34.639711 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 05:29:34.643511 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 05:29:34.645499 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 16 05:29:34.652071 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 05:29:34.652191 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 16 05:29:34.762380 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 05:29:34.762507 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 16 05:29:34.763373 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 16 05:29:34.767057 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 05:29:34.767126 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 16 05:29:34.768545 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 16 05:29:34.797077 systemd[1]: Switching root. May 16 05:29:34.843642 systemd-journald[220]: Journal stopped May 16 05:29:36.169631 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). 
May 16 05:29:36.169701 kernel: SELinux: policy capability network_peer_controls=1 May 16 05:29:36.169725 kernel: SELinux: policy capability open_perms=1 May 16 05:29:36.169743 kernel: SELinux: policy capability extended_socket_class=1 May 16 05:29:36.169754 kernel: SELinux: policy capability always_check_network=0 May 16 05:29:36.169765 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 05:29:36.169776 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 05:29:36.169787 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 05:29:36.169798 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 05:29:36.169809 kernel: SELinux: policy capability userspace_initial_context=0 May 16 05:29:36.169822 kernel: audit: type=1403 audit(1747373375.381:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 05:29:36.169834 systemd[1]: Successfully loaded SELinux policy in 47.229ms. May 16 05:29:36.169860 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.341ms. May 16 05:29:36.169873 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 05:29:36.169886 systemd[1]: Detected virtualization kvm. May 16 05:29:36.169897 systemd[1]: Detected architecture x86-64. May 16 05:29:36.169909 systemd[1]: Detected first boot. May 16 05:29:36.169920 systemd[1]: Initializing machine ID from VM UUID. May 16 05:29:36.169932 zram_generator::config[1133]: No configuration found. 
May 16 05:29:36.169947 kernel: Guest personality initialized and is inactive May 16 05:29:36.169959 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 16 05:29:36.169970 kernel: Initialized host personality May 16 05:29:36.169981 kernel: NET: Registered PF_VSOCK protocol family May 16 05:29:36.169992 systemd[1]: Populated /etc with preset unit settings. May 16 05:29:36.170005 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 16 05:29:36.170017 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 05:29:36.170029 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 16 05:29:36.170043 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 05:29:36.170064 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 16 05:29:36.170076 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 16 05:29:36.170089 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 16 05:29:36.170101 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 16 05:29:36.170113 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 16 05:29:36.170130 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 16 05:29:36.170142 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 16 05:29:36.170155 systemd[1]: Created slice user.slice - User and Session Slice. May 16 05:29:36.170167 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 05:29:36.170179 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 05:29:36.170191 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
May 16 05:29:36.170203 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 16 05:29:36.170216 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 16 05:29:36.170228 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 05:29:36.170240 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 16 05:29:36.170254 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 05:29:36.170266 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 05:29:36.170278 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 16 05:29:36.170290 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 16 05:29:36.170302 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 16 05:29:36.170314 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 16 05:29:36.170326 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 05:29:36.170338 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 05:29:36.170363 systemd[1]: Reached target slices.target - Slice Units. May 16 05:29:36.170377 systemd[1]: Reached target swap.target - Swaps. May 16 05:29:36.170389 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 16 05:29:36.170401 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 16 05:29:36.170413 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 16 05:29:36.170425 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 05:29:36.170436 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 05:29:36.170448 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 16 05:29:36.170460 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 16 05:29:36.170472 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 16 05:29:36.170484 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 16 05:29:36.170498 systemd[1]: Mounting media.mount - External Media Directory... May 16 05:29:36.170510 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 05:29:36.170522 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 16 05:29:36.170534 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 16 05:29:36.170546 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 16 05:29:36.170558 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 05:29:36.170570 systemd[1]: Reached target machines.target - Containers. May 16 05:29:36.170582 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 16 05:29:36.170596 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 05:29:36.170608 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 05:29:36.170620 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 16 05:29:36.170632 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 05:29:36.170644 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 05:29:36.170656 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 05:29:36.170668 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
May 16 05:29:36.170680 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 05:29:36.170693 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 05:29:36.170705 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 16 05:29:36.170718 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 16 05:29:36.170730 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 16 05:29:36.170741 systemd[1]: Stopped systemd-fsck-usr.service. May 16 05:29:36.170754 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 05:29:36.170770 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 05:29:36.170785 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 05:29:36.170800 kernel: loop: module loaded May 16 05:29:36.170818 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 05:29:36.170833 kernel: fuse: init (API version 7.41) May 16 05:29:36.170849 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 16 05:29:36.170861 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 16 05:29:36.170873 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 05:29:36.170887 systemd[1]: verity-setup.service: Deactivated successfully. May 16 05:29:36.170899 systemd[1]: Stopped verity-setup.service. May 16 05:29:36.170911 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 16 05:29:36.170922 kernel: ACPI: bus type drm_connector registered May 16 05:29:36.170934 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 16 05:29:36.170946 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 16 05:29:36.170958 systemd[1]: Mounted media.mount - External Media Directory. May 16 05:29:36.170970 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 16 05:29:36.170984 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 16 05:29:36.170995 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 16 05:29:36.171007 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 16 05:29:36.171020 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 05:29:36.171032 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 05:29:36.171072 systemd-journald[1202]: Collecting audit messages is disabled. May 16 05:29:36.171099 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 16 05:29:36.171111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 05:29:36.171123 systemd-journald[1202]: Journal started May 16 05:29:36.171145 systemd-journald[1202]: Runtime Journal (/run/log/journal/2e9c3a0170394a7eb2207c59d3622621) is 6M, max 48.5M, 42.4M free. May 16 05:29:35.907743 systemd[1]: Queued start job for default target multi-user.target. May 16 05:29:35.927247 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 16 05:29:35.927699 systemd[1]: systemd-journald.service: Deactivated successfully. May 16 05:29:36.172370 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 05:29:36.175373 systemd[1]: Started systemd-journald.service - Journal Service. May 16 05:29:36.176322 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 16 05:29:36.176548 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 05:29:36.177910 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 05:29:36.178124 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 05:29:36.179679 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 05:29:36.179885 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 16 05:29:36.181275 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 05:29:36.181495 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 05:29:36.182916 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 05:29:36.184342 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 05:29:36.186229 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 16 05:29:36.187930 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 16 05:29:36.203666 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 05:29:36.206419 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 16 05:29:36.210465 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 16 05:29:36.211749 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 05:29:36.211785 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 05:29:36.213944 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 16 05:29:36.218159 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 05:29:36.219631 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 05:29:36.221336 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 05:29:36.225473 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 05:29:36.226781 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 05:29:36.228528 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 05:29:36.229733 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 05:29:36.231867 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 05:29:36.235864 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 05:29:36.238519 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 05:29:36.241755 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 05:29:36.244431 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 05:29:36.246002 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 05:29:36.248467 systemd-journald[1202]: Time spent on flushing to /var/log/journal/2e9c3a0170394a7eb2207c59d3622621 is 23.898ms for 1069 entries.
May 16 05:29:36.248467 systemd-journald[1202]: System Journal (/var/log/journal/2e9c3a0170394a7eb2207c59d3622621) is 8M, max 195.6M, 187.6M free.
May 16 05:29:36.422442 systemd-journald[1202]: Received client request to flush runtime journal.
May 16 05:29:36.422495 kernel: loop0: detected capacity change from 0 to 229808
May 16 05:29:36.422520 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 05:29:36.422868 kernel: loop1: detected capacity change from 0 to 113872
May 16 05:29:36.422888 kernel: loop2: detected capacity change from 0 to 146240
May 16 05:29:36.422910 kernel: loop3: detected capacity change from 0 to 229808
May 16 05:29:36.422927 kernel: loop4: detected capacity change from 0 to 113872
May 16 05:29:36.255689 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 05:29:36.257325 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 05:29:36.261551 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 16 05:29:36.279867 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 05:29:36.425848 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 05:29:36.427157 kernel: loop5: detected capacity change from 0 to 146240
May 16 05:29:36.427659 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 05:29:36.430819 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 05:29:36.495635 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
May 16 05:29:36.495653 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
May 16 05:29:36.495957 (sd-merge)[1266]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 16 05:29:36.496565 (sd-merge)[1266]: Merged extensions into '/usr'.
May 16 05:29:36.502287 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 05:29:36.504717 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 05:29:36.504732 systemd[1]: Reloading...
May 16 05:29:36.552384 zram_generator::config[1298]: No configuration found.
May 16 05:29:36.666707 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 05:29:36.667553 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 05:29:36.747263 systemd[1]: Reloading finished in 242 ms.
May 16 05:29:36.781137 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 05:29:36.783054 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 05:29:36.784671 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 16 05:29:36.802186 systemd[1]: Starting ensure-sysext.service...
May 16 05:29:36.804236 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 05:29:36.815794 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)...
May 16 05:29:36.815812 systemd[1]: Reloading...
May 16 05:29:36.826712 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 16 05:29:36.826759 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 16 05:29:36.827135 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 05:29:36.827505 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 05:29:36.828791 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 05:29:36.829121 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
May 16 05:29:36.829249 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
May 16 05:29:36.833388 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
May 16 05:29:36.833470 systemd-tmpfiles[1337]: Skipping /boot
May 16 05:29:36.846148 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
May 16 05:29:36.846222 systemd-tmpfiles[1337]: Skipping /boot
May 16 05:29:36.877380 zram_generator::config[1371]: No configuration found.
May 16 05:29:37.047569 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 05:29:37.128647 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 05:29:37.128800 systemd[1]: Reloading finished in 312 ms.
May 16 05:29:37.151960 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 16 05:29:37.175519 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 05:29:37.184856 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 05:29:37.187273 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 16 05:29:37.189645 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 16 05:29:37.206263 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 05:29:37.209188 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 05:29:37.211966 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 16 05:29:37.216764 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 05:29:37.216922 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 05:29:37.218555 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 05:29:37.221602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 05:29:37.225531 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 05:29:37.226820 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 05:29:37.226927 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 05:29:37.234790 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 16 05:29:37.235894 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 05:29:37.237472 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 16 05:29:37.239252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 05:29:37.239469 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 05:29:37.241294 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 05:29:37.244995 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 05:29:37.246753 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 05:29:37.246953 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 05:29:37.249937 systemd-udevd[1408]: Using default interface naming scheme 'v255'.
May 16 05:29:37.258711 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 16 05:29:37.264166 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 05:29:37.264735 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 05:29:37.267242 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 05:29:37.270612 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 05:29:37.275564 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 05:29:37.280609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 05:29:37.280972 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 05:29:37.281080 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 05:29:37.282247 augenrules[1440]: No rules
May 16 05:29:37.282561 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 16 05:29:37.283696 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 05:29:37.285128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 05:29:37.286106 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 05:29:37.287931 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 05:29:37.288227 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 05:29:37.289863 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 05:29:37.290523 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 05:29:37.292421 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 05:29:37.294343 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 05:29:37.294577 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 05:29:37.314298 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 16 05:29:37.315981 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 16 05:29:37.317799 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 05:29:37.318008 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 05:29:37.319843 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 16 05:29:37.332643 systemd[1]: Finished ensure-sysext.service.
May 16 05:29:37.347989 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 05:29:37.349559 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 05:29:37.349640 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 05:29:37.353671 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 16 05:29:37.354881 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 05:29:37.374525 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 16 05:29:37.415383 kernel: mousedev: PS/2 mouse device common for all mice
May 16 05:29:37.438891 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 05:29:37.442445 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 16 05:29:37.444159 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 16 05:29:37.451392 kernel: ACPI: button: Power Button [PWRF]
May 16 05:29:37.455711 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 16 05:29:37.457248 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 16 05:29:37.457450 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 16 05:29:37.473449 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 16 05:29:37.513425 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 05:29:37.527249 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 05:29:37.527583 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 05:29:37.531643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 05:29:37.568170 systemd-resolved[1406]: Positive Trust Anchors:
May 16 05:29:37.568565 systemd-resolved[1406]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 05:29:37.568644 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 05:29:37.570316 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 16 05:29:37.570721 systemd[1]: Reached target time-set.target - System Time Set.
May 16 05:29:37.576146 systemd-resolved[1406]: Defaulting to hostname 'linux'.
May 16 05:29:37.579706 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 05:29:37.580001 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 05:29:37.585760 systemd-networkd[1488]: lo: Link UP
May 16 05:29:37.585772 systemd-networkd[1488]: lo: Gained carrier
May 16 05:29:37.588048 systemd-networkd[1488]: Enumeration completed
May 16 05:29:37.588116 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 05:29:37.588417 systemd[1]: Reached target network.target - Network.
May 16 05:29:37.590287 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 16 05:29:37.591531 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 05:29:37.591535 systemd-networkd[1488]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 05:29:37.592161 systemd-networkd[1488]: eth0: Link UP
May 16 05:29:37.592310 systemd-networkd[1488]: eth0: Gained carrier
May 16 05:29:37.592323 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 05:29:37.597869 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 16 05:29:37.617422 systemd-networkd[1488]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 05:29:37.618316 systemd-timesyncd[1490]: Network configuration changed, trying to establish connection.
May 16 05:29:38.992675 systemd-timesyncd[1490]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 16 05:29:38.992781 systemd-timesyncd[1490]: Initial clock synchronization to Fri 2025-05-16 05:29:38.992535 UTC.
May 16 05:29:38.993042 systemd-resolved[1406]: Clock change detected. Flushing caches.
May 16 05:29:39.001757 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 16 05:29:39.018982 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 05:29:39.020769 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 05:29:39.021965 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 16 05:29:39.024661 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 16 05:29:39.025983 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 16 05:29:39.027334 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 16 05:29:39.028509 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 16 05:29:39.030667 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 16 05:29:39.031925 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 16 05:29:39.031955 systemd[1]: Reached target paths.target - Path Units.
May 16 05:29:39.032887 systemd[1]: Reached target timers.target - Timer Units.
May 16 05:29:39.034742 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 16 05:29:39.037478 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 16 05:29:39.044375 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 16 05:29:39.051178 kernel: kvm_amd: TSC scaling supported
May 16 05:29:39.051228 kernel: kvm_amd: Nested Virtualization enabled
May 16 05:29:39.051274 kernel: kvm_amd: Nested Paging enabled
May 16 05:29:39.051296 kernel: kvm_amd: LBR virtualization supported
May 16 05:29:39.051315 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 16 05:29:39.051336 kernel: kvm_amd: Virtual GIF supported
May 16 05:29:39.046148 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 16 05:29:39.051522 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 16 05:29:39.059179 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 16 05:29:39.061193 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 16 05:29:39.063773 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 16 05:29:39.066384 systemd[1]: Reached target sockets.target - Socket Units.
May 16 05:29:39.067454 systemd[1]: Reached target basic.target - Basic System.
May 16 05:29:39.068500 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 16 05:29:39.068600 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 16 05:29:39.069699 systemd[1]: Starting containerd.service - containerd container runtime...
May 16 05:29:39.072427 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 16 05:29:39.075881 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 16 05:29:39.078864 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 16 05:29:39.081650 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 16 05:29:39.082739 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 16 05:29:39.084164 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 16 05:29:39.087955 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 16 05:29:39.092070 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 16 05:29:39.095760 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 16 05:29:39.097968 oslogin_cache_refresh[1539]: Refreshing passwd entry cache
May 16 05:29:39.098869 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Refreshing passwd entry cache
May 16 05:29:39.102588 jq[1537]: false
May 16 05:29:39.099818 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 16 05:29:39.105964 systemd[1]: Starting systemd-logind.service - User Login Management...
May 16 05:29:39.107977 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 16 05:29:39.108457 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 16 05:29:39.111779 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Failure getting users, quitting
May 16 05:29:39.111770 oslogin_cache_refresh[1539]: Failure getting users, quitting
May 16 05:29:39.115630 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 16 05:29:39.115630 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Refreshing group entry cache
May 16 05:29:39.115694 extend-filesystems[1538]: Found loop3
May 16 05:29:39.115694 extend-filesystems[1538]: Found loop4
May 16 05:29:39.115694 extend-filesystems[1538]: Found loop5
May 16 05:29:39.115694 extend-filesystems[1538]: Found sr0
May 16 05:29:39.115694 extend-filesystems[1538]: Found vda
May 16 05:29:39.115694 extend-filesystems[1538]: Found vda1
May 16 05:29:39.115694 extend-filesystems[1538]: Found vda2
May 16 05:29:39.115694 extend-filesystems[1538]: Found vda3
May 16 05:29:39.115694 extend-filesystems[1538]: Found usr
May 16 05:29:39.115694 extend-filesystems[1538]: Found vda4
May 16 05:29:39.115694 extend-filesystems[1538]: Found vda6
May 16 05:29:39.115694 extend-filesystems[1538]: Found vda7
May 16 05:29:39.115694 extend-filesystems[1538]: Found vda9
May 16 05:29:39.115694 extend-filesystems[1538]: Checking size of /dev/vda9
May 16 05:29:39.139368 kernel: EDAC MC: Ver: 3.0.0
May 16 05:29:39.111793 oslogin_cache_refresh[1539]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 16 05:29:39.111843 systemd[1]: Starting update-engine.service - Update Engine...
May 16 05:29:39.139545 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Failure getting groups, quitting
May 16 05:29:39.139545 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 16 05:29:39.112915 oslogin_cache_refresh[1539]: Refreshing group entry cache
May 16 05:29:39.114313 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 16 05:29:39.139677 jq[1554]: true
May 16 05:29:39.120648 oslogin_cache_refresh[1539]: Failure getting groups, quitting
May 16 05:29:39.123859 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 16 05:29:39.120657 oslogin_cache_refresh[1539]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 16 05:29:39.125461 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 16 05:29:39.126721 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 16 05:29:39.127039 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 16 05:29:39.127275 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 16 05:29:39.128445 systemd[1]: motdgen.service: Deactivated successfully.
May 16 05:29:39.129539 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 16 05:29:39.138418 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 16 05:29:39.138742 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 16 05:29:39.146686 update_engine[1552]: I20250516 05:29:39.141695 1552 main.cc:92] Flatcar Update Engine starting
May 16 05:29:39.148008 extend-filesystems[1538]: Resized partition /dev/vda9
May 16 05:29:39.154165 (ntainerd)[1567]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 16 05:29:39.155835 jq[1563]: true
May 16 05:29:39.200708 extend-filesystems[1593]: resize2fs 1.47.2 (1-Jan-2025)
May 16 05:29:39.208000 tar[1561]: linux-amd64/LICENSE
May 16 05:29:39.208251 tar[1561]: linux-amd64/helm
May 16 05:29:39.218807 systemd-logind[1546]: Watching system buttons on /dev/input/event2 (Power Button)
May 16 05:29:39.218837 systemd-logind[1546]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 16 05:29:39.218991 dbus-daemon[1535]: [system] SELinux support is enabled
May 16 05:29:39.219694 systemd-logind[1546]: New seat seat0.
May 16 05:29:39.220181 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 16 05:29:39.233445 update_engine[1552]: I20250516 05:29:39.233268 1552 update_check_scheduler.cc:74] Next update check in 5m38s
May 16 05:29:39.234404 systemd[1]: Started systemd-logind.service - User Login Management.
May 16 05:29:39.239873 dbus-daemon[1535]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 16 05:29:39.239141 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 16 05:29:39.239159 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 16 05:29:39.240467 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 16 05:29:39.240482 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 16 05:29:39.241808 systemd[1]: Started update-engine.service - Update Engine.
May 16 05:29:39.244635 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 16 05:29:39.307284 locksmithd[1595]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 16 05:29:39.363940 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 16 05:29:39.378601 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 16 05:29:39.388272 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 16 05:29:39.391560 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 16 05:29:39.411086 systemd[1]: issuegen.service: Deactivated successfully.
May 16 05:29:39.411371 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 16 05:29:39.414212 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 16 05:29:39.445394 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 16 05:29:39.449809 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 16 05:29:39.451975 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 16 05:29:39.453307 systemd[1]: Reached target getty.target - Login Prompts.
May 16 05:29:39.613610 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 16 05:29:39.724857 extend-filesystems[1593]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 16 05:29:39.724857 extend-filesystems[1593]: old_desc_blocks = 1, new_desc_blocks = 1
May 16 05:29:39.724857 extend-filesystems[1593]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 16 05:29:39.731829 extend-filesystems[1538]: Resized filesystem in /dev/vda9
May 16 05:29:39.732758 bash[1592]: Updated "/home/core/.ssh/authorized_keys"
May 16 05:29:39.726831 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 16 05:29:39.727117 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 16 05:29:39.729730 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 16 05:29:39.734231 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 16 05:29:39.745765 containerd[1567]: time="2025-05-16T05:29:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 16 05:29:39.746414 containerd[1567]: time="2025-05-16T05:29:39.746379406Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 16 05:29:39.754303 containerd[1567]: time="2025-05-16T05:29:39.754267301Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.667µs"
May 16 05:29:39.754303 containerd[1567]: time="2025-05-16T05:29:39.754292448Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 16 05:29:39.754360 containerd[1567]: time="2025-05-16T05:29:39.754309060Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 16 05:29:39.754480 containerd[1567]: time="2025-05-16T05:29:39.754454502Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 16 05:29:39.754480 containerd[1567]: time="2025-05-16T05:29:39.754473157Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 16 05:29:39.754534 containerd[1567]: time="2025-05-16T05:29:39.754494748Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 16 05:29:39.754589 containerd[1567]: time="2025-05-16T05:29:39.754554089Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 16 05:29:39.754589 containerd[1567]: time="2025-05-16T05:29:39.754583124Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 16 05:29:39.754840 containerd[1567]: time="2025-05-16T05:29:39.754805962Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 16 05:29:39.754840 containerd[1567]: time="2025-05-16T05:29:39.754823765Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 16 05:29:39.754840 containerd[1567]: time="2025-05-16T05:29:39.754833293Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 16 05:29:39.754840 containerd[1567]: time="2025-05-16T05:29:39.754840547Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 16 05:29:39.754951 containerd[1567]: time="2025-05-16T05:29:39.754927910Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 16 05:29:39.755165 containerd[1567]: time="2025-05-16T05:29:39.755133516Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 16 05:29:39.755193 containerd[1567]: time="2025-05-16T05:29:39.755166328Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 16 05:29:39.755193 containerd[1567]: time="2025-05-16T05:29:39.755176647Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 16 05:29:39.755239 containerd[1567]: time="2025-05-16T05:29:39.755209178Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 16 05:29:39.756311 containerd[1567]: time="2025-05-16T05:29:39.756253487Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 16 05:29:39.756456 containerd[1567]: time="2025-05-16T05:29:39.756409219Z" level=info msg="metadata content store policy set" policy=shared
May 16 05:29:39.761066 containerd[1567]: time="2025-05-16T05:29:39.761031018Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 16 05:29:39.761118 containerd[1567]: time="2025-05-16T05:29:39.761084759Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 16 05:29:39.761118 containerd[1567]: time="2025-05-16T05:29:39.761100358Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 16 05:29:39.761118 containerd[1567]: time="2025-05-16T05:29:39.761113563Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 16 05:29:39.761174 containerd[1567]: time="2025-05-16T05:29:39.761125996Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 16 05:29:39.761174 containerd[1567]: time="2025-05-16T05:29:39.761137518Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 16 05:29:39.761174 containerd[1567]: time="2025-05-16T05:29:39.761149350Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 16 05:29:39.761174 containerd[1567]: time="2025-05-16T05:29:39.761161803Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 16 05:29:39.761174 containerd[1567]: time="2025-05-16T05:29:39.761172974Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 16 05:29:39.761304 containerd[1567]: time="2025-05-16T05:29:39.761184476Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 16 05:29:39.761304 containerd[1567]: time="2025-05-16T05:29:39.761194324Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 16 05:29:39.761304 containerd[1567]: time="2025-05-16T05:29:39.761207108Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 16 05:29:39.761377 containerd[1567]: time="2025-05-16T05:29:39.761358001Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 16 05:29:39.761398 containerd[1567]: time="2025-05-16T05:29:39.761380854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 16 05:29:39.761398 containerd[1567]: time="2025-05-16T05:29:39.761394059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 16 05:29:39.761442 containerd[1567]: time="2025-05-16T05:29:39.761404849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 16 05:29:39.761442 containerd[1567]: time="2025-05-16T05:29:39.761415790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 16 05:29:39.761442 containerd[1567]: time="2025-05-16T05:29:39.761426901Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 16 05:29:39.761500 containerd[1567]: time="2025-05-16T05:29:39.761443241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 16 05:29:39.761500 containerd[1567]: time="2025-05-16T05:29:39.761454052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 16 05:29:39.761500 containerd[1567]: time="2025-05-16T05:29:39.761464241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 16 05:29:39.761500 containerd[1567]: time="2025-05-16T05:29:39.761475281Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 16 05:29:39.761500 containerd[1567]: time="2025-05-16T05:29:39.761485961Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 16 05:29:39.761612 containerd[1567]: time="2025-05-16T05:29:39.761546715Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 16 05:29:39.761612 containerd[1567]: time="2025-05-16T05:29:39.761559139Z" level=info msg="Start snapshots syncer"
May 16 05:29:39.761612 containerd[1567]: time="2025-05-16T05:29:39.761604734Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 16 05:29:39.761827 containerd[1567]: time="2025-05-16T05:29:39.761793588Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 16 05:29:39.761921 containerd[1567]: time="2025-05-16T05:29:39.761838543Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 16 05:29:39.761942 containerd[1567]: time="2025-05-16T05:29:39.761921248Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 16 05:29:39.762045 containerd[1567]: time="2025-05-16T05:29:39.762026315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 16 05:29:39.762072 containerd[1567]: time="2025-05-16T05:29:39.762055800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 16 05:29:39.762072 containerd[1567]: time="2025-05-16T05:29:39.762066861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 16 05:29:39.762109 containerd[1567]: time="2025-05-16T05:29:39.762076750Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 16 05:29:39.762109 containerd[1567]: time="2025-05-16T05:29:39.762088151Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 16 05:29:39.762109 containerd[1567]: time="2025-05-16T05:29:39.762098951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 16 05:29:39.762161 containerd[1567]: time="2025-05-16T05:29:39.762110854Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 16 05:29:39.762161 containerd[1567]: time="2025-05-16T05:29:39.762132083Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 16 05:29:39.762161 containerd[1567]: time="2025-05-16T05:29:39.762144937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 16 05:29:39.762161 containerd[1567]: time="2025-05-16T05:29:39.762155207Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 16 05:29:39.762263 containerd[1567]: time="2025-05-16T05:29:39.762190012Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 16 05:29:39.762263 containerd[1567]: time="2025-05-16T05:29:39.762204900Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 16 05:29:39.762263 containerd[1567]: time="2025-05-16T05:29:39.762213476Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 16 05:29:39.762263 containerd[1567]: time="2025-05-16T05:29:39.762230418Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 16 05:29:39.762263 containerd[1567]: time="2025-05-16T05:29:39.762238242Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 16 05:29:39.762263 containerd[1567]: time="2025-05-16T05:29:39.762249023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 16 05:29:39.762263 containerd[1567]: time="2025-05-16T05:29:39.762258611Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 16 05:29:39.762390 containerd[1567]: time="2025-05-16T05:29:39.762271274Z" level=info msg="runtime interface created"
May 16 05:29:39.762390 containerd[1567]: time="2025-05-16T05:29:39.762277206Z" level=info msg="created NRI interface"
May 16 05:29:39.762390 containerd[1567]: time="2025-05-16T05:29:39.762285862Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 16 05:29:39.762390 containerd[1567]: time="2025-05-16T05:29:39.762297063Z" level=info msg="Connect containerd service"
May 16 05:29:39.762390 containerd[1567]: time="2025-05-16T05:29:39.762319204Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 16 05:29:39.763076 containerd[1567]: time="2025-05-16T05:29:39.763054023Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 05:29:39.770100 tar[1561]: linux-amd64/README.md
May 16 05:29:39.793351 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 16 05:29:39.849276 containerd[1567]: time="2025-05-16T05:29:39.849226177Z" level=info msg="Start subscribing containerd event"
May 16 05:29:39.849379 containerd[1567]: time="2025-05-16T05:29:39.849289826Z" level=info msg="Start recovering state"
May 16 05:29:39.849429 containerd[1567]: time="2025-05-16T05:29:39.849410823Z" level=info msg="Start event monitor"
May 16 05:29:39.849467 containerd[1567]: time="2025-05-16T05:29:39.849415111Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 16 05:29:39.849541 containerd[1567]: time="2025-05-16T05:29:39.849440649Z" level=info msg="Start cni network conf syncer for default"
May 16 05:29:39.849590 containerd[1567]: time="2025-05-16T05:29:39.849543472Z" level=info msg="Start streaming server"
May 16 05:29:39.849590 containerd[1567]: time="2025-05-16T05:29:39.849556957Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 16 05:29:39.849590 containerd[1567]: time="2025-05-16T05:29:39.849564301Z" level=info msg="runtime interface starting up..."
May 16 05:29:39.849590 containerd[1567]: time="2025-05-16T05:29:39.849583347Z" level=info msg="starting plugins..."
May 16 05:29:39.849683 containerd[1567]: time="2025-05-16T05:29:39.849613614Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 16 05:29:39.849683 containerd[1567]: time="2025-05-16T05:29:39.849506623Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 16 05:29:39.849820 containerd[1567]: time="2025-05-16T05:29:39.849797859Z" level=info msg="containerd successfully booted in 0.104530s"
May 16 05:29:39.849920 systemd[1]: Started containerd.service - containerd container runtime.
May 16 05:29:40.135714 systemd-networkd[1488]: eth0: Gained IPv6LL
May 16 05:29:40.138774 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 16 05:29:40.140551 systemd[1]: Reached target network-online.target - Network is Online.
May 16 05:29:40.143095 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 16 05:29:40.145422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 05:29:40.164932 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 16 05:29:40.188797 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 16 05:29:40.190561 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 16 05:29:40.190835 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 16 05:29:40.193026 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 16 05:29:40.873883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 05:29:40.875663 systemd[1]: Reached target multi-user.target - Multi-User System.
May 16 05:29:40.877041 systemd[1]: Startup finished in 2.821s (kernel) + 6.742s (initrd) + 4.167s (userspace) = 13.731s.
May 16 05:29:40.879553 (kubelet)[1667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 05:29:41.337453 kubelet[1667]: E0516 05:29:41.337332 1667 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 05:29:41.341937 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 05:29:41.342129 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 05:29:41.342560 systemd[1]: kubelet.service: Consumed 1.017s CPU time, 265.9M memory peak.
May 16 05:29:43.638174 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 16 05:29:43.640283 systemd[1]: Started sshd@0-10.0.0.148:22-10.0.0.1:43222.service - OpenSSH per-connection server daemon (10.0.0.1:43222).
May 16 05:29:43.701549 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 43222 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg
May 16 05:29:43.703865 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:29:43.710102 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 16 05:29:43.711174 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 16 05:29:43.717031 systemd-logind[1546]: New session 1 of user core.
May 16 05:29:43.733347 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 16 05:29:43.736417 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 16 05:29:43.754139 (systemd)[1684]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 16 05:29:43.756291 systemd-logind[1546]: New session c1 of user core.
May 16 05:29:43.905321 systemd[1684]: Queued start job for default target default.target.
May 16 05:29:43.922761 systemd[1684]: Created slice app.slice - User Application Slice.
May 16 05:29:43.922786 systemd[1684]: Reached target paths.target - Paths.
May 16 05:29:43.922822 systemd[1684]: Reached target timers.target - Timers.
May 16 05:29:43.924251 systemd[1684]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 16 05:29:43.934901 systemd[1684]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 16 05:29:43.935018 systemd[1684]: Reached target sockets.target - Sockets.
May 16 05:29:43.935057 systemd[1684]: Reached target basic.target - Basic System.
May 16 05:29:43.935095 systemd[1684]: Reached target default.target - Main User Target.
May 16 05:29:43.935133 systemd[1684]: Startup finished in 171ms.
May 16 05:29:43.935503 systemd[1]: Started user@500.service - User Manager for UID 500.
May 16 05:29:43.937086 systemd[1]: Started session-1.scope - Session 1 of User core.
May 16 05:29:44.000286 systemd[1]: Started sshd@1-10.0.0.148:22-10.0.0.1:43224.service - OpenSSH per-connection server daemon (10.0.0.1:43224).
May 16 05:29:44.048026 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 43224 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg
May 16 05:29:44.049582 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:29:44.053695 systemd-logind[1546]: New session 2 of user core.
May 16 05:29:44.061701 systemd[1]: Started session-2.scope - Session 2 of User core.
May 16 05:29:44.114290 sshd[1697]: Connection closed by 10.0.0.1 port 43224
May 16 05:29:44.114609 sshd-session[1695]: pam_unix(sshd:session): session closed for user core
May 16 05:29:44.134864 systemd[1]: sshd@1-10.0.0.148:22-10.0.0.1:43224.service: Deactivated successfully.
May 16 05:29:44.136308 systemd[1]: session-2.scope: Deactivated successfully.
May 16 05:29:44.137042 systemd-logind[1546]: Session 2 logged out. Waiting for processes to exit.
May 16 05:29:44.139774 systemd[1]: Started sshd@2-10.0.0.148:22-10.0.0.1:43232.service - OpenSSH per-connection server daemon (10.0.0.1:43232).
May 16 05:29:44.140342 systemd-logind[1546]: Removed session 2.
May 16 05:29:44.182181 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 43232 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg
May 16 05:29:44.183331 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:29:44.187334 systemd-logind[1546]: New session 3 of user core.
May 16 05:29:44.196697 systemd[1]: Started session-3.scope - Session 3 of User core.
May 16 05:29:44.245389 sshd[1705]: Connection closed by 10.0.0.1 port 43232
May 16 05:29:44.245723 sshd-session[1703]: pam_unix(sshd:session): session closed for user core
May 16 05:29:44.255945 systemd[1]: sshd@2-10.0.0.148:22-10.0.0.1:43232.service: Deactivated successfully.
May 16 05:29:44.257791 systemd[1]: session-3.scope: Deactivated successfully.
May 16 05:29:44.258411 systemd-logind[1546]: Session 3 logged out. Waiting for processes to exit.
May 16 05:29:44.261305 systemd[1]: Started sshd@3-10.0.0.148:22-10.0.0.1:43248.service - OpenSSH per-connection server daemon (10.0.0.1:43248).
May 16 05:29:44.261809 systemd-logind[1546]: Removed session 3.
May 16 05:29:44.309634 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 43248 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg
May 16 05:29:44.310958 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:29:44.314752 systemd-logind[1546]: New session 4 of user core.
May 16 05:29:44.329694 systemd[1]: Started session-4.scope - Session 4 of User core.
May 16 05:29:44.383043 sshd[1713]: Connection closed by 10.0.0.1 port 43248
May 16 05:29:44.383393 sshd-session[1711]: pam_unix(sshd:session): session closed for user core
May 16 05:29:44.396018 systemd[1]: sshd@3-10.0.0.148:22-10.0.0.1:43248.service: Deactivated successfully.
May 16 05:29:44.397710 systemd[1]: session-4.scope: Deactivated successfully.
May 16 05:29:44.398377 systemd-logind[1546]: Session 4 logged out. Waiting for processes to exit.
May 16 05:29:44.400869 systemd[1]: Started sshd@4-10.0.0.148:22-10.0.0.1:43252.service - OpenSSH per-connection server daemon (10.0.0.1:43252).
May 16 05:29:44.401437 systemd-logind[1546]: Removed session 4.
May 16 05:29:44.440539 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 43252 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg
May 16 05:29:44.441773 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:29:44.445714 systemd-logind[1546]: New session 5 of user core.
May 16 05:29:44.455680 systemd[1]: Started session-5.scope - Session 5 of User core.
May 16 05:29:44.511740 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 16 05:29:44.512042 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 05:29:44.527937 sudo[1722]: pam_unix(sudo:session): session closed for user root
May 16 05:29:44.529553 sshd[1721]: Connection closed by 10.0.0.1 port 43252
May 16 05:29:44.529906 sshd-session[1719]: pam_unix(sshd:session): session closed for user core
May 16 05:29:44.546156 systemd[1]: sshd@4-10.0.0.148:22-10.0.0.1:43252.service: Deactivated successfully.
May 16 05:29:44.547917 systemd[1]: session-5.scope: Deactivated successfully.
May 16 05:29:44.548596 systemd-logind[1546]: Session 5 logged out. Waiting for processes to exit.
May 16 05:29:44.551548 systemd[1]: Started sshd@5-10.0.0.148:22-10.0.0.1:43256.service - OpenSSH per-connection server daemon (10.0.0.1:43256).
May 16 05:29:44.552147 systemd-logind[1546]: Removed session 5.
May 16 05:29:44.601808 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 43256 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg
May 16 05:29:44.603272 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:29:44.607658 systemd-logind[1546]: New session 6 of user core.
May 16 05:29:44.628690 systemd[1]: Started session-6.scope - Session 6 of User core.
May 16 05:29:44.681348 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 16 05:29:44.681649 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 05:29:44.896381 sudo[1732]: pam_unix(sudo:session): session closed for user root
May 16 05:29:44.902878 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 16 05:29:44.903185 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 05:29:44.913067 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 05:29:44.959456 augenrules[1754]: No rules
May 16 05:29:44.961336 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 05:29:44.961679 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 05:29:44.962776 sudo[1731]: pam_unix(sudo:session): session closed for user root
May 16 05:29:44.964363 sshd[1730]: Connection closed by 10.0.0.1 port 43256
May 16 05:29:44.964629 sshd-session[1728]: pam_unix(sshd:session): session closed for user core
May 16 05:29:44.973039 systemd[1]: sshd@5-10.0.0.148:22-10.0.0.1:43256.service: Deactivated successfully.
May 16 05:29:44.974917 systemd[1]: session-6.scope: Deactivated successfully.
May 16 05:29:44.975596 systemd-logind[1546]: Session 6 logged out. Waiting for processes to exit.
May 16 05:29:44.978501 systemd[1]: Started sshd@6-10.0.0.148:22-10.0.0.1:43270.service - OpenSSH per-connection server daemon (10.0.0.1:43270).
May 16 05:29:44.979037 systemd-logind[1546]: Removed session 6.
May 16 05:29:45.028027 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 43270 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg
May 16 05:29:45.029318 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:29:45.033386 systemd-logind[1546]: New session 7 of user core.
May 16 05:29:45.042689 systemd[1]: Started session-7.scope - Session 7 of User core.
May 16 05:29:45.094831 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 16 05:29:45.095128 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 05:29:45.386480 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 16 05:29:45.401875 (dockerd)[1788]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 16 05:29:45.616122 dockerd[1788]: time="2025-05-16T05:29:45.616043869Z" level=info msg="Starting up"
May 16 05:29:45.617774 dockerd[1788]: time="2025-05-16T05:29:45.617730833Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 16 05:29:46.755258 dockerd[1788]: time="2025-05-16T05:29:46.755210379Z" level=info msg="Loading containers: start."
May 16 05:29:46.801587 kernel: Initializing XFRM netlink socket
May 16 05:29:47.022436 systemd-networkd[1488]: docker0: Link UP
May 16 05:29:47.027513 dockerd[1788]: time="2025-05-16T05:29:47.027469249Z" level=info msg="Loading containers: done."
May 16 05:29:47.041511 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck685615416-merged.mount: Deactivated successfully.
May 16 05:29:47.043297 dockerd[1788]: time="2025-05-16T05:29:47.043256371Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 16 05:29:47.043356 dockerd[1788]: time="2025-05-16T05:29:47.043333446Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 16 05:29:47.043467 dockerd[1788]: time="2025-05-16T05:29:47.043444554Z" level=info msg="Initializing buildkit"
May 16 05:29:47.071242 dockerd[1788]: time="2025-05-16T05:29:47.071205596Z" level=info msg="Completed buildkit initialization"
May 16 05:29:47.075465 dockerd[1788]: time="2025-05-16T05:29:47.075424760Z" level=info msg="Daemon has completed initialization"
May 16 05:29:47.075525 dockerd[1788]: time="2025-05-16T05:29:47.075483761Z" level=info msg="API listen on /run/docker.sock"
May 16 05:29:47.075666 systemd[1]: Started docker.service - Docker Application Container Engine.
May 16 05:29:47.609997 containerd[1567]: time="2025-05-16T05:29:47.609957942Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\""
May 16 05:29:48.181057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3300704493.mount: Deactivated successfully.
May 16 05:29:49.428188 containerd[1567]: time="2025-05-16T05:29:49.428130734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:49.428977 containerd[1567]: time="2025-05-16T05:29:49.428931316Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075403"
May 16 05:29:49.430222 containerd[1567]: time="2025-05-16T05:29:49.430171291Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:49.432511 containerd[1567]: time="2025-05-16T05:29:49.432459243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:49.434109 containerd[1567]: time="2025-05-16T05:29:49.434063723Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 1.824065726s"
May 16 05:29:49.434174 containerd[1567]: time="2025-05-16T05:29:49.434109198Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\""
May 16 05:29:49.434671 containerd[1567]: time="2025-05-16T05:29:49.434630556Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\""
May 16 05:29:50.753467 containerd[1567]: time="2025-05-16T05:29:50.753408647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:50.754097 containerd[1567]: time="2025-05-16T05:29:50.754044629Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011390"
May 16 05:29:50.755185 containerd[1567]: time="2025-05-16T05:29:50.755156124Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:50.757501 containerd[1567]: time="2025-05-16T05:29:50.757472439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:50.758279 containerd[1567]: time="2025-05-16T05:29:50.758231803Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.323571482s"
May 16 05:29:50.758279 containerd[1567]: time="2025-05-16T05:29:50.758261269Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\""
May 16 05:29:50.758810 containerd[1567]: time="2025-05-16T05:29:50.758788638Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\""
May 16 05:29:51.592519 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 16 05:29:51.594709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 05:29:51.781998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 05:29:51.797848 (kubelet)[2069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 05:29:51.831039 kubelet[2069]: E0516 05:29:51.830995 2069 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 05:29:51.837853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 05:29:51.838056 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 05:29:51.838409 systemd[1]: kubelet.service: Consumed 208ms CPU time, 108.6M memory peak.
May 16 05:29:52.470666 containerd[1567]: time="2025-05-16T05:29:52.470563859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:52.471326 containerd[1567]: time="2025-05-16T05:29:52.471264764Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148960"
May 16 05:29:52.472452 containerd[1567]: time="2025-05-16T05:29:52.472396667Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:52.475042 containerd[1567]: time="2025-05-16T05:29:52.475011341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:52.475843 containerd[1567]: time="2025-05-16T05:29:52.475810961Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.716996535s"
May 16 05:29:52.475879 containerd[1567]: time="2025-05-16T05:29:52.475842570Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\""
May 16 05:29:52.476325 containerd[1567]: time="2025-05-16T05:29:52.476275422Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\""
May 16 05:29:53.359166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3164204164.mount: Deactivated successfully.
May 16 05:29:54.151086 containerd[1567]: time="2025-05-16T05:29:54.150995869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:54.152107 containerd[1567]: time="2025-05-16T05:29:54.152069864Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075"
May 16 05:29:54.153449 containerd[1567]: time="2025-05-16T05:29:54.153403265Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:54.155298 containerd[1567]: time="2025-05-16T05:29:54.155254657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:54.155806 containerd[1567]: time="2025-05-16T05:29:54.155758953Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 1.679453775s"
May 16 05:29:54.155806 containerd[1567]: time="2025-05-16T05:29:54.155801032Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\""
May 16 05:29:54.156329 containerd[1567]: time="2025-05-16T05:29:54.156290190Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
May 16 05:29:54.728095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1453094236.mount: Deactivated successfully.
May 16 05:29:55.383944 containerd[1567]: time="2025-05-16T05:29:55.383886580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:55.384531 containerd[1567]: time="2025-05-16T05:29:55.384496924Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
May 16 05:29:55.385881 containerd[1567]: time="2025-05-16T05:29:55.385821098Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:55.388315 containerd[1567]: time="2025-05-16T05:29:55.388284749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:55.389178 containerd[1567]: time="2025-05-16T05:29:55.389129243Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.232810309s"
May 16 05:29:55.389178 containerd[1567]: time="2025-05-16T05:29:55.389173867Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
May 16 05:29:55.389754 containerd[1567]: time="2025-05-16T05:29:55.389714080Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 16 05:29:55.836692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount983330437.mount: Deactivated successfully.
May 16 05:29:55.842892 containerd[1567]: time="2025-05-16T05:29:55.842813205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 05:29:55.843533 containerd[1567]: time="2025-05-16T05:29:55.843493912Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 16 05:29:55.844890 containerd[1567]: time="2025-05-16T05:29:55.844843313Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 05:29:55.847081 containerd[1567]: time="2025-05-16T05:29:55.847000599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 05:29:55.847764 containerd[1567]: time="2025-05-16T05:29:55.847729857Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 457.978797ms"
May 16 05:29:55.847799 containerd[1567]: time="2025-05-16T05:29:55.847765253Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 16 05:29:55.848314 containerd[1567]: time="2025-05-16T05:29:55.848284187Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
May 16 05:29:58.348711 containerd[1567]: time="2025-05-16T05:29:58.348658705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:58.349505 containerd[1567]: time="2025-05-16T05:29:58.349450670Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739"
May 16 05:29:58.350800 containerd[1567]: time="2025-05-16T05:29:58.350748945Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:58.353190 containerd[1567]: time="2025-05-16T05:29:58.353163054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:29:58.354255 containerd[1567]: time="2025-05-16T05:29:58.354220597Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.505912104s"
May 16 05:29:58.354301 containerd[1567]: time="2025-05-16T05:29:58.354254280Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
May 16 05:30:01.238131 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 05:30:01.238299 systemd[1]: kubelet.service: Consumed 208ms CPU time, 108.6M memory peak.
May 16 05:30:01.240388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 05:30:01.264539 systemd[1]: Reload requested from client PID 2185 ('systemctl') (unit session-7.scope)...
May 16 05:30:01.264555 systemd[1]: Reloading...
May 16 05:30:01.346618 zram_generator::config[2227]: No configuration found.
May 16 05:30:01.622729 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 05:30:01.734802 systemd[1]: Reloading finished in 469 ms.
May 16 05:30:01.798195 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 16 05:30:01.798286 systemd[1]: kubelet.service: Failed with result 'signal'.
May 16 05:30:01.798557 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 05:30:01.798609 systemd[1]: kubelet.service: Consumed 142ms CPU time, 98.3M memory peak.
May 16 05:30:01.800034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 05:30:01.990356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 05:30:02.006847 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 16 05:30:02.044398 kubelet[2275]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 05:30:02.044398 kubelet[2275]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 16 05:30:02.044398 kubelet[2275]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 05:30:02.044768 kubelet[2275]: I0516 05:30:02.044433 2275 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 16 05:30:02.803582 kubelet[2275]: I0516 05:30:02.803514 2275 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 16 05:30:02.803582 kubelet[2275]: I0516 05:30:02.803559 2275 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 16 05:30:02.803884 kubelet[2275]: I0516 05:30:02.803855 2275 server.go:956] "Client rotation is on, will bootstrap in background"
May 16 05:30:02.826892 kubelet[2275]: E0516 05:30:02.826860 2275 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
May 16 05:30:02.827183 kubelet[2275]: I0516 05:30:02.827168 2275 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 16 05:30:02.832979 kubelet[2275]: I0516 05:30:02.832952 2275 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 16 05:30:02.839019 kubelet[2275]: I0516 05:30:02.838536 2275 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 05:30:02.839019 kubelet[2275]: I0516 05:30:02.838869 2275 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 16 05:30:02.839370 kubelet[2275]: I0516 05:30:02.838893 2275 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 05:30:02.839469 kubelet[2275]: I0516 05:30:02.839392 2275 topology_manager.go:138] "Creating topology manager with none policy"
May 16 05:30:02.839469 kubelet[2275]: I0516 05:30:02.839412 2275 container_manager_linux.go:303] "Creating device plugin manager"
May 16 05:30:02.839706 kubelet[2275]: I0516 05:30:02.839688 2275 state_mem.go:36] "Initialized new in-memory state store"
May 16 05:30:02.841527 kubelet[2275]: I0516 05:30:02.841505 2275 kubelet.go:480] "Attempting to sync node with API server"
May 16 05:30:02.841527 kubelet[2275]: I0516 05:30:02.841522 2275 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 05:30:02.841612 kubelet[2275]: I0516 05:30:02.841544 2275 kubelet.go:386] "Adding apiserver pod source"
May 16 05:30:02.842960 kubelet[2275]: I0516 05:30:02.842800 2275 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 05:30:02.847475 kubelet[2275]: I0516 05:30:02.847462 2275 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 16 05:30:02.847910 kubelet[2275]: E0516 05:30:02.847874 2275 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 16 05:30:02.847910 kubelet[2275]: E0516 05:30:02.847875 2275 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 16 05:30:02.848073 kubelet[2275]: I0516 05:30:02.848037 2275 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 16 05:30:02.849246 kubelet[2275]: W0516 05:30:02.849204 2275 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 16 05:30:02.851868 kubelet[2275]: I0516 05:30:02.851844 2275 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 16 05:30:02.851925 kubelet[2275]: I0516 05:30:02.851899 2275 server.go:1289] "Started kubelet"
May 16 05:30:02.858815 kubelet[2275]: E0516 05:30:02.857181 2275 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.148:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183feada16f37418 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 05:30:02.851865624 +0000 UTC m=+0.841341718,LastTimestamp:2025-05-16 05:30:02.851865624 +0000 UTC m=+0.841341718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 16 05:30:02.858990 kubelet[2275]: I0516 05:30:02.858966 2275 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 16 05:30:02.859271 kubelet[2275]: I0516 05:30:02.859029 2275 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 16 05:30:02.859303 kubelet[2275]: I0516 05:30:02.859285 2275 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 16 05:30:02.859516 kubelet[2275]: I0516 05:30:02.859344 2275 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 16 05:30:02.859516 kubelet[2275]: I0516 05:30:02.859374 2275 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 16 05:30:02.859591 kubelet[2275]: I0516 05:30:02.859554 2275 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 16 05:30:02.859676 kubelet[2275]: I0516 05:30:02.859649 2275 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 16 05:30:02.859760 kubelet[2275]: I0516 05:30:02.859737 2275 reconciler.go:26] "Reconciler: start to sync state"
May 16 05:30:02.860017 kubelet[2275]: E0516 05:30:02.859988 2275 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 16 05:30:02.860584 kubelet[2275]: E0516 05:30:02.860137 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:02.860584 kubelet[2275]: E0516 05:30:02.860204 2275 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="200ms"
May 16 05:30:02.860584 kubelet[2275]: I0516 05:30:02.860286 2275 server.go:317] "Adding debug handlers to kubelet server"
May 16 05:30:02.862077 kubelet[2275]: E0516 05:30:02.862055 2275 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 16 05:30:02.862712 kubelet[2275]: I0516 05:30:02.862681 2275 factory.go:223] Registration of the containerd container factory successfully
May 16 05:30:02.862712 kubelet[2275]: I0516 05:30:02.862697 2275 factory.go:223] Registration of the systemd container factory successfully
May 16 05:30:02.862818 kubelet[2275]: I0516 05:30:02.862797 2275 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 16 05:30:02.876420 kubelet[2275]: I0516 05:30:02.876380 2275 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 16 05:30:02.876527 kubelet[2275]: I0516 05:30:02.876500 2275 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 16 05:30:02.876527 kubelet[2275]: I0516 05:30:02.876517 2275 state_mem.go:36] "Initialized new in-memory state store"
May 16 05:30:02.879044 kubelet[2275]: I0516 05:30:02.879009 2275 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 16 05:30:02.880699 kubelet[2275]: I0516 05:30:02.880355 2275 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 16 05:30:02.880699 kubelet[2275]: I0516 05:30:02.880386 2275 status_manager.go:230] "Starting to sync pod status with apiserver"
May 16 05:30:02.880699 kubelet[2275]: I0516 05:30:02.880403 2275 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 16 05:30:02.880699 kubelet[2275]: I0516 05:30:02.880411 2275 kubelet.go:2436] "Starting kubelet main sync loop"
May 16 05:30:02.880699 kubelet[2275]: E0516 05:30:02.880448 2275 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 16 05:30:02.960501 kubelet[2275]: E0516 05:30:02.960447 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:02.980666 kubelet[2275]: E0516 05:30:02.980591 2275 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 16 05:30:03.060641 kubelet[2275]: E0516 05:30:03.060509 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:03.061079 kubelet[2275]: E0516 05:30:03.060922 2275 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="400ms"
May 16 05:30:03.161512 kubelet[2275]: E0516 05:30:03.161459 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:03.181712 kubelet[2275]: E0516 05:30:03.181666 2275 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 16 05:30:03.262124 kubelet[2275]: E0516 05:30:03.262080 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:03.303173 kubelet[2275]: E0516 05:30:03.303122 2275 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 16 05:30:03.362654 kubelet[2275]: E0516 05:30:03.362484 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:03.462483 kubelet[2275]: E0516 05:30:03.462404 2275 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="800ms"
May 16 05:30:03.463584 kubelet[2275]: E0516 05:30:03.463521 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:03.564110 kubelet[2275]: E0516 05:30:03.564059 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:03.567060 kubelet[2275]: I0516 05:30:03.567005 2275 policy_none.go:49] "None policy: Start"
May 16 05:30:03.567060 kubelet[2275]: I0516 05:30:03.567040 2275 memory_manager.go:186] "Starting memorymanager" policy="None"
May 16 05:30:03.567060 kubelet[2275]: I0516 05:30:03.567054 2275 state_mem.go:35] "Initializing new in-memory state store"
May 16 05:30:03.581794 kubelet[2275]: E0516 05:30:03.581761 2275 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 16 05:30:03.582015 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 16 05:30:03.603470 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 16 05:30:03.606737 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 16 05:30:03.626894 kubelet[2275]: E0516 05:30:03.626401 2275 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 16 05:30:03.626894 kubelet[2275]: I0516 05:30:03.626653 2275 eviction_manager.go:189] "Eviction manager: starting control loop"
May 16 05:30:03.626894 kubelet[2275]: I0516 05:30:03.626662 2275 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 16 05:30:03.626894 kubelet[2275]: I0516 05:30:03.626857 2275 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 16 05:30:03.628186 kubelet[2275]: E0516 05:30:03.628129 2275 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 16 05:30:03.628186 kubelet[2275]: E0516 05:30:03.628166 2275 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 16 05:30:03.728252 kubelet[2275]: I0516 05:30:03.728216 2275 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 16 05:30:03.728625 kubelet[2275]: E0516 05:30:03.728596 2275 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost"
May 16 05:30:03.820376 kubelet[2275]: E0516 05:30:03.820344 2275 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 16 05:30:03.929960 kubelet[2275]: I0516 05:30:03.929869 2275 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 16 05:30:03.930314 kubelet[2275]: E0516 05:30:03.930262 2275 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost"
May 16 05:30:04.071838 kubelet[2275]: E0516 05:30:04.071787 2275 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 16 05:30:04.260283 kubelet[2275]: E0516 05:30:04.260134 2275 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 16 05:30:04.263703 kubelet[2275]: E0516 05:30:04.263648 2275 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="1.6s"
May 16 05:30:04.331952 kubelet[2275]: I0516 05:30:04.331926 2275 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 16 05:30:04.332162 kubelet[2275]: E0516 05:30:04.332127 2275 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost"
May 16 05:30:04.356755 kubelet[2275]: E0516 05:30:04.356723 2275 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 16 05:30:04.392342 systemd[1]: Created slice kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice - libcontainer container kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice.
May 16 05:30:04.404411 kubelet[2275]: E0516 05:30:04.404382 2275 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 16 05:30:04.407294 systemd[1]: Created slice kubepods-burstable-podc3b57fc1482827bd33440b9d929a4d5a.slice - libcontainer container kubepods-burstable-podc3b57fc1482827bd33440b9d929a4d5a.slice.
May 16 05:30:04.409138 kubelet[2275]: E0516 05:30:04.409105 2275 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 16 05:30:04.411496 systemd[1]: Created slice kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice - libcontainer container kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice.
May 16 05:30:04.413006 kubelet[2275]: E0516 05:30:04.412970 2275 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 05:30:04.469463 kubelet[2275]: I0516 05:30:04.469404 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:30:04.469463 kubelet[2275]: I0516 05:30:04.469449 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c3b57fc1482827bd33440b9d929a4d5a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c3b57fc1482827bd33440b9d929a4d5a\") " pod="kube-system/kube-apiserver-localhost" May 16 05:30:04.469666 kubelet[2275]: I0516 05:30:04.469471 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3b57fc1482827bd33440b9d929a4d5a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c3b57fc1482827bd33440b9d929a4d5a\") " pod="kube-system/kube-apiserver-localhost" May 16 05:30:04.469666 kubelet[2275]: I0516 05:30:04.469490 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:30:04.469835 kubelet[2275]: I0516 05:30:04.469804 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:30:04.469878 kubelet[2275]: I0516 05:30:04.469834 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:30:04.469878 kubelet[2275]: I0516 05:30:04.469855 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 16 05:30:04.469878 kubelet[2275]: I0516 05:30:04.469872 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3b57fc1482827bd33440b9d929a4d5a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c3b57fc1482827bd33440b9d929a4d5a\") " pod="kube-system/kube-apiserver-localhost" May 16 05:30:04.469972 kubelet[2275]: I0516 05:30:04.469893 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:30:04.705961 kubelet[2275]: E0516 05:30:04.705797 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:04.706489 containerd[1567]: time="2025-05-16T05:30:04.706444714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,}" May 16 05:30:04.709747 kubelet[2275]: E0516 05:30:04.709727 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:04.710200 containerd[1567]: time="2025-05-16T05:30:04.710164301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c3b57fc1482827bd33440b9d929a4d5a,Namespace:kube-system,Attempt:0,}" May 16 05:30:04.713345 kubelet[2275]: E0516 05:30:04.713320 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:04.713647 containerd[1567]: time="2025-05-16T05:30:04.713622748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,}" May 16 05:30:04.737151 containerd[1567]: time="2025-05-16T05:30:04.735806198Z" level=info msg="connecting to shim 50267bdd6fec356fd02b5278a8b37754c6e17509ae3ffb279258a0fedbc8a0a8" address="unix:///run/containerd/s/8407d669abc1bb82d174f8e0f5bd36c56905b7710e1c0cf9f3492b9a5bc547f8" namespace=k8s.io protocol=ttrpc version=3 May 16 05:30:04.747548 containerd[1567]: time="2025-05-16T05:30:04.747492077Z" level=info msg="connecting to shim 0afd1dae15e918eb8c7e75a45aceabfc98439eacecf7ed8de9df9177cd1c0e08" address="unix:///run/containerd/s/14c20156a60609d8d9093e61123e271ba45939a7aedfe1e7a2a5b8fcd70304ca" namespace=k8s.io protocol=ttrpc version=3 May 16 05:30:04.758304 containerd[1567]: time="2025-05-16T05:30:04.758257059Z" level=info msg="connecting to shim 
337976b53c32ded87db5cfb842f2bc2f48d912b947ef6c57b3fe01cde0bc7222" address="unix:///run/containerd/s/f8b55d5de0bc9e801346df7887e40158b95bb1614f46e36a81d37c23755c3f60" namespace=k8s.io protocol=ttrpc version=3 May 16 05:30:04.764746 systemd[1]: Started cri-containerd-50267bdd6fec356fd02b5278a8b37754c6e17509ae3ffb279258a0fedbc8a0a8.scope - libcontainer container 50267bdd6fec356fd02b5278a8b37754c6e17509ae3ffb279258a0fedbc8a0a8. May 16 05:30:04.772345 systemd[1]: Started cri-containerd-0afd1dae15e918eb8c7e75a45aceabfc98439eacecf7ed8de9df9177cd1c0e08.scope - libcontainer container 0afd1dae15e918eb8c7e75a45aceabfc98439eacecf7ed8de9df9177cd1c0e08. May 16 05:30:04.781311 systemd[1]: Started cri-containerd-337976b53c32ded87db5cfb842f2bc2f48d912b947ef6c57b3fe01cde0bc7222.scope - libcontainer container 337976b53c32ded87db5cfb842f2bc2f48d912b947ef6c57b3fe01cde0bc7222. May 16 05:30:04.812084 containerd[1567]: time="2025-05-16T05:30:04.812033819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"50267bdd6fec356fd02b5278a8b37754c6e17509ae3ffb279258a0fedbc8a0a8\"" May 16 05:30:04.813345 kubelet[2275]: E0516 05:30:04.813326 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:04.819977 containerd[1567]: time="2025-05-16T05:30:04.819928387Z" level=info msg="CreateContainer within sandbox \"50267bdd6fec356fd02b5278a8b37754c6e17509ae3ffb279258a0fedbc8a0a8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 05:30:04.820838 containerd[1567]: time="2025-05-16T05:30:04.820798579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c3b57fc1482827bd33440b9d929a4d5a,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"0afd1dae15e918eb8c7e75a45aceabfc98439eacecf7ed8de9df9177cd1c0e08\"" May 16 05:30:04.821910 kubelet[2275]: E0516 05:30:04.821879 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:04.826242 containerd[1567]: time="2025-05-16T05:30:04.826163893Z" level=info msg="CreateContainer within sandbox \"0afd1dae15e918eb8c7e75a45aceabfc98439eacecf7ed8de9df9177cd1c0e08\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 05:30:04.829521 containerd[1567]: time="2025-05-16T05:30:04.829495602Z" level=info msg="Container c3e9660678779951ec2d5644e2e84eb87319d39077661d1b2794490bc5f9f2ef: CDI devices from CRI Config.CDIDevices: []" May 16 05:30:04.829590 containerd[1567]: time="2025-05-16T05:30:04.829550896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,} returns sandbox id \"337976b53c32ded87db5cfb842f2bc2f48d912b947ef6c57b3fe01cde0bc7222\"" May 16 05:30:04.830203 kubelet[2275]: E0516 05:30:04.830078 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:04.833931 containerd[1567]: time="2025-05-16T05:30:04.833901256Z" level=info msg="CreateContainer within sandbox \"337976b53c32ded87db5cfb842f2bc2f48d912b947ef6c57b3fe01cde0bc7222\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 05:30:04.840529 containerd[1567]: time="2025-05-16T05:30:04.840501005Z" level=info msg="CreateContainer within sandbox \"50267bdd6fec356fd02b5278a8b37754c6e17509ae3ffb279258a0fedbc8a0a8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c3e9660678779951ec2d5644e2e84eb87319d39077661d1b2794490bc5f9f2ef\"" May 16 05:30:04.841038 
containerd[1567]: time="2025-05-16T05:30:04.841009629Z" level=info msg="StartContainer for \"c3e9660678779951ec2d5644e2e84eb87319d39077661d1b2794490bc5f9f2ef\"" May 16 05:30:04.842071 containerd[1567]: time="2025-05-16T05:30:04.842032939Z" level=info msg="connecting to shim c3e9660678779951ec2d5644e2e84eb87319d39077661d1b2794490bc5f9f2ef" address="unix:///run/containerd/s/8407d669abc1bb82d174f8e0f5bd36c56905b7710e1c0cf9f3492b9a5bc547f8" protocol=ttrpc version=3 May 16 05:30:04.842823 containerd[1567]: time="2025-05-16T05:30:04.842802562Z" level=info msg="Container 119933c9fb3324a5b26bd6b31d7d767863de7f010e88dd1c643ecb8ebe22e74c: CDI devices from CRI Config.CDIDevices: []" May 16 05:30:04.851528 containerd[1567]: time="2025-05-16T05:30:04.851486090Z" level=info msg="CreateContainer within sandbox \"0afd1dae15e918eb8c7e75a45aceabfc98439eacecf7ed8de9df9177cd1c0e08\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"119933c9fb3324a5b26bd6b31d7d767863de7f010e88dd1c643ecb8ebe22e74c\"" May 16 05:30:04.851993 containerd[1567]: time="2025-05-16T05:30:04.851969296Z" level=info msg="StartContainer for \"119933c9fb3324a5b26bd6b31d7d767863de7f010e88dd1c643ecb8ebe22e74c\"" May 16 05:30:04.853098 containerd[1567]: time="2025-05-16T05:30:04.853063759Z" level=info msg="connecting to shim 119933c9fb3324a5b26bd6b31d7d767863de7f010e88dd1c643ecb8ebe22e74c" address="unix:///run/containerd/s/14c20156a60609d8d9093e61123e271ba45939a7aedfe1e7a2a5b8fcd70304ca" protocol=ttrpc version=3 May 16 05:30:04.853205 containerd[1567]: time="2025-05-16T05:30:04.853183143Z" level=info msg="Container 9dc11bb6604415d00c3943843b8867259f99c83c39d402326c550c31d2af6537: CDI devices from CRI Config.CDIDevices: []" May 16 05:30:04.857355 kubelet[2275]: E0516 05:30:04.857298 2275 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post 
\"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 16 05:30:04.862097 containerd[1567]: time="2025-05-16T05:30:04.862064051Z" level=info msg="CreateContainer within sandbox \"337976b53c32ded87db5cfb842f2bc2f48d912b947ef6c57b3fe01cde0bc7222\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9dc11bb6604415d00c3943843b8867259f99c83c39d402326c550c31d2af6537\"" May 16 05:30:04.862525 containerd[1567]: time="2025-05-16T05:30:04.862499178Z" level=info msg="StartContainer for \"9dc11bb6604415d00c3943843b8867259f99c83c39d402326c550c31d2af6537\"" May 16 05:30:04.863634 containerd[1567]: time="2025-05-16T05:30:04.863608488Z" level=info msg="connecting to shim 9dc11bb6604415d00c3943843b8867259f99c83c39d402326c550c31d2af6537" address="unix:///run/containerd/s/f8b55d5de0bc9e801346df7887e40158b95bb1614f46e36a81d37c23755c3f60" protocol=ttrpc version=3 May 16 05:30:04.866853 systemd[1]: Started cri-containerd-c3e9660678779951ec2d5644e2e84eb87319d39077661d1b2794490bc5f9f2ef.scope - libcontainer container c3e9660678779951ec2d5644e2e84eb87319d39077661d1b2794490bc5f9f2ef. May 16 05:30:04.872199 systemd[1]: Started cri-containerd-119933c9fb3324a5b26bd6b31d7d767863de7f010e88dd1c643ecb8ebe22e74c.scope - libcontainer container 119933c9fb3324a5b26bd6b31d7d767863de7f010e88dd1c643ecb8ebe22e74c. May 16 05:30:04.885717 systemd[1]: Started cri-containerd-9dc11bb6604415d00c3943843b8867259f99c83c39d402326c550c31d2af6537.scope - libcontainer container 9dc11bb6604415d00c3943843b8867259f99c83c39d402326c550c31d2af6537. 
May 16 05:30:04.936504 containerd[1567]: time="2025-05-16T05:30:04.935711703Z" level=info msg="StartContainer for \"c3e9660678779951ec2d5644e2e84eb87319d39077661d1b2794490bc5f9f2ef\" returns successfully"
May 16 05:30:04.942246 containerd[1567]: time="2025-05-16T05:30:04.942155871Z" level=info msg="StartContainer for \"119933c9fb3324a5b26bd6b31d7d767863de7f010e88dd1c643ecb8ebe22e74c\" returns successfully"
May 16 05:30:04.942246 containerd[1567]: time="2025-05-16T05:30:04.942202899Z" level=info msg="StartContainer for \"9dc11bb6604415d00c3943843b8867259f99c83c39d402326c550c31d2af6537\" returns successfully"
May 16 05:30:05.134299 kubelet[2275]: I0516 05:30:05.134192 2275 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 16 05:30:05.830342 kubelet[2275]: I0516 05:30:05.830223 2275 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
May 16 05:30:05.830342 kubelet[2275]: E0516 05:30:05.830259 2275 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 16 05:30:05.841095 kubelet[2275]: E0516 05:30:05.841053 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:05.896493 kubelet[2275]: E0516 05:30:05.896460 2275 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 16 05:30:05.896684 kubelet[2275]: E0516 05:30:05.896665 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:05.899678 kubelet[2275]: E0516 05:30:05.899658 2275 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 16 05:30:05.899773 kubelet[2275]: E0516 05:30:05.899749 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:05.900324 kubelet[2275]: E0516 05:30:05.900295 2275 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 16 05:30:05.900457 kubelet[2275]: E0516 05:30:05.900422 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:05.941161 kubelet[2275]: E0516 05:30:05.941126 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:06.041842 kubelet[2275]: E0516 05:30:06.041802 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:06.142629 kubelet[2275]: E0516 05:30:06.142470 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:06.243037 kubelet[2275]: E0516 05:30:06.242993 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:06.343705 kubelet[2275]: E0516 05:30:06.343677 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:06.444646 kubelet[2275]: E0516 05:30:06.444506 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:06.545054 kubelet[2275]: E0516 05:30:06.544993 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:06.645907 kubelet[2275]: E0516 05:30:06.645858 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:06.747072 kubelet[2275]: E0516 05:30:06.746944 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:06.847408 kubelet[2275]: E0516 05:30:06.847362 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:06.902358 kubelet[2275]: E0516 05:30:06.902317 2275 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 16 05:30:06.902508 kubelet[2275]: E0516 05:30:06.902417 2275 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 16 05:30:06.902508 kubelet[2275]: E0516 05:30:06.902441 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:06.902508 kubelet[2275]: E0516 05:30:06.902496 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:06.902703 kubelet[2275]: E0516 05:30:06.902619 2275 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 16 05:30:06.902802 kubelet[2275]: E0516 05:30:06.902787 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:06.947820 kubelet[2275]: E0516 05:30:06.947771 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:07.048819 kubelet[2275]: E0516 05:30:07.048703 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:07.149527 kubelet[2275]: E0516 05:30:07.149477 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:07.250005 kubelet[2275]: E0516 05:30:07.249967 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:07.350226 kubelet[2275]: E0516 05:30:07.350117 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:07.450712 kubelet[2275]: E0516 05:30:07.450654 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:30:07.560586 kubelet[2275]: I0516 05:30:07.560531 2275 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 16 05:30:07.568708 kubelet[2275]: I0516 05:30:07.568665 2275 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 16 05:30:07.572045 kubelet[2275]: I0516 05:30:07.572014 2275 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 16 05:30:07.849734 kubelet[2275]: I0516 05:30:07.849624 2275 apiserver.go:52] "Watching apiserver"
May 16 05:30:07.859759 kubelet[2275]: I0516 05:30:07.859742 2275 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 16 05:30:07.902877 kubelet[2275]: E0516 05:30:07.902846 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:07.903051 kubelet[2275]: I0516 05:30:07.903035 2275 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 16 05:30:07.903051 kubelet[2275]: I0516 05:30:07.903042 2275 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 16 05:30:07.907910 kubelet[2275]: E0516 05:30:07.907874 2275 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 16 05:30:07.908061 kubelet[2275]: E0516 05:30:07.907967 2275 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 16 05:30:07.908061 kubelet[2275]: E0516 05:30:07.908046 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:07.908151 kubelet[2275]: E0516 05:30:07.908130 2275 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:07.936787 systemd[1]: Reload requested from client PID 2561 ('systemctl') (unit session-7.scope)...
May 16 05:30:07.936803 systemd[1]: Reloading...
May 16 05:30:08.008673 zram_generator::config[2607]: No configuration found.
May 16 05:30:08.097729 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 05:30:08.224231 systemd[1]: Reloading finished in 287 ms.
May 16 05:30:08.255108 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 05:30:08.276873 systemd[1]: kubelet.service: Deactivated successfully.
May 16 05:30:08.277201 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 05:30:08.277255 systemd[1]: kubelet.service: Consumed 1.266s CPU time, 132.2M memory peak.
May 16 05:30:08.279176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 05:30:08.478103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 05:30:08.486031 (kubelet)[2649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 16 05:30:08.527675 kubelet[2649]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 05:30:08.527675 kubelet[2649]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 16 05:30:08.527675 kubelet[2649]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 05:30:08.528275 kubelet[2649]: I0516 05:30:08.527814 2649 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 16 05:30:08.535144 kubelet[2649]: I0516 05:30:08.535108 2649 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 16 05:30:08.535144 kubelet[2649]: I0516 05:30:08.535135 2649 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 16 05:30:08.535387 kubelet[2649]: I0516 05:30:08.535364 2649 server.go:956] "Client rotation is on, will bootstrap in background"
May 16 05:30:08.536587 kubelet[2649]: I0516 05:30:08.536552 2649 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
May 16 05:30:08.538852 kubelet[2649]: I0516 05:30:08.538832 2649 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 16 05:30:08.543449 kubelet[2649]: I0516 05:30:08.543393 2649 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 16 05:30:08.548119 kubelet[2649]: I0516 05:30:08.548083 2649 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 05:30:08.548317 kubelet[2649]: I0516 05:30:08.548280 2649 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 16 05:30:08.548444 kubelet[2649]: I0516 05:30:08.548302 2649 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 05:30:08.548586 kubelet[2649]: I0516 05:30:08.548445 2649 topology_manager.go:138] "Creating topology manager with none policy"
May 16 05:30:08.548586 kubelet[2649]: I0516 05:30:08.548454 2649 container_manager_linux.go:303] "Creating device plugin manager"
May 16 05:30:08.548586 kubelet[2649]: I0516 05:30:08.548497 2649 state_mem.go:36] "Initialized new in-memory state store"
May 16 05:30:08.548720 kubelet[2649]: I0516 05:30:08.548699 2649 kubelet.go:480] "Attempting to sync node with API server"
May 16 05:30:08.548720 kubelet[2649]: I0516 05:30:08.548714 2649 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 05:30:08.548777 kubelet[2649]: I0516 05:30:08.548733 2649 kubelet.go:386] "Adding apiserver pod source"
May 16 05:30:08.548777 kubelet[2649]: I0516 05:30:08.548749 2649 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 05:30:08.550821 kubelet[2649]: I0516 05:30:08.550762 2649 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 16 05:30:08.551305 kubelet[2649]: I0516 05:30:08.551226 2649 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 16 05:30:08.554404 kubelet[2649]: I0516 05:30:08.554324 2649 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 16 05:30:08.554576 kubelet[2649]: I0516 05:30:08.554478 2649 server.go:1289] "Started kubelet"
May 16 05:30:08.554687 kubelet[2649]: I0516 05:30:08.554647 2649 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 16 05:30:08.555473 kubelet[2649]: I0516 05:30:08.555421 2649 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 16 05:30:08.555634 kubelet[2649]: I0516 05:30:08.555621 2649 server.go:317] "Adding debug handlers to kubelet server"
May 16 05:30:08.557583 kubelet[2649]: I0516 05:30:08.556228 2649 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 16 05:30:08.562475 kubelet[2649]: I0516 05:30:08.562450 2649 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 16 05:30:08.566932 kubelet[2649]: I0516 05:30:08.563132 2649 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 16 05:30:08.566932 kubelet[2649]: I0516 05:30:08.566396 2649 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 16 05:30:08.566932 kubelet[2649]: E0516 05:30:08.564402 2649 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 16 05:30:08.566932 kubelet[2649]: I0516 05:30:08.566792 2649 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 16 05:30:08.568135 kubelet[2649]: I0516 05:30:08.567304 2649 reconciler.go:26] "Reconciler: start to sync state"
May 16 05:30:08.568135 kubelet[2649]: I0516 05:30:08.567454 2649 factory.go:223] Registration of the systemd container factory successfully
May 16 05:30:08.568135 kubelet[2649]: I0516 05:30:08.567746 2649 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 16 05:30:08.569588 kubelet[2649]: I0516 05:30:08.569474 2649 factory.go:223] Registration of the containerd container factory successfully
May 16 05:30:08.579355 kubelet[2649]: I0516 05:30:08.579302 2649 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 16 05:30:08.580968 kubelet[2649]: I0516 05:30:08.580933 2649 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 16 05:30:08.580968 kubelet[2649]: I0516 05:30:08.580959 2649 status_manager.go:230] "Starting to sync pod status with apiserver"
May 16 05:30:08.581066 kubelet[2649]: I0516 05:30:08.580981 2649 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 16 05:30:08.581066 kubelet[2649]: I0516 05:30:08.580989 2649 kubelet.go:2436] "Starting kubelet main sync loop"
May 16 05:30:08.581066 kubelet[2649]: E0516 05:30:08.581030 2649 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 16 05:30:08.605120 kubelet[2649]: I0516 05:30:08.605089 2649 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 16 05:30:08.605120 kubelet[2649]: I0516 05:30:08.605106 2649 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 16 05:30:08.605120 kubelet[2649]: I0516 05:30:08.605123 2649 state_mem.go:36] "Initialized new in-memory state store"
May 16 05:30:08.605287 kubelet[2649]: I0516 05:30:08.605246 2649 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 16 05:30:08.605287 kubelet[2649]: I0516 05:30:08.605261 2649 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 16 05:30:08.605287 kubelet[2649]: I0516 05:30:08.605282 2649 policy_none.go:49] "None policy: Start"
May 16 05:30:08.605354 kubelet[2649]: I0516 05:30:08.605291 2649 memory_manager.go:186] "Starting memorymanager" policy="None"
May 16 05:30:08.605354 kubelet[2649]: I0516 05:30:08.605303 2649 state_mem.go:35] "Initializing new in-memory state store"
May 16 05:30:08.605421 kubelet[2649]: I0516 05:30:08.605409 2649 state_mem.go:75] "Updated machine memory state"
May 16 05:30:08.610257 kubelet[2649]: E0516 05:30:08.610229 2649 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 16 05:30:08.610445 kubelet[2649]: I0516 05:30:08.610424 2649 eviction_manager.go:189] "Eviction manager: starting control loop"
May 16 05:30:08.610489 kubelet[2649]: I0516 05:30:08.610443 2649 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 16 05:30:08.610780 kubelet[2649]: I0516 05:30:08.610755 2649 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 16 05:30:08.612798 kubelet[2649]: E0516 05:30:08.612776 2649 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 16 05:30:08.681927 kubelet[2649]: I0516 05:30:08.681902 2649 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 16 05:30:08.682240 kubelet[2649]: I0516 05:30:08.681936 2649 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 16 05:30:08.682650 kubelet[2649]: I0516 05:30:08.682012 2649 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 16 05:30:08.687398 kubelet[2649]: E0516 05:30:08.687362 2649 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 16 05:30:08.687644 kubelet[2649]: E0516 05:30:08.687362 2649 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 16 05:30:08.687644 kubelet[2649]: E0516 05:30:08.687538 2649 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 16 05:30:08.716817 kubelet[2649]: I0516 05:30:08.716789 2649 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 16 05:30:08.722954 kubelet[2649]: I0516 05:30:08.722919 2649 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
May 16 05:30:08.723026 kubelet[2649]: I0516 05:30:08.723012 2649 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
May 16 05:30:08.768840 kubelet[2649]: I0516 05:30:08.768724 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 16 05:30:08.768840 kubelet[2649]: I0516 05:30:08.768752 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 16 05:30:08.768840 kubelet[2649]: I0516 05:30:08.768772 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 16 05:30:08.768840 kubelet[2649]: I0516 05:30:08.768787 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 16 05:30:08.768840 kubelet[2649]: I0516 05:30:08.768804 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:30:08.769085 kubelet[2649]: I0516 05:30:08.768822 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c3b57fc1482827bd33440b9d929a4d5a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c3b57fc1482827bd33440b9d929a4d5a\") " pod="kube-system/kube-apiserver-localhost" May 16 05:30:08.769085 kubelet[2649]: I0516 05:30:08.768861 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3b57fc1482827bd33440b9d929a4d5a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c3b57fc1482827bd33440b9d929a4d5a\") " pod="kube-system/kube-apiserver-localhost" May 16 05:30:08.769085 kubelet[2649]: I0516 05:30:08.768888 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 16 05:30:08.769085 kubelet[2649]: I0516 05:30:08.768904 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3b57fc1482827bd33440b9d929a4d5a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c3b57fc1482827bd33440b9d929a4d5a\") " pod="kube-system/kube-apiserver-localhost" May 16 05:30:08.939585 sudo[2691]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 16 05:30:08.939901 sudo[2691]: pam_unix(sudo:session): session opened for user 
root(uid=0) by core(uid=0) May 16 05:30:08.987937 kubelet[2649]: E0516 05:30:08.987900 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:08.988018 kubelet[2649]: E0516 05:30:08.987972 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:08.988055 kubelet[2649]: E0516 05:30:08.988010 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:09.403683 sudo[2691]: pam_unix(sudo:session): session closed for user root May 16 05:30:09.550128 kubelet[2649]: I0516 05:30:09.550088 2649 apiserver.go:52] "Watching apiserver" May 16 05:30:09.567650 kubelet[2649]: I0516 05:30:09.567620 2649 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 05:30:09.594454 kubelet[2649]: I0516 05:30:09.594393 2649 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 05:30:09.594736 kubelet[2649]: I0516 05:30:09.594624 2649 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 05:30:09.596589 kubelet[2649]: E0516 05:30:09.596393 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:09.601168 kubelet[2649]: E0516 05:30:09.600891 2649 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 05:30:09.601168 kubelet[2649]: E0516 05:30:09.601070 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:09.601414 kubelet[2649]: E0516 05:30:09.601381 2649 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 16 05:30:09.601577 kubelet[2649]: E0516 05:30:09.601539 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:09.617684 kubelet[2649]: I0516 05:30:09.617421 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.617401675 podStartE2EDuration="2.617401675s" podCreationTimestamp="2025-05-16 05:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:30:09.611843931 +0000 UTC m=+1.121344994" watchObservedRunningTime="2025-05-16 05:30:09.617401675 +0000 UTC m=+1.126902719" May 16 05:30:09.624202 kubelet[2649]: I0516 05:30:09.624142 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.624127311 podStartE2EDuration="2.624127311s" podCreationTimestamp="2025-05-16 05:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:30:09.617856509 +0000 UTC m=+1.127357542" watchObservedRunningTime="2025-05-16 05:30:09.624127311 +0000 UTC m=+1.133628354" May 16 05:30:09.630831 kubelet[2649]: I0516 05:30:09.630777 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.630770291 podStartE2EDuration="2.630770291s" podCreationTimestamp="2025-05-16 05:30:07 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:30:09.624327677 +0000 UTC m=+1.133828720" watchObservedRunningTime="2025-05-16 05:30:09.630770291 +0000 UTC m=+1.140271334" May 16 05:30:10.595998 kubelet[2649]: E0516 05:30:10.595951 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:10.596386 kubelet[2649]: E0516 05:30:10.596108 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:10.596869 kubelet[2649]: E0516 05:30:10.596839 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:10.794606 sudo[1767]: pam_unix(sudo:session): session closed for user root May 16 05:30:10.795961 sshd[1766]: Connection closed by 10.0.0.1 port 43270 May 16 05:30:10.796317 sshd-session[1763]: pam_unix(sshd:session): session closed for user core May 16 05:30:10.800828 systemd[1]: sshd@6-10.0.0.148:22-10.0.0.1:43270.service: Deactivated successfully. May 16 05:30:10.802969 systemd[1]: session-7.scope: Deactivated successfully. May 16 05:30:10.803174 systemd[1]: session-7.scope: Consumed 5.019s CPU time, 260.8M memory peak. May 16 05:30:10.804403 systemd-logind[1546]: Session 7 logged out. Waiting for processes to exit. May 16 05:30:10.805767 systemd-logind[1546]: Removed session 7. 
May 16 05:30:12.731417 kubelet[2649]: E0516 05:30:12.731379 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:13.835312 kubelet[2649]: I0516 05:30:13.835279 2649 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 05:30:13.835784 containerd[1567]: time="2025-05-16T05:30:13.835695196Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 05:30:13.836006 kubelet[2649]: I0516 05:30:13.835875 2649 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 05:30:14.265938 kubelet[2649]: E0516 05:30:14.265802 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:14.601200 kubelet[2649]: E0516 05:30:14.601175 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:14.998863 systemd[1]: Created slice kubepods-besteffort-podfb5258fb_4eb3_4b53_94fc_c224d0169ae6.slice - libcontainer container kubepods-besteffort-podfb5258fb_4eb3_4b53_94fc_c224d0169ae6.slice. May 16 05:30:15.009373 systemd[1]: Created slice kubepods-burstable-podf5ee9e88_2db9_4e60_8543_7eba4291819e.slice - libcontainer container kubepods-burstable-podf5ee9e88_2db9_4e60_8543_7eba4291819e.slice. 
May 16 05:30:15.010850 kubelet[2649]: I0516 05:30:15.009915 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fb5258fb-4eb3-4b53-94fc-c224d0169ae6-kube-proxy\") pod \"kube-proxy-bl9h8\" (UID: \"fb5258fb-4eb3-4b53-94fc-c224d0169ae6\") " pod="kube-system/kube-proxy-bl9h8" May 16 05:30:15.010850 kubelet[2649]: I0516 05:30:15.009955 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-lib-modules\") pod \"cilium-kzsch\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " pod="kube-system/cilium-kzsch" May 16 05:30:15.010850 kubelet[2649]: I0516 05:30:15.009975 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-cilium-run\") pod \"cilium-kzsch\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " pod="kube-system/cilium-kzsch" May 16 05:30:15.010850 kubelet[2649]: I0516 05:30:15.009994 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-hostproc\") pod \"cilium-kzsch\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " pod="kube-system/cilium-kzsch" May 16 05:30:15.010850 kubelet[2649]: I0516 05:30:15.010013 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-cilium-cgroup\") pod \"cilium-kzsch\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " pod="kube-system/cilium-kzsch" May 16 05:30:15.010850 kubelet[2649]: I0516 05:30:15.010033 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-f9qcq\" (UniqueName: \"kubernetes.io/projected/fb5258fb-4eb3-4b53-94fc-c224d0169ae6-kube-api-access-f9qcq\") pod \"kube-proxy-bl9h8\" (UID: \"fb5258fb-4eb3-4b53-94fc-c224d0169ae6\") " pod="kube-system/kube-proxy-bl9h8" May 16 05:30:15.011211 kubelet[2649]: I0516 05:30:15.010051 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-bpf-maps\") pod \"cilium-kzsch\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " pod="kube-system/cilium-kzsch" May 16 05:30:15.011211 kubelet[2649]: I0516 05:30:15.010071 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snrh8\" (UniqueName: \"kubernetes.io/projected/f5ee9e88-2db9-4e60-8543-7eba4291819e-kube-api-access-snrh8\") pod \"cilium-kzsch\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " pod="kube-system/cilium-kzsch" May 16 05:30:15.011211 kubelet[2649]: I0516 05:30:15.010089 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft95s\" (UniqueName: \"kubernetes.io/projected/c9551319-d139-4d8d-90aa-ae368527bc1b-kube-api-access-ft95s\") pod \"cilium-operator-6c4d7847fc-95r8q\" (UID: \"c9551319-d139-4d8d-90aa-ae368527bc1b\") " pod="kube-system/cilium-operator-6c4d7847fc-95r8q" May 16 05:30:15.011211 kubelet[2649]: I0516 05:30:15.010338 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5ee9e88-2db9-4e60-8543-7eba4291819e-hubble-tls\") pod \"cilium-kzsch\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " pod="kube-system/cilium-kzsch" May 16 05:30:15.011211 kubelet[2649]: I0516 05:30:15.010364 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/fb5258fb-4eb3-4b53-94fc-c224d0169ae6-lib-modules\") pod \"kube-proxy-bl9h8\" (UID: \"fb5258fb-4eb3-4b53-94fc-c224d0169ae6\") " pod="kube-system/kube-proxy-bl9h8" May 16 05:30:15.011322 kubelet[2649]: I0516 05:30:15.010378 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-cni-path\") pod \"cilium-kzsch\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " pod="kube-system/cilium-kzsch" May 16 05:30:15.011322 kubelet[2649]: I0516 05:30:15.010529 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-etc-cni-netd\") pod \"cilium-kzsch\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " pod="kube-system/cilium-kzsch" May 16 05:30:15.011322 kubelet[2649]: I0516 05:30:15.010547 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5ee9e88-2db9-4e60-8543-7eba4291819e-clustermesh-secrets\") pod \"cilium-kzsch\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " pod="kube-system/cilium-kzsch" May 16 05:30:15.011322 kubelet[2649]: I0516 05:30:15.010563 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c9551319-d139-4d8d-90aa-ae368527bc1b-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-95r8q\" (UID: \"c9551319-d139-4d8d-90aa-ae368527bc1b\") " pod="kube-system/cilium-operator-6c4d7847fc-95r8q" May 16 05:30:15.011322 kubelet[2649]: I0516 05:30:15.010679 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-host-proc-sys-net\") pod \"cilium-kzsch\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " pod="kube-system/cilium-kzsch" May 16 05:30:15.011433 kubelet[2649]: I0516 05:30:15.010694 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb5258fb-4eb3-4b53-94fc-c224d0169ae6-xtables-lock\") pod \"kube-proxy-bl9h8\" (UID: \"fb5258fb-4eb3-4b53-94fc-c224d0169ae6\") " pod="kube-system/kube-proxy-bl9h8" May 16 05:30:15.011433 kubelet[2649]: I0516 05:30:15.010791 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5ee9e88-2db9-4e60-8543-7eba4291819e-cilium-config-path\") pod \"cilium-kzsch\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " pod="kube-system/cilium-kzsch" May 16 05:30:15.011433 kubelet[2649]: I0516 05:30:15.010810 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-host-proc-sys-kernel\") pod \"cilium-kzsch\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " pod="kube-system/cilium-kzsch" May 16 05:30:15.011433 kubelet[2649]: I0516 05:30:15.010857 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-xtables-lock\") pod \"cilium-kzsch\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " pod="kube-system/cilium-kzsch" May 16 05:30:15.024793 systemd[1]: Created slice kubepods-besteffort-podc9551319_d139_4d8d_90aa_ae368527bc1b.slice - libcontainer container kubepods-besteffort-podc9551319_d139_4d8d_90aa_ae368527bc1b.slice. 
May 16 05:30:15.321230 kubelet[2649]: E0516 05:30:15.321204 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:15.321743 containerd[1567]: time="2025-05-16T05:30:15.321699386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bl9h8,Uid:fb5258fb-4eb3-4b53-94fc-c224d0169ae6,Namespace:kube-system,Attempt:0,}" May 16 05:30:15.322412 kubelet[2649]: E0516 05:30:15.322366 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:15.322845 containerd[1567]: time="2025-05-16T05:30:15.322759793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kzsch,Uid:f5ee9e88-2db9-4e60-8543-7eba4291819e,Namespace:kube-system,Attempt:0,}" May 16 05:30:15.328544 kubelet[2649]: E0516 05:30:15.328514 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:15.328901 containerd[1567]: time="2025-05-16T05:30:15.328872481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-95r8q,Uid:c9551319-d139-4d8d-90aa-ae368527bc1b,Namespace:kube-system,Attempt:0,}" May 16 05:30:15.370086 containerd[1567]: time="2025-05-16T05:30:15.370039601Z" level=info msg="connecting to shim ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1" address="unix:///run/containerd/s/59d9f0daa50a799821049d5500efc4bd2f8f71b4f153a712d96de12b540a170d" namespace=k8s.io protocol=ttrpc version=3 May 16 05:30:15.370885 containerd[1567]: time="2025-05-16T05:30:15.370851083Z" level=info msg="connecting to shim 8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244" 
address="unix:///run/containerd/s/941136f905800f4c1e08c5fc2e222414bb240800306cfa8004f32cbe5289bdda" namespace=k8s.io protocol=ttrpc version=3 May 16 05:30:15.372363 containerd[1567]: time="2025-05-16T05:30:15.372302148Z" level=info msg="connecting to shim 89d33ffb4ea383ab47391b50abe1b25e5f4a3b94ba2588f3f170dca36536f1f1" address="unix:///run/containerd/s/8e5256058c6ec840634fe9702b8254ce2c2b59c664012002b004674cb474f5a0" namespace=k8s.io protocol=ttrpc version=3 May 16 05:30:15.443708 systemd[1]: Started cri-containerd-89d33ffb4ea383ab47391b50abe1b25e5f4a3b94ba2588f3f170dca36536f1f1.scope - libcontainer container 89d33ffb4ea383ab47391b50abe1b25e5f4a3b94ba2588f3f170dca36536f1f1. May 16 05:30:15.445098 systemd[1]: Started cri-containerd-8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244.scope - libcontainer container 8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244. May 16 05:30:15.446715 systemd[1]: Started cri-containerd-ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1.scope - libcontainer container ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1. 
May 16 05:30:15.476893 containerd[1567]: time="2025-05-16T05:30:15.476817789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kzsch,Uid:f5ee9e88-2db9-4e60-8543-7eba4291819e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\"" May 16 05:30:15.477938 kubelet[2649]: E0516 05:30:15.477910 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:15.479917 containerd[1567]: time="2025-05-16T05:30:15.479880274Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 05:30:15.484509 containerd[1567]: time="2025-05-16T05:30:15.484440654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bl9h8,Uid:fb5258fb-4eb3-4b53-94fc-c224d0169ae6,Namespace:kube-system,Attempt:0,} returns sandbox id \"89d33ffb4ea383ab47391b50abe1b25e5f4a3b94ba2588f3f170dca36536f1f1\"" May 16 05:30:15.486864 kubelet[2649]: E0516 05:30:15.486840 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:15.492448 containerd[1567]: time="2025-05-16T05:30:15.492414028Z" level=info msg="CreateContainer within sandbox \"89d33ffb4ea383ab47391b50abe1b25e5f4a3b94ba2588f3f170dca36536f1f1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 05:30:15.495204 containerd[1567]: time="2025-05-16T05:30:15.495135251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-95r8q,Uid:c9551319-d139-4d8d-90aa-ae368527bc1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244\"" May 16 05:30:15.496094 kubelet[2649]: E0516 05:30:15.495978 2649 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:15.503806 containerd[1567]: time="2025-05-16T05:30:15.503772475Z" level=info msg="Container 12944bf2ae35fc3a88a861832b483629ec5feb309671e9e124d16619188a0115: CDI devices from CRI Config.CDIDevices: []" May 16 05:30:15.511526 containerd[1567]: time="2025-05-16T05:30:15.511482146Z" level=info msg="CreateContainer within sandbox \"89d33ffb4ea383ab47391b50abe1b25e5f4a3b94ba2588f3f170dca36536f1f1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"12944bf2ae35fc3a88a861832b483629ec5feb309671e9e124d16619188a0115\"" May 16 05:30:15.511993 containerd[1567]: time="2025-05-16T05:30:15.511968797Z" level=info msg="StartContainer for \"12944bf2ae35fc3a88a861832b483629ec5feb309671e9e124d16619188a0115\"" May 16 05:30:15.513291 containerd[1567]: time="2025-05-16T05:30:15.513264304Z" level=info msg="connecting to shim 12944bf2ae35fc3a88a861832b483629ec5feb309671e9e124d16619188a0115" address="unix:///run/containerd/s/8e5256058c6ec840634fe9702b8254ce2c2b59c664012002b004674cb474f5a0" protocol=ttrpc version=3 May 16 05:30:15.539735 systemd[1]: Started cri-containerd-12944bf2ae35fc3a88a861832b483629ec5feb309671e9e124d16619188a0115.scope - libcontainer container 12944bf2ae35fc3a88a861832b483629ec5feb309671e9e124d16619188a0115. 
May 16 05:30:15.581840 containerd[1567]: time="2025-05-16T05:30:15.581646428Z" level=info msg="StartContainer for \"12944bf2ae35fc3a88a861832b483629ec5feb309671e9e124d16619188a0115\" returns successfully" May 16 05:30:15.606048 kubelet[2649]: E0516 05:30:15.606015 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:15.615653 kubelet[2649]: I0516 05:30:15.615603 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bl9h8" podStartSLOduration=1.615588134 podStartE2EDuration="1.615588134s" podCreationTimestamp="2025-05-16 05:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:30:15.614713682 +0000 UTC m=+7.124214725" watchObservedRunningTime="2025-05-16 05:30:15.615588134 +0000 UTC m=+7.125089177" May 16 05:30:18.329104 kubelet[2649]: E0516 05:30:18.329071 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:18.613494 kubelet[2649]: E0516 05:30:18.613378 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:19.615368 kubelet[2649]: E0516 05:30:19.615322 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:22.736549 kubelet[2649]: E0516 05:30:22.736499 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:30:23.940131 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2646995478.mount: Deactivated successfully. May 16 05:30:24.153671 update_engine[1552]: I20250516 05:30:24.153621 1552 update_attempter.cc:509] Updating boot flags... May 16 05:30:26.038780 containerd[1567]: time="2025-05-16T05:30:26.038712387Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:30:26.039557 containerd[1567]: time="2025-05-16T05:30:26.039516820Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 16 05:30:26.041003 containerd[1567]: time="2025-05-16T05:30:26.040969811Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:30:26.042479 containerd[1567]: time="2025-05-16T05:30:26.042447560Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.562525235s" May 16 05:30:26.042479 containerd[1567]: time="2025-05-16T05:30:26.042474751Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 16 05:30:26.043591 containerd[1567]: time="2025-05-16T05:30:26.043534648Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" 
May 16 05:30:26.046818 containerd[1567]: time="2025-05-16T05:30:26.046770566Z" level=info msg="CreateContainer within sandbox \"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 05:30:26.055491 containerd[1567]: time="2025-05-16T05:30:26.055452315Z" level=info msg="Container 48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8: CDI devices from CRI Config.CDIDevices: []" May 16 05:30:26.064682 containerd[1567]: time="2025-05-16T05:30:26.064647486Z" level=info msg="CreateContainer within sandbox \"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8\"" May 16 05:30:26.065444 containerd[1567]: time="2025-05-16T05:30:26.065379262Z" level=info msg="StartContainer for \"48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8\"" May 16 05:30:26.066322 containerd[1567]: time="2025-05-16T05:30:26.066296319Z" level=info msg="connecting to shim 48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8" address="unix:///run/containerd/s/59d9f0daa50a799821049d5500efc4bd2f8f71b4f153a712d96de12b540a170d" protocol=ttrpc version=3 May 16 05:30:26.088758 systemd[1]: Started cri-containerd-48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8.scope - libcontainer container 48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8. May 16 05:30:26.119702 containerd[1567]: time="2025-05-16T05:30:26.119559935Z" level=info msg="StartContainer for \"48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8\" returns successfully" May 16 05:30:26.131784 systemd[1]: cri-containerd-48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8.scope: Deactivated successfully. 
May 16 05:30:26.133114 containerd[1567]: time="2025-05-16T05:30:26.133048727Z" level=info msg="received exit event container_id:\"48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8\" id:\"48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8\" pid:3102 exited_at:{seconds:1747373426 nanos:132522119}"
May 16 05:30:26.133243 containerd[1567]: time="2025-05-16T05:30:26.133125121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8\" id:\"48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8\" pid:3102 exited_at:{seconds:1747373426 nanos:132522119}"
May 16 05:30:26.157394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8-rootfs.mount: Deactivated successfully.
May 16 05:30:27.029720 kubelet[2649]: E0516 05:30:27.029691 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:27.033603 containerd[1567]: time="2025-05-16T05:30:27.033442737Z" level=info msg="CreateContainer within sandbox \"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 16 05:30:27.045540 containerd[1567]: time="2025-05-16T05:30:27.045481039Z" level=info msg="Container 1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467: CDI devices from CRI Config.CDIDevices: []"
May 16 05:30:27.051741 containerd[1567]: time="2025-05-16T05:30:27.051702673Z" level=info msg="CreateContainer within sandbox \"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467\""
May 16 05:30:27.052196 containerd[1567]: time="2025-05-16T05:30:27.052171460Z" level=info msg="StartContainer for \"1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467\""
May 16 05:30:27.052934 containerd[1567]: time="2025-05-16T05:30:27.052888747Z" level=info msg="connecting to shim 1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467" address="unix:///run/containerd/s/59d9f0daa50a799821049d5500efc4bd2f8f71b4f153a712d96de12b540a170d" protocol=ttrpc version=3
May 16 05:30:27.074693 systemd[1]: Started cri-containerd-1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467.scope - libcontainer container 1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467.
May 16 05:30:27.101858 containerd[1567]: time="2025-05-16T05:30:27.101825676Z" level=info msg="StartContainer for \"1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467\" returns successfully"
May 16 05:30:27.115635 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 05:30:27.116141 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 16 05:30:27.116362 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 16 05:30:27.118126 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 05:30:27.119704 containerd[1567]: time="2025-05-16T05:30:27.119348337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467\" id:\"1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467\" pid:3149 exited_at:{seconds:1747373427 nanos:118441230}"
May 16 05:30:27.119704 containerd[1567]: time="2025-05-16T05:30:27.119412078Z" level=info msg="received exit event container_id:\"1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467\" id:\"1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467\" pid:3149 exited_at:{seconds:1747373427 nanos:118441230}"
May 16 05:30:27.120698 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 16 05:30:27.121431 systemd[1]: cri-containerd-1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467.scope: Deactivated successfully.
May 16 05:30:27.140124 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 05:30:27.142449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467-rootfs.mount: Deactivated successfully.
May 16 05:30:28.032980 kubelet[2649]: E0516 05:30:28.032949 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:28.037457 containerd[1567]: time="2025-05-16T05:30:28.037425165Z" level=info msg="CreateContainer within sandbox \"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 05:30:28.048252 containerd[1567]: time="2025-05-16T05:30:28.048213880Z" level=info msg="Container 85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008: CDI devices from CRI Config.CDIDevices: []"
May 16 05:30:28.059892 containerd[1567]: time="2025-05-16T05:30:28.059854828Z" level=info msg="CreateContainer within sandbox \"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008\""
May 16 05:30:28.060308 containerd[1567]: time="2025-05-16T05:30:28.060267108Z" level=info msg="StartContainer for \"85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008\""
May 16 05:30:28.061635 containerd[1567]: time="2025-05-16T05:30:28.061610639Z" level=info msg="connecting to shim 85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008" address="unix:///run/containerd/s/59d9f0daa50a799821049d5500efc4bd2f8f71b4f153a712d96de12b540a170d" protocol=ttrpc version=3
May 16 05:30:28.086731 systemd[1]: Started cri-containerd-85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008.scope - libcontainer container 85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008.
May 16 05:30:28.123861 systemd[1]: cri-containerd-85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008.scope: Deactivated successfully.
May 16 05:30:28.124381 containerd[1567]: time="2025-05-16T05:30:28.124355663Z" level=info msg="StartContainer for \"85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008\" returns successfully"
May 16 05:30:28.124978 containerd[1567]: time="2025-05-16T05:30:28.124949497Z" level=info msg="received exit event container_id:\"85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008\" id:\"85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008\" pid:3194 exited_at:{seconds:1747373428 nanos:124500228}"
May 16 05:30:28.125201 containerd[1567]: time="2025-05-16T05:30:28.125061919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008\" id:\"85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008\" pid:3194 exited_at:{seconds:1747373428 nanos:124500228}"
May 16 05:30:28.145875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008-rootfs.mount: Deactivated successfully.
May 16 05:30:28.512154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1663046568.mount: Deactivated successfully.
May 16 05:30:28.824542 containerd[1567]: time="2025-05-16T05:30:28.824488855Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:30:28.825238 containerd[1567]: time="2025-05-16T05:30:28.825188358Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 16 05:30:28.826190 containerd[1567]: time="2025-05-16T05:30:28.826146670Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:30:28.827460 containerd[1567]: time="2025-05-16T05:30:28.827404229Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.783657539s"
May 16 05:30:28.827561 containerd[1567]: time="2025-05-16T05:30:28.827524036Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 16 05:30:28.832741 containerd[1567]: time="2025-05-16T05:30:28.832707940Z" level=info msg="CreateContainer within sandbox \"8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 16 05:30:28.839253 containerd[1567]: time="2025-05-16T05:30:28.839218775Z" level=info msg="Container e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f: CDI devices from CRI Config.CDIDevices: []"
May 16 05:30:28.845815 containerd[1567]: time="2025-05-16T05:30:28.845781738Z" level=info msg="CreateContainer within sandbox \"8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\""
May 16 05:30:28.846162 containerd[1567]: time="2025-05-16T05:30:28.846109708Z" level=info msg="StartContainer for \"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\""
May 16 05:30:28.846949 containerd[1567]: time="2025-05-16T05:30:28.846915793Z" level=info msg="connecting to shim e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f" address="unix:///run/containerd/s/941136f905800f4c1e08c5fc2e222414bb240800306cfa8004f32cbe5289bdda" protocol=ttrpc version=3
May 16 05:30:28.867710 systemd[1]: Started cri-containerd-e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f.scope - libcontainer container e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f.
May 16 05:30:28.895388 containerd[1567]: time="2025-05-16T05:30:28.895351674Z" level=info msg="StartContainer for \"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\" returns successfully"
May 16 05:30:29.037920 kubelet[2649]: E0516 05:30:29.037886 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:29.041459 kubelet[2649]: E0516 05:30:29.041427 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:29.054583 containerd[1567]: time="2025-05-16T05:30:29.054457191Z" level=info msg="CreateContainer within sandbox \"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 05:30:29.165276 containerd[1567]: time="2025-05-16T05:30:29.165160024Z" level=info msg="Container 28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9: CDI devices from CRI Config.CDIDevices: []"
May 16 05:30:29.234330 containerd[1567]: time="2025-05-16T05:30:29.234283516Z" level=info msg="CreateContainer within sandbox \"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9\""
May 16 05:30:29.235132 containerd[1567]: time="2025-05-16T05:30:29.235090712Z" level=info msg="StartContainer for \"28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9\""
May 16 05:30:29.236228 containerd[1567]: time="2025-05-16T05:30:29.236199317Z" level=info msg="connecting to shim 28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9" address="unix:///run/containerd/s/59d9f0daa50a799821049d5500efc4bd2f8f71b4f153a712d96de12b540a170d" protocol=ttrpc version=3
May 16 05:30:29.253375 kubelet[2649]: I0516 05:30:29.253323 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-95r8q" podStartSLOduration=1.9221853759999998 podStartE2EDuration="15.253305645s" podCreationTimestamp="2025-05-16 05:30:14 +0000 UTC" firstStartedPulling="2025-05-16 05:30:15.496895206 +0000 UTC m=+7.006396249" lastFinishedPulling="2025-05-16 05:30:28.828015475 +0000 UTC m=+20.337516518" observedRunningTime="2025-05-16 05:30:29.091666051 +0000 UTC m=+20.601167094" watchObservedRunningTime="2025-05-16 05:30:29.253305645 +0000 UTC m=+20.762806688"
May 16 05:30:29.295055 systemd[1]: Started cri-containerd-28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9.scope - libcontainer container 28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9.
May 16 05:30:29.324724 systemd[1]: cri-containerd-28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9.scope: Deactivated successfully.
May 16 05:30:29.325177 containerd[1567]: time="2025-05-16T05:30:29.325143968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9\" id:\"28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9\" pid:3285 exited_at:{seconds:1747373429 nanos:324858579}"
May 16 05:30:29.326802 containerd[1567]: time="2025-05-16T05:30:29.326776304Z" level=info msg="received exit event container_id:\"28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9\" id:\"28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9\" pid:3285 exited_at:{seconds:1747373429 nanos:324858579}"
May 16 05:30:29.329674 containerd[1567]: time="2025-05-16T05:30:29.329645468Z" level=info msg="StartContainer for \"28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9\" returns successfully"
May 16 05:30:29.348737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9-rootfs.mount: Deactivated successfully.
May 16 05:30:30.046622 kubelet[2649]: E0516 05:30:30.046590 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:30.047522 kubelet[2649]: E0516 05:30:30.046680 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:30.051047 containerd[1567]: time="2025-05-16T05:30:30.051004061Z" level=info msg="CreateContainer within sandbox \"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 05:30:30.063231 containerd[1567]: time="2025-05-16T05:30:30.063186630Z" level=info msg="Container 3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577: CDI devices from CRI Config.CDIDevices: []"
May 16 05:30:30.066820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount354275639.mount: Deactivated successfully.
May 16 05:30:30.070196 containerd[1567]: time="2025-05-16T05:30:30.070157753Z" level=info msg="CreateContainer within sandbox \"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\""
May 16 05:30:30.070705 containerd[1567]: time="2025-05-16T05:30:30.070653499Z" level=info msg="StartContainer for \"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\""
May 16 05:30:30.071522 containerd[1567]: time="2025-05-16T05:30:30.071494278Z" level=info msg="connecting to shim 3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577" address="unix:///run/containerd/s/59d9f0daa50a799821049d5500efc4bd2f8f71b4f153a712d96de12b540a170d" protocol=ttrpc version=3
May 16 05:30:30.091693 systemd[1]: Started cri-containerd-3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577.scope - libcontainer container 3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577.
May 16 05:30:30.125499 containerd[1567]: time="2025-05-16T05:30:30.125454716Z" level=info msg="StartContainer for \"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\" returns successfully"
May 16 05:30:30.200791 containerd[1567]: time="2025-05-16T05:30:30.200742931Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\" id:\"a78a899d0d1bca8b500331f590a037b7d52abaab69568c25bbfc924738908765\" pid:3353 exited_at:{seconds:1747373430 nanos:200353445}"
May 16 05:30:30.239076 kubelet[2649]: I0516 05:30:30.239034 2649 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 16 05:30:30.305159 systemd[1]: Created slice kubepods-burstable-pod93fdf127_161c_424f_a489_1c9779a514fd.slice - libcontainer container kubepods-burstable-pod93fdf127_161c_424f_a489_1c9779a514fd.slice.
May 16 05:30:30.312955 systemd[1]: Created slice kubepods-burstable-pod37d06866_ae72_4dae_81e4_de46fc814f81.slice - libcontainer container kubepods-burstable-pod37d06866_ae72_4dae_81e4_de46fc814f81.slice.
May 16 05:30:30.455397 kubelet[2649]: I0516 05:30:30.455307 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37d06866-ae72-4dae-81e4-de46fc814f81-config-volume\") pod \"coredns-674b8bbfcf-s92ml\" (UID: \"37d06866-ae72-4dae-81e4-de46fc814f81\") " pod="kube-system/coredns-674b8bbfcf-s92ml"
May 16 05:30:30.455397 kubelet[2649]: I0516 05:30:30.455344 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxrpb\" (UniqueName: \"kubernetes.io/projected/93fdf127-161c-424f-a489-1c9779a514fd-kube-api-access-mxrpb\") pod \"coredns-674b8bbfcf-dflvd\" (UID: \"93fdf127-161c-424f-a489-1c9779a514fd\") " pod="kube-system/coredns-674b8bbfcf-dflvd"
May 16 05:30:30.455537 kubelet[2649]: I0516 05:30:30.455437 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93fdf127-161c-424f-a489-1c9779a514fd-config-volume\") pod \"coredns-674b8bbfcf-dflvd\" (UID: \"93fdf127-161c-424f-a489-1c9779a514fd\") " pod="kube-system/coredns-674b8bbfcf-dflvd"
May 16 05:30:30.455591 kubelet[2649]: I0516 05:30:30.455533 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9kxl\" (UniqueName: \"kubernetes.io/projected/37d06866-ae72-4dae-81e4-de46fc814f81-kube-api-access-b9kxl\") pod \"coredns-674b8bbfcf-s92ml\" (UID: \"37d06866-ae72-4dae-81e4-de46fc814f81\") " pod="kube-system/coredns-674b8bbfcf-s92ml"
May 16 05:30:30.609706 kubelet[2649]: E0516 05:30:30.609608 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:30.611040 containerd[1567]: time="2025-05-16T05:30:30.611003083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dflvd,Uid:93fdf127-161c-424f-a489-1c9779a514fd,Namespace:kube-system,Attempt:0,}"
May 16 05:30:30.616960 kubelet[2649]: E0516 05:30:30.616910 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:30.619135 containerd[1567]: time="2025-05-16T05:30:30.618262800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s92ml,Uid:37d06866-ae72-4dae-81e4-de46fc814f81,Namespace:kube-system,Attempt:0,}"
May 16 05:30:31.053335 kubelet[2649]: E0516 05:30:31.053304 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:32.054744 kubelet[2649]: E0516 05:30:32.054712 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:32.292108 systemd-networkd[1488]: cilium_host: Link UP
May 16 05:30:32.292655 systemd-networkd[1488]: cilium_net: Link UP
May 16 05:30:32.292841 systemd-networkd[1488]: cilium_net: Gained carrier
May 16 05:30:32.292998 systemd-networkd[1488]: cilium_host: Gained carrier
May 16 05:30:32.391843 systemd-networkd[1488]: cilium_vxlan: Link UP
May 16 05:30:32.391854 systemd-networkd[1488]: cilium_vxlan: Gained carrier
May 16 05:30:32.593604 kernel: NET: Registered PF_ALG protocol family
May 16 05:30:32.600711 systemd-networkd[1488]: cilium_net: Gained IPv6LL
May 16 05:30:33.056959 kubelet[2649]: E0516 05:30:33.056929 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:33.128735 systemd-networkd[1488]: cilium_host: Gained IPv6LL
May 16 05:30:33.208615 systemd-networkd[1488]: lxc_health: Link UP
May 16 05:30:33.209245 systemd-networkd[1488]: lxc_health: Gained carrier
May 16 05:30:33.342269 kubelet[2649]: I0516 05:30:33.342131 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kzsch" podStartSLOduration=8.776551387 podStartE2EDuration="19.340614646s" podCreationTimestamp="2025-05-16 05:30:14 +0000 UTC" firstStartedPulling="2025-05-16 05:30:15.479316245 +0000 UTC m=+6.988817288" lastFinishedPulling="2025-05-16 05:30:26.043379504 +0000 UTC m=+17.552880547" observedRunningTime="2025-05-16 05:30:31.064895811 +0000 UTC m=+22.574396854" watchObservedRunningTime="2025-05-16 05:30:33.340614646 +0000 UTC m=+24.850115689"
May 16 05:30:33.660713 kernel: eth0: renamed from tmp091ce
May 16 05:30:33.660378 systemd-networkd[1488]: lxc9f5cf752c8ad: Link UP
May 16 05:30:33.664597 systemd-networkd[1488]: lxc9f5cf752c8ad: Gained carrier
May 16 05:30:33.688825 systemd-networkd[1488]: lxcd2d6e433d8df: Link UP
May 16 05:30:33.689608 kernel: eth0: renamed from tmp8f87f
May 16 05:30:33.690829 systemd-networkd[1488]: lxcd2d6e433d8df: Gained carrier
May 16 05:30:34.026378 systemd-networkd[1488]: cilium_vxlan: Gained IPv6LL
May 16 05:30:34.058863 kubelet[2649]: E0516 05:30:34.058828 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:35.175718 systemd-networkd[1488]: lxc_health: Gained IPv6LL
May 16 05:30:35.367749 systemd-networkd[1488]: lxc9f5cf752c8ad: Gained IPv6LL
May 16 05:30:35.431670 systemd-networkd[1488]: lxcd2d6e433d8df: Gained IPv6LL
May 16 05:30:37.118643 containerd[1567]: time="2025-05-16T05:30:37.118563466Z" level=info msg="connecting to shim 091ce0937761cc89afded7e48a1d1612d5fc3a076b701a22315f8bddb6982592" address="unix:///run/containerd/s/a0c0d0e0360ddc4f64cfc1557edc382f19ead292c5e304b367d536bda1e7435d" namespace=k8s.io protocol=ttrpc version=3
May 16 05:30:37.118962 containerd[1567]: time="2025-05-16T05:30:37.118602009Z" level=info msg="connecting to shim 8f87febe3bcd82015d00173e911f93851e9f6d2e0d64ab3c5362a54c7aa1fb23" address="unix:///run/containerd/s/927f995771834f855ca39bd109baf0ef1f4b5a6ea7b402899c96e5e4817f1024" namespace=k8s.io protocol=ttrpc version=3
May 16 05:30:37.147700 systemd[1]: Started cri-containerd-091ce0937761cc89afded7e48a1d1612d5fc3a076b701a22315f8bddb6982592.scope - libcontainer container 091ce0937761cc89afded7e48a1d1612d5fc3a076b701a22315f8bddb6982592.
May 16 05:30:37.151888 systemd[1]: Started cri-containerd-8f87febe3bcd82015d00173e911f93851e9f6d2e0d64ab3c5362a54c7aa1fb23.scope - libcontainer container 8f87febe3bcd82015d00173e911f93851e9f6d2e0d64ab3c5362a54c7aa1fb23.
May 16 05:30:37.163697 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 05:30:37.165680 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 05:30:37.195051 containerd[1567]: time="2025-05-16T05:30:37.195001162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dflvd,Uid:93fdf127-161c-424f-a489-1c9779a514fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"091ce0937761cc89afded7e48a1d1612d5fc3a076b701a22315f8bddb6982592\""
May 16 05:30:37.198296 containerd[1567]: time="2025-05-16T05:30:37.198254261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s92ml,Uid:37d06866-ae72-4dae-81e4-de46fc814f81,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f87febe3bcd82015d00173e911f93851e9f6d2e0d64ab3c5362a54c7aa1fb23\""
May 16 05:30:37.198884 kubelet[2649]: E0516 05:30:37.198845 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:37.199466 kubelet[2649]: E0516 05:30:37.199443 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:37.203958 containerd[1567]: time="2025-05-16T05:30:37.203928314Z" level=info msg="CreateContainer within sandbox \"8f87febe3bcd82015d00173e911f93851e9f6d2e0d64ab3c5362a54c7aa1fb23\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 16 05:30:37.205521 containerd[1567]: time="2025-05-16T05:30:37.205483786Z" level=info msg="CreateContainer within sandbox \"091ce0937761cc89afded7e48a1d1612d5fc3a076b701a22315f8bddb6982592\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 16 05:30:37.214155 containerd[1567]: time="2025-05-16T05:30:37.213871462Z" level=info msg="Container e6af65d270a572ebe2955a6e9875e524f5c291580396b3290a2d8dd191577450: CDI devices from CRI Config.CDIDevices: []"
May 16 05:30:37.224136 containerd[1567]: time="2025-05-16T05:30:37.224094747Z" level=info msg="CreateContainer within sandbox \"091ce0937761cc89afded7e48a1d1612d5fc3a076b701a22315f8bddb6982592\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e6af65d270a572ebe2955a6e9875e524f5c291580396b3290a2d8dd191577450\""
May 16 05:30:37.224553 containerd[1567]: time="2025-05-16T05:30:37.224522073Z" level=info msg="StartContainer for \"e6af65d270a572ebe2955a6e9875e524f5c291580396b3290a2d8dd191577450\""
May 16 05:30:37.225279 containerd[1567]: time="2025-05-16T05:30:37.225236890Z" level=info msg="connecting to shim e6af65d270a572ebe2955a6e9875e524f5c291580396b3290a2d8dd191577450" address="unix:///run/containerd/s/a0c0d0e0360ddc4f64cfc1557edc382f19ead292c5e304b367d536bda1e7435d" protocol=ttrpc version=3
May 16 05:30:37.227519 containerd[1567]: time="2025-05-16T05:30:37.227485087Z" level=info msg="Container 1c9c7baac963b9efd1157fe6e5e592a6ad4b65272aa97aadd1aa8e7b0c5e5c7f: CDI devices from CRI Config.CDIDevices: []"
May 16 05:30:37.236077 containerd[1567]: time="2025-05-16T05:30:37.234296042Z" level=info msg="CreateContainer within sandbox \"8f87febe3bcd82015d00173e911f93851e9f6d2e0d64ab3c5362a54c7aa1fb23\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c9c7baac963b9efd1157fe6e5e592a6ad4b65272aa97aadd1aa8e7b0c5e5c7f\""
May 16 05:30:37.236077 containerd[1567]: time="2025-05-16T05:30:37.235845301Z" level=info msg="StartContainer for \"1c9c7baac963b9efd1157fe6e5e592a6ad4b65272aa97aadd1aa8e7b0c5e5c7f\""
May 16 05:30:37.236548 containerd[1567]: time="2025-05-16T05:30:37.236517197Z" level=info msg="connecting to shim 1c9c7baac963b9efd1157fe6e5e592a6ad4b65272aa97aadd1aa8e7b0c5e5c7f" address="unix:///run/containerd/s/927f995771834f855ca39bd109baf0ef1f4b5a6ea7b402899c96e5e4817f1024" protocol=ttrpc version=3
May 16 05:30:37.254726 systemd[1]: Started cri-containerd-e6af65d270a572ebe2955a6e9875e524f5c291580396b3290a2d8dd191577450.scope - libcontainer container e6af65d270a572ebe2955a6e9875e524f5c291580396b3290a2d8dd191577450.
May 16 05:30:37.267694 systemd[1]: Started cri-containerd-1c9c7baac963b9efd1157fe6e5e592a6ad4b65272aa97aadd1aa8e7b0c5e5c7f.scope - libcontainer container 1c9c7baac963b9efd1157fe6e5e592a6ad4b65272aa97aadd1aa8e7b0c5e5c7f.
May 16 05:30:37.295575 containerd[1567]: time="2025-05-16T05:30:37.295518645Z" level=info msg="StartContainer for \"e6af65d270a572ebe2955a6e9875e524f5c291580396b3290a2d8dd191577450\" returns successfully"
May 16 05:30:37.299031 containerd[1567]: time="2025-05-16T05:30:37.298994314Z" level=info msg="StartContainer for \"1c9c7baac963b9efd1157fe6e5e592a6ad4b65272aa97aadd1aa8e7b0c5e5c7f\" returns successfully"
May 16 05:30:37.622945 systemd[1]: Started sshd@7-10.0.0.148:22-10.0.0.1:41078.service - OpenSSH per-connection server daemon (10.0.0.1:41078).
May 16 05:30:37.675633 sshd[3991]: Accepted publickey for core from 10.0.0.1 port 41078 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg
May 16 05:30:37.677303 sshd-session[3991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:30:37.681642 systemd-logind[1546]: New session 8 of user core.
May 16 05:30:37.688699 systemd[1]: Started session-8.scope - Session 8 of User core.
May 16 05:30:37.838329 sshd[3993]: Connection closed by 10.0.0.1 port 41078
May 16 05:30:37.838629 sshd-session[3991]: pam_unix(sshd:session): session closed for user core
May 16 05:30:37.841965 systemd[1]: sshd@7-10.0.0.148:22-10.0.0.1:41078.service: Deactivated successfully.
May 16 05:30:37.843975 systemd[1]: session-8.scope: Deactivated successfully.
May 16 05:30:37.846467 systemd-logind[1546]: Session 8 logged out. Waiting for processes to exit.
May 16 05:30:37.847345 systemd-logind[1546]: Removed session 8.
May 16 05:30:38.068519 kubelet[2649]: E0516 05:30:38.068470 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:38.070963 kubelet[2649]: E0516 05:30:38.070931 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:38.080226 kubelet[2649]: I0516 05:30:38.079957 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dflvd" podStartSLOduration=24.079827748 podStartE2EDuration="24.079827748s" podCreationTimestamp="2025-05-16 05:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:30:38.079674017 +0000 UTC m=+29.589175061" watchObservedRunningTime="2025-05-16 05:30:38.079827748 +0000 UTC m=+29.589328791"
May 16 05:30:38.087927 kubelet[2649]: I0516 05:30:38.087865 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-s92ml" podStartSLOduration=24.087845974 podStartE2EDuration="24.087845974s" podCreationTimestamp="2025-05-16 05:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:30:38.087139132 +0000 UTC m=+29.596640175" watchObservedRunningTime="2025-05-16 05:30:38.087845974 +0000 UTC m=+29.597347017"
May 16 05:30:38.109278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount936746333.mount: Deactivated successfully.
May 16 05:30:39.072402 kubelet[2649]: E0516 05:30:39.072370 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:39.072823 kubelet[2649]: E0516 05:30:39.072550 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:40.083771 kubelet[2649]: E0516 05:30:40.083745 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:40.084150 kubelet[2649]: E0516 05:30:40.083745 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:41.317540 kubelet[2649]: I0516 05:30:41.317475 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 16 05:30:41.318034 kubelet[2649]: E0516 05:30:41.318012 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:42.088587 kubelet[2649]: E0516 05:30:42.088211 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:42.851234 systemd[1]: Started sshd@8-10.0.0.148:22-10.0.0.1:41084.service - OpenSSH per-connection server daemon (10.0.0.1:41084).
May 16 05:30:42.895416 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 41084 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg
May 16 05:30:42.897042 sshd-session[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:30:42.901118 systemd-logind[1546]: New session 9 of user core.
May 16 05:30:42.910706 systemd[1]: Started session-9.scope - Session 9 of User core.
May 16 05:30:43.027996 sshd[4020]: Connection closed by 10.0.0.1 port 41084
May 16 05:30:43.028323 sshd-session[4018]: pam_unix(sshd:session): session closed for user core
May 16 05:30:43.032611 systemd[1]: sshd@8-10.0.0.148:22-10.0.0.1:41084.service: Deactivated successfully.
May 16 05:30:43.034606 systemd[1]: session-9.scope: Deactivated successfully.
May 16 05:30:43.035416 systemd-logind[1546]: Session 9 logged out. Waiting for processes to exit.
May 16 05:30:43.036798 systemd-logind[1546]: Removed session 9.
May 16 05:30:48.044325 systemd[1]: Started sshd@9-10.0.0.148:22-10.0.0.1:57948.service - OpenSSH per-connection server daemon (10.0.0.1:57948).
May 16 05:30:48.079892 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 57948 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg
May 16 05:30:48.081220 sshd-session[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:30:48.085111 systemd-logind[1546]: New session 10 of user core.
May 16 05:30:48.095675 systemd[1]: Started session-10.scope - Session 10 of User core.
May 16 05:30:48.200487 sshd[4039]: Connection closed by 10.0.0.1 port 57948 May 16 05:30:48.200777 sshd-session[4037]: pam_unix(sshd:session): session closed for user core May 16 05:30:48.204221 systemd[1]: sshd@9-10.0.0.148:22-10.0.0.1:57948.service: Deactivated successfully. May 16 05:30:48.206095 systemd[1]: session-10.scope: Deactivated successfully. May 16 05:30:48.206769 systemd-logind[1546]: Session 10 logged out. Waiting for processes to exit. May 16 05:30:48.207931 systemd-logind[1546]: Removed session 10. May 16 05:30:53.215321 systemd[1]: Started sshd@10-10.0.0.148:22-10.0.0.1:57952.service - OpenSSH per-connection server daemon (10.0.0.1:57952). May 16 05:30:53.263728 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 57952 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:30:53.265196 sshd-session[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:30:53.268944 systemd-logind[1546]: New session 11 of user core. May 16 05:30:53.275698 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 05:30:53.380118 sshd[4056]: Connection closed by 10.0.0.1 port 57952 May 16 05:30:53.380403 sshd-session[4054]: pam_unix(sshd:session): session closed for user core May 16 05:30:53.399158 systemd[1]: sshd@10-10.0.0.148:22-10.0.0.1:57952.service: Deactivated successfully. May 16 05:30:53.401040 systemd[1]: session-11.scope: Deactivated successfully. May 16 05:30:53.401887 systemd-logind[1546]: Session 11 logged out. Waiting for processes to exit. May 16 05:30:53.404833 systemd[1]: Started sshd@11-10.0.0.148:22-10.0.0.1:57958.service - OpenSSH per-connection server daemon (10.0.0.1:57958). May 16 05:30:53.405542 systemd-logind[1546]: Removed session 11. 
May 16 05:30:53.444744 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 57958 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:30:53.446075 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:30:53.450417 systemd-logind[1546]: New session 12 of user core. May 16 05:30:53.459690 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 05:30:53.592480 sshd[4073]: Connection closed by 10.0.0.1 port 57958 May 16 05:30:53.592812 sshd-session[4071]: pam_unix(sshd:session): session closed for user core May 16 05:30:53.606662 systemd[1]: sshd@11-10.0.0.148:22-10.0.0.1:57958.service: Deactivated successfully. May 16 05:30:53.609443 systemd[1]: session-12.scope: Deactivated successfully. May 16 05:30:53.610557 systemd-logind[1546]: Session 12 logged out. Waiting for processes to exit. May 16 05:30:53.614974 systemd[1]: Started sshd@12-10.0.0.148:22-10.0.0.1:40376.service - OpenSSH per-connection server daemon (10.0.0.1:40376). May 16 05:30:53.616733 systemd-logind[1546]: Removed session 12. May 16 05:30:53.653413 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 40376 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:30:53.655058 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:30:53.659265 systemd-logind[1546]: New session 13 of user core. May 16 05:30:53.671711 systemd[1]: Started session-13.scope - Session 13 of User core. May 16 05:30:53.777411 sshd[4086]: Connection closed by 10.0.0.1 port 40376 May 16 05:30:53.777719 sshd-session[4084]: pam_unix(sshd:session): session closed for user core May 16 05:30:53.782330 systemd[1]: sshd@12-10.0.0.148:22-10.0.0.1:40376.service: Deactivated successfully. May 16 05:30:53.784359 systemd[1]: session-13.scope: Deactivated successfully. May 16 05:30:53.785093 systemd-logind[1546]: Session 13 logged out. Waiting for processes to exit. 
May 16 05:30:53.786292 systemd-logind[1546]: Removed session 13. May 16 05:30:58.793341 systemd[1]: Started sshd@13-10.0.0.148:22-10.0.0.1:40384.service - OpenSSH per-connection server daemon (10.0.0.1:40384). May 16 05:30:58.840885 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 40384 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:30:58.842183 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:30:58.846479 systemd-logind[1546]: New session 14 of user core. May 16 05:30:58.855706 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 05:30:58.961014 sshd[4101]: Connection closed by 10.0.0.1 port 40384 May 16 05:30:58.961277 sshd-session[4099]: pam_unix(sshd:session): session closed for user core May 16 05:30:58.965580 systemd[1]: sshd@13-10.0.0.148:22-10.0.0.1:40384.service: Deactivated successfully. May 16 05:30:58.967453 systemd[1]: session-14.scope: Deactivated successfully. May 16 05:30:58.968177 systemd-logind[1546]: Session 14 logged out. Waiting for processes to exit. May 16 05:30:58.969285 systemd-logind[1546]: Removed session 14. May 16 05:31:03.977346 systemd[1]: Started sshd@14-10.0.0.148:22-10.0.0.1:36236.service - OpenSSH per-connection server daemon (10.0.0.1:36236). May 16 05:31:04.023216 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 36236 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:31:04.024750 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:31:04.028617 systemd-logind[1546]: New session 15 of user core. May 16 05:31:04.038679 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 16 05:31:04.144760 sshd[4116]: Connection closed by 10.0.0.1 port 36236 May 16 05:31:04.145184 sshd-session[4114]: pam_unix(sshd:session): session closed for user core May 16 05:31:04.153364 systemd[1]: sshd@14-10.0.0.148:22-10.0.0.1:36236.service: Deactivated successfully. May 16 05:31:04.155212 systemd[1]: session-15.scope: Deactivated successfully. May 16 05:31:04.156019 systemd-logind[1546]: Session 15 logged out. Waiting for processes to exit. May 16 05:31:04.159479 systemd[1]: Started sshd@15-10.0.0.148:22-10.0.0.1:36240.service - OpenSSH per-connection server daemon (10.0.0.1:36240). May 16 05:31:04.160218 systemd-logind[1546]: Removed session 15. May 16 05:31:04.207005 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 36240 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:31:04.208302 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:31:04.212608 systemd-logind[1546]: New session 16 of user core. May 16 05:31:04.222716 systemd[1]: Started session-16.scope - Session 16 of User core. May 16 05:31:04.429608 sshd[4132]: Connection closed by 10.0.0.1 port 36240 May 16 05:31:04.429871 sshd-session[4130]: pam_unix(sshd:session): session closed for user core May 16 05:31:04.438239 systemd[1]: sshd@15-10.0.0.148:22-10.0.0.1:36240.service: Deactivated successfully. May 16 05:31:04.440053 systemd[1]: session-16.scope: Deactivated successfully. May 16 05:31:04.440850 systemd-logind[1546]: Session 16 logged out. Waiting for processes to exit. May 16 05:31:04.443942 systemd[1]: Started sshd@16-10.0.0.148:22-10.0.0.1:36248.service - OpenSSH per-connection server daemon (10.0.0.1:36248). May 16 05:31:04.444559 systemd-logind[1546]: Removed session 16. 
May 16 05:31:04.497095 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 36248 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:31:04.498409 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:31:04.502285 systemd-logind[1546]: New session 17 of user core. May 16 05:31:04.511688 systemd[1]: Started session-17.scope - Session 17 of User core. May 16 05:31:05.234738 sshd[4146]: Connection closed by 10.0.0.1 port 36248 May 16 05:31:05.235199 sshd-session[4144]: pam_unix(sshd:session): session closed for user core May 16 05:31:05.244928 systemd[1]: sshd@16-10.0.0.148:22-10.0.0.1:36248.service: Deactivated successfully. May 16 05:31:05.246916 systemd[1]: session-17.scope: Deactivated successfully. May 16 05:31:05.249917 systemd-logind[1546]: Session 17 logged out. Waiting for processes to exit. May 16 05:31:05.252479 systemd[1]: Started sshd@17-10.0.0.148:22-10.0.0.1:36260.service - OpenSSH per-connection server daemon (10.0.0.1:36260). May 16 05:31:05.255175 systemd-logind[1546]: Removed session 17. May 16 05:31:05.296365 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 36260 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:31:05.297957 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:31:05.302498 systemd-logind[1546]: New session 18 of user core. May 16 05:31:05.310710 systemd[1]: Started session-18.scope - Session 18 of User core. May 16 05:31:05.520461 sshd[4166]: Connection closed by 10.0.0.1 port 36260 May 16 05:31:05.521309 sshd-session[4164]: pam_unix(sshd:session): session closed for user core May 16 05:31:05.531855 systemd[1]: sshd@17-10.0.0.148:22-10.0.0.1:36260.service: Deactivated successfully. May 16 05:31:05.533876 systemd[1]: session-18.scope: Deactivated successfully. May 16 05:31:05.534731 systemd-logind[1546]: Session 18 logged out. Waiting for processes to exit. 
May 16 05:31:05.538358 systemd[1]: Started sshd@18-10.0.0.148:22-10.0.0.1:36266.service - OpenSSH per-connection server daemon (10.0.0.1:36266). May 16 05:31:05.539011 systemd-logind[1546]: Removed session 18. May 16 05:31:05.590240 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 36266 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:31:05.591951 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:31:05.596227 systemd-logind[1546]: New session 19 of user core. May 16 05:31:05.610705 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 05:31:05.714410 sshd[4179]: Connection closed by 10.0.0.1 port 36266 May 16 05:31:05.714707 sshd-session[4177]: pam_unix(sshd:session): session closed for user core May 16 05:31:05.719144 systemd[1]: sshd@18-10.0.0.148:22-10.0.0.1:36266.service: Deactivated successfully. May 16 05:31:05.721260 systemd[1]: session-19.scope: Deactivated successfully. May 16 05:31:05.722136 systemd-logind[1546]: Session 19 logged out. Waiting for processes to exit. May 16 05:31:05.723316 systemd-logind[1546]: Removed session 19. May 16 05:31:10.727406 systemd[1]: Started sshd@19-10.0.0.148:22-10.0.0.1:36280.service - OpenSSH per-connection server daemon (10.0.0.1:36280). May 16 05:31:10.778100 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 36280 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:31:10.779731 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:31:10.783951 systemd-logind[1546]: New session 20 of user core. May 16 05:31:10.797700 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 16 05:31:10.902436 sshd[4199]: Connection closed by 10.0.0.1 port 36280 May 16 05:31:10.902761 sshd-session[4197]: pam_unix(sshd:session): session closed for user core May 16 05:31:10.907069 systemd[1]: sshd@19-10.0.0.148:22-10.0.0.1:36280.service: Deactivated successfully. May 16 05:31:10.908815 systemd[1]: session-20.scope: Deactivated successfully. May 16 05:31:10.909663 systemd-logind[1546]: Session 20 logged out. Waiting for processes to exit. May 16 05:31:10.910777 systemd-logind[1546]: Removed session 20. May 16 05:31:15.919347 systemd[1]: Started sshd@20-10.0.0.148:22-10.0.0.1:56166.service - OpenSSH per-connection server daemon (10.0.0.1:56166). May 16 05:31:15.954042 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 56166 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:31:15.955301 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:31:15.959458 systemd-logind[1546]: New session 21 of user core. May 16 05:31:15.970704 systemd[1]: Started session-21.scope - Session 21 of User core. May 16 05:31:16.073618 sshd[4218]: Connection closed by 10.0.0.1 port 56166 May 16 05:31:16.074003 sshd-session[4216]: pam_unix(sshd:session): session closed for user core May 16 05:31:16.078766 systemd[1]: sshd@20-10.0.0.148:22-10.0.0.1:56166.service: Deactivated successfully. May 16 05:31:16.080801 systemd[1]: session-21.scope: Deactivated successfully. May 16 05:31:16.081687 systemd-logind[1546]: Session 21 logged out. Waiting for processes to exit. May 16 05:31:16.083039 systemd-logind[1546]: Removed session 21. May 16 05:31:21.090376 systemd[1]: Started sshd@21-10.0.0.148:22-10.0.0.1:56170.service - OpenSSH per-connection server daemon (10.0.0.1:56170). 
May 16 05:31:21.146993 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 56170 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:31:21.148255 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:31:21.152681 systemd-logind[1546]: New session 22 of user core. May 16 05:31:21.159718 systemd[1]: Started session-22.scope - Session 22 of User core. May 16 05:31:21.265545 sshd[4233]: Connection closed by 10.0.0.1 port 56170 May 16 05:31:21.265874 sshd-session[4231]: pam_unix(sshd:session): session closed for user core May 16 05:31:21.275329 systemd[1]: sshd@21-10.0.0.148:22-10.0.0.1:56170.service: Deactivated successfully. May 16 05:31:21.277358 systemd[1]: session-22.scope: Deactivated successfully. May 16 05:31:21.278086 systemd-logind[1546]: Session 22 logged out. Waiting for processes to exit. May 16 05:31:21.281080 systemd[1]: Started sshd@22-10.0.0.148:22-10.0.0.1:56176.service - OpenSSH per-connection server daemon (10.0.0.1:56176). May 16 05:31:21.281744 systemd-logind[1546]: Removed session 22. May 16 05:31:21.331966 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 56176 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:31:21.333411 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:31:21.337744 systemd-logind[1546]: New session 23 of user core. May 16 05:31:21.349681 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 16 05:31:22.798975 containerd[1567]: time="2025-05-16T05:31:22.798367361Z" level=info msg="StopContainer for \"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\" with timeout 30 (s)" May 16 05:31:22.830044 containerd[1567]: time="2025-05-16T05:31:22.829961946Z" level=info msg="Stop container \"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\" with signal terminated" May 16 05:31:22.840398 containerd[1567]: time="2025-05-16T05:31:22.840355147Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\" id:\"8372201d75184ae06fdc97f35eabb39aa52a644510fd8eb5faaff57817f2f667\" pid:4268 exited_at:{seconds:1747373482 nanos:839695469}" May 16 05:31:22.841400 containerd[1567]: time="2025-05-16T05:31:22.841365531Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 05:31:22.841970 systemd[1]: cri-containerd-e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f.scope: Deactivated successfully. 
May 16 05:31:22.843393 containerd[1567]: time="2025-05-16T05:31:22.843362330Z" level=info msg="StopContainer for \"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\" with timeout 2 (s)" May 16 05:31:22.843725 containerd[1567]: time="2025-05-16T05:31:22.843698545Z" level=info msg="Stop container \"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\" with signal terminated" May 16 05:31:22.844106 containerd[1567]: time="2025-05-16T05:31:22.843889973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\" id:\"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\" pid:3250 exited_at:{seconds:1747373482 nanos:843604755}" May 16 05:31:22.844242 containerd[1567]: time="2025-05-16T05:31:22.844213055Z" level=info msg="received exit event container_id:\"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\" id:\"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\" pid:3250 exited_at:{seconds:1747373482 nanos:843604755}" May 16 05:31:22.852763 systemd-networkd[1488]: lxc_health: Link DOWN May 16 05:31:22.852774 systemd-networkd[1488]: lxc_health: Lost carrier May 16 05:31:22.869442 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f-rootfs.mount: Deactivated successfully. May 16 05:31:22.871848 systemd[1]: cri-containerd-3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577.scope: Deactivated successfully. May 16 05:31:22.872199 systemd[1]: cri-containerd-3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577.scope: Consumed 6.190s CPU time, 126.1M memory peak, 232K read from disk, 13.3M written to disk. 
May 16 05:31:22.872722 containerd[1567]: time="2025-05-16T05:31:22.872676209Z" level=info msg="received exit event container_id:\"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\" id:\"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\" pid:3323 exited_at:{seconds:1747373482 nanos:872403635}" May 16 05:31:22.872722 containerd[1567]: time="2025-05-16T05:31:22.872704023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\" id:\"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\" pid:3323 exited_at:{seconds:1747373482 nanos:872403635}" May 16 05:31:22.882165 containerd[1567]: time="2025-05-16T05:31:22.882126529Z" level=info msg="StopContainer for \"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\" returns successfully" May 16 05:31:22.882871 containerd[1567]: time="2025-05-16T05:31:22.882835552Z" level=info msg="StopPodSandbox for \"8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244\"" May 16 05:31:22.882976 containerd[1567]: time="2025-05-16T05:31:22.882899094Z" level=info msg="Container to stop \"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 05:31:22.889951 systemd[1]: cri-containerd-8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244.scope: Deactivated successfully. 
May 16 05:31:22.896247 containerd[1567]: time="2025-05-16T05:31:22.896209225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244\" id:\"8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244\" pid:2844 exit_status:137 exited_at:{seconds:1747373482 nanos:895917273}" May 16 05:31:22.896686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577-rootfs.mount: Deactivated successfully. May 16 05:31:22.922638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244-rootfs.mount: Deactivated successfully. May 16 05:31:23.003730 containerd[1567]: time="2025-05-16T05:31:23.003621766Z" level=info msg="shim disconnected" id=8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244 namespace=k8s.io May 16 05:31:23.003730 containerd[1567]: time="2025-05-16T05:31:23.003658096Z" level=warning msg="cleaning up after shim disconnected" id=8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244 namespace=k8s.io May 16 05:31:23.030161 containerd[1567]: time="2025-05-16T05:31:23.003667725Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 05:31:23.030289 containerd[1567]: time="2025-05-16T05:31:23.007194551Z" level=info msg="StopContainer for \"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\" returns successfully" May 16 05:31:23.031013 containerd[1567]: time="2025-05-16T05:31:23.030704391Z" level=info msg="StopPodSandbox for \"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\"" May 16 05:31:23.031013 containerd[1567]: time="2025-05-16T05:31:23.030780467Z" level=info msg="Container to stop \"1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 05:31:23.031013 containerd[1567]: 
time="2025-05-16T05:31:23.030792030Z" level=info msg="Container to stop \"85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 05:31:23.031013 containerd[1567]: time="2025-05-16T05:31:23.030800656Z" level=info msg="Container to stop \"48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 05:31:23.031013 containerd[1567]: time="2025-05-16T05:31:23.030808291Z" level=info msg="Container to stop \"28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 05:31:23.031013 containerd[1567]: time="2025-05-16T05:31:23.030816006Z" level=info msg="Container to stop \"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 05:31:23.037668 systemd[1]: cri-containerd-ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1.scope: Deactivated successfully. May 16 05:31:23.052253 containerd[1567]: time="2025-05-16T05:31:23.052113947Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\" id:\"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\" pid:2849 exit_status:137 exited_at:{seconds:1747373483 nanos:38024737}" May 16 05:31:23.052253 containerd[1567]: time="2025-05-16T05:31:23.052225541Z" level=info msg="received exit event sandbox_id:\"8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244\" exit_status:137 exited_at:{seconds:1747373482 nanos:895917273}" May 16 05:31:23.054255 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244-shm.mount: Deactivated successfully. 
May 16 05:31:23.059229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1-rootfs.mount: Deactivated successfully. May 16 05:31:23.061867 containerd[1567]: time="2025-05-16T05:31:23.061797262Z" level=info msg="received exit event sandbox_id:\"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\" exit_status:137 exited_at:{seconds:1747373483 nanos:38024737}" May 16 05:31:23.062171 containerd[1567]: time="2025-05-16T05:31:23.062117978Z" level=info msg="TearDown network for sandbox \"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\" successfully" May 16 05:31:23.062171 containerd[1567]: time="2025-05-16T05:31:23.062140761Z" level=info msg="StopPodSandbox for \"ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1\" returns successfully" May 16 05:31:23.062587 containerd[1567]: time="2025-05-16T05:31:23.062475956Z" level=info msg="TearDown network for sandbox \"8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244\" successfully" May 16 05:31:23.062587 containerd[1567]: time="2025-05-16T05:31:23.062511844Z" level=info msg="StopPodSandbox for \"8ad2eae41baf3bd9f97b3aa7a44f45b286c3756c00136c800e231182627aa244\" returns successfully" May 16 05:31:23.063845 containerd[1567]: time="2025-05-16T05:31:23.063810320Z" level=info msg="shim disconnected" id=ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1 namespace=k8s.io May 16 05:31:23.063845 containerd[1567]: time="2025-05-16T05:31:23.063833224Z" level=warning msg="cleaning up after shim disconnected" id=ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1 namespace=k8s.io May 16 05:31:23.063845 containerd[1567]: time="2025-05-16T05:31:23.063842141Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 05:31:23.119972 kubelet[2649]: I0516 05:31:23.119921 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snrh8\" 
(UniqueName: \"kubernetes.io/projected/f5ee9e88-2db9-4e60-8543-7eba4291819e-kube-api-access-snrh8\") pod \"f5ee9e88-2db9-4e60-8543-7eba4291819e\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " May 16 05:31:23.119972 kubelet[2649]: I0516 05:31:23.119959 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5ee9e88-2db9-4e60-8543-7eba4291819e-cilium-config-path\") pod \"f5ee9e88-2db9-4e60-8543-7eba4291819e\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " May 16 05:31:23.119972 kubelet[2649]: I0516 05:31:23.119978 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-xtables-lock\") pod \"f5ee9e88-2db9-4e60-8543-7eba4291819e\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " May 16 05:31:23.120414 kubelet[2649]: I0516 05:31:23.119994 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-cni-path\") pod \"f5ee9e88-2db9-4e60-8543-7eba4291819e\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " May 16 05:31:23.120414 kubelet[2649]: I0516 05:31:23.120009 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-etc-cni-netd\") pod \"f5ee9e88-2db9-4e60-8543-7eba4291819e\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " May 16 05:31:23.120414 kubelet[2649]: I0516 05:31:23.120027 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-host-proc-sys-net\") pod \"f5ee9e88-2db9-4e60-8543-7eba4291819e\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " May 16 05:31:23.120414 kubelet[2649]: I0516 
05:31:23.120047 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f5ee9e88-2db9-4e60-8543-7eba4291819e" (UID: "f5ee9e88-2db9-4e60-8543-7eba4291819e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:31:23.120414 kubelet[2649]: I0516 05:31:23.120069 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f5ee9e88-2db9-4e60-8543-7eba4291819e" (UID: "f5ee9e88-2db9-4e60-8543-7eba4291819e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:31:23.120535 kubelet[2649]: I0516 05:31:23.120091 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-cni-path" (OuterVolumeSpecName: "cni-path") pod "f5ee9e88-2db9-4e60-8543-7eba4291819e" (UID: "f5ee9e88-2db9-4e60-8543-7eba4291819e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:31:23.120535 kubelet[2649]: I0516 05:31:23.120111 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f5ee9e88-2db9-4e60-8543-7eba4291819e" (UID: "f5ee9e88-2db9-4e60-8543-7eba4291819e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:31:23.120535 kubelet[2649]: I0516 05:31:23.120126 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-hostproc\") pod \"f5ee9e88-2db9-4e60-8543-7eba4291819e\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " May 16 05:31:23.120535 kubelet[2649]: I0516 05:31:23.120144 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-bpf-maps\") pod \"f5ee9e88-2db9-4e60-8543-7eba4291819e\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " May 16 05:31:23.120535 kubelet[2649]: I0516 05:31:23.120159 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-lib-modules\") pod \"f5ee9e88-2db9-4e60-8543-7eba4291819e\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " May 16 05:31:23.120535 kubelet[2649]: I0516 05:31:23.120176 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5ee9e88-2db9-4e60-8543-7eba4291819e-clustermesh-secrets\") pod \"f5ee9e88-2db9-4e60-8543-7eba4291819e\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " May 16 05:31:23.120704 kubelet[2649]: I0516 05:31:23.120192 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-host-proc-sys-kernel\") pod \"f5ee9e88-2db9-4e60-8543-7eba4291819e\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " May 16 05:31:23.120704 kubelet[2649]: I0516 05:31:23.120206 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-cilium-run\") pod \"f5ee9e88-2db9-4e60-8543-7eba4291819e\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " May 16 05:31:23.120704 kubelet[2649]: I0516 05:31:23.120220 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5ee9e88-2db9-4e60-8543-7eba4291819e-hubble-tls\") pod \"f5ee9e88-2db9-4e60-8543-7eba4291819e\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " May 16 05:31:23.120704 kubelet[2649]: I0516 05:31:23.120238 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-cilium-cgroup\") pod \"f5ee9e88-2db9-4e60-8543-7eba4291819e\" (UID: \"f5ee9e88-2db9-4e60-8543-7eba4291819e\") " May 16 05:31:23.120704 kubelet[2649]: I0516 05:31:23.120254 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft95s\" (UniqueName: \"kubernetes.io/projected/c9551319-d139-4d8d-90aa-ae368527bc1b-kube-api-access-ft95s\") pod \"c9551319-d139-4d8d-90aa-ae368527bc1b\" (UID: \"c9551319-d139-4d8d-90aa-ae368527bc1b\") " May 16 05:31:23.120704 kubelet[2649]: I0516 05:31:23.120269 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c9551319-d139-4d8d-90aa-ae368527bc1b-cilium-config-path\") pod \"c9551319-d139-4d8d-90aa-ae368527bc1b\" (UID: \"c9551319-d139-4d8d-90aa-ae368527bc1b\") " May 16 05:31:23.120874 kubelet[2649]: I0516 05:31:23.120296 2649 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 05:31:23.120874 kubelet[2649]: I0516 05:31:23.120305 2649 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-cni-path\") on node \"localhost\" DevicePath \"\"" May 16 05:31:23.120874 kubelet[2649]: I0516 05:31:23.120314 2649 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 05:31:23.120874 kubelet[2649]: I0516 05:31:23.120321 2649 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 05:31:23.120874 kubelet[2649]: I0516 05:31:23.120662 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f5ee9e88-2db9-4e60-8543-7eba4291819e" (UID: "f5ee9e88-2db9-4e60-8543-7eba4291819e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:31:23.120983 kubelet[2649]: I0516 05:31:23.120909 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f5ee9e88-2db9-4e60-8543-7eba4291819e" (UID: "f5ee9e88-2db9-4e60-8543-7eba4291819e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:31:23.121414 kubelet[2649]: I0516 05:31:23.121377 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-hostproc" (OuterVolumeSpecName: "hostproc") pod "f5ee9e88-2db9-4e60-8543-7eba4291819e" (UID: "f5ee9e88-2db9-4e60-8543-7eba4291819e"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:31:23.121543 kubelet[2649]: I0516 05:31:23.121514 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f5ee9e88-2db9-4e60-8543-7eba4291819e" (UID: "f5ee9e88-2db9-4e60-8543-7eba4291819e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:31:23.121588 kubelet[2649]: I0516 05:31:23.121542 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f5ee9e88-2db9-4e60-8543-7eba4291819e" (UID: "f5ee9e88-2db9-4e60-8543-7eba4291819e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:31:23.122683 kubelet[2649]: I0516 05:31:23.122637 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f5ee9e88-2db9-4e60-8543-7eba4291819e" (UID: "f5ee9e88-2db9-4e60-8543-7eba4291819e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:31:23.123121 kubelet[2649]: I0516 05:31:23.123101 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5ee9e88-2db9-4e60-8543-7eba4291819e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f5ee9e88-2db9-4e60-8543-7eba4291819e" (UID: "f5ee9e88-2db9-4e60-8543-7eba4291819e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 05:31:23.124010 kubelet[2649]: I0516 05:31:23.123979 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5ee9e88-2db9-4e60-8543-7eba4291819e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f5ee9e88-2db9-4e60-8543-7eba4291819e" (UID: "f5ee9e88-2db9-4e60-8543-7eba4291819e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 05:31:23.124698 kubelet[2649]: I0516 05:31:23.124650 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5ee9e88-2db9-4e60-8543-7eba4291819e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f5ee9e88-2db9-4e60-8543-7eba4291819e" (UID: "f5ee9e88-2db9-4e60-8543-7eba4291819e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 05:31:23.125842 kubelet[2649]: I0516 05:31:23.125812 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9551319-d139-4d8d-90aa-ae368527bc1b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c9551319-d139-4d8d-90aa-ae368527bc1b" (UID: "c9551319-d139-4d8d-90aa-ae368527bc1b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 05:31:23.126453 kubelet[2649]: I0516 05:31:23.126417 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9551319-d139-4d8d-90aa-ae368527bc1b-kube-api-access-ft95s" (OuterVolumeSpecName: "kube-api-access-ft95s") pod "c9551319-d139-4d8d-90aa-ae368527bc1b" (UID: "c9551319-d139-4d8d-90aa-ae368527bc1b"). InnerVolumeSpecName "kube-api-access-ft95s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 05:31:23.126453 kubelet[2649]: I0516 05:31:23.126416 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5ee9e88-2db9-4e60-8543-7eba4291819e-kube-api-access-snrh8" (OuterVolumeSpecName: "kube-api-access-snrh8") pod "f5ee9e88-2db9-4e60-8543-7eba4291819e" (UID: "f5ee9e88-2db9-4e60-8543-7eba4291819e"). InnerVolumeSpecName "kube-api-access-snrh8". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 05:31:23.172933 kubelet[2649]: I0516 05:31:23.172899 2649 scope.go:117] "RemoveContainer" containerID="3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577" May 16 05:31:23.175288 containerd[1567]: time="2025-05-16T05:31:23.175258281Z" level=info msg="RemoveContainer for \"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\"" May 16 05:31:23.180075 systemd[1]: Removed slice kubepods-burstable-podf5ee9e88_2db9_4e60_8543_7eba4291819e.slice - libcontainer container kubepods-burstable-podf5ee9e88_2db9_4e60_8543_7eba4291819e.slice. May 16 05:31:23.180278 systemd[1]: kubepods-burstable-podf5ee9e88_2db9_4e60_8543_7eba4291819e.slice: Consumed 6.293s CPU time, 126.5M memory peak, 248K read from disk, 13.3M written to disk. May 16 05:31:23.181807 systemd[1]: Removed slice kubepods-besteffort-podc9551319_d139_4d8d_90aa_ae368527bc1b.slice - libcontainer container kubepods-besteffort-podc9551319_d139_4d8d_90aa_ae368527bc1b.slice. 
May 16 05:31:23.183431 containerd[1567]: time="2025-05-16T05:31:23.183402989Z" level=info msg="RemoveContainer for \"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\" returns successfully" May 16 05:31:23.183767 kubelet[2649]: I0516 05:31:23.183683 2649 scope.go:117] "RemoveContainer" containerID="28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9" May 16 05:31:23.185113 containerd[1567]: time="2025-05-16T05:31:23.185042119Z" level=info msg="RemoveContainer for \"28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9\"" May 16 05:31:23.190150 containerd[1567]: time="2025-05-16T05:31:23.190043809Z" level=info msg="RemoveContainer for \"28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9\" returns successfully" May 16 05:31:23.190435 kubelet[2649]: I0516 05:31:23.190409 2649 scope.go:117] "RemoveContainer" containerID="85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008" May 16 05:31:23.200282 containerd[1567]: time="2025-05-16T05:31:23.200239328Z" level=info msg="RemoveContainer for \"85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008\"" May 16 05:31:23.204588 containerd[1567]: time="2025-05-16T05:31:23.204537878Z" level=info msg="RemoveContainer for \"85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008\" returns successfully" May 16 05:31:23.204803 kubelet[2649]: I0516 05:31:23.204730 2649 scope.go:117] "RemoveContainer" containerID="1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467" May 16 05:31:23.206120 containerd[1567]: time="2025-05-16T05:31:23.206022449Z" level=info msg="RemoveContainer for \"1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467\"" May 16 05:31:23.209466 containerd[1567]: time="2025-05-16T05:31:23.209437521Z" level=info msg="RemoveContainer for \"1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467\" returns successfully" May 16 05:31:23.209611 kubelet[2649]: I0516 05:31:23.209585 2649 scope.go:117] 
"RemoveContainer" containerID="48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8" May 16 05:31:23.211017 containerd[1567]: time="2025-05-16T05:31:23.210987068Z" level=info msg="RemoveContainer for \"48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8\"" May 16 05:31:23.214246 containerd[1567]: time="2025-05-16T05:31:23.214224167Z" level=info msg="RemoveContainer for \"48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8\" returns successfully" May 16 05:31:23.214412 kubelet[2649]: I0516 05:31:23.214369 2649 scope.go:117] "RemoveContainer" containerID="3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577" May 16 05:31:23.214626 containerd[1567]: time="2025-05-16T05:31:23.214561125Z" level=error msg="ContainerStatus for \"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\": not found" May 16 05:31:23.218190 kubelet[2649]: E0516 05:31:23.218150 2649 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\": not found" containerID="3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577" May 16 05:31:23.218237 kubelet[2649]: I0516 05:31:23.218179 2649 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577"} err="failed to get container status \"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b2908fcc7c7bfb24e50e26d108682bcf31f126ffad5c6c6601979e800912577\": not found" May 16 05:31:23.218237 kubelet[2649]: I0516 05:31:23.218209 2649 scope.go:117] "RemoveContainer" 
containerID="28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9" May 16 05:31:23.218385 containerd[1567]: time="2025-05-16T05:31:23.218352249Z" level=error msg="ContainerStatus for \"28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9\": not found" May 16 05:31:23.218452 kubelet[2649]: E0516 05:31:23.218438 2649 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9\": not found" containerID="28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9" May 16 05:31:23.218499 kubelet[2649]: I0516 05:31:23.218453 2649 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9"} err="failed to get container status \"28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9\": rpc error: code = NotFound desc = an error occurred when try to find container \"28e86c2e9a00e453ef70a538348f00e4194dd24b87755ab59e87c7f522734bb9\": not found" May 16 05:31:23.218499 kubelet[2649]: I0516 05:31:23.218467 2649 scope.go:117] "RemoveContainer" containerID="85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008" May 16 05:31:23.218642 containerd[1567]: time="2025-05-16T05:31:23.218608983Z" level=error msg="ContainerStatus for \"85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008\": not found" May 16 05:31:23.218799 kubelet[2649]: E0516 05:31:23.218766 2649 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008\": not found" containerID="85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008" May 16 05:31:23.218840 kubelet[2649]: I0516 05:31:23.218801 2649 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008"} err="failed to get container status \"85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008\": rpc error: code = NotFound desc = an error occurred when try to find container \"85b55d914db5de112597f715aaf7d2b49bda139345b5767da8dc08985c305008\": not found" May 16 05:31:23.218840 kubelet[2649]: I0516 05:31:23.218824 2649 scope.go:117] "RemoveContainer" containerID="1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467" May 16 05:31:23.219079 containerd[1567]: time="2025-05-16T05:31:23.219043257Z" level=error msg="ContainerStatus for \"1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467\": not found" May 16 05:31:23.219194 kubelet[2649]: E0516 05:31:23.219171 2649 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467\": not found" containerID="1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467" May 16 05:31:23.219227 kubelet[2649]: I0516 05:31:23.219193 2649 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467"} err="failed to get container status \"1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"1ac965194c4fb2c30bb0cc376045c0457abc0aa85158aea1a312c9a337c75467\": not found" May 16 05:31:23.219227 kubelet[2649]: I0516 05:31:23.219206 2649 scope.go:117] "RemoveContainer" containerID="48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8" May 16 05:31:23.219364 containerd[1567]: time="2025-05-16T05:31:23.219336059Z" level=error msg="ContainerStatus for \"48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8\": not found" May 16 05:31:23.219488 kubelet[2649]: E0516 05:31:23.219453 2649 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8\": not found" containerID="48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8" May 16 05:31:23.219527 kubelet[2649]: I0516 05:31:23.219494 2649 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8"} err="failed to get container status \"48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"48758e728c32203f21acb2799e02952bba0a4bcf3fd27a87c43707157c93f2e8\": not found" May 16 05:31:23.219527 kubelet[2649]: I0516 05:31:23.219520 2649 scope.go:117] "RemoveContainer" containerID="e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f" May 16 05:31:23.220654 kubelet[2649]: I0516 05:31:23.220637 2649 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 05:31:23.220841 kubelet[2649]: I0516 
05:31:23.220728 2649 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 05:31:23.220841 kubelet[2649]: I0516 05:31:23.220741 2649 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 05:31:23.220841 kubelet[2649]: I0516 05:31:23.220759 2649 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5ee9e88-2db9-4e60-8543-7eba4291819e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 05:31:23.220841 kubelet[2649]: I0516 05:31:23.220768 2649 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 05:31:23.220841 kubelet[2649]: I0516 05:31:23.220776 2649 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 05:31:23.220841 kubelet[2649]: I0516 05:31:23.220783 2649 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5ee9e88-2db9-4e60-8543-7eba4291819e-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 05:31:23.220841 kubelet[2649]: I0516 05:31:23.220791 2649 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5ee9e88-2db9-4e60-8543-7eba4291819e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 16 05:31:23.220841 kubelet[2649]: I0516 05:31:23.220799 2649 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ft95s\" (UniqueName: 
\"kubernetes.io/projected/c9551319-d139-4d8d-90aa-ae368527bc1b-kube-api-access-ft95s\") on node \"localhost\" DevicePath \"\"" May 16 05:31:23.221022 containerd[1567]: time="2025-05-16T05:31:23.220775105Z" level=info msg="RemoveContainer for \"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\"" May 16 05:31:23.221050 kubelet[2649]: I0516 05:31:23.220807 2649 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c9551319-d139-4d8d-90aa-ae368527bc1b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 05:31:23.221050 kubelet[2649]: I0516 05:31:23.220816 2649 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-snrh8\" (UniqueName: \"kubernetes.io/projected/f5ee9e88-2db9-4e60-8543-7eba4291819e-kube-api-access-snrh8\") on node \"localhost\" DevicePath \"\"" May 16 05:31:23.221050 kubelet[2649]: I0516 05:31:23.220824 2649 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5ee9e88-2db9-4e60-8543-7eba4291819e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 05:31:23.223888 containerd[1567]: time="2025-05-16T05:31:23.223858328Z" level=info msg="RemoveContainer for \"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\" returns successfully" May 16 05:31:23.224013 kubelet[2649]: I0516 05:31:23.223989 2649 scope.go:117] "RemoveContainer" containerID="e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f" May 16 05:31:23.224194 containerd[1567]: time="2025-05-16T05:31:23.224156913Z" level=error msg="ContainerStatus for \"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\": not found" May 16 05:31:23.224276 kubelet[2649]: E0516 05:31:23.224256 2649 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\": not found" containerID="e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f" May 16 05:31:23.224311 kubelet[2649]: I0516 05:31:23.224277 2649 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f"} err="failed to get container status \"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"e87aef37adaa378a2957285995a9f7bb6f8134e4719739de1e073c1d513cea7f\": not found" May 16 05:31:23.626054 kubelet[2649]: E0516 05:31:23.626018 2649 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 16 05:31:23.869453 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ebdd48b4a3276c8a29225422169a394a7d699e7c28f806e93719f5a85d3c50a1-shm.mount: Deactivated successfully. May 16 05:31:23.869591 systemd[1]: var-lib-kubelet-pods-f5ee9e88\x2d2db9\x2d4e60\x2d8543\x2d7eba4291819e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 05:31:23.869672 systemd[1]: var-lib-kubelet-pods-f5ee9e88\x2d2db9\x2d4e60\x2d8543\x2d7eba4291819e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 05:31:23.869746 systemd[1]: var-lib-kubelet-pods-c9551319\x2dd139\x2d4d8d\x2d90aa\x2dae368527bc1b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dft95s.mount: Deactivated successfully. May 16 05:31:23.869828 systemd[1]: var-lib-kubelet-pods-f5ee9e88\x2d2db9\x2d4e60\x2d8543\x2d7eba4291819e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsnrh8.mount: Deactivated successfully. 
May 16 05:31:24.583423 kubelet[2649]: I0516 05:31:24.583379 2649 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9551319-d139-4d8d-90aa-ae368527bc1b" path="/var/lib/kubelet/pods/c9551319-d139-4d8d-90aa-ae368527bc1b/volumes" May 16 05:31:24.583946 kubelet[2649]: I0516 05:31:24.583921 2649 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5ee9e88-2db9-4e60-8543-7eba4291819e" path="/var/lib/kubelet/pods/f5ee9e88-2db9-4e60-8543-7eba4291819e/volumes" May 16 05:31:24.764834 sshd[4248]: Connection closed by 10.0.0.1 port 56176 May 16 05:31:24.765340 sshd-session[4246]: pam_unix(sshd:session): session closed for user core May 16 05:31:24.777375 systemd[1]: sshd@22-10.0.0.148:22-10.0.0.1:56176.service: Deactivated successfully. May 16 05:31:24.779311 systemd[1]: session-23.scope: Deactivated successfully. May 16 05:31:24.780066 systemd-logind[1546]: Session 23 logged out. Waiting for processes to exit. May 16 05:31:24.783310 systemd[1]: Started sshd@23-10.0.0.148:22-10.0.0.1:60772.service - OpenSSH per-connection server daemon (10.0.0.1:60772). May 16 05:31:24.783892 systemd-logind[1546]: Removed session 23. May 16 05:31:24.846309 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 60772 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:31:24.847798 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:31:24.852145 systemd-logind[1546]: New session 24 of user core. May 16 05:31:24.864688 systemd[1]: Started session-24.scope - Session 24 of User core. May 16 05:31:25.577594 sshd[4405]: Connection closed by 10.0.0.1 port 60772 May 16 05:31:25.577325 sshd-session[4403]: pam_unix(sshd:session): session closed for user core May 16 05:31:25.592087 systemd[1]: sshd@23-10.0.0.148:22-10.0.0.1:60772.service: Deactivated successfully. May 16 05:31:25.595025 systemd[1]: session-24.scope: Deactivated successfully. 
May 16 05:31:25.598173 systemd-logind[1546]: Session 24 logged out. Waiting for processes to exit. May 16 05:31:25.603421 systemd[1]: Started sshd@24-10.0.0.148:22-10.0.0.1:60780.service - OpenSSH per-connection server daemon (10.0.0.1:60780). May 16 05:31:25.605682 systemd-logind[1546]: Removed session 24. May 16 05:31:25.618938 systemd[1]: Created slice kubepods-burstable-podabb8be61_5ccf_4cff_b2ad_fc2154164d37.slice - libcontainer container kubepods-burstable-podabb8be61_5ccf_4cff_b2ad_fc2154164d37.slice. May 16 05:31:25.633790 kubelet[2649]: I0516 05:31:25.633743 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abb8be61-5ccf-4cff-b2ad-fc2154164d37-etc-cni-netd\") pod \"cilium-k28lz\" (UID: \"abb8be61-5ccf-4cff-b2ad-fc2154164d37\") " pod="kube-system/cilium-k28lz" May 16 05:31:25.633790 kubelet[2649]: I0516 05:31:25.633775 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abb8be61-5ccf-4cff-b2ad-fc2154164d37-lib-modules\") pod \"cilium-k28lz\" (UID: \"abb8be61-5ccf-4cff-b2ad-fc2154164d37\") " pod="kube-system/cilium-k28lz" May 16 05:31:25.633790 kubelet[2649]: I0516 05:31:25.633794 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abb8be61-5ccf-4cff-b2ad-fc2154164d37-host-proc-sys-kernel\") pod \"cilium-k28lz\" (UID: \"abb8be61-5ccf-4cff-b2ad-fc2154164d37\") " pod="kube-system/cilium-k28lz" May 16 05:31:25.634203 kubelet[2649]: I0516 05:31:25.633816 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abb8be61-5ccf-4cff-b2ad-fc2154164d37-hostproc\") pod \"cilium-k28lz\" (UID: \"abb8be61-5ccf-4cff-b2ad-fc2154164d37\") " 
pod="kube-system/cilium-k28lz"
May 16 05:31:25.634203 kubelet[2649]: I0516 05:31:25.633871 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abb8be61-5ccf-4cff-b2ad-fc2154164d37-xtables-lock\") pod \"cilium-k28lz\" (UID: \"abb8be61-5ccf-4cff-b2ad-fc2154164d37\") " pod="kube-system/cilium-k28lz"
May 16 05:31:25.634203 kubelet[2649]: I0516 05:31:25.633914 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abb8be61-5ccf-4cff-b2ad-fc2154164d37-cilium-config-path\") pod \"cilium-k28lz\" (UID: \"abb8be61-5ccf-4cff-b2ad-fc2154164d37\") " pod="kube-system/cilium-k28lz"
May 16 05:31:25.634203 kubelet[2649]: I0516 05:31:25.633930 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abb8be61-5ccf-4cff-b2ad-fc2154164d37-cni-path\") pod \"cilium-k28lz\" (UID: \"abb8be61-5ccf-4cff-b2ad-fc2154164d37\") " pod="kube-system/cilium-k28lz"
May 16 05:31:25.634203 kubelet[2649]: I0516 05:31:25.633948 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8kct\" (UniqueName: \"kubernetes.io/projected/abb8be61-5ccf-4cff-b2ad-fc2154164d37-kube-api-access-k8kct\") pod \"cilium-k28lz\" (UID: \"abb8be61-5ccf-4cff-b2ad-fc2154164d37\") " pod="kube-system/cilium-k28lz"
May 16 05:31:25.634203 kubelet[2649]: I0516 05:31:25.633965 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abb8be61-5ccf-4cff-b2ad-fc2154164d37-cilium-run\") pod \"cilium-k28lz\" (UID: \"abb8be61-5ccf-4cff-b2ad-fc2154164d37\") " pod="kube-system/cilium-k28lz"
May 16 05:31:25.634331 kubelet[2649]: I0516 05:31:25.633978 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abb8be61-5ccf-4cff-b2ad-fc2154164d37-bpf-maps\") pod \"cilium-k28lz\" (UID: \"abb8be61-5ccf-4cff-b2ad-fc2154164d37\") " pod="kube-system/cilium-k28lz"
May 16 05:31:25.634331 kubelet[2649]: I0516 05:31:25.633993 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abb8be61-5ccf-4cff-b2ad-fc2154164d37-cilium-cgroup\") pod \"cilium-k28lz\" (UID: \"abb8be61-5ccf-4cff-b2ad-fc2154164d37\") " pod="kube-system/cilium-k28lz"
May 16 05:31:25.634331 kubelet[2649]: I0516 05:31:25.634010 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abb8be61-5ccf-4cff-b2ad-fc2154164d37-clustermesh-secrets\") pod \"cilium-k28lz\" (UID: \"abb8be61-5ccf-4cff-b2ad-fc2154164d37\") " pod="kube-system/cilium-k28lz"
May 16 05:31:25.634331 kubelet[2649]: I0516 05:31:25.634045 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/abb8be61-5ccf-4cff-b2ad-fc2154164d37-cilium-ipsec-secrets\") pod \"cilium-k28lz\" (UID: \"abb8be61-5ccf-4cff-b2ad-fc2154164d37\") " pod="kube-system/cilium-k28lz"
May 16 05:31:25.634331 kubelet[2649]: I0516 05:31:25.634064 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abb8be61-5ccf-4cff-b2ad-fc2154164d37-host-proc-sys-net\") pod \"cilium-k28lz\" (UID: \"abb8be61-5ccf-4cff-b2ad-fc2154164d37\") " pod="kube-system/cilium-k28lz"
May 16 05:31:25.634331 kubelet[2649]: I0516 05:31:25.634078 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abb8be61-5ccf-4cff-b2ad-fc2154164d37-hubble-tls\") pod \"cilium-k28lz\" (UID: \"abb8be61-5ccf-4cff-b2ad-fc2154164d37\") " pod="kube-system/cilium-k28lz"
May 16 05:31:25.646419 sshd[4417]: Accepted publickey for core from 10.0.0.1 port 60780 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg
May 16 05:31:25.647677 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:31:25.651881 systemd-logind[1546]: New session 25 of user core.
May 16 05:31:25.663706 systemd[1]: Started session-25.scope - Session 25 of User core.
May 16 05:31:25.713879 sshd[4419]: Connection closed by 10.0.0.1 port 60780
May 16 05:31:25.714199 sshd-session[4417]: pam_unix(sshd:session): session closed for user core
May 16 05:31:25.722328 systemd[1]: sshd@24-10.0.0.148:22-10.0.0.1:60780.service: Deactivated successfully.
May 16 05:31:25.724465 systemd[1]: session-25.scope: Deactivated successfully.
May 16 05:31:25.725312 systemd-logind[1546]: Session 25 logged out. Waiting for processes to exit.
May 16 05:31:25.728343 systemd[1]: Started sshd@25-10.0.0.148:22-10.0.0.1:60790.service - OpenSSH per-connection server daemon (10.0.0.1:60790).
May 16 05:31:25.729111 systemd-logind[1546]: Removed session 25.
May 16 05:31:25.772375 sshd[4426]: Accepted publickey for core from 10.0.0.1 port 60790 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg
May 16 05:31:25.773738 sshd-session[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:31:25.778466 systemd-logind[1546]: New session 26 of user core.
May 16 05:31:25.787714 systemd[1]: Started session-26.scope - Session 26 of User core.
May 16 05:31:25.922113 kubelet[2649]: E0516 05:31:25.921989 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:31:25.922918 containerd[1567]: time="2025-05-16T05:31:25.922552285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k28lz,Uid:abb8be61-5ccf-4cff-b2ad-fc2154164d37,Namespace:kube-system,Attempt:0,}"
May 16 05:31:25.936981 containerd[1567]: time="2025-05-16T05:31:25.936928311Z" level=info msg="connecting to shim bcdd988268b99d3139a290284094ac9891d2f5281680649f44b5c14635553465" address="unix:///run/containerd/s/c8644de6762fc99fd74635f89cbb06fe528bbe6af2279fee91a0b61bfc59a2dc" namespace=k8s.io protocol=ttrpc version=3
May 16 05:31:25.961702 systemd[1]: Started cri-containerd-bcdd988268b99d3139a290284094ac9891d2f5281680649f44b5c14635553465.scope - libcontainer container bcdd988268b99d3139a290284094ac9891d2f5281680649f44b5c14635553465.
May 16 05:31:25.984980 containerd[1567]: time="2025-05-16T05:31:25.984944551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k28lz,Uid:abb8be61-5ccf-4cff-b2ad-fc2154164d37,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcdd988268b99d3139a290284094ac9891d2f5281680649f44b5c14635553465\""
May 16 05:31:25.985722 kubelet[2649]: E0516 05:31:25.985690 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:31:25.991418 containerd[1567]: time="2025-05-16T05:31:25.991366184Z" level=info msg="CreateContainer within sandbox \"bcdd988268b99d3139a290284094ac9891d2f5281680649f44b5c14635553465\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 16 05:31:25.998207 containerd[1567]: time="2025-05-16T05:31:25.998168046Z" level=info msg="Container 3388247d9dfd4af734227a42941e8cfdb50b442792ce1bcac26625133e9988b1: CDI devices from CRI Config.CDIDevices: []"
May 16 05:31:26.004012 containerd[1567]: time="2025-05-16T05:31:26.003980216Z" level=info msg="CreateContainer within sandbox \"bcdd988268b99d3139a290284094ac9891d2f5281680649f44b5c14635553465\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3388247d9dfd4af734227a42941e8cfdb50b442792ce1bcac26625133e9988b1\""
May 16 05:31:26.004393 containerd[1567]: time="2025-05-16T05:31:26.004362159Z" level=info msg="StartContainer for \"3388247d9dfd4af734227a42941e8cfdb50b442792ce1bcac26625133e9988b1\""
May 16 05:31:26.005305 containerd[1567]: time="2025-05-16T05:31:26.005282293Z" level=info msg="connecting to shim 3388247d9dfd4af734227a42941e8cfdb50b442792ce1bcac26625133e9988b1" address="unix:///run/containerd/s/c8644de6762fc99fd74635f89cbb06fe528bbe6af2279fee91a0b61bfc59a2dc" protocol=ttrpc version=3
May 16 05:31:26.024790 systemd[1]: Started cri-containerd-3388247d9dfd4af734227a42941e8cfdb50b442792ce1bcac26625133e9988b1.scope - libcontainer container 3388247d9dfd4af734227a42941e8cfdb50b442792ce1bcac26625133e9988b1.
May 16 05:31:26.052761 containerd[1567]: time="2025-05-16T05:31:26.052722473Z" level=info msg="StartContainer for \"3388247d9dfd4af734227a42941e8cfdb50b442792ce1bcac26625133e9988b1\" returns successfully"
May 16 05:31:26.060916 systemd[1]: cri-containerd-3388247d9dfd4af734227a42941e8cfdb50b442792ce1bcac26625133e9988b1.scope: Deactivated successfully.
May 16 05:31:26.062159 containerd[1567]: time="2025-05-16T05:31:26.062128561Z" level=info msg="received exit event container_id:\"3388247d9dfd4af734227a42941e8cfdb50b442792ce1bcac26625133e9988b1\" id:\"3388247d9dfd4af734227a42941e8cfdb50b442792ce1bcac26625133e9988b1\" pid:4497 exited_at:{seconds:1747373486 nanos:61866308}"
May 16 05:31:26.066519 containerd[1567]: time="2025-05-16T05:31:26.066478223Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3388247d9dfd4af734227a42941e8cfdb50b442792ce1bcac26625133e9988b1\" id:\"3388247d9dfd4af734227a42941e8cfdb50b442792ce1bcac26625133e9988b1\" pid:4497 exited_at:{seconds:1747373486 nanos:61866308}"
May 16 05:31:26.184005 kubelet[2649]: E0516 05:31:26.183912 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:31:26.188800 containerd[1567]: time="2025-05-16T05:31:26.188758829Z" level=info msg="CreateContainer within sandbox \"bcdd988268b99d3139a290284094ac9891d2f5281680649f44b5c14635553465\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 16 05:31:26.195206 containerd[1567]: time="2025-05-16T05:31:26.195173781Z" level=info msg="Container 3c0afceaa6919f0cb1c064f3a6a0d6fae28bc8c642609861e5281fc21838c602: CDI devices from CRI Config.CDIDevices: []"
May 16 05:31:26.201878 containerd[1567]: time="2025-05-16T05:31:26.201838864Z" level=info msg="CreateContainer within sandbox \"bcdd988268b99d3139a290284094ac9891d2f5281680649f44b5c14635553465\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3c0afceaa6919f0cb1c064f3a6a0d6fae28bc8c642609861e5281fc21838c602\""
May 16 05:31:26.202313 containerd[1567]: time="2025-05-16T05:31:26.202274559Z" level=info msg="StartContainer for \"3c0afceaa6919f0cb1c064f3a6a0d6fae28bc8c642609861e5281fc21838c602\""
May 16 05:31:26.203141 containerd[1567]: time="2025-05-16T05:31:26.203119690Z" level=info msg="connecting to shim 3c0afceaa6919f0cb1c064f3a6a0d6fae28bc8c642609861e5281fc21838c602" address="unix:///run/containerd/s/c8644de6762fc99fd74635f89cbb06fe528bbe6af2279fee91a0b61bfc59a2dc" protocol=ttrpc version=3
May 16 05:31:26.225691 systemd[1]: Started cri-containerd-3c0afceaa6919f0cb1c064f3a6a0d6fae28bc8c642609861e5281fc21838c602.scope - libcontainer container 3c0afceaa6919f0cb1c064f3a6a0d6fae28bc8c642609861e5281fc21838c602.
May 16 05:31:26.256414 containerd[1567]: time="2025-05-16T05:31:26.256378969Z" level=info msg="StartContainer for \"3c0afceaa6919f0cb1c064f3a6a0d6fae28bc8c642609861e5281fc21838c602\" returns successfully"
May 16 05:31:26.261800 systemd[1]: cri-containerd-3c0afceaa6919f0cb1c064f3a6a0d6fae28bc8c642609861e5281fc21838c602.scope: Deactivated successfully.
May 16 05:31:26.262204 containerd[1567]: time="2025-05-16T05:31:26.262176347Z" level=info msg="received exit event container_id:\"3c0afceaa6919f0cb1c064f3a6a0d6fae28bc8c642609861e5281fc21838c602\" id:\"3c0afceaa6919f0cb1c064f3a6a0d6fae28bc8c642609861e5281fc21838c602\" pid:4540 exited_at:{seconds:1747373486 nanos:262000520}"
May 16 05:31:26.262467 containerd[1567]: time="2025-05-16T05:31:26.262440844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c0afceaa6919f0cb1c064f3a6a0d6fae28bc8c642609861e5281fc21838c602\" id:\"3c0afceaa6919f0cb1c064f3a6a0d6fae28bc8c642609861e5281fc21838c602\" pid:4540 exited_at:{seconds:1747373486 nanos:262000520}"
May 16 05:31:26.582159 kubelet[2649]: E0516 05:31:26.582120 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:31:27.187414 kubelet[2649]: E0516 05:31:27.187365 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:31:27.191907 containerd[1567]: time="2025-05-16T05:31:27.191855788Z" level=info msg="CreateContainer within sandbox \"bcdd988268b99d3139a290284094ac9891d2f5281680649f44b5c14635553465\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 05:31:27.201609 containerd[1567]: time="2025-05-16T05:31:27.201549207Z" level=info msg="Container 62dc00bc3265dff8805eaeb979be2b4dc504c27ed3405bf48f54e3539f3ceef3: CDI devices from CRI Config.CDIDevices: []"
May 16 05:31:27.209197 containerd[1567]: time="2025-05-16T05:31:27.209157048Z" level=info msg="CreateContainer within sandbox \"bcdd988268b99d3139a290284094ac9891d2f5281680649f44b5c14635553465\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"62dc00bc3265dff8805eaeb979be2b4dc504c27ed3405bf48f54e3539f3ceef3\""
May 16 05:31:27.209702 containerd[1567]: time="2025-05-16T05:31:27.209634673Z" level=info msg="StartContainer for \"62dc00bc3265dff8805eaeb979be2b4dc504c27ed3405bf48f54e3539f3ceef3\""
May 16 05:31:27.210878 containerd[1567]: time="2025-05-16T05:31:27.210853289Z" level=info msg="connecting to shim 62dc00bc3265dff8805eaeb979be2b4dc504c27ed3405bf48f54e3539f3ceef3" address="unix:///run/containerd/s/c8644de6762fc99fd74635f89cbb06fe528bbe6af2279fee91a0b61bfc59a2dc" protocol=ttrpc version=3
May 16 05:31:27.239039 systemd[1]: Started cri-containerd-62dc00bc3265dff8805eaeb979be2b4dc504c27ed3405bf48f54e3539f3ceef3.scope - libcontainer container 62dc00bc3265dff8805eaeb979be2b4dc504c27ed3405bf48f54e3539f3ceef3.
May 16 05:31:27.278314 systemd[1]: cri-containerd-62dc00bc3265dff8805eaeb979be2b4dc504c27ed3405bf48f54e3539f3ceef3.scope: Deactivated successfully.
May 16 05:31:27.279046 containerd[1567]: time="2025-05-16T05:31:27.279010764Z" level=info msg="StartContainer for \"62dc00bc3265dff8805eaeb979be2b4dc504c27ed3405bf48f54e3539f3ceef3\" returns successfully"
May 16 05:31:27.279613 containerd[1567]: time="2025-05-16T05:31:27.279547302Z" level=info msg="received exit event container_id:\"62dc00bc3265dff8805eaeb979be2b4dc504c27ed3405bf48f54e3539f3ceef3\" id:\"62dc00bc3265dff8805eaeb979be2b4dc504c27ed3405bf48f54e3539f3ceef3\" pid:4586 exited_at:{seconds:1747373487 nanos:279349953}"
May 16 05:31:27.279795 containerd[1567]: time="2025-05-16T05:31:27.279745782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"62dc00bc3265dff8805eaeb979be2b4dc504c27ed3405bf48f54e3539f3ceef3\" id:\"62dc00bc3265dff8805eaeb979be2b4dc504c27ed3405bf48f54e3539f3ceef3\" pid:4586 exited_at:{seconds:1747373487 nanos:279349953}"
May 16 05:31:27.299613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62dc00bc3265dff8805eaeb979be2b4dc504c27ed3405bf48f54e3539f3ceef3-rootfs.mount: Deactivated successfully.
May 16 05:31:27.582087 kubelet[2649]: E0516 05:31:27.582052 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:31:28.191848 kubelet[2649]: E0516 05:31:28.191811 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:31:28.196967 containerd[1567]: time="2025-05-16T05:31:28.196921191Z" level=info msg="CreateContainer within sandbox \"bcdd988268b99d3139a290284094ac9891d2f5281680649f44b5c14635553465\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 05:31:28.205681 containerd[1567]: time="2025-05-16T05:31:28.205635865Z" level=info msg="Container b83a340523249415f3f7e07c92907d2bd1ebf6e37326053ca2470011e3ac4117: CDI devices from CRI Config.CDIDevices: []"
May 16 05:31:28.212208 containerd[1567]: time="2025-05-16T05:31:28.212173458Z" level=info msg="CreateContainer within sandbox \"bcdd988268b99d3139a290284094ac9891d2f5281680649f44b5c14635553465\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b83a340523249415f3f7e07c92907d2bd1ebf6e37326053ca2470011e3ac4117\""
May 16 05:31:28.212819 containerd[1567]: time="2025-05-16T05:31:28.212722550Z" level=info msg="StartContainer for \"b83a340523249415f3f7e07c92907d2bd1ebf6e37326053ca2470011e3ac4117\""
May 16 05:31:28.213618 containerd[1567]: time="2025-05-16T05:31:28.213595833Z" level=info msg="connecting to shim b83a340523249415f3f7e07c92907d2bd1ebf6e37326053ca2470011e3ac4117" address="unix:///run/containerd/s/c8644de6762fc99fd74635f89cbb06fe528bbe6af2279fee91a0b61bfc59a2dc" protocol=ttrpc version=3
May 16 05:31:28.237700 systemd[1]: Started cri-containerd-b83a340523249415f3f7e07c92907d2bd1ebf6e37326053ca2470011e3ac4117.scope - libcontainer container b83a340523249415f3f7e07c92907d2bd1ebf6e37326053ca2470011e3ac4117.
May 16 05:31:28.266749 systemd[1]: cri-containerd-b83a340523249415f3f7e07c92907d2bd1ebf6e37326053ca2470011e3ac4117.scope: Deactivated successfully.
May 16 05:31:28.268451 containerd[1567]: time="2025-05-16T05:31:28.268417767Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b83a340523249415f3f7e07c92907d2bd1ebf6e37326053ca2470011e3ac4117\" id:\"b83a340523249415f3f7e07c92907d2bd1ebf6e37326053ca2470011e3ac4117\" pid:4625 exited_at:{seconds:1747373488 nanos:267249560}"
May 16 05:31:28.268878 containerd[1567]: time="2025-05-16T05:31:28.268829516Z" level=info msg="received exit event container_id:\"b83a340523249415f3f7e07c92907d2bd1ebf6e37326053ca2470011e3ac4117\" id:\"b83a340523249415f3f7e07c92907d2bd1ebf6e37326053ca2470011e3ac4117\" pid:4625 exited_at:{seconds:1747373488 nanos:267249560}"
May 16 05:31:28.277909 containerd[1567]: time="2025-05-16T05:31:28.277877708Z" level=info msg="StartContainer for \"b83a340523249415f3f7e07c92907d2bd1ebf6e37326053ca2470011e3ac4117\" returns successfully"
May 16 05:31:28.290378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b83a340523249415f3f7e07c92907d2bd1ebf6e37326053ca2470011e3ac4117-rootfs.mount: Deactivated successfully.
May 16 05:31:28.626634 kubelet[2649]: E0516 05:31:28.626598 2649 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 05:31:29.199322 kubelet[2649]: E0516 05:31:29.199045 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:31:29.203823 containerd[1567]: time="2025-05-16T05:31:29.203769330Z" level=info msg="CreateContainer within sandbox \"bcdd988268b99d3139a290284094ac9891d2f5281680649f44b5c14635553465\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 05:31:29.274258 containerd[1567]: time="2025-05-16T05:31:29.274213416Z" level=info msg="Container abf3715f4d9826c56a0214c3aa7d69de00e48947cfde14686b1c45ddd1e0a5e0: CDI devices from CRI Config.CDIDevices: []"
May 16 05:31:29.282169 containerd[1567]: time="2025-05-16T05:31:29.282135493Z" level=info msg="CreateContainer within sandbox \"bcdd988268b99d3139a290284094ac9891d2f5281680649f44b5c14635553465\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"abf3715f4d9826c56a0214c3aa7d69de00e48947cfde14686b1c45ddd1e0a5e0\""
May 16 05:31:29.283068 containerd[1567]: time="2025-05-16T05:31:29.283019265Z" level=info msg="StartContainer for \"abf3715f4d9826c56a0214c3aa7d69de00e48947cfde14686b1c45ddd1e0a5e0\""
May 16 05:31:29.284342 containerd[1567]: time="2025-05-16T05:31:29.284311780Z" level=info msg="connecting to shim abf3715f4d9826c56a0214c3aa7d69de00e48947cfde14686b1c45ddd1e0a5e0" address="unix:///run/containerd/s/c8644de6762fc99fd74635f89cbb06fe528bbe6af2279fee91a0b61bfc59a2dc" protocol=ttrpc version=3
May 16 05:31:29.308704 systemd[1]: Started cri-containerd-abf3715f4d9826c56a0214c3aa7d69de00e48947cfde14686b1c45ddd1e0a5e0.scope - libcontainer container abf3715f4d9826c56a0214c3aa7d69de00e48947cfde14686b1c45ddd1e0a5e0.
May 16 05:31:29.341743 containerd[1567]: time="2025-05-16T05:31:29.341627741Z" level=info msg="StartContainer for \"abf3715f4d9826c56a0214c3aa7d69de00e48947cfde14686b1c45ddd1e0a5e0\" returns successfully"
May 16 05:31:29.403969 containerd[1567]: time="2025-05-16T05:31:29.403921132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abf3715f4d9826c56a0214c3aa7d69de00e48947cfde14686b1c45ddd1e0a5e0\" id:\"daed30dec67aa1eb27d884e3114a9678e01e4b0f48122dcde882b359d400620b\" pid:4697 exited_at:{seconds:1747373489 nanos:403559831}"
May 16 05:31:29.741607 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 16 05:31:29.823616 kubelet[2649]: I0516 05:31:29.823551 2649 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-16T05:31:29Z","lastTransitionTime":"2025-05-16T05:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 16 05:31:30.204341 kubelet[2649]: E0516 05:31:30.204316 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:31:30.581958 kubelet[2649]: E0516 05:31:30.581925 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:31:31.923582 kubelet[2649]: E0516 05:31:31.923532 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:31:32.172070 containerd[1567]: time="2025-05-16T05:31:32.171987086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abf3715f4d9826c56a0214c3aa7d69de00e48947cfde14686b1c45ddd1e0a5e0\" id:\"3782f03f1de3bcb5e46f683771b2a949406e434d7d6cd46f64f920ec68690c8c\" pid:5086 exit_status:1 exited_at:{seconds:1747373492 nanos:171516125}"
May 16 05:31:32.613361 systemd-networkd[1488]: lxc_health: Link UP
May 16 05:31:32.621669 systemd-networkd[1488]: lxc_health: Gained carrier
May 16 05:31:33.923625 kubelet[2649]: E0516 05:31:33.923537 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:31:33.938230 kubelet[2649]: I0516 05:31:33.938184 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k28lz" podStartSLOduration=8.938171474 podStartE2EDuration="8.938171474s" podCreationTimestamp="2025-05-16 05:31:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:31:30.21666633 +0000 UTC m=+81.726167373" watchObservedRunningTime="2025-05-16 05:31:33.938171474 +0000 UTC m=+85.447672518"
May 16 05:31:34.055726 systemd-networkd[1488]: lxc_health: Gained IPv6LL
May 16 05:31:34.215097 kubelet[2649]: E0516 05:31:34.214979 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:31:34.267650 containerd[1567]: time="2025-05-16T05:31:34.267601228Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abf3715f4d9826c56a0214c3aa7d69de00e48947cfde14686b1c45ddd1e0a5e0\" id:\"4bf1dd90aa583eefe14f05db3d0f481996afe7cb75ca70ffaac2342cc171639e\" pid:5232 exited_at:{seconds:1747373494 nanos:266898606}"
May 16 05:31:35.215932 kubelet[2649]: E0516 05:31:35.215883 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:31:36.356451 containerd[1567]: time="2025-05-16T05:31:36.356406407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abf3715f4d9826c56a0214c3aa7d69de00e48947cfde14686b1c45ddd1e0a5e0\" id:\"adcc2bb47b882af03e72436faabc1ac63ed04368bf980c075584be29cda3f450\" pid:5266 exited_at:{seconds:1747373496 nanos:356072490}"
May 16 05:31:38.439808 containerd[1567]: time="2025-05-16T05:31:38.439762419Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abf3715f4d9826c56a0214c3aa7d69de00e48947cfde14686b1c45ddd1e0a5e0\" id:\"e4452906d34e49c98a69c76ad462b8313df859005fdfab00f467e9dd51b2ec7f\" pid:5290 exited_at:{seconds:1747373498 nanos:439214624}"
May 16 05:31:38.456942 sshd[4432]: Connection closed by 10.0.0.1 port 60790
May 16 05:31:38.457378 sshd-session[4426]: pam_unix(sshd:session): session closed for user core
May 16 05:31:38.461724 systemd[1]: sshd@25-10.0.0.148:22-10.0.0.1:60790.service: Deactivated successfully.
May 16 05:31:38.463785 systemd[1]: session-26.scope: Deactivated successfully.
May 16 05:31:38.464528 systemd-logind[1546]: Session 26 logged out. Waiting for processes to exit.
May 16 05:31:38.466037 systemd-logind[1546]: Removed session 26.