May 16 05:28:02.835262 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 03:57:41 -00 2025 May 16 05:28:02.835316 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3d34a100811c7d0b0788e18c6e7d09762e715c8b4f61568827372c0312d26be3 May 16 05:28:02.835328 kernel: BIOS-provided physical RAM map: May 16 05:28:02.835334 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 16 05:28:02.835341 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 16 05:28:02.835347 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 16 05:28:02.835355 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 16 05:28:02.835361 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 16 05:28:02.835374 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable May 16 05:28:02.835380 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS May 16 05:28:02.835387 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable May 16 05:28:02.835394 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved May 16 05:28:02.835400 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable May 16 05:28:02.835407 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved May 16 05:28:02.835417 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data May 16 05:28:02.835424 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 16 05:28:02.835433 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable May 16 05:28:02.835440 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved May 16 05:28:02.835448 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS May 16 05:28:02.835454 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable May 16 05:28:02.835461 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved May 16 05:28:02.835468 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 16 05:28:02.835475 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 16 05:28:02.835482 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 16 05:28:02.835489 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved May 16 05:28:02.835498 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 16 05:28:02.835505 kernel: NX (Execute Disable) protection: active May 16 05:28:02.835512 kernel: APIC: Static calls initialized May 16 05:28:02.835519 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable May 16 05:28:02.835526 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable May 16 05:28:02.835533 kernel: extended physical RAM map: May 16 05:28:02.835540 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 16 05:28:02.835547 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable May 16 05:28:02.835554 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 16 05:28:02.835561 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable May 16 05:28:02.835568 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 16 05:28:02.835578 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable May 16 05:28:02.835585 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS May 16 05:28:02.835592 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable May 16 05:28:02.835599 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable May 16 05:28:02.835615 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable May 16 05:28:02.835622 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable May 16 05:28:02.835631 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable May 16 05:28:02.835639 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved May 16 05:28:02.835646 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable May 16 05:28:02.835654 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved May 16 05:28:02.835661 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data May 16 05:28:02.835668 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 16 05:28:02.835675 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable May 16 05:28:02.835683 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved May 16 05:28:02.835690 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS May 16 05:28:02.835699 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable May 16 05:28:02.835706 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved May 16 05:28:02.835713 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 16 05:28:02.835721 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 16 05:28:02.835728 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 16 05:28:02.835735 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved May 16 05:28:02.835742 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 16 05:28:02.835752 kernel: efi: EFI v2.7 by EDK II May 16 05:28:02.835759 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 May 16 05:28:02.835766 kernel: random: crng init done May 16 05:28:02.835776 kernel: efi: Remove mem149: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map May 16 05:28:02.835783 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved May 16 05:28:02.835794 kernel: secureboot: Secure boot disabled May 16 05:28:02.835801 kernel: SMBIOS 2.8 present. 
May 16 05:28:02.835809 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 May 16 05:28:02.835816 kernel: DMI: Memory slots populated: 1/1 May 16 05:28:02.835823 kernel: Hypervisor detected: KVM May 16 05:28:02.835830 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 16 05:28:02.835837 kernel: kvm-clock: using sched offset of 4854598208 cycles May 16 05:28:02.835845 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 16 05:28:02.835853 kernel: tsc: Detected 2794.748 MHz processor May 16 05:28:02.835860 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 16 05:28:02.835868 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 16 05:28:02.835877 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 May 16 05:28:02.835885 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 16 05:28:02.835892 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 16 05:28:02.835899 kernel: Using GB pages for direct mapping May 16 05:28:02.835907 kernel: ACPI: Early table checksum verification disabled May 16 05:28:02.835914 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 16 05:28:02.835922 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 16 05:28:02.835930 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 16 05:28:02.835937 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 05:28:02.835946 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 16 05:28:02.835954 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 05:28:02.835961 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 05:28:02.835969 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 05:28:02.835976 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 05:28:02.835984 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 16 05:28:02.835991 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 16 05:28:02.835998 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 16 05:28:02.836008 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 16 05:28:02.836016 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 16 05:28:02.836023 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 16 05:28:02.836030 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 16 05:28:02.836038 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 16 05:28:02.836045 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 16 05:28:02.836052 kernel: No NUMA configuration found May 16 05:28:02.836060 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] May 16 05:28:02.836067 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] May 16 05:28:02.836074 kernel: Zone ranges: May 16 05:28:02.836084 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 16 05:28:02.836091 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] May 16 05:28:02.836099 kernel: Normal empty May 16 05:28:02.836106 kernel: Device empty May 16 05:28:02.836113 kernel: Movable zone start for each node May 16 05:28:02.836120 
kernel: Early memory node ranges May 16 05:28:02.836128 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 16 05:28:02.836135 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 16 05:28:02.836145 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 16 05:28:02.836154 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] May 16 05:28:02.836162 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] May 16 05:28:02.836169 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] May 16 05:28:02.836176 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] May 16 05:28:02.836184 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] May 16 05:28:02.836191 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] May 16 05:28:02.836200 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 16 05:28:02.836208 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 16 05:28:02.836223 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 16 05:28:02.836231 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 16 05:28:02.836239 kernel: On node 0, zone DMA: 239 pages in unavailable ranges May 16 05:28:02.836246 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges May 16 05:28:02.836256 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges May 16 05:28:02.836276 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges May 16 05:28:02.836284 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges May 16 05:28:02.836292 kernel: ACPI: PM-Timer IO Port: 0x608 May 16 05:28:02.836299 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 16 05:28:02.836310 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 16 05:28:02.836317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 16 05:28:02.836325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 16 05:28:02.836333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 16 05:28:02.836341 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 16 05:28:02.836348 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 16 05:28:02.836356 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 16 05:28:02.836364 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 16 05:28:02.836371 kernel: TSC deadline timer available May 16 05:28:02.836381 kernel: CPU topo: Max. logical packages: 1 May 16 05:28:02.836389 kernel: CPU topo: Max. logical dies: 1 May 16 05:28:02.836396 kernel: CPU topo: Max. dies per package: 1 May 16 05:28:02.836404 kernel: CPU topo: Max. threads per core: 1 May 16 05:28:02.836411 kernel: CPU topo: Num. cores per package: 4 May 16 05:28:02.836419 kernel: CPU topo: Num. 
threads per package: 4 May 16 05:28:02.836426 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs May 16 05:28:02.836434 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 16 05:28:02.836442 kernel: kvm-guest: KVM setup pv remote TLB flush May 16 05:28:02.836449 kernel: kvm-guest: setup PV sched yield May 16 05:28:02.836459 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices May 16 05:28:02.836467 kernel: Booting paravirtualized kernel on KVM May 16 05:28:02.836474 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 16 05:28:02.836482 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 16 05:28:02.836490 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 May 16 05:28:02.836498 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 May 16 05:28:02.836505 kernel: pcpu-alloc: [0] 0 1 2 3 May 16 05:28:02.836513 kernel: kvm-guest: PV spinlocks enabled May 16 05:28:02.836521 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 16 05:28:02.836532 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3d34a100811c7d0b0788e18c6e7d09762e715c8b4f61568827372c0312d26be3 May 16 05:28:02.836543 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 16 05:28:02.836551 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 16 05:28:02.836558 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 05:28:02.836566 kernel: Fallback order for Node 0: 0 May 16 05:28:02.836574 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 May 16 05:28:02.836581 kernel: Policy zone: DMA32 May 16 05:28:02.836589 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 16 05:28:02.836598 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 16 05:28:02.836613 kernel: ftrace: allocating 40065 entries in 157 pages May 16 05:28:02.836620 kernel: ftrace: allocated 157 pages with 5 groups May 16 05:28:02.836628 kernel: Dynamic Preempt: voluntary May 16 05:28:02.836636 kernel: rcu: Preemptible hierarchical RCU implementation. May 16 05:28:02.836644 kernel: rcu: RCU event tracing is enabled. May 16 05:28:02.836653 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 16 05:28:02.836660 kernel: Trampoline variant of Tasks RCU enabled. May 16 05:28:02.836668 kernel: Rude variant of Tasks RCU enabled. May 16 05:28:02.836678 kernel: Tracing variant of Tasks RCU enabled. May 16 05:28:02.836686 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 16 05:28:02.836696 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 16 05:28:02.836703 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 16 05:28:02.836711 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 16 05:28:02.836719 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
May 16 05:28:02.836727 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 16 05:28:02.836735 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 16 05:28:02.836742 kernel: Console: colour dummy device 80x25 May 16 05:28:02.836752 kernel: printk: legacy console [ttyS0] enabled May 16 05:28:02.836760 kernel: ACPI: Core revision 20240827 May 16 05:28:02.836768 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 16 05:28:02.836775 kernel: APIC: Switch to symmetric I/O mode setup May 16 05:28:02.836783 kernel: x2apic enabled May 16 05:28:02.836791 kernel: APIC: Switched APIC routing to: physical x2apic May 16 05:28:02.836798 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 16 05:28:02.836806 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 16 05:28:02.836814 kernel: kvm-guest: setup PV IPIs May 16 05:28:02.836824 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 16 05:28:02.836832 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns May 16 05:28:02.836840 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) May 16 05:28:02.836847 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 16 05:28:02.836855 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 16 05:28:02.836863 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 16 05:28:02.836870 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 16 05:28:02.836878 kernel: Spectre V2 : Mitigation: Retpolines May 16 05:28:02.836886 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 16 05:28:02.836895 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 16 05:28:02.836903 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 16 05:28:02.836911 kernel: RETBleed: Mitigation: untrained return thunk May 16 05:28:02.836921 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 16 05:28:02.836929 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 16 05:28:02.836936 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 16 05:28:02.836944 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 16 05:28:02.836952 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 16 05:28:02.836962 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 16 05:28:02.836970 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 16 05:28:02.836977 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 16 05:28:02.836985 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 16 05:28:02.836993 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 16 05:28:02.837001 kernel: Freeing SMP alternatives memory: 32K May 16 05:28:02.837008 kernel: pid_max: default: 32768 minimum: 301 May 16 05:28:02.837016 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 16 05:28:02.837024 kernel: landlock: Up and running. May 16 05:28:02.837033 kernel: SELinux: Initializing. 
May 16 05:28:02.837041 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 05:28:02.837049 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 05:28:02.837057 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 16 05:28:02.837065 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 16 05:28:02.837072 kernel: ... version: 0 May 16 05:28:02.837080 kernel: ... bit width: 48 May 16 05:28:02.837087 kernel: ... generic registers: 6 May 16 05:28:02.837095 kernel: ... value mask: 0000ffffffffffff May 16 05:28:02.837105 kernel: ... max period: 00007fffffffffff May 16 05:28:02.837112 kernel: ... fixed-purpose events: 0 May 16 05:28:02.837120 kernel: ... event mask: 000000000000003f May 16 05:28:02.837127 kernel: signal: max sigframe size: 1776 May 16 05:28:02.837135 kernel: rcu: Hierarchical SRCU implementation. May 16 05:28:02.837143 kernel: rcu: Max phase no-delay instances is 400. May 16 05:28:02.837153 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 16 05:28:02.837160 kernel: smp: Bringing up secondary CPUs ... May 16 05:28:02.837168 kernel: smpboot: x86: Booting SMP configuration: May 16 05:28:02.837176 kernel: .... node #0, CPUs: #1 #2 #3 May 16 05:28:02.837185 kernel: smp: Brought up 1 node, 4 CPUs May 16 05:28:02.837193 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 16 05:28:02.837201 kernel: Memory: 2422664K/2565800K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 137200K reserved, 0K cma-reserved) May 16 05:28:02.837209 kernel: devtmpfs: initialized May 16 05:28:02.837216 kernel: x86/mm: Memory block size: 128MB May 16 05:28:02.837224 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 16 05:28:02.837232 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 16 05:28:02.837239 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) May 16 05:28:02.837249 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 16 05:28:02.837257 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) May 16 05:28:02.837317 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 16 05:28:02.837325 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 16 05:28:02.837333 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 16 05:28:02.837341 kernel: pinctrl core: initialized pinctrl subsystem May 16 05:28:02.837348 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 16 05:28:02.837356 kernel: audit: initializing netlink subsys (disabled) May 16 05:28:02.837364 kernel: audit: type=2000 audit(1747373280.410:1): state=initialized audit_enabled=0 res=1 May 16 05:28:02.837374 kernel: thermal_sys: Registered thermal governor 'step_wise' May 16 05:28:02.837382 kernel: thermal_sys: Registered thermal governor 'user_space' May 16 05:28:02.837389 kernel: cpuidle: using governor menu May 16 05:28:02.837397 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 16 05:28:02.837405 kernel: dca service started, version 1.12.1 May 16 05:28:02.837413 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] May 16 05:28:02.837420 kernel: PCI: Using 
configuration type 1 for base access May 16 05:28:02.837428 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 16 05:28:02.837436 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 16 05:28:02.837445 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 16 05:28:02.837453 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 16 05:28:02.837461 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 16 05:28:02.837469 kernel: ACPI: Added _OSI(Module Device) May 16 05:28:02.837476 kernel: ACPI: Added _OSI(Processor Device) May 16 05:28:02.837484 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 16 05:28:02.837492 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 16 05:28:02.837499 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 16 05:28:02.837507 kernel: ACPI: Interpreter enabled May 16 05:28:02.837516 kernel: ACPI: PM: (supports S0 S3 S5) May 16 05:28:02.837524 kernel: ACPI: Using IOAPIC for interrupt routing May 16 05:28:02.837532 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 16 05:28:02.837540 kernel: PCI: Using E820 reservations for host bridge windows May 16 05:28:02.837547 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 16 05:28:02.837555 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 16 05:28:02.837738 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 05:28:02.837863 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 16 05:28:02.837987 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 16 05:28:02.837998 kernel: PCI host bridge to bus 0000:00 May 16 05:28:02.838120 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 16 05:28:02.838231 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 16 05:28:02.838365 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 16 05:28:02.838476 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] May 16 05:28:02.838585 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] May 16 05:28:02.838707 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] May 16 05:28:02.838817 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 16 05:28:02.838954 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint May 16 05:28:02.839090 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint May 16 05:28:02.839211 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] May 16 05:28:02.839350 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] May 16 05:28:02.839476 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] May 16 05:28:02.839596 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 16 05:28:02.839737 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint May 16 05:28:02.839859 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] May 16 05:28:02.839979 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] May 16 05:28:02.840100 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] May 16 05:28:02.840230 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 16 
05:28:02.840375 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] May 16 05:28:02.840497 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] May 16 05:28:02.840630 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] May 16 05:28:02.840764 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 16 05:28:02.840886 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] May 16 05:28:02.841007 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] May 16 05:28:02.841127 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] May 16 05:28:02.841253 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] May 16 05:28:02.841398 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint May 16 05:28:02.841519 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 16 05:28:02.841655 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint May 16 05:28:02.841776 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] May 16 05:28:02.841895 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] May 16 05:28:02.842030 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint May 16 05:28:02.842152 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] May 16 05:28:02.842162 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 16 05:28:02.842170 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 16 05:28:02.842178 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 16 05:28:02.842185 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 16 05:28:02.842193 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 16 05:28:02.842201 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 16 05:28:02.842208 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 16 05:28:02.842219 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 16 05:28:02.842227 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 16 05:28:02.842234 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 16 05:28:02.842242 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 16 05:28:02.842249 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 16 05:28:02.842257 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 16 05:28:02.842279 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 16 05:28:02.842287 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 16 05:28:02.842297 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 16 05:28:02.842305 kernel: iommu: Default domain type: Translated May 16 05:28:02.842312 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 16 05:28:02.842320 kernel: efivars: Registered efivars operations May 16 05:28:02.842328 kernel: PCI: Using ACPI for IRQ routing May 16 05:28:02.842336 kernel: PCI: pci_cache_line_size set to 64 bytes May 16 05:28:02.842343 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 16 05:28:02.842351 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] May 16 05:28:02.842358 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] May 16 05:28:02.842366 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] May 16 05:28:02.842375 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] May 16 05:28:02.842383 kernel: e820: reserve RAM buffer 
[mem 0x9c8ed000-0x9fffffff] May 16 05:28:02.842390 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] May 16 05:28:02.842398 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] May 16 05:28:02.842520 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 16 05:28:02.842652 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 16 05:28:02.842772 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 16 05:28:02.842786 kernel: vgaarb: loaded May 16 05:28:02.842794 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 16 05:28:02.842801 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 16 05:28:02.842809 kernel: clocksource: Switched to clocksource kvm-clock May 16 05:28:02.842817 kernel: VFS: Disk quotas dquot_6.6.0 May 16 05:28:02.842824 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 16 05:28:02.842832 kernel: pnp: PnP ACPI init May 16 05:28:02.842980 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved May 16 05:28:02.842995 kernel: pnp: PnP ACPI: found 6 devices May 16 05:28:02.843005 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 16 05:28:02.843013 kernel: NET: Registered PF_INET protocol family May 16 05:28:02.843021 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 16 05:28:02.843029 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 16 05:28:02.843037 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 16 05:28:02.843046 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 16 05:28:02.843054 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 16 05:28:02.843062 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 16 05:28:02.843072 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 05:28:02.843080 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 05:28:02.843088 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 16 05:28:02.843096 kernel: NET: Registered PF_XDP protocol family May 16 05:28:02.843219 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window May 16 05:28:02.843357 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned May 16 05:28:02.843469 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 16 05:28:02.843580 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 16 05:28:02.843709 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 16 05:28:02.843820 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] May 16 05:28:02.843929 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] May 16 05:28:02.844040 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] May 16 05:28:02.844052 kernel: PCI: CLS 0 bytes, default 64 May 16 05:28:02.844060 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns May 16 05:28:02.844068 kernel: Initialise system trusted keyrings May 16 05:28:02.844080 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 16 05:28:02.844088 kernel: Key type asymmetric registered May 16 05:28:02.844096 kernel: Asymmetric key parser 'x509' registered May 16 05:28:02.844104 kernel: Block layer 
SCSI generic (bsg) driver version 0.4 loaded (major 250) May 16 05:28:02.844112 kernel: io scheduler mq-deadline registered May 16 05:28:02.844120 kernel: io scheduler kyber registered May 16 05:28:02.844128 kernel: io scheduler bfq registered May 16 05:28:02.844136 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 16 05:28:02.844147 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 16 05:28:02.844155 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 16 05:28:02.844163 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 16 05:28:02.844171 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 16 05:28:02.844179 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 16 05:28:02.844188 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 16 05:28:02.844195 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 16 05:28:02.844203 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 16 05:28:02.844393 kernel: rtc_cmos 00:04: RTC can wake from S4 May 16 05:28:02.844413 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 16 05:28:02.844584 kernel: rtc_cmos 00:04: registered as rtc0 May 16 05:28:02.844713 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T05:28:02 UTC (1747373282) May 16 05:28:02.844827 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 16 05:28:02.844838 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 16 05:28:02.844846 kernel: efifb: probing for efifb May 16 05:28:02.844854 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k May 16 05:28:02.844865 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 May 16 05:28:02.844873 kernel: efifb: scrolling: redraw May 16 05:28:02.844882 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 16 05:28:02.844890 kernel: Console: switching to colour frame buffer device 160x50 May 16 05:28:02.844898 kernel: fb0: EFI VGA frame buffer device May 16 05:28:02.844906 kernel: pstore: Using crash dump compression: deflate May 16 05:28:02.844914 kernel: pstore: Registered efi_pstore as persistent store backend May 16 05:28:02.844922 kernel: NET: Registered PF_INET6 protocol family May 16 05:28:02.844930 kernel: Segment Routing with IPv6 May 16 05:28:02.844938 kernel: In-situ OAM (IOAM) with IPv6 May 16 05:28:02.844948 kernel: NET: Registered PF_PACKET protocol family May 16 05:28:02.844956 kernel: Key type dns_resolver registered May 16 05:28:02.844964 kernel: IPI shorthand broadcast: enabled May 16 05:28:02.844972 kernel: sched_clock: Marking stable (2861003595, 163043365)->(3071822761, -47775801) May 16 05:28:02.844980 kernel: registered taskstats version 1 May 16 05:28:02.844988 kernel: Loading compiled-in X.509 certificates May 16 05:28:02.844996 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: bdcb483d7df54fb18e2c40c35e34935cfa928c44' May 16 05:28:02.845004 kernel: Demotion targets for Node 0: null May 16 05:28:02.845012 kernel: Key type .fscrypt registered May 16 05:28:02.845022 kernel: Key type fscrypt-provisioning registered May 16 05:28:02.845030 kernel: ima: No TPM chip found, activating TPM-bypass! May 16 05:28:02.845038 kernel: ima: Allocated hash algorithm: sha1 May 16 05:28:02.845046 kernel: ima: No architecture policies found May 16 05:28:02.845053 kernel: clk: Disabling unused clocks May 16 05:28:02.845061 kernel: Warning: unable to open an initial console. 
May 16 05:28:02.845070 kernel: Freeing unused kernel image (initmem) memory: 54416K May 16 05:28:02.845078 kernel: Write protecting the kernel read-only data: 24576k May 16 05:28:02.845088 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 16 05:28:02.845095 kernel: Run /init as init process May 16 05:28:02.845103 kernel: with arguments: May 16 05:28:02.845111 kernel: /init May 16 05:28:02.845119 kernel: with environment: May 16 05:28:02.845127 kernel: HOME=/ May 16 05:28:02.845135 kernel: TERM=linux May 16 05:28:02.845143 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 05:28:02.845152 systemd[1]: Successfully made /usr/ read-only. May 16 05:28:02.845164 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 05:28:02.845174 systemd[1]: Detected virtualization kvm. May 16 05:28:02.845182 systemd[1]: Detected architecture x86-64. May 16 05:28:02.845190 systemd[1]: Running in initrd. May 16 05:28:02.845198 systemd[1]: No hostname configured, using default hostname. May 16 05:28:02.845207 systemd[1]: Hostname set to . May 16 05:28:02.845216 systemd[1]: Initializing machine ID from VM UUID. May 16 05:28:02.845226 systemd[1]: Queued start job for default target initrd.target. May 16 05:28:02.845235 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 05:28:02.845243 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 05:28:02.845252 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 16 05:28:02.845261 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 05:28:02.845283 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 16 05:28:02.845293 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 16 05:28:02.845305 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 16 05:28:02.845314 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 16 05:28:02.845322 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 05:28:02.845331 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 05:28:02.845339 systemd[1]: Reached target paths.target - Path Units. May 16 05:28:02.845348 systemd[1]: Reached target slices.target - Slice Units. May 16 05:28:02.845356 systemd[1]: Reached target swap.target - Swaps. May 16 05:28:02.845365 systemd[1]: Reached target timers.target - Timer Units. May 16 05:28:02.845373 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 16 05:28:02.845384 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 05:28:02.845392 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 16 05:28:02.845401 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 16 05:28:02.845409 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 16 05:28:02.845417 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 05:28:02.845426 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 05:28:02.845434 systemd[1]: Reached target sockets.target - Socket Units. May 16 05:28:02.845443 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 16 05:28:02.845453 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 05:28:02.845462 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 16 05:28:02.845471 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 16 05:28:02.845479 systemd[1]: Starting systemd-fsck-usr.service... May 16 05:28:02.845488 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 05:28:02.845496 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 05:28:02.845505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 05:28:02.845513 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 16 05:28:02.845529 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 05:28:02.845588 systemd-journald[220]: Collecting audit messages is disabled. May 16 05:28:02.845618 systemd[1]: Finished systemd-fsck-usr.service. May 16 05:28:02.845627 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 05:28:02.845636 systemd-journald[220]: Journal started May 16 05:28:02.845655 systemd-journald[220]: Runtime Journal (/run/log/journal/f0912df71ac64abc839df513f0afe591) is 6M, max 48.5M, 42.4M free. May 16 05:28:02.845871 systemd-modules-load[223]: Inserted module 'overlay' May 16 05:28:02.850312 systemd[1]: Started systemd-journald.service - Journal Service. May 16 05:28:02.852561 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 05:28:02.856620 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 05:28:02.860820 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 05:28:02.865549 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 05:28:02.874293 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 05:28:02.876899 systemd-modules-load[223]: Inserted module 'br_netfilter' May 16 05:28:02.877949 kernel: Bridge firewalling registered May 16 05:28:02.879283 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 05:28:02.882844 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 05:28:02.885393 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 16 05:28:02.890204 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 05:28:02.893003 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 05:28:02.904404 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 16 05:28:02.905172 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 05:28:02.911462 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 16 05:28:02.917961 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 05:28:02.921969 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 05:28:02.930317 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3d34a100811c7d0b0788e18c6e7d09762e715c8b4f61568827372c0312d26be3 May 16 05:28:02.970764 systemd-resolved[268]: Positive Trust Anchors: May 16 05:28:02.970779 systemd-resolved[268]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 05:28:02.970811 systemd-resolved[268]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 05:28:02.973235 systemd-resolved[268]: Defaulting to hostname 'linux'. May 16 05:28:02.981223 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 05:28:02.984198 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 05:28:03.041306 kernel: SCSI subsystem initialized May 16 05:28:03.050297 kernel: Loading iSCSI transport class v2.0-870. May 16 05:28:03.061299 kernel: iscsi: registered transport (tcp) May 16 05:28:03.083307 kernel: iscsi: registered transport (qla4xxx) May 16 05:28:03.083345 kernel: QLogic iSCSI HBA Driver May 16 05:28:03.104827 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 05:28:03.121396 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 05:28:03.123360 systemd[1]: Reached target network-pre.target - Preparation for Network. May 16 05:28:03.181886 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 05:28:03.183824 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 16 05:28:03.245307 kernel: raid6: avx2x4 gen() 30613 MB/s May 16 05:28:03.262313 kernel: raid6: avx2x2 gen() 31286 MB/s May 16 05:28:03.279385 kernel: raid6: avx2x1 gen() 26030 MB/s May 16 05:28:03.279449 kernel: raid6: using algorithm avx2x2 gen() 31286 MB/s May 16 05:28:03.297397 kernel: raid6: .... xor() 19959 MB/s, rmw enabled May 16 05:28:03.297428 kernel: raid6: using avx2x2 recovery algorithm May 16 05:28:03.318304 kernel: xor: automatically using best checksumming function avx May 16 05:28:03.481313 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 05:28:03.490658 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 05:28:03.492809 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 16 05:28:03.531823 systemd-udevd[474]: Using default interface naming scheme 'v255'. May 16 05:28:03.537365 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 05:28:03.540777 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 05:28:03.574834 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation May 16 05:28:03.605782 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 16 05:28:03.609392 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 05:28:03.691234 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 05:28:03.695932 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 05:28:03.732308 kernel: cryptd: max_cpu_qlen set to 1000 May 16 05:28:03.741287 kernel: AES CTR mode by8 optimization enabled May 16 05:28:03.750317 kernel: libata version 3.00 loaded. May 16 05:28:03.753288 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 16 05:28:03.760301 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 16 05:28:03.774177 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 05:28:03.774351 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 05:28:03.774370 kernel: GPT:9289727 != 19775487 May 16 05:28:03.774381 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 05:28:03.774391 kernel: GPT:9289727 != 19775487 May 16 05:28:03.774400 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 05:28:03.774411 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 05:28:03.771962 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 05:28:03.772131 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 05:28:03.774286 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 05:28:03.775853 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 16 05:28:03.783400 kernel: ahci 0000:00:1f.2: version 3.0 May 16 05:28:03.796546 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 16 05:28:03.796564 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 16 05:28:03.796734 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 16 05:28:03.796878 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 16 05:28:03.797017 kernel: scsi host0: ahci May 16 05:28:03.797172 kernel: scsi host1: ahci May 16 05:28:03.797354 kernel: scsi host2: ahci May 16 05:28:03.797508 kernel: scsi host3: ahci May 16 05:28:03.797660 kernel: scsi host4: ahci May 16 05:28:03.797800 kernel: scsi host5: ahci May 16 05:28:03.797940 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 May 16 05:28:03.797953 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 May 16 05:28:03.797969 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 May 16 05:28:03.797979 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 May 16 05:28:03.797990 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 May 16 05:28:03.798013 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 May 16 05:28:03.815098 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 05:28:03.825456 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 16 05:28:03.835048 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 16 05:28:03.861079 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 05:28:03.868330 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 16 05:28:03.868612 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 16 05:28:03.872847 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 05:28:03.906139 disk-uuid[638]: Primary Header is updated. May 16 05:28:03.906139 disk-uuid[638]: Secondary Entries is updated. May 16 05:28:03.906139 disk-uuid[638]: Secondary Header is updated. 
May 16 05:28:03.910286 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 05:28:03.915313 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 05:28:04.107302 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 16 05:28:04.107389 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 16 05:28:04.108304 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 16 05:28:04.109295 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 16 05:28:04.109313 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 16 05:28:04.110295 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 16 05:28:04.111303 kernel: ata3.00: applying bridge limits May 16 05:28:04.111332 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 16 05:28:04.112301 kernel: ata3.00: configured for UDMA/100 May 16 05:28:04.113330 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 16 05:28:04.158324 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 16 05:28:04.172288 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 16 05:28:04.172308 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 16 05:28:04.578801 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 05:28:04.581034 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 05:28:04.582470 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 05:28:04.584922 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 05:28:04.588130 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 05:28:04.626761 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 05:28:04.916308 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 05:28:04.916950 disk-uuid[639]: The operation has completed successfully. May 16 05:28:04.950864 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 05:28:04.950985 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 05:28:04.983914 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 05:28:05.011761 sh[669]: Success May 16 05:28:05.029430 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 16 05:28:05.029460 kernel: device-mapper: uevent: version 1.0.3 May 16 05:28:05.030632 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 16 05:28:05.039293 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 16 05:28:05.072369 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 05:28:05.074808 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 16 05:28:05.101378 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 16 05:28:05.109313 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 16 05:28:05.109357 kernel: BTRFS: device fsid 902d5020-5ef8-4867-9c12-521b17a28d91 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (681) May 16 05:28:05.110670 kernel: BTRFS info (device dm-0): first mount of filesystem 902d5020-5ef8-4867-9c12-521b17a28d91 May 16 05:28:05.111557 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 16 05:28:05.111580 kernel: BTRFS info (device dm-0): using free-space-tree May 16 05:28:05.116681 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 05:28:05.117621 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 16 05:28:05.118757 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 05:28:05.120637 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 05:28:05.121815 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 05:28:05.159593 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (715) May 16 05:28:05.159644 kernel: BTRFS info (device vda6): first mount of filesystem 537c4928-e0e3-497e-abf9-5cf4aa6e5693 May 16 05:28:05.159655 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 05:28:05.160484 kernel: BTRFS info (device vda6): using free-space-tree May 16 05:28:05.167306 kernel: BTRFS info (device vda6): last unmount of filesystem 537c4928-e0e3-497e-abf9-5cf4aa6e5693 May 16 05:28:05.168520 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 05:28:05.170605 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 16 05:28:05.255336 ignition[761]: Ignition 2.21.0 May 16 05:28:05.256230 ignition[761]: Stage: fetch-offline May 16 05:28:05.256290 ignition[761]: no configs at "/usr/lib/ignition/base.d" May 16 05:28:05.256301 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:28:05.256403 ignition[761]: parsed url from cmdline: "" May 16 05:28:05.256407 ignition[761]: no config URL provided May 16 05:28:05.256415 ignition[761]: reading system config file "/usr/lib/ignition/user.ign" May 16 05:28:05.256424 ignition[761]: no config at "/usr/lib/ignition/user.ign" May 16 05:28:05.256445 ignition[761]: op(1): [started] loading QEMU firmware config module May 16 05:28:05.256451 ignition[761]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 05:28:05.265404 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 05:28:05.265591 ignition[761]: op(1): [finished] loading QEMU firmware config module May 16 05:28:05.271701 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 05:28:05.308525 ignition[761]: parsing config with SHA512: 73aa929942e8e56960190a0f7015797f14a99f01926022b2a877e9e3c5b193854d9c63cde32eca66c71cfeaaa02cfa567edf58e21113bd97c76f47883145d47d May 16 05:28:05.312193 unknown[761]: fetched base config from "system" May 16 05:28:05.312600 ignition[761]: fetch-offline: fetch-offline passed May 16 05:28:05.312204 unknown[761]: fetched user config from "qemu" May 16 05:28:05.312665 ignition[761]: Ignition finished successfully May 16 05:28:05.316335 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 16 05:28:05.327332 systemd-networkd[859]: lo: Link UP May 16 05:28:05.327343 systemd-networkd[859]: lo: Gained carrier May 16 05:28:05.328905 systemd-networkd[859]: Enumeration completed May 16 05:28:05.329283 systemd-networkd[859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 05:28:05.329287 systemd-networkd[859]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 05:28:05.329739 systemd-networkd[859]: eth0: Link UP May 16 05:28:05.329742 systemd-networkd[859]: eth0: Gained carrier May 16 05:28:05.329750 systemd-networkd[859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 05:28:05.330497 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 05:28:05.333006 systemd[1]: Reached target network.target - Network. May 16 05:28:05.333456 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 05:28:05.336756 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 16 05:28:05.353340 systemd-networkd[859]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 05:28:05.368673 ignition[863]: Ignition 2.21.0 May 16 05:28:05.368688 ignition[863]: Stage: kargs May 16 05:28:05.368836 ignition[863]: no configs at "/usr/lib/ignition/base.d" May 16 05:28:05.368847 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:28:05.373400 ignition[863]: kargs: kargs passed May 16 05:28:05.373454 ignition[863]: Ignition finished successfully May 16 05:28:05.379104 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 05:28:05.382095 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 16 05:28:05.417201 ignition[872]: Ignition 2.21.0 May 16 05:28:05.417216 ignition[872]: Stage: disks May 16 05:28:05.417391 ignition[872]: no configs at "/usr/lib/ignition/base.d" May 16 05:28:05.417402 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:28:05.419692 ignition[872]: disks: disks passed May 16 05:28:05.419794 ignition[872]: Ignition finished successfully May 16 05:28:05.422833 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 05:28:05.424133 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 05:28:05.425213 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 05:28:05.427552 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 05:28:05.427879 systemd[1]: Reached target sysinit.target - System Initialization. May 16 05:28:05.428221 systemd[1]: Reached target basic.target - Basic System. May 16 05:28:05.429673 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 05:28:05.465112 systemd-fsck[882]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 16 05:28:05.472840 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 05:28:05.474512 systemd[1]: Mounting sysroot.mount - /sysroot... May 16 05:28:05.580302 kernel: EXT4-fs (vda9): mounted filesystem c6031f04-b45d-4ec8-a78e-9b0eb2cfd779 r/w with ordered data mode. Quota mode: none. May 16 05:28:05.581340 systemd[1]: Mounted sysroot.mount - /sysroot. 
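eth0 above is matched by the stock catch-all unit /usr/lib/systemd/network/zz-default.network and configured with DHCP, which is where the 10.0.0.134/16 lease from 10.0.0.1 comes from. Such a catch-all unit looks roughly like the following; the file actually shipped in the image may differ:

    [Match]
    Name=*

    [Network]
    DHCP=yes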
May 16 05:28:05.582324 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 05:28:05.585142 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 05:28:05.586905 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 05:28:05.587959 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 16 05:28:05.588003 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 05:28:05.588025 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 05:28:05.602550 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 05:28:05.605107 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 05:28:05.607444 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (890) May 16 05:28:05.609865 kernel: BTRFS info (device vda6): first mount of filesystem 537c4928-e0e3-497e-abf9-5cf4aa6e5693 May 16 05:28:05.609896 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 05:28:05.609908 kernel: BTRFS info (device vda6): using free-space-tree May 16 05:28:05.614691 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 05:28:05.645706 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory May 16 05:28:05.649823 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory May 16 05:28:05.654853 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory May 16 05:28:05.659594 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory May 16 05:28:05.747179 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 05:28:05.748659 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 05:28:05.750421 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 05:28:05.770285 kernel: BTRFS info (device vda6): last unmount of filesystem 537c4928-e0e3-497e-abf9-5cf4aa6e5693 May 16 05:28:05.782007 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 16 05:28:05.791784 ignition[1004]: INFO : Ignition 2.21.0 May 16 05:28:05.791784 ignition[1004]: INFO : Stage: mount May 16 05:28:05.793441 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 05:28:05.793441 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:28:05.795750 ignition[1004]: INFO : mount: mount passed May 16 05:28:05.795750 ignition[1004]: INFO : Ignition finished successfully May 16 05:28:05.800013 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 05:28:05.802201 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 05:28:06.108346 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 05:28:06.109986 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 05:28:06.142280 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1016) May 16 05:28:06.142319 kernel: BTRFS info (device vda6): first mount of filesystem 537c4928-e0e3-497e-abf9-5cf4aa6e5693 May 16 05:28:06.142330 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 05:28:06.143757 kernel: BTRFS info (device vda6): using free-space-tree May 16 05:28:06.147358 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
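The OEM partition above is the BTRFS filesystem labelled OEM on /dev/vda6, mounted at /sysroot/oem so the following Ignition stages can reach it. An equivalent manual mount, purely for illustration:

    mount -t btrfs LABEL=OEM /sysroot/oem    # same device as /dev/vda6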
May 16 05:28:06.180497 ignition[1033]: INFO : Ignition 2.21.0 May 16 05:28:06.180497 ignition[1033]: INFO : Stage: files May 16 05:28:06.182540 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 05:28:06.182540 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:28:06.184827 ignition[1033]: DEBUG : files: compiled without relabeling support, skipping May 16 05:28:06.186761 ignition[1033]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 05:28:06.186761 ignition[1033]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 05:28:06.191279 ignition[1033]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 05:28:06.192846 ignition[1033]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 05:28:06.194654 unknown[1033]: wrote ssh authorized keys file for user: core May 16 05:28:06.195856 ignition[1033]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 05:28:06.197283 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 16 05:28:06.199314 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 16 05:28:06.235794 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 05:28:06.380413 systemd-networkd[859]: eth0: Gained IPv6LL May 16 05:28:06.428390 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 16 05:28:06.430315 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 05:28:06.432055 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 16 05:28:06.790861 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 16 05:28:06.885063 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 05:28:06.885063 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 16 05:28:06.889322 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 16 05:28:06.889322 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 05:28:06.889322 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 05:28:06.889322 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 05:28:06.889322 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 05:28:06.889322 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 05:28:06.889322 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 05:28:06.903096 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 05:28:06.903096 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 05:28:06.903096 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 05:28:06.903096 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 05:28:06.903096 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 05:28:06.903096 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 16 05:28:07.408156 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 16 05:28:07.862176 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 05:28:07.862176 ignition[1033]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 16 05:28:07.865962 ignition[1033]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 05:28:07.870858 ignition[1033]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 05:28:07.870858 ignition[1033]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 16 05:28:07.870858 ignition[1033]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 16 05:28:07.875576 ignition[1033]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 05:28:07.875576 ignition[1033]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 05:28:07.875576 ignition[1033]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 16 05:28:07.875576 ignition[1033]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 16 05:28:07.901019 ignition[1033]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 05:28:07.907147 ignition[1033]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 05:28:07.908912 ignition[1033]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 16 05:28:07.908912 ignition[1033]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 16 05:28:07.908912 ignition[1033]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 16 05:28:07.908912 ignition[1033]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" 
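The user creation, SSH key install, file downloads, sysext link and unit presets logged in this files stage are all driven by sections of the Ignition config fetched earlier. A hedged sketch of a spec-3.x config that would produce this kind of activity; the SSH key, unit body and exact fields are illustrative, not recovered from this log:

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder-key"] }
        ]
      },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz" }
          }
        ],
        "links": [
          {
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
          }
        ]
      },
      "systemd": {
        "units": [
          {
            "name": "prepare-helm.service",
            "enabled": true,
            "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n\n[Service]\nType=oneshot\nRemainAfterExit=true\nExecStart=/usr/bin/tar -C /opt/bin -xf /opt/helm-v3.17.0-linux-amd64.tar.gz --strip-components=1 linux-amd64/helm\n\n[Install]\nWantedBy=multi-user.target\n"
          },
          { "name": "coreos-metadata.service", "enabled": false }
        ]
      }
    }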
May 16 05:28:07.908912 ignition[1033]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 05:28:07.908912 ignition[1033]: INFO : files: files passed May 16 05:28:07.908912 ignition[1033]: INFO : Ignition finished successfully May 16 05:28:07.918815 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 05:28:07.922188 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 05:28:07.924244 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 05:28:07.938052 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 05:28:07.938179 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 16 05:28:07.943036 initrd-setup-root-after-ignition[1062]: grep: /sysroot/oem/oem-release: No such file or directory May 16 05:28:07.947781 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 05:28:07.947781 initrd-setup-root-after-ignition[1064]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 16 05:28:07.951002 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 05:28:07.954029 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 05:28:07.957362 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 16 05:28:07.960691 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 16 05:28:08.012205 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 05:28:08.012396 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 16 05:28:08.013352 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 16 05:28:08.017506 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 05:28:08.017898 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 05:28:08.021195 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 05:28:08.054920 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 05:28:08.056601 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 05:28:08.075401 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 16 05:28:08.075783 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 05:28:08.076138 systemd[1]: Stopped target timers.target - Timer Units. May 16 05:28:08.076840 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 05:28:08.076959 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 05:28:08.083466 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 05:28:08.084147 systemd[1]: Stopped target basic.target - Basic System. May 16 05:28:08.084650 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 05:28:08.084979 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 05:28:08.085324 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 16 05:28:08.085826 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
May 16 05:28:08.086147 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 16 05:28:08.086648 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 16 05:28:08.086986 systemd[1]: Stopped target sysinit.target - System Initialization. May 16 05:28:08.087326 systemd[1]: Stopped target local-fs.target - Local File Systems. May 16 05:28:08.087815 systemd[1]: Stopped target swap.target - Swaps. May 16 05:28:08.088171 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 05:28:08.088306 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 16 05:28:08.089048 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 16 05:28:08.089563 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 05:28:08.089858 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 16 05:28:08.089948 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 05:28:08.090203 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 05:28:08.090331 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 16 05:28:08.115690 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 05:28:08.115812 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 16 05:28:08.116660 systemd[1]: Stopped target paths.target - Path Units. May 16 05:28:08.116907 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 05:28:08.124333 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 05:28:08.124690 systemd[1]: Stopped target slices.target - Slice Units. May 16 05:28:08.125016 systemd[1]: Stopped target sockets.target - Socket Units. May 16 05:28:08.125534 systemd[1]: iscsid.socket: Deactivated successfully. May 16 05:28:08.125623 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 16 05:28:08.130644 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 05:28:08.130728 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 05:28:08.132206 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 05:28:08.132344 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 05:28:08.134045 systemd[1]: ignition-files.service: Deactivated successfully. May 16 05:28:08.134148 systemd[1]: Stopped ignition-files.service - Ignition (files). May 16 05:28:08.138891 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 16 05:28:08.140219 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 16 05:28:08.142193 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 05:28:08.142325 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 16 05:28:08.142766 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 05:28:08.142861 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 16 05:28:08.150813 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 05:28:08.154377 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 16 05:28:08.177775 ignition[1088]: INFO : Ignition 2.21.0 May 16 05:28:08.177775 ignition[1088]: INFO : Stage: umount May 16 05:28:08.177775 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 05:28:08.177775 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:28:08.183224 ignition[1088]: INFO : umount: umount passed May 16 05:28:08.183224 ignition[1088]: INFO : Ignition finished successfully May 16 05:28:08.178117 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 05:28:08.183777 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 05:28:08.183892 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 16 05:28:08.185333 systemd[1]: Stopped target network.target - Network. May 16 05:28:08.187292 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 05:28:08.187354 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 16 05:28:08.187803 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 05:28:08.187850 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 16 05:28:08.188139 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 05:28:08.188186 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 16 05:28:08.188645 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 16 05:28:08.188687 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 16 05:28:08.189076 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 16 05:28:08.189557 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 16 05:28:08.206215 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 05:28:08.206872 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 16 05:28:08.210506 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 16 05:28:08.210742 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 05:28:08.210859 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 16 05:28:08.216205 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 16 05:28:08.217671 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 16 05:28:08.219200 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 05:28:08.219245 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 16 05:28:08.222444 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 16 05:28:08.224490 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 05:28:08.224546 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 05:28:08.225013 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 05:28:08.225058 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 05:28:08.228658 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 05:28:08.228734 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 16 05:28:08.229124 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 16 05:28:08.229169 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 05:28:08.233710 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 16 05:28:08.236391 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 05:28:08.236462 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 16 05:28:08.253221 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 05:28:08.253407 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 16 05:28:08.256148 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 05:28:08.256356 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 05:28:08.257026 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 05:28:08.257070 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 16 05:28:08.259852 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 05:28:08.259890 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 16 05:28:08.260145 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 05:28:08.260192 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 16 05:28:08.261016 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 05:28:08.261064 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 16 05:28:08.261830 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 05:28:08.261881 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 05:28:08.272631 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 16 05:28:08.273064 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 16 05:28:08.273115 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 16 05:28:08.278223 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 05:28:08.278312 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 05:28:08.281630 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 16 05:28:08.281680 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 05:28:08.286485 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 05:28:08.286546 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 16 05:28:08.286958 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 05:28:08.286999 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 05:28:08.293002 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 16 05:28:08.293067 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. May 16 05:28:08.293111 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 16 05:28:08.293160 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 16 05:28:08.303705 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 05:28:08.303833 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 16 05:28:08.406785 systemd[1]: sysroot-boot.service: Deactivated successfully. 
May 16 05:28:08.406986 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 16 05:28:08.408441 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 16 05:28:08.411647 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 05:28:08.411773 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 16 05:28:08.413379 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 16 05:28:08.436699 systemd[1]: Switching root. May 16 05:28:08.480621 systemd-journald[220]: Journal stopped May 16 05:28:09.957924 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). May 16 05:28:09.957992 kernel: SELinux: policy capability network_peer_controls=1 May 16 05:28:09.958006 kernel: SELinux: policy capability open_perms=1 May 16 05:28:09.958018 kernel: SELinux: policy capability extended_socket_class=1 May 16 05:28:09.958029 kernel: SELinux: policy capability always_check_network=0 May 16 05:28:09.958041 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 05:28:09.958052 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 05:28:09.958072 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 05:28:09.958089 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 05:28:09.958100 kernel: SELinux: policy capability userspace_initial_context=0 May 16 05:28:09.958111 kernel: audit: type=1403 audit(1747373289.036:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 05:28:09.958130 systemd[1]: Successfully loaded SELinux policy in 52.837ms. May 16 05:28:09.958154 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.368ms. May 16 05:28:09.958167 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 05:28:09.958179 systemd[1]: Detected virtualization kvm. May 16 05:28:09.958193 systemd[1]: Detected architecture x86-64. May 16 05:28:09.958205 systemd[1]: Detected first boot. May 16 05:28:09.958219 systemd[1]: Initializing machine ID from VM UUID. May 16 05:28:09.958231 zram_generator::config[1133]: No configuration found. May 16 05:28:09.958243 kernel: Guest personality initialized and is inactive May 16 05:28:09.958261 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 16 05:28:09.958359 kernel: Initialized host personality May 16 05:28:09.958371 kernel: NET: Registered PF_VSOCK protocol family May 16 05:28:09.958383 systemd[1]: Populated /etc with preset unit settings. May 16 05:28:09.958397 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 16 05:28:09.958409 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 05:28:09.958424 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 16 05:28:09.958443 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 05:28:09.958456 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 16 05:28:09.958468 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 16 05:28:09.958480 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 16 05:28:09.958492 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
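The zram_generator message above ("No configuration found") means no compressed-RAM devices are set up on this image. For reference, a configuration the generator would act on looks roughly like this (values illustrative):

    # /etc/systemd/zram-generator.conf
    [zram0]
    zram-size = ram / 2
    compression-algorithm = zstd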
May 16 05:28:09.958511 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 16 05:28:09.958524 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 16 05:28:09.958539 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 16 05:28:09.958557 systemd[1]: Created slice user.slice - User and Session Slice. May 16 05:28:09.958569 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 05:28:09.958582 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 05:28:09.958594 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 16 05:28:09.958606 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 16 05:28:09.958618 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 16 05:28:09.958633 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 05:28:09.958645 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 16 05:28:09.958658 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 05:28:09.958672 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 05:28:09.958684 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 16 05:28:09.958701 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 16 05:28:09.958717 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 16 05:28:09.958730 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 16 05:28:09.958742 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 05:28:09.958760 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 05:28:09.958774 systemd[1]: Reached target slices.target - Slice Units. May 16 05:28:09.958786 systemd[1]: Reached target swap.target - Swaps. May 16 05:28:09.958798 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 16 05:28:09.958810 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 16 05:28:09.958823 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 16 05:28:09.958835 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 05:28:09.958847 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 05:28:09.958859 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 05:28:09.958871 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 16 05:28:09.958885 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 16 05:28:09.958898 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 16 05:28:09.958910 systemd[1]: Mounting media.mount - External Media Directory... May 16 05:28:09.958922 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 05:28:09.958934 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 16 05:28:09.958947 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
May 16 05:28:09.958959 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 16 05:28:09.958971 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 05:28:09.958986 systemd[1]: Reached target machines.target - Containers. May 16 05:28:09.958998 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 16 05:28:09.959010 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 05:28:09.959022 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 05:28:09.959034 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 16 05:28:09.959046 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 05:28:09.959058 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 05:28:09.959070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 05:28:09.959082 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 16 05:28:09.959096 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 05:28:09.959109 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 05:28:09.959121 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 16 05:28:09.959133 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 16 05:28:09.959145 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 16 05:28:09.959157 systemd[1]: Stopped systemd-fsck-usr.service. May 16 05:28:09.959169 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 05:28:09.959182 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 05:28:09.959196 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 05:28:09.959208 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 05:28:09.959220 kernel: loop: module loaded May 16 05:28:09.959231 kernel: fuse: init (API version 7.41) May 16 05:28:09.959242 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 16 05:28:09.959254 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 16 05:28:09.959279 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 05:28:09.959295 systemd[1]: verity-setup.service: Deactivated successfully. May 16 05:28:09.959307 systemd[1]: Stopped verity-setup.service. May 16 05:28:09.959322 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 05:28:09.959334 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 16 05:28:09.959350 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 16 05:28:09.959362 kernel: ACPI: bus type drm_connector registered May 16 05:28:09.959374 systemd[1]: Mounted media.mount - External Media Directory. 
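The modprobe@*.service units being started above are instances of a template that simply runs modprobe on the instance name, which is why the fuse, loop and drm_connector kernel messages appear right alongside them. Equivalent manual steps, for reference:

    systemctl start modprobe@fuse.service
    # effectively the same as
    modprobe fuse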
May 16 05:28:09.959386 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 16 05:28:09.959398 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 16 05:28:09.959409 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 16 05:28:09.959422 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 16 05:28:09.959440 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 05:28:09.959477 systemd-journald[1208]: Collecting audit messages is disabled. May 16 05:28:09.959502 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 05:28:09.959514 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 16 05:28:09.959526 systemd-journald[1208]: Journal started May 16 05:28:09.959548 systemd-journald[1208]: Runtime Journal (/run/log/journal/f0912df71ac64abc839df513f0afe591) is 6M, max 48.5M, 42.4M free. May 16 05:28:09.651248 systemd[1]: Queued start job for default target multi-user.target. May 16 05:28:09.672550 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 16 05:28:09.673059 systemd[1]: systemd-journald.service: Deactivated successfully. May 16 05:28:09.961972 systemd[1]: Started systemd-journald.service - Journal Service. May 16 05:28:09.963854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 05:28:09.964128 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 05:28:09.965611 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 05:28:09.965864 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 05:28:09.967289 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 05:28:09.967526 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 05:28:09.969136 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 05:28:09.969366 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 16 05:28:09.970834 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 05:28:09.971062 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 05:28:09.972606 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 05:28:09.974069 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 05:28:09.975817 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 16 05:28:09.977655 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 16 05:28:09.999849 systemd[1]: Reached target network-pre.target - Preparation for Network. May 16 05:28:10.002775 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 16 05:28:10.007195 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 16 05:28:10.008367 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 05:28:10.008398 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 05:28:10.010652 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 16 05:28:10.015393 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
May 16 05:28:10.016668 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 05:28:10.018040 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 16 05:28:10.022301 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 16 05:28:10.024457 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 05:28:10.025539 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 16 05:28:10.027327 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 05:28:10.029671 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 05:28:10.033493 systemd-journald[1208]: Time spent on flushing to /var/log/journal/f0912df71ac64abc839df513f0afe591 is 18.856ms for 1067 entries. May 16 05:28:10.033493 systemd-journald[1208]: System Journal (/var/log/journal/f0912df71ac64abc839df513f0afe591) is 8M, max 195.6M, 187.6M free. May 16 05:28:10.075252 systemd-journald[1208]: Received client request to flush runtime journal. May 16 05:28:10.075414 kernel: loop0: detected capacity change from 0 to 224512 May 16 05:28:10.033177 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 16 05:28:10.042709 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 05:28:10.045766 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 16 05:28:10.048571 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 05:28:10.050095 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 16 05:28:10.064477 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 16 05:28:10.065919 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 16 05:28:10.070057 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 16 05:28:10.080502 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 16 05:28:10.086917 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 05:28:10.093789 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. May 16 05:28:10.094235 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. May 16 05:28:10.098292 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 16 05:28:10.102718 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 05:28:10.107461 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 16 05:28:10.121527 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 16 05:28:10.135296 kernel: loop1: detected capacity change from 0 to 146240 May 16 05:28:10.157754 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 16 05:28:10.162146 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
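The journal sizes reported above (runtime journal under /run capped at 48.5M, system journal under /var/log/journal capped at 195.6M) follow journald's size heuristics; they can be pinned explicitly with a drop-in if needed (path and values illustrative):

    # /etc/systemd/journald.conf.d/size.conf
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=196M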
May 16 05:28:10.165328 kernel: loop2: detected capacity change from 0 to 113872 May 16 05:28:10.209302 kernel: loop3: detected capacity change from 0 to 224512 May 16 05:28:10.214081 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. May 16 05:28:10.214417 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. May 16 05:28:10.222459 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 05:28:10.280292 kernel: loop4: detected capacity change from 0 to 146240 May 16 05:28:10.296303 kernel: loop5: detected capacity change from 0 to 113872 May 16 05:28:10.309748 (sd-merge)[1276]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 16 05:28:10.310838 (sd-merge)[1276]: Merged extensions into '/usr'. May 16 05:28:10.316374 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)... May 16 05:28:10.316395 systemd[1]: Reloading... May 16 05:28:10.408317 zram_generator::config[1303]: No configuration found. May 16 05:28:10.546285 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 05:28:10.613346 ldconfig[1247]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 16 05:28:10.633936 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 16 05:28:10.634365 systemd[1]: Reloading finished in 317 ms. May 16 05:28:10.666708 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 16 05:28:10.668404 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 16 05:28:10.682807 systemd[1]: Starting ensure-sysext.service... May 16 05:28:10.684840 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 05:28:10.695722 systemd[1]: Reload requested from client PID 1340 ('systemctl') (unit ensure-sysext.service)... May 16 05:28:10.695740 systemd[1]: Reloading... May 16 05:28:10.713387 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 16 05:28:10.713441 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 16 05:28:10.713733 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 16 05:28:10.713991 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 16 05:28:10.714882 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 16 05:28:10.715138 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. May 16 05:28:10.715209 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. May 16 05:28:10.745949 zram_generator::config[1369]: No configuration found. May 16 05:28:10.836379 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. May 16 05:28:10.836402 systemd-tmpfiles[1342]: Skipping /boot May 16 05:28:10.859583 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. 
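The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr; the kubernetes image is the .raw file Ignition linked at /etc/extensions/kubernetes.raw earlier. An extension is only merged if it ships an extension-release file whose fields match the host, roughly as follows (layout and fields illustrative):

    # inside the extension image:
    usr/lib/extension-release.d/extension-release.kubernetes
    #   containing, e.g.:
    #     ID=flatcar
    #     SYSEXT_LEVEL=1.0
    # inspect or re-apply the merge at runtime with:
    systemd-sysext list
    systemd-sysext refresh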
May 16 05:28:10.859597 systemd-tmpfiles[1342]: Skipping /boot May 16 05:28:10.963143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 05:28:11.044454 systemd[1]: Reloading finished in 348 ms. May 16 05:28:11.070407 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 16 05:28:11.096863 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 05:28:11.106389 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 05:28:11.109166 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 16 05:28:11.111615 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 16 05:28:11.123237 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 05:28:11.126622 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 05:28:11.131455 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 16 05:28:11.135124 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 05:28:11.135314 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 05:28:11.141201 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 05:28:11.144578 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 05:28:11.147714 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 05:28:11.149035 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 05:28:11.149189 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 05:28:11.149335 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 05:28:11.151473 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 16 05:28:11.163586 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 16 05:28:11.166995 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 16 05:28:11.169458 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 16 05:28:11.171591 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 05:28:11.177824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 05:28:11.179914 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 05:28:11.180154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 05:28:11.182031 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 05:28:11.182253 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 05:28:11.186867 systemd-udevd[1412]: Using default interface naming scheme 'v255'. 
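The earlier "Duplicate line for path ..." warnings and the tmpfiles-setup run finishing above both process tmpfiles.d entries, one directive per line. The format, with an illustrative entry for one of the flagged paths (mode and owner are guesses, not taken from the image):

    # tmpfiles.d format: Type Path Mode User Group Age Argument
    d /var/lib/nfs/sm 0700 root root -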
May 16 05:28:11.190790 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 05:28:11.191045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 05:28:11.194207 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 05:28:11.198872 augenrules[1443]: No rules May 16 05:28:11.198976 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 05:28:11.205552 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 05:28:11.207790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 05:28:11.207918 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 05:28:11.208022 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 05:28:11.210224 systemd[1]: audit-rules.service: Deactivated successfully. May 16 05:28:11.211032 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 05:28:11.212732 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 16 05:28:11.214697 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 16 05:28:11.216486 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 05:28:11.216708 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 05:28:11.218465 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 05:28:11.218711 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 05:28:11.220505 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 05:28:11.220712 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 05:28:11.230454 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 05:28:11.233057 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 05:28:11.234443 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 05:28:11.235517 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 05:28:11.237586 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 05:28:11.242332 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 05:28:11.251250 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 05:28:11.255650 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 05:28:11.256874 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 05:28:11.256985 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
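The "augenrules: No rules" message above means /etc/audit/rules.d/ holds no rules for audit-rules.service to load. For reference, a fragment that augenrules would compile into the active ruleset (path and rule illustrative):

    # /etc/audit/rules.d/10-example.rules
    -w /etc/passwd -p wa -k passwd_changes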
May 16 05:28:11.260056 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 05:28:11.261377 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 05:28:11.261485 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 05:28:11.262705 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 16 05:28:11.265724 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 05:28:11.265952 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 05:28:11.274319 systemd[1]: Finished ensure-sysext.service. May 16 05:28:11.280495 augenrules[1465]: /sbin/augenrules: No change May 16 05:28:11.282744 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 16 05:28:11.290794 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 05:28:11.292710 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 05:28:11.301453 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 05:28:11.301740 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 05:28:11.303773 augenrules[1516]: No rules May 16 05:28:11.303652 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 05:28:11.303863 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 05:28:11.305352 systemd[1]: audit-rules.service: Deactivated successfully. May 16 05:28:11.305617 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 05:28:11.312802 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 05:28:11.312870 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 05:28:11.367629 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 16 05:28:11.405088 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 05:28:11.407782 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 16 05:28:11.408285 kernel: mousedev: PS/2 mouse device common for all mice May 16 05:28:11.412287 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 May 16 05:28:11.417423 kernel: ACPI: button: Power Button [PWRF] May 16 05:28:11.437557 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 16 05:28:11.453439 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 16 05:28:11.453750 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 16 05:28:11.453928 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 16 05:28:11.488548 systemd-networkd[1488]: lo: Link UP May 16 05:28:11.488559 systemd-networkd[1488]: lo: Gained carrier May 16 05:28:11.490235 systemd-networkd[1488]: Enumeration completed May 16 05:28:11.490348 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 16 05:28:11.491376 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 05:28:11.491398 systemd-networkd[1488]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 05:28:11.491954 systemd-networkd[1488]: eth0: Link UP May 16 05:28:11.492137 systemd-networkd[1488]: eth0: Gained carrier May 16 05:28:11.492160 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 05:28:11.494061 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 16 05:28:11.498402 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 16 05:28:11.504324 systemd-networkd[1488]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 05:28:11.526683 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 16 05:28:11.533309 systemd-resolved[1411]: Positive Trust Anchors: May 16 05:28:11.533325 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 05:28:11.533358 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 05:28:11.537412 systemd-resolved[1411]: Defaulting to hostname 'linux'. May 16 05:28:11.539447 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 05:28:11.540826 systemd[1]: Reached target network.target - Network. May 16 05:28:11.542359 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 05:28:11.551335 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 16 05:28:11.552666 systemd[1]: Reached target sysinit.target - System Initialization. May 16 05:28:12.397156 systemd-resolved[1411]: Clock change detected. Flushing caches. May 16 05:28:12.397236 systemd-timesyncd[1509]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 16 05:28:12.397285 systemd-timesyncd[1509]: Initial clock synchronization to Fri 2025-05-16 05:28:12.397099 UTC. May 16 05:28:12.397318 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 16 05:28:12.398609 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 16 05:28:12.401215 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 16 05:28:12.403226 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 16 05:28:12.404715 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 05:28:12.404742 systemd[1]: Reached target paths.target - Path Units. May 16 05:28:12.405683 systemd[1]: Reached target time-set.target - System Time Set. 
May 16 05:28:12.407410 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 16 05:28:12.408623 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 16 05:28:12.409898 systemd[1]: Reached target timers.target - Timer Units. May 16 05:28:12.412271 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 16 05:28:12.415982 systemd[1]: Starting docker.socket - Docker Socket for the API... May 16 05:28:12.420044 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 16 05:28:12.425640 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 16 05:28:12.427297 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 16 05:28:12.445719 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 16 05:28:12.448603 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 16 05:28:12.450634 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 16 05:28:12.461588 systemd[1]: Reached target sockets.target - Socket Units. May 16 05:28:12.462831 systemd[1]: Reached target basic.target - Basic System. May 16 05:28:12.464360 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 16 05:28:12.464615 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 16 05:28:12.467703 systemd[1]: Starting containerd.service - containerd container runtime... May 16 05:28:12.472761 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 16 05:28:12.477248 kernel: kvm_amd: TSC scaling supported May 16 05:28:12.477291 kernel: kvm_amd: Nested Virtualization enabled May 16 05:28:12.477305 kernel: kvm_amd: Nested Paging enabled May 16 05:28:12.477317 kernel: kvm_amd: LBR virtualization supported May 16 05:28:12.481329 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 16 05:28:12.485209 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 16 05:28:12.488125 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 16 05:28:12.490187 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 16 05:28:12.490232 kernel: kvm_amd: Virtual GIF supported May 16 05:28:12.490858 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 16 05:28:12.493381 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 16 05:28:12.497478 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 16 05:28:12.500302 jq[1564]: false May 16 05:28:12.500786 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 16 05:28:12.509892 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 16 05:28:12.514201 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 16 05:28:12.518659 extend-filesystems[1565]: Found loop3 May 16 05:28:12.518659 extend-filesystems[1565]: Found loop4 May 16 05:28:12.518659 extend-filesystems[1565]: Found loop5 May 16 05:28:12.518659 extend-filesystems[1565]: Found sr0 May 16 05:28:12.518659 extend-filesystems[1565]: Found vda May 16 05:28:12.518659 extend-filesystems[1565]: Found vda1 May 16 05:28:12.518659 extend-filesystems[1565]: Found vda2 May 16 05:28:12.518659 extend-filesystems[1565]: Found vda3 May 16 05:28:12.518659 extend-filesystems[1565]: Found usr May 16 05:28:12.518659 extend-filesystems[1565]: Found vda4 May 16 05:28:12.518659 extend-filesystems[1565]: Found vda6 May 16 05:28:12.518659 extend-filesystems[1565]: Found vda7 May 16 05:28:12.518659 extend-filesystems[1565]: Found vda9 May 16 05:28:12.518659 extend-filesystems[1565]: Checking size of /dev/vda9 May 16 05:28:12.544274 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing passwd entry cache May 16 05:28:12.544274 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting users, quitting May 16 05:28:12.544274 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 16 05:28:12.544274 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing group entry cache May 16 05:28:12.544274 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting groups, quitting May 16 05:28:12.544274 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 16 05:28:12.523418 systemd[1]: Starting systemd-logind.service - User Login Management... May 16 05:28:12.518739 oslogin_cache_refresh[1566]: Refreshing passwd entry cache May 16 05:28:12.527176 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 05:28:12.528535 oslogin_cache_refresh[1566]: Failure getting users, quitting May 16 05:28:12.527782 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 05:28:12.528560 oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 16 05:28:12.528699 systemd[1]: Starting update-engine.service - Update Engine... May 16 05:28:12.528625 oslogin_cache_refresh[1566]: Refreshing group entry cache May 16 05:28:12.531272 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 16 05:28:12.535268 oslogin_cache_refresh[1566]: Failure getting groups, quitting May 16 05:28:12.541985 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 16 05:28:12.535281 oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 16 05:28:12.543109 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 05:28:12.543372 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 16 05:28:12.543900 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 16 05:28:12.544125 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 16 05:28:12.547155 jq[1576]: true May 16 05:28:12.547666 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 16 05:28:12.548263 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 16 05:28:12.552544 extend-filesystems[1565]: Resized partition /dev/vda9 May 16 05:28:12.559171 extend-filesystems[1583]: resize2fs 1.47.2 (1-Jan-2025) May 16 05:28:12.565175 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 16 05:28:12.570169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 05:28:12.570615 jq[1581]: true May 16 05:28:12.570878 (ntainerd)[1584]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 16 05:28:12.580168 update_engine[1575]: I20250516 05:28:12.579543 1575 main.cc:92] Flatcar Update Engine starting May 16 05:28:12.598520 systemd[1]: motdgen.service: Deactivated successfully. May 16 05:28:12.598805 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 16 05:28:12.600416 tar[1578]: linux-amd64/LICENSE May 16 05:28:12.604242 tar[1578]: linux-amd64/helm May 16 05:28:12.607461 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 16 05:28:12.623006 dbus-daemon[1562]: [system] SELinux support is enabled May 16 05:28:12.623219 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 16 05:28:12.637580 extend-filesystems[1583]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 05:28:12.637580 extend-filesystems[1583]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 05:28:12.637580 extend-filesystems[1583]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 16 05:28:12.628187 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 05:28:12.638876 extend-filesystems[1565]: Resized filesystem in /dev/vda9 May 16 05:28:12.628210 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 16 05:28:12.630082 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 05:28:12.630098 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 16 05:28:12.642104 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 05:28:12.646296 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 16 05:28:12.651625 update_engine[1575]: I20250516 05:28:12.651569 1575 update_check_scheduler.cc:74] Next update check in 5m41s May 16 05:28:12.655897 bash[1619]: Updated "/home/core/.ssh/authorized_keys" May 16 05:28:12.658167 kernel: EDAC MC: Ver: 3.0.0 May 16 05:28:12.666116 systemd-logind[1573]: Watching system buttons on /dev/input/event2 (Power Button) May 16 05:28:12.666242 systemd-logind[1573]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 16 05:28:12.670154 systemd-logind[1573]: New seat seat0. May 16 05:28:12.710078 systemd[1]: Started systemd-logind.service - User Login Management. May 16 05:28:12.711686 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 05:28:12.713305 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 16 05:28:12.736672 systemd[1]: Started update-engine.service - Update Engine. 
May 16 05:28:12.738977 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 16 05:28:12.743598 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 16 05:28:12.787976 containerd[1584]: time="2025-05-16T05:28:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 16 05:28:12.788648 containerd[1584]: time="2025-05-16T05:28:12.788610855Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 16 05:28:12.799948 containerd[1584]: time="2025-05-16T05:28:12.799911010Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.015µs" May 16 05:28:12.800078 containerd[1584]: time="2025-05-16T05:28:12.800060651Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 16 05:28:12.800150 containerd[1584]: time="2025-05-16T05:28:12.800121766Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 16 05:28:12.800380 containerd[1584]: time="2025-05-16T05:28:12.800362087Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 16 05:28:12.800444 containerd[1584]: time="2025-05-16T05:28:12.800431186Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 16 05:28:12.800505 containerd[1584]: time="2025-05-16T05:28:12.800493112Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 05:28:12.800649 containerd[1584]: time="2025-05-16T05:28:12.800629067Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 05:28:12.800707 containerd[1584]: time="2025-05-16T05:28:12.800695081Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 05:28:12.801019 containerd[1584]: time="2025-05-16T05:28:12.800997338Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 05:28:12.801078 containerd[1584]: time="2025-05-16T05:28:12.801065386Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 05:28:12.801162 containerd[1584]: time="2025-05-16T05:28:12.801128544Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 05:28:12.801261 containerd[1584]: time="2025-05-16T05:28:12.801239382Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 16 05:28:12.801412 containerd[1584]: time="2025-05-16T05:28:12.801394543Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 16 05:28:12.801691 containerd[1584]: time="2025-05-16T05:28:12.801673266Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 05:28:12.801771 containerd[1584]: 
time="2025-05-16T05:28:12.801755109Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 05:28:12.801851 containerd[1584]: time="2025-05-16T05:28:12.801821063Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 16 05:28:12.801921 containerd[1584]: time="2025-05-16T05:28:12.801906243Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 16 05:28:12.802437 containerd[1584]: time="2025-05-16T05:28:12.802323095Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 16 05:28:12.802437 containerd[1584]: time="2025-05-16T05:28:12.802416971Z" level=info msg="metadata content store policy set" policy=shared May 16 05:28:12.804470 locksmithd[1636]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 05:28:12.819944 containerd[1584]: time="2025-05-16T05:28:12.819885377Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 16 05:28:12.820068 containerd[1584]: time="2025-05-16T05:28:12.820020149Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 16 05:28:12.820068 containerd[1584]: time="2025-05-16T05:28:12.820042301Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 16 05:28:12.820267 containerd[1584]: time="2025-05-16T05:28:12.820196109Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 16 05:28:12.820267 containerd[1584]: time="2025-05-16T05:28:12.820215746Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 16 05:28:12.820267 containerd[1584]: time="2025-05-16T05:28:12.820225805Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 16 05:28:12.820267 containerd[1584]: time="2025-05-16T05:28:12.820239150Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 16 05:28:12.820447 containerd[1584]: time="2025-05-16T05:28:12.820378542Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 16 05:28:12.820447 containerd[1584]: time="2025-05-16T05:28:12.820397046Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 16 05:28:12.820447 containerd[1584]: time="2025-05-16T05:28:12.820408047Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 16 05:28:12.820447 containerd[1584]: time="2025-05-16T05:28:12.820418106Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 16 05:28:12.820608 containerd[1584]: time="2025-05-16T05:28:12.820432793Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 16 05:28:12.820777 containerd[1584]: time="2025-05-16T05:28:12.820759316Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 16 05:28:12.820913 containerd[1584]: time="2025-05-16T05:28:12.820823947Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers 
type=io.containerd.grpc.v1 May 16 05:28:12.820913 containerd[1584]: time="2025-05-16T05:28:12.820840648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 16 05:28:12.820913 containerd[1584]: time="2025-05-16T05:28:12.820851749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 16 05:28:12.820913 containerd[1584]: time="2025-05-16T05:28:12.820861027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 16 05:28:12.821071 containerd[1584]: time="2025-05-16T05:28:12.820870304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 16 05:28:12.821071 containerd[1584]: time="2025-05-16T05:28:12.821011379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 16 05:28:12.821071 containerd[1584]: time="2025-05-16T05:28:12.821022359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 16 05:28:12.821071 containerd[1584]: time="2025-05-16T05:28:12.821032448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 16 05:28:12.821071 containerd[1584]: time="2025-05-16T05:28:12.821041716Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 16 05:28:12.821304 containerd[1584]: time="2025-05-16T05:28:12.821052616Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 16 05:28:12.821423 containerd[1584]: time="2025-05-16T05:28:12.821359442Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 16 05:28:12.821423 containerd[1584]: time="2025-05-16T05:28:12.821384969Z" level=info msg="Start snapshots syncer" May 16 05:28:12.821546 containerd[1584]: time="2025-05-16T05:28:12.821498573Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 16 05:28:12.821926 containerd[1584]: time="2025-05-16T05:28:12.821889496Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 16 05:28:12.822160 containerd[1584]: time="2025-05-16T05:28:12.822071247Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 16 05:28:12.822941 containerd[1584]: time="2025-05-16T05:28:12.822922904Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 16 05:28:12.823144 containerd[1584]: time="2025-05-16T05:28:12.823099315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 16 05:28:12.823208 containerd[1584]: time="2025-05-16T05:28:12.823196097Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 16 05:28:12.823537 containerd[1584]: time="2025-05-16T05:28:12.823256009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 16 05:28:12.823537 containerd[1584]: time="2025-05-16T05:28:12.823279583Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 16 05:28:12.823537 containerd[1584]: time="2025-05-16T05:28:12.823299761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 16 05:28:12.823537 containerd[1584]: time="2025-05-16T05:28:12.823311002Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 16 05:28:12.823537 containerd[1584]: time="2025-05-16T05:28:12.823326231Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 16 05:28:12.823537 containerd[1584]: time="2025-05-16T05:28:12.823349154Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 16 05:28:12.823537 containerd[1584]: 
time="2025-05-16T05:28:12.823358631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 16 05:28:12.823537 containerd[1584]: time="2025-05-16T05:28:12.823368280Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 16 05:28:12.824101 containerd[1584]: time="2025-05-16T05:28:12.824082790Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 05:28:12.824200 containerd[1584]: time="2025-05-16T05:28:12.824182847Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 05:28:12.824256 containerd[1584]: time="2025-05-16T05:28:12.824237740Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 05:28:12.824305 containerd[1584]: time="2025-05-16T05:28:12.824293064Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 05:28:12.824470 containerd[1584]: time="2025-05-16T05:28:12.824336997Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 16 05:28:12.824470 containerd[1584]: time="2025-05-16T05:28:12.824349550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 16 05:28:12.824470 containerd[1584]: time="2025-05-16T05:28:12.824359078Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 16 05:28:12.824470 containerd[1584]: time="2025-05-16T05:28:12.824370810Z" level=info msg="runtime interface created" May 16 05:28:12.824470 containerd[1584]: time="2025-05-16T05:28:12.824375609Z" level=info msg="created NRI interface" May 16 05:28:12.824470 containerd[1584]: time="2025-05-16T05:28:12.824386880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 16 05:28:12.824470 containerd[1584]: time="2025-05-16T05:28:12.824397059Z" level=info msg="Connect containerd service" May 16 05:28:12.824470 containerd[1584]: time="2025-05-16T05:28:12.824419882Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 16 05:28:12.825605 containerd[1584]: time="2025-05-16T05:28:12.825582974Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 05:28:12.921882 containerd[1584]: time="2025-05-16T05:28:12.921764483Z" level=info msg="Start subscribing containerd event" May 16 05:28:12.921882 containerd[1584]: time="2025-05-16T05:28:12.921833913Z" level=info msg="Start recovering state" May 16 05:28:12.922010 containerd[1584]: time="2025-05-16T05:28:12.921956894Z" level=info msg="Start event monitor" May 16 05:28:12.922010 containerd[1584]: time="2025-05-16T05:28:12.921960180Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 05:28:12.922213 containerd[1584]: time="2025-05-16T05:28:12.922056821Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 16 05:28:12.922213 containerd[1584]: time="2025-05-16T05:28:12.921984716Z" level=info msg="Start cni network conf syncer for default" May 16 05:28:12.922213 containerd[1584]: time="2025-05-16T05:28:12.922085194Z" level=info msg="Start streaming server" May 16 05:28:12.922213 containerd[1584]: time="2025-05-16T05:28:12.922095654Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 16 05:28:12.922213 containerd[1584]: time="2025-05-16T05:28:12.922107496Z" level=info msg="runtime interface starting up..." May 16 05:28:12.922213 containerd[1584]: time="2025-05-16T05:28:12.922115050Z" level=info msg="starting plugins..." May 16 05:28:12.922213 containerd[1584]: time="2025-05-16T05:28:12.922146860Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 16 05:28:12.923046 containerd[1584]: time="2025-05-16T05:28:12.922388393Z" level=info msg="containerd successfully booted in 0.135085s" May 16 05:28:12.922504 systemd[1]: Started containerd.service - containerd container runtime. May 16 05:28:13.080022 tar[1578]: linux-amd64/README.md May 16 05:28:13.102014 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 16 05:28:13.187718 sshd_keygen[1603]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 05:28:13.211967 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 05:28:13.215007 systemd[1]: Starting issuegen.service - Generate /run/issue... May 16 05:28:13.231482 systemd[1]: issuegen.service: Deactivated successfully. May 16 05:28:13.231746 systemd[1]: Finished issuegen.service - Generate /run/issue. May 16 05:28:13.234404 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 05:28:13.263288 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 16 05:28:13.265996 systemd[1]: Started getty@tty1.service - Getty on tty1. May 16 05:28:13.268084 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 16 05:28:13.269471 systemd[1]: Reached target getty.target - Login Prompts. May 16 05:28:13.879382 systemd-networkd[1488]: eth0: Gained IPv6LL May 16 05:28:13.883187 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 05:28:13.885209 systemd[1]: Reached target network-online.target - Network is Online. May 16 05:28:13.887936 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 16 05:28:13.890523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:28:13.892913 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 05:28:13.921202 systemd[1]: coreos-metadata.service: Deactivated successfully. May 16 05:28:13.921592 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 16 05:28:13.923588 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 16 05:28:13.925476 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 05:28:14.628220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 05:28:14.630105 systemd[1]: Reached target multi-user.target - Multi-User System. May 16 05:28:14.631459 systemd[1]: Startup finished in 2.948s (kernel) + 6.373s (initrd) + 4.804s (userspace) = 14.127s. 
May 16 05:28:14.645445 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 05:28:15.048782 kubelet[1701]: E0516 05:28:15.048650 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 05:28:15.052668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 05:28:15.052871 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 05:28:15.053269 systemd[1]: kubelet.service: Consumed 963ms CPU time, 265M memory peak. May 16 05:28:16.659446 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 16 05:28:16.661117 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:35408.service - OpenSSH per-connection server daemon (10.0.0.1:35408). May 16 05:28:16.728099 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 35408 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:28:16.730244 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:28:16.743129 systemd-logind[1573]: New session 1 of user core. May 16 05:28:16.744734 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 16 05:28:16.746195 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 16 05:28:16.780480 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 16 05:28:16.782980 systemd[1]: Starting user@500.service - User Manager for UID 500... May 16 05:28:16.800549 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 05:28:16.802822 systemd-logind[1573]: New session c1 of user core. May 16 05:28:16.949691 systemd[1719]: Queued start job for default target default.target. May 16 05:28:16.970364 systemd[1719]: Created slice app.slice - User Application Slice. May 16 05:28:16.970388 systemd[1719]: Reached target paths.target - Paths. May 16 05:28:16.970426 systemd[1719]: Reached target timers.target - Timers. May 16 05:28:16.971903 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... May 16 05:28:16.983121 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 16 05:28:16.983277 systemd[1719]: Reached target sockets.target - Sockets. May 16 05:28:16.983317 systemd[1719]: Reached target basic.target - Basic System. May 16 05:28:16.983355 systemd[1719]: Reached target default.target - Main User Target. May 16 05:28:16.983385 systemd[1719]: Startup finished in 173ms. May 16 05:28:16.983799 systemd[1]: Started user@500.service - User Manager for UID 500. May 16 05:28:16.994270 systemd[1]: Started session-1.scope - Session 1 of User core. May 16 05:28:17.062031 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:35414.service - OpenSSH per-connection server daemon (10.0.0.1:35414). May 16 05:28:17.116469 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 35414 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:28:17.117842 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:28:17.121857 systemd-logind[1573]: New session 2 of user core. 
May 16 05:28:17.131270 systemd[1]: Started session-2.scope - Session 2 of User core. May 16 05:28:17.185825 sshd[1732]: Connection closed by 10.0.0.1 port 35414 May 16 05:28:17.186219 sshd-session[1730]: pam_unix(sshd:session): session closed for user core May 16 05:28:17.198877 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:35414.service: Deactivated successfully. May 16 05:28:17.200721 systemd[1]: session-2.scope: Deactivated successfully. May 16 05:28:17.201639 systemd-logind[1573]: Session 2 logged out. Waiting for processes to exit. May 16 05:28:17.204407 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:35420.service - OpenSSH per-connection server daemon (10.0.0.1:35420). May 16 05:28:17.205223 systemd-logind[1573]: Removed session 2. May 16 05:28:17.258454 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 35420 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:28:17.259665 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:28:17.263934 systemd-logind[1573]: New session 3 of user core. May 16 05:28:17.281254 systemd[1]: Started session-3.scope - Session 3 of User core. May 16 05:28:17.330879 sshd[1740]: Connection closed by 10.0.0.1 port 35420 May 16 05:28:17.331255 sshd-session[1738]: pam_unix(sshd:session): session closed for user core May 16 05:28:17.343471 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:35420.service: Deactivated successfully. May 16 05:28:17.344997 systemd[1]: session-3.scope: Deactivated successfully. May 16 05:28:17.345748 systemd-logind[1573]: Session 3 logged out. Waiting for processes to exit. May 16 05:28:17.348688 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:35426.service - OpenSSH per-connection server daemon (10.0.0.1:35426). May 16 05:28:17.349210 systemd-logind[1573]: Removed session 3. May 16 05:28:17.407834 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 35426 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:28:17.409269 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:28:17.413381 systemd-logind[1573]: New session 4 of user core. May 16 05:28:17.423252 systemd[1]: Started session-4.scope - Session 4 of User core. May 16 05:28:17.475506 sshd[1748]: Connection closed by 10.0.0.1 port 35426 May 16 05:28:17.475725 sshd-session[1746]: pam_unix(sshd:session): session closed for user core May 16 05:28:17.495635 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:35426.service: Deactivated successfully. May 16 05:28:17.497394 systemd[1]: session-4.scope: Deactivated successfully. May 16 05:28:17.498085 systemd-logind[1573]: Session 4 logged out. Waiting for processes to exit. May 16 05:28:17.500636 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:35432.service - OpenSSH per-connection server daemon (10.0.0.1:35432). May 16 05:28:17.501184 systemd-logind[1573]: Removed session 4. May 16 05:28:17.554053 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 35432 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:28:17.555279 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:28:17.559506 systemd-logind[1573]: New session 5 of user core. May 16 05:28:17.573264 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 16 05:28:17.630090 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 16 05:28:17.630412 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 05:28:17.650423 sudo[1757]: pam_unix(sudo:session): session closed for user root May 16 05:28:17.651945 sshd[1756]: Connection closed by 10.0.0.1 port 35432 May 16 05:28:17.652242 sshd-session[1754]: pam_unix(sshd:session): session closed for user core May 16 05:28:17.664565 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:35432.service: Deactivated successfully. May 16 05:28:17.666182 systemd[1]: session-5.scope: Deactivated successfully. May 16 05:28:17.666828 systemd-logind[1573]: Session 5 logged out. Waiting for processes to exit. May 16 05:28:17.669533 systemd[1]: Started sshd@5-10.0.0.134:22-10.0.0.1:35434.service - OpenSSH per-connection server daemon (10.0.0.1:35434). May 16 05:28:17.670245 systemd-logind[1573]: Removed session 5. May 16 05:28:17.720010 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 35434 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:28:17.721305 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:28:17.725703 systemd-logind[1573]: New session 6 of user core. May 16 05:28:17.735253 systemd[1]: Started session-6.scope - Session 6 of User core. May 16 05:28:17.788457 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 16 05:28:17.788755 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 05:28:17.797072 sudo[1767]: pam_unix(sudo:session): session closed for user root May 16 05:28:17.802729 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 16 05:28:17.803015 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 05:28:17.813326 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 05:28:17.860716 augenrules[1789]: No rules May 16 05:28:17.862390 systemd[1]: audit-rules.service: Deactivated successfully. May 16 05:28:17.862663 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 05:28:17.863862 sudo[1766]: pam_unix(sudo:session): session closed for user root May 16 05:28:17.865328 sshd[1765]: Connection closed by 10.0.0.1 port 35434 May 16 05:28:17.865605 sshd-session[1763]: pam_unix(sshd:session): session closed for user core May 16 05:28:17.877950 systemd[1]: sshd@5-10.0.0.134:22-10.0.0.1:35434.service: Deactivated successfully. May 16 05:28:17.879865 systemd[1]: session-6.scope: Deactivated successfully. May 16 05:28:17.880591 systemd-logind[1573]: Session 6 logged out. Waiting for processes to exit. May 16 05:28:17.883627 systemd[1]: Started sshd@6-10.0.0.134:22-10.0.0.1:35448.service - OpenSSH per-connection server daemon (10.0.0.1:35448). May 16 05:28:17.884209 systemd-logind[1573]: Removed session 6. May 16 05:28:17.943379 sshd[1798]: Accepted publickey for core from 10.0.0.1 port 35448 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:28:17.944748 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:28:17.949451 systemd-logind[1573]: New session 7 of user core. May 16 05:28:17.956308 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 16 05:28:18.009539 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 05:28:18.009857 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 05:28:18.296759 systemd[1]: Starting docker.service - Docker Application Container Engine... May 16 05:28:18.316444 (dockerd)[1821]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 16 05:28:18.523857 dockerd[1821]: time="2025-05-16T05:28:18.523787978Z" level=info msg="Starting up" May 16 05:28:18.525324 dockerd[1821]: time="2025-05-16T05:28:18.525281118Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 16 05:28:19.173734 dockerd[1821]: time="2025-05-16T05:28:19.173676793Z" level=info msg="Loading containers: start." May 16 05:28:19.184155 kernel: Initializing XFRM netlink socket May 16 05:28:19.444069 systemd-networkd[1488]: docker0: Link UP May 16 05:28:19.449126 dockerd[1821]: time="2025-05-16T05:28:19.449056316Z" level=info msg="Loading containers: done." May 16 05:28:19.463306 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2060515629-merged.mount: Deactivated successfully. May 16 05:28:19.560304 dockerd[1821]: time="2025-05-16T05:28:19.560183720Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 05:28:19.560754 dockerd[1821]: time="2025-05-16T05:28:19.560336647Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 16 05:28:19.560754 dockerd[1821]: time="2025-05-16T05:28:19.560481258Z" level=info msg="Initializing buildkit" May 16 05:28:19.810701 dockerd[1821]: time="2025-05-16T05:28:19.810641777Z" level=info msg="Completed buildkit initialization" May 16 05:28:19.816873 dockerd[1821]: time="2025-05-16T05:28:19.816820116Z" level=info msg="Daemon has completed initialization" May 16 05:28:19.817010 dockerd[1821]: time="2025-05-16T05:28:19.816903302Z" level=info msg="API listen on /run/docker.sock" May 16 05:28:19.817066 systemd[1]: Started docker.service - Docker Application Container Engine. May 16 05:28:20.537613 containerd[1584]: time="2025-05-16T05:28:20.537550450Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 16 05:28:21.388753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount650913006.mount: Deactivated successfully. 
May 16 05:28:22.265703 containerd[1584]: time="2025-05-16T05:28:22.265632629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:22.266353 containerd[1584]: time="2025-05-16T05:28:22.266310490Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811" May 16 05:28:22.267520 containerd[1584]: time="2025-05-16T05:28:22.267486185Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:22.269903 containerd[1584]: time="2025-05-16T05:28:22.269851231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:22.270760 containerd[1584]: time="2025-05-16T05:28:22.270725622Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 1.733133243s" May 16 05:28:22.270794 containerd[1584]: time="2025-05-16T05:28:22.270764755Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 16 05:28:22.271335 containerd[1584]: time="2025-05-16T05:28:22.271313815Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 16 05:28:23.464924 containerd[1584]: time="2025-05-16T05:28:23.464842674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:23.466002 containerd[1584]: time="2025-05-16T05:28:23.465963536Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523" May 16 05:28:23.467828 containerd[1584]: time="2025-05-16T05:28:23.467770626Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:23.470338 containerd[1584]: time="2025-05-16T05:28:23.470314187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:23.471333 containerd[1584]: time="2025-05-16T05:28:23.471277975Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.199935355s" May 16 05:28:23.471333 containerd[1584]: time="2025-05-16T05:28:23.471311658Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 16 05:28:23.471955 
containerd[1584]: time="2025-05-16T05:28:23.471758987Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 16 05:28:25.060184 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 16 05:28:25.062337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:28:25.415281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 05:28:25.419977 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 05:28:25.586287 containerd[1584]: time="2025-05-16T05:28:25.586210644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:25.587025 containerd[1584]: time="2025-05-16T05:28:25.586993943Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063" May 16 05:28:25.588511 containerd[1584]: time="2025-05-16T05:28:25.588483227Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:25.591130 containerd[1584]: time="2025-05-16T05:28:25.591076170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:25.591923 containerd[1584]: time="2025-05-16T05:28:25.591892271Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 2.120092197s" May 16 05:28:25.591923 containerd[1584]: time="2025-05-16T05:28:25.591923660Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 16 05:28:25.592435 containerd[1584]: time="2025-05-16T05:28:25.592402999Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 16 05:28:25.616625 kubelet[2101]: E0516 05:28:25.616514 2101 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 05:28:25.623345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 05:28:25.623558 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 05:28:25.623961 systemd[1]: kubelet.service: Consumed 226ms CPU time, 111.4M memory peak. May 16 05:28:26.467129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2611592554.mount: Deactivated successfully. 
May 16 05:28:27.138765 containerd[1584]: time="2025-05-16T05:28:27.138672028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:27.139507 containerd[1584]: time="2025-05-16T05:28:27.139434959Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 16 05:28:27.140606 containerd[1584]: time="2025-05-16T05:28:27.140563235Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:27.142506 containerd[1584]: time="2025-05-16T05:28:27.142463890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:27.142998 containerd[1584]: time="2025-05-16T05:28:27.142929584Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 1.550500836s" May 16 05:28:27.142998 containerd[1584]: time="2025-05-16T05:28:27.142976502Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 16 05:28:27.143489 containerd[1584]: time="2025-05-16T05:28:27.143464807Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 16 05:28:27.705200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121835626.mount: Deactivated successfully. 
May 16 05:28:28.373796 containerd[1584]: time="2025-05-16T05:28:28.373730685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:28.374659 containerd[1584]: time="2025-05-16T05:28:28.374604564Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 16 05:28:28.376249 containerd[1584]: time="2025-05-16T05:28:28.376197702Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:28.380165 containerd[1584]: time="2025-05-16T05:28:28.378808780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:28.380640 containerd[1584]: time="2025-05-16T05:28:28.380617332Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.237124322s" May 16 05:28:28.380723 containerd[1584]: time="2025-05-16T05:28:28.380708152Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 16 05:28:28.381757 containerd[1584]: time="2025-05-16T05:28:28.381722345Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 05:28:30.400338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1264109204.mount: Deactivated successfully. 
May 16 05:28:30.689221 containerd[1584]: time="2025-05-16T05:28:30.689059069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 05:28:30.712896 containerd[1584]: time="2025-05-16T05:28:30.712847730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 16 05:28:30.750311 containerd[1584]: time="2025-05-16T05:28:30.750271326Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 05:28:30.753764 containerd[1584]: time="2025-05-16T05:28:30.753721197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 05:28:30.754500 containerd[1584]: time="2025-05-16T05:28:30.754460173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.372701199s" May 16 05:28:30.754558 containerd[1584]: time="2025-05-16T05:28:30.754502472Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 16 05:28:30.755057 containerd[1584]: time="2025-05-16T05:28:30.755030422Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 16 05:28:31.249614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4286743694.mount: Deactivated successfully. 
May 16 05:28:33.020468 containerd[1584]: time="2025-05-16T05:28:33.020392925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:33.021493 containerd[1584]: time="2025-05-16T05:28:33.021072039Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 16 05:28:33.022454 containerd[1584]: time="2025-05-16T05:28:33.022403316Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:33.027158 containerd[1584]: time="2025-05-16T05:28:33.025697154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:33.028894 containerd[1584]: time="2025-05-16T05:28:33.028633031Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.273568254s" May 16 05:28:33.028894 containerd[1584]: time="2025-05-16T05:28:33.028671443Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 16 05:28:35.734969 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 16 05:28:35.736708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:28:35.751121 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 16 05:28:35.751279 systemd[1]: kubelet.service: Failed with result 'signal'. May 16 05:28:35.751657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 05:28:35.754625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:28:35.782948 systemd[1]: Reload requested from client PID 2259 ('systemctl') (unit session-7.scope)... May 16 05:28:35.782973 systemd[1]: Reloading... May 16 05:28:35.941170 zram_generator::config[2301]: No configuration found. May 16 05:28:36.270099 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 05:28:36.386888 systemd[1]: Reloading finished in 603 ms. May 16 05:28:36.450228 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 16 05:28:36.450344 systemd[1]: kubelet.service: Failed with result 'signal'. May 16 05:28:36.450731 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 05:28:36.450792 systemd[1]: kubelet.service: Consumed 154ms CPU time, 98.2M memory peak. May 16 05:28:36.452731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:28:36.655941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
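The docker.socket message above notes that the unit still references the legacy /var/run directory and asks for the unit file to be updated. A common way to do that without editing the shipped unit is a systemd drop-in; the drop-in path and file name below are assumptions for illustration, not taken from this log:

    # hypothetical drop-in moving docker.socket off the legacy /var/run path
    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' > /etc/systemd/system/docker.socket.d/10-runtime-dir.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload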
May 16 05:28:36.669543 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 05:28:36.748468 kubelet[2349]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 05:28:36.748468 kubelet[2349]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 05:28:36.748468 kubelet[2349]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 05:28:36.748468 kubelet[2349]: I0516 05:28:36.747858 2349 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 05:28:36.884340 kubelet[2349]: I0516 05:28:36.884290 2349 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 05:28:36.884340 kubelet[2349]: I0516 05:28:36.884325 2349 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 05:28:36.884721 kubelet[2349]: I0516 05:28:36.884698 2349 server.go:954] "Client rotation is on, will bootstrap in background" May 16 05:28:36.910209 kubelet[2349]: E0516 05:28:36.910035 2349 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" May 16 05:28:36.911441 kubelet[2349]: I0516 05:28:36.911406 2349 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 05:28:36.918192 kubelet[2349]: I0516 05:28:36.918153 2349 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 05:28:36.924804 kubelet[2349]: I0516 05:28:36.924753 2349 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 05:28:36.925979 kubelet[2349]: I0516 05:28:36.925919 2349 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 05:28:36.926152 kubelet[2349]: I0516 05:28:36.925954 2349 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 05:28:36.926152 kubelet[2349]: I0516 05:28:36.926150 2349 topology_manager.go:138] "Creating topology manager with none policy" May 16 05:28:36.926368 kubelet[2349]: I0516 05:28:36.926165 2349 container_manager_linux.go:304] "Creating device plugin manager" May 16 05:28:36.926368 kubelet[2349]: I0516 05:28:36.926355 2349 state_mem.go:36] "Initialized new in-memory state store" May 16 05:28:36.928871 kubelet[2349]: I0516 05:28:36.928826 2349 kubelet.go:446] "Attempting to sync node with API server" May 16 05:28:36.928871 kubelet[2349]: I0516 05:28:36.928862 2349 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 05:28:36.928967 kubelet[2349]: I0516 05:28:36.928888 2349 kubelet.go:352] "Adding apiserver pod source" May 16 05:28:36.928967 kubelet[2349]: I0516 05:28:36.928902 2349 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 05:28:36.934011 kubelet[2349]: W0516 05:28:36.933863 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused May 16 05:28:36.934011 kubelet[2349]: W0516 05:28:36.933881 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused May 16 05:28:36.934011 kubelet[2349]: E0516 05:28:36.933943 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" May 16 05:28:36.934011 kubelet[2349]: E0516 05:28:36.933973 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" May 16 05:28:36.936483 kubelet[2349]: I0516 05:28:36.936450 2349 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 05:28:36.936988 kubelet[2349]: I0516 05:28:36.936969 2349 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 05:28:36.937047 kubelet[2349]: W0516 05:28:36.937033 2349 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 05:28:36.939774 kubelet[2349]: I0516 05:28:36.939743 2349 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 05:28:36.939826 kubelet[2349]: I0516 05:28:36.939788 2349 server.go:1287] "Started kubelet" May 16 05:28:36.942701 kubelet[2349]: I0516 05:28:36.942682 2349 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 05:28:36.942992 kubelet[2349]: I0516 05:28:36.942963 2349 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 05:28:36.944829 kubelet[2349]: I0516 05:28:36.942642 2349 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 05:28:36.945069 kubelet[2349]: I0516 05:28:36.945049 2349 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 05:28:36.945105 kubelet[2349]: I0516 05:28:36.942707 2349 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 05:28:36.946664 kubelet[2349]: I0516 05:28:36.946032 2349 server.go:479] "Adding debug handlers to kubelet server" May 16 05:28:36.948205 kubelet[2349]: E0516 05:28:36.946967 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 05:28:36.948205 kubelet[2349]: I0516 05:28:36.947011 2349 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 05:28:36.948205 kubelet[2349]: I0516 05:28:36.947184 2349 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 05:28:36.948205 kubelet[2349]: I0516 05:28:36.947243 2349 reconciler.go:26] "Reconciler: start to sync state" May 16 05:28:36.948205 kubelet[2349]: W0516 05:28:36.947573 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused May 16 05:28:36.948205 kubelet[2349]: E0516 05:28:36.947612 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: 
connection refused" logger="UnhandledError" May 16 05:28:36.948205 kubelet[2349]: E0516 05:28:36.947670 2349 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 05:28:36.949536 kubelet[2349]: E0516 05:28:36.948059 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="200ms" May 16 05:28:36.949536 kubelet[2349]: I0516 05:28:36.949244 2349 factory.go:221] Registration of the systemd container factory successfully May 16 05:28:36.949536 kubelet[2349]: I0516 05:28:36.949359 2349 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 05:28:36.950725 kubelet[2349]: I0516 05:28:36.950675 2349 factory.go:221] Registration of the containerd container factory successfully May 16 05:28:36.951795 kubelet[2349]: E0516 05:28:36.950634 2349 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183feac61630c6d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 05:28:36.93976136 +0000 UTC m=+0.235426384,LastTimestamp:2025-05-16 05:28:36.93976136 +0000 UTC m=+0.235426384,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 05:28:36.966886 kubelet[2349]: I0516 05:28:36.966602 2349 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 05:28:36.966886 kubelet[2349]: I0516 05:28:36.966626 2349 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 05:28:36.966886 kubelet[2349]: I0516 05:28:36.966646 2349 state_mem.go:36] "Initialized new in-memory state store" May 16 05:28:36.971223 kubelet[2349]: I0516 05:28:36.971161 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 05:28:36.972674 kubelet[2349]: I0516 05:28:36.972632 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 05:28:36.972737 kubelet[2349]: I0516 05:28:36.972680 2349 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 05:28:36.972737 kubelet[2349]: I0516 05:28:36.972700 2349 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 16 05:28:36.972737 kubelet[2349]: I0516 05:28:36.972707 2349 kubelet.go:2382] "Starting kubelet main sync loop" May 16 05:28:36.972818 kubelet[2349]: E0516 05:28:36.972761 2349 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 05:28:37.047898 kubelet[2349]: E0516 05:28:37.047833 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 05:28:37.073100 kubelet[2349]: E0516 05:28:37.073042 2349 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 05:28:37.148480 kubelet[2349]: E0516 05:28:37.148406 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 05:28:37.150105 kubelet[2349]: E0516 05:28:37.150063 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="400ms" May 16 05:28:37.231459 kubelet[2349]: I0516 05:28:37.231323 2349 policy_none.go:49] "None policy: Start" May 16 05:28:37.231459 kubelet[2349]: I0516 05:28:37.231366 2349 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 05:28:37.231459 kubelet[2349]: I0516 05:28:37.231380 2349 state_mem.go:35] "Initializing new in-memory state store" May 16 05:28:37.231748 kubelet[2349]: W0516 05:28:37.231690 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused May 16 05:28:37.231799 kubelet[2349]: E0516 05:28:37.231753 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" May 16 05:28:37.239894 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 05:28:37.248833 kubelet[2349]: E0516 05:28:37.248781 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 05:28:37.255909 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 05:28:37.259633 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
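The deprecation warnings earlier in this kubelet start (--container-runtime-endpoint and --volume-plugin-dir) say those flags should instead be set in the file passed to the kubelet's --config flag. A minimal sketch of such a file follows; the file path and CRI endpoint are assumptions, while the volume plugin directory matches the path the kubelet reports recreating above:

    # minimal sketch of moving the deprecated flags into a kubelet config file;
    # only volumePluginDir is taken from this log, the rest is assumed
    cat <<'EOF' > /etc/kubernetes/kubelet-config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF
    # the kubelet would then be started with --config=/etc/kubernetes/kubelet-config.yaml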
May 16 05:28:37.273407 kubelet[2349]: E0516 05:28:37.273359 2349 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 05:28:37.273524 kubelet[2349]: I0516 05:28:37.273512 2349 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 05:28:37.273799 kubelet[2349]: I0516 05:28:37.273776 2349 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 05:28:37.273836 kubelet[2349]: I0516 05:28:37.273792 2349 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 05:28:37.274519 kubelet[2349]: I0516 05:28:37.274446 2349 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 05:28:37.274889 kubelet[2349]: E0516 05:28:37.274871 2349 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 05:28:37.274951 kubelet[2349]: E0516 05:28:37.274912 2349 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 05:28:37.377573 kubelet[2349]: I0516 05:28:37.377538 2349 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 05:28:37.377981 kubelet[2349]: E0516 05:28:37.377946 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" May 16 05:28:37.551022 kubelet[2349]: E0516 05:28:37.550872 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="800ms" May 16 05:28:37.580186 kubelet[2349]: I0516 05:28:37.580157 2349 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 05:28:37.580443 kubelet[2349]: E0516 05:28:37.580413 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" May 16 05:28:37.682949 systemd[1]: Created slice kubepods-burstable-pod882740d4a5122f84d91eae70aa36c969.slice - libcontainer container kubepods-burstable-pod882740d4a5122f84d91eae70aa36c969.slice. May 16 05:28:37.701237 kubelet[2349]: E0516 05:28:37.701198 2349 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 05:28:37.703382 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 16 05:28:37.715355 kubelet[2349]: E0516 05:28:37.715314 2349 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 05:28:37.718195 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. 
May 16 05:28:37.720109 kubelet[2349]: E0516 05:28:37.720064 2349 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 05:28:37.751428 kubelet[2349]: I0516 05:28:37.751390 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:28:37.751428 kubelet[2349]: I0516 05:28:37.751423 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:28:37.751835 kubelet[2349]: I0516 05:28:37.751451 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/882740d4a5122f84d91eae70aa36c969-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"882740d4a5122f84d91eae70aa36c969\") " pod="kube-system/kube-apiserver-localhost" May 16 05:28:37.751835 kubelet[2349]: I0516 05:28:37.751467 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/882740d4a5122f84d91eae70aa36c969-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"882740d4a5122f84d91eae70aa36c969\") " pod="kube-system/kube-apiserver-localhost" May 16 05:28:37.751835 kubelet[2349]: I0516 05:28:37.751483 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:28:37.751835 kubelet[2349]: I0516 05:28:37.751497 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:28:37.751835 kubelet[2349]: I0516 05:28:37.751511 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:28:37.751939 kubelet[2349]: I0516 05:28:37.751525 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 05:28:37.751939 kubelet[2349]: I0516 05:28:37.751540 2349 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/882740d4a5122f84d91eae70aa36c969-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"882740d4a5122f84d91eae70aa36c969\") " pod="kube-system/kube-apiserver-localhost" May 16 05:28:37.982202 kubelet[2349]: I0516 05:28:37.982160 2349 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 05:28:37.982708 kubelet[2349]: E0516 05:28:37.982649 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" May 16 05:28:38.001865 kubelet[2349]: E0516 05:28:38.001835 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:38.002430 containerd[1584]: time="2025-05-16T05:28:38.002396720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:882740d4a5122f84d91eae70aa36c969,Namespace:kube-system,Attempt:0,}" May 16 05:28:38.015837 kubelet[2349]: E0516 05:28:38.015794 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:38.016498 containerd[1584]: time="2025-05-16T05:28:38.016438328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 16 05:28:38.020808 kubelet[2349]: E0516 05:28:38.020754 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:38.021259 containerd[1584]: time="2025-05-16T05:28:38.021215879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 16 05:28:38.149879 kubelet[2349]: W0516 05:28:38.149784 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused May 16 05:28:38.149879 kubelet[2349]: E0516 05:28:38.149867 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" May 16 05:28:38.171908 kubelet[2349]: W0516 05:28:38.171847 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused May 16 05:28:38.171908 kubelet[2349]: E0516 05:28:38.171882 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" May 16 
05:28:38.257230 kubelet[2349]: W0516 05:28:38.257040 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused May 16 05:28:38.257230 kubelet[2349]: E0516 05:28:38.257078 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" May 16 05:28:38.352547 kubelet[2349]: E0516 05:28:38.352473 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="1.6s" May 16 05:28:38.507096 kubelet[2349]: W0516 05:28:38.507007 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused May 16 05:28:38.507096 kubelet[2349]: E0516 05:28:38.507081 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" May 16 05:28:38.765928 containerd[1584]: time="2025-05-16T05:28:38.765866822Z" level=info msg="connecting to shim 4b90fff3d6e1a290729c0aa240b44f5f110806805ef6b95f4c0aeef9c713e03a" address="unix:///run/containerd/s/0982fe569113b2b64ac629fc5a19924d63b884c47eb5b679b1a5191390a7fee8" namespace=k8s.io protocol=ttrpc version=3 May 16 05:28:38.772158 containerd[1584]: time="2025-05-16T05:28:38.771940995Z" level=info msg="connecting to shim 93cca8756483be1f7497f4df7b92e298a2300c3ef41e268a4c5423039ce79af3" address="unix:///run/containerd/s/43e2a6aaa849383e4a33500b3c91ffc60fef07b949f213c1682faf7082ba9a95" namespace=k8s.io protocol=ttrpc version=3 May 16 05:28:38.782160 containerd[1584]: time="2025-05-16T05:28:38.782053583Z" level=info msg="connecting to shim c5e97bd9043b7f9974a62a0074990acc4697d45f93d6e4ab296543b45c9306e6" address="unix:///run/containerd/s/6d2e7e919f17501bd44f1bfc58a427e3f4e97a604f3869755eae933ff8ff9101" namespace=k8s.io protocol=ttrpc version=3 May 16 05:28:38.784489 kubelet[2349]: I0516 05:28:38.784458 2349 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 05:28:38.786400 kubelet[2349]: E0516 05:28:38.786338 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" May 16 05:28:38.799283 systemd[1]: Started cri-containerd-4b90fff3d6e1a290729c0aa240b44f5f110806805ef6b95f4c0aeef9c713e03a.scope - libcontainer container 4b90fff3d6e1a290729c0aa240b44f5f110806805ef6b95f4c0aeef9c713e03a. May 16 05:28:38.803704 systemd[1]: Started cri-containerd-93cca8756483be1f7497f4df7b92e298a2300c3ef41e268a4c5423039ce79af3.scope - libcontainer container 93cca8756483be1f7497f4df7b92e298a2300c3ef41e268a4c5423039ce79af3. 
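The recurring dns.go "Nameserver limits exceeded" errors above mean the host resolv.conf lists more nameservers than the kubelet will pass through to pods; it keeps only the first three, which is the applied line it prints. Trimming resolv.conf to three entries would silence the warning; a sketch using only the addresses already shown in the log:

    # hypothetical trim of the node's resolv.conf to the three-server limit
    # the kubelet applies; addresses are the ones reported in the log above
    cat <<'EOF' > /etc/resolv.conf
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    EOF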
May 16 05:28:38.808558 systemd[1]: Started cri-containerd-c5e97bd9043b7f9974a62a0074990acc4697d45f93d6e4ab296543b45c9306e6.scope - libcontainer container c5e97bd9043b7f9974a62a0074990acc4697d45f93d6e4ab296543b45c9306e6. May 16 05:28:38.861618 containerd[1584]: time="2025-05-16T05:28:38.861556127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b90fff3d6e1a290729c0aa240b44f5f110806805ef6b95f4c0aeef9c713e03a\"" May 16 05:28:38.861937 containerd[1584]: time="2025-05-16T05:28:38.861892118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:882740d4a5122f84d91eae70aa36c969,Namespace:kube-system,Attempt:0,} returns sandbox id \"93cca8756483be1f7497f4df7b92e298a2300c3ef41e268a4c5423039ce79af3\"" May 16 05:28:38.863431 kubelet[2349]: E0516 05:28:38.863406 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:38.863771 kubelet[2349]: E0516 05:28:38.863552 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:38.865940 containerd[1584]: time="2025-05-16T05:28:38.865901067Z" level=info msg="CreateContainer within sandbox \"4b90fff3d6e1a290729c0aa240b44f5f110806805ef6b95f4c0aeef9c713e03a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 05:28:38.866738 containerd[1584]: time="2025-05-16T05:28:38.866241125Z" level=info msg="CreateContainer within sandbox \"93cca8756483be1f7497f4df7b92e298a2300c3ef41e268a4c5423039ce79af3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 05:28:38.874427 containerd[1584]: time="2025-05-16T05:28:38.874372808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5e97bd9043b7f9974a62a0074990acc4697d45f93d6e4ab296543b45c9306e6\"" May 16 05:28:38.875232 kubelet[2349]: E0516 05:28:38.875199 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:38.876681 containerd[1584]: time="2025-05-16T05:28:38.876625433Z" level=info msg="CreateContainer within sandbox \"c5e97bd9043b7f9974a62a0074990acc4697d45f93d6e4ab296543b45c9306e6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 05:28:38.878566 containerd[1584]: time="2025-05-16T05:28:38.878530537Z" level=info msg="Container 0864dd1a5222728c6d01c1c2bc0f8c45c2a1b770e950869f7afeb936965f9402: CDI devices from CRI Config.CDIDevices: []" May 16 05:28:38.881620 containerd[1584]: time="2025-05-16T05:28:38.881449091Z" level=info msg="Container 7e0b75fa62e6351d759bb09ea9818dd80028be2b3a95a7f486dc3363188f57a5: CDI devices from CRI Config.CDIDevices: []" May 16 05:28:38.888404 containerd[1584]: time="2025-05-16T05:28:38.888350185Z" level=info msg="CreateContainer within sandbox \"93cca8756483be1f7497f4df7b92e298a2300c3ef41e268a4c5423039ce79af3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0864dd1a5222728c6d01c1c2bc0f8c45c2a1b770e950869f7afeb936965f9402\"" May 16 05:28:38.889079 containerd[1584]: time="2025-05-16T05:28:38.888964458Z" 
level=info msg="StartContainer for \"0864dd1a5222728c6d01c1c2bc0f8c45c2a1b770e950869f7afeb936965f9402\"" May 16 05:28:38.890226 containerd[1584]: time="2025-05-16T05:28:38.890188293Z" level=info msg="connecting to shim 0864dd1a5222728c6d01c1c2bc0f8c45c2a1b770e950869f7afeb936965f9402" address="unix:///run/containerd/s/43e2a6aaa849383e4a33500b3c91ffc60fef07b949f213c1682faf7082ba9a95" protocol=ttrpc version=3 May 16 05:28:38.892801 containerd[1584]: time="2025-05-16T05:28:38.892756841Z" level=info msg="CreateContainer within sandbox \"4b90fff3d6e1a290729c0aa240b44f5f110806805ef6b95f4c0aeef9c713e03a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7e0b75fa62e6351d759bb09ea9818dd80028be2b3a95a7f486dc3363188f57a5\"" May 16 05:28:38.893128 containerd[1584]: time="2025-05-16T05:28:38.893100055Z" level=info msg="StartContainer for \"7e0b75fa62e6351d759bb09ea9818dd80028be2b3a95a7f486dc3363188f57a5\"" May 16 05:28:38.896156 containerd[1584]: time="2025-05-16T05:28:38.895843982Z" level=info msg="connecting to shim 7e0b75fa62e6351d759bb09ea9818dd80028be2b3a95a7f486dc3363188f57a5" address="unix:///run/containerd/s/0982fe569113b2b64ac629fc5a19924d63b884c47eb5b679b1a5191390a7fee8" protocol=ttrpc version=3 May 16 05:28:38.896156 containerd[1584]: time="2025-05-16T05:28:38.895885690Z" level=info msg="Container 397977d12d346680fc5da2aa8cec5858a995a3190fccc48f33cd6fe76080d4b3: CDI devices from CRI Config.CDIDevices: []" May 16 05:28:38.905412 containerd[1584]: time="2025-05-16T05:28:38.905369068Z" level=info msg="CreateContainer within sandbox \"c5e97bd9043b7f9974a62a0074990acc4697d45f93d6e4ab296543b45c9306e6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"397977d12d346680fc5da2aa8cec5858a995a3190fccc48f33cd6fe76080d4b3\"" May 16 05:28:38.906078 containerd[1584]: time="2025-05-16T05:28:38.906048312Z" level=info msg="StartContainer for \"397977d12d346680fc5da2aa8cec5858a995a3190fccc48f33cd6fe76080d4b3\"" May 16 05:28:38.907191 containerd[1584]: time="2025-05-16T05:28:38.907160538Z" level=info msg="connecting to shim 397977d12d346680fc5da2aa8cec5858a995a3190fccc48f33cd6fe76080d4b3" address="unix:///run/containerd/s/6d2e7e919f17501bd44f1bfc58a427e3f4e97a604f3869755eae933ff8ff9101" protocol=ttrpc version=3 May 16 05:28:38.914300 systemd[1]: Started cri-containerd-0864dd1a5222728c6d01c1c2bc0f8c45c2a1b770e950869f7afeb936965f9402.scope - libcontainer container 0864dd1a5222728c6d01c1c2bc0f8c45c2a1b770e950869f7afeb936965f9402. May 16 05:28:38.927287 systemd[1]: Started cri-containerd-7e0b75fa62e6351d759bb09ea9818dd80028be2b3a95a7f486dc3363188f57a5.scope - libcontainer container 7e0b75fa62e6351d759bb09ea9818dd80028be2b3a95a7f486dc3363188f57a5. May 16 05:28:38.931256 systemd[1]: Started cri-containerd-397977d12d346680fc5da2aa8cec5858a995a3190fccc48f33cd6fe76080d4b3.scope - libcontainer container 397977d12d346680fc5da2aa8cec5858a995a3190fccc48f33cd6fe76080d4b3. 
May 16 05:28:38.966697 kubelet[2349]: E0516 05:28:38.966660 2349 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" May 16 05:28:38.983172 containerd[1584]: time="2025-05-16T05:28:38.982871490Z" level=info msg="StartContainer for \"0864dd1a5222728c6d01c1c2bc0f8c45c2a1b770e950869f7afeb936965f9402\" returns successfully" May 16 05:28:38.990834 containerd[1584]: time="2025-05-16T05:28:38.990778652Z" level=info msg="StartContainer for \"397977d12d346680fc5da2aa8cec5858a995a3190fccc48f33cd6fe76080d4b3\" returns successfully" May 16 05:28:38.999232 containerd[1584]: time="2025-05-16T05:28:38.999183437Z" level=info msg="StartContainer for \"7e0b75fa62e6351d759bb09ea9818dd80028be2b3a95a7f486dc3363188f57a5\" returns successfully" May 16 05:28:39.003270 kubelet[2349]: E0516 05:28:39.003215 2349 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 05:28:39.003453 kubelet[2349]: E0516 05:28:39.003426 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:39.960191 kubelet[2349]: E0516 05:28:39.959913 2349 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 16 05:28:40.004478 kubelet[2349]: E0516 05:28:40.004449 2349 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 05:28:40.004587 kubelet[2349]: E0516 05:28:40.004566 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:40.005468 kubelet[2349]: E0516 05:28:40.005438 2349 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 05:28:40.005783 kubelet[2349]: E0516 05:28:40.005578 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:40.005783 kubelet[2349]: E0516 05:28:40.005660 2349 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 05:28:40.005783 kubelet[2349]: E0516 05:28:40.005744 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:40.306672 kubelet[2349]: E0516 05:28:40.306530 2349 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 16 05:28:40.388337 kubelet[2349]: I0516 05:28:40.388304 2349 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 05:28:40.393774 kubelet[2349]: I0516 05:28:40.393630 2349 kubelet_node_status.go:78] "Successfully registered node" 
node="localhost" May 16 05:28:40.393774 kubelet[2349]: E0516 05:28:40.393660 2349 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 16 05:28:40.448248 kubelet[2349]: I0516 05:28:40.448090 2349 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 05:28:40.453123 kubelet[2349]: E0516 05:28:40.453074 2349 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 16 05:28:40.453123 kubelet[2349]: I0516 05:28:40.453101 2349 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 05:28:40.454887 kubelet[2349]: E0516 05:28:40.454845 2349 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 16 05:28:40.454887 kubelet[2349]: I0516 05:28:40.454865 2349 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 05:28:40.456271 kubelet[2349]: E0516 05:28:40.456239 2349 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 16 05:28:40.933979 kubelet[2349]: I0516 05:28:40.933930 2349 apiserver.go:52] "Watching apiserver" May 16 05:28:40.948166 kubelet[2349]: I0516 05:28:40.948102 2349 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 05:28:41.006951 kubelet[2349]: I0516 05:28:41.006744 2349 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 05:28:41.006951 kubelet[2349]: I0516 05:28:41.006921 2349 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 05:28:41.010826 kubelet[2349]: E0516 05:28:41.010798 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:41.011909 kubelet[2349]: E0516 05:28:41.011874 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:42.008298 kubelet[2349]: E0516 05:28:42.008268 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:42.008762 kubelet[2349]: E0516 05:28:42.008361 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:42.097624 systemd[1]: Reload requested from client PID 2623 ('systemctl') (unit session-7.scope)... May 16 05:28:42.097641 systemd[1]: Reloading... May 16 05:28:42.179171 zram_generator::config[2669]: No configuration found. 
May 16 05:28:42.271171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 05:28:42.401681 systemd[1]: Reloading finished in 303 ms. May 16 05:28:42.432387 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:28:42.455628 systemd[1]: kubelet.service: Deactivated successfully. May 16 05:28:42.455970 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 05:28:42.456040 systemd[1]: kubelet.service: Consumed 759ms CPU time, 132M memory peak. May 16 05:28:42.458208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:28:42.652677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 05:28:42.667551 (kubelet)[2711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 05:28:42.706807 kubelet[2711]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 05:28:42.706807 kubelet[2711]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 05:28:42.706807 kubelet[2711]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 05:28:42.707236 kubelet[2711]: I0516 05:28:42.706873 2711 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 05:28:42.712978 kubelet[2711]: I0516 05:28:42.712949 2711 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 05:28:42.712978 kubelet[2711]: I0516 05:28:42.712968 2711 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 05:28:42.713196 kubelet[2711]: I0516 05:28:42.713173 2711 server.go:954] "Client rotation is on, will bootstrap in background" May 16 05:28:42.714187 kubelet[2711]: I0516 05:28:42.714157 2711 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 16 05:28:42.716712 kubelet[2711]: I0516 05:28:42.716681 2711 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 05:28:42.721304 kubelet[2711]: I0516 05:28:42.721274 2711 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 05:28:42.725838 kubelet[2711]: I0516 05:28:42.725811 2711 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 05:28:42.726055 kubelet[2711]: I0516 05:28:42.726021 2711 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 05:28:42.726222 kubelet[2711]: I0516 05:28:42.726047 2711 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 05:28:42.726313 kubelet[2711]: I0516 05:28:42.726227 2711 topology_manager.go:138] "Creating topology manager with none policy" May 16 05:28:42.726313 kubelet[2711]: I0516 05:28:42.726236 2711 container_manager_linux.go:304] "Creating device plugin manager" May 16 05:28:42.726313 kubelet[2711]: I0516 05:28:42.726283 2711 state_mem.go:36] "Initialized new in-memory state store" May 16 05:28:42.726438 kubelet[2711]: I0516 05:28:42.726424 2711 kubelet.go:446] "Attempting to sync node with API server" May 16 05:28:42.726469 kubelet[2711]: I0516 05:28:42.726446 2711 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 05:28:42.726469 kubelet[2711]: I0516 05:28:42.726467 2711 kubelet.go:352] "Adding apiserver pod source" May 16 05:28:42.726529 kubelet[2711]: I0516 05:28:42.726476 2711 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 05:28:42.727114 kubelet[2711]: I0516 05:28:42.726926 2711 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 05:28:42.727311 kubelet[2711]: I0516 05:28:42.727281 2711 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 05:28:42.727710 kubelet[2711]: I0516 05:28:42.727678 2711 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 05:28:42.727710 kubelet[2711]: I0516 05:28:42.727707 2711 server.go:1287] "Started kubelet" May 16 05:28:42.730162 kubelet[2711]: I0516 05:28:42.729115 2711 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 05:28:42.737044 kubelet[2711]: I0516 05:28:42.736923 2711 volume_manager.go:297] "Starting 
Kubelet Volume Manager" May 16 05:28:42.737044 kubelet[2711]: I0516 05:28:42.737004 2711 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 05:28:42.737250 kubelet[2711]: I0516 05:28:42.737153 2711 reconciler.go:26] "Reconciler: start to sync state" May 16 05:28:42.737250 kubelet[2711]: I0516 05:28:42.737172 2711 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 05:28:42.738460 kubelet[2711]: E0516 05:28:42.738419 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 05:28:42.740781 kubelet[2711]: I0516 05:28:42.740027 2711 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 05:28:42.741123 kubelet[2711]: I0516 05:28:42.741102 2711 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 05:28:42.741405 kubelet[2711]: I0516 05:28:42.741382 2711 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 05:28:42.741769 kubelet[2711]: I0516 05:28:42.741703 2711 factory.go:221] Registration of the systemd container factory successfully May 16 05:28:42.742263 kubelet[2711]: I0516 05:28:42.742230 2711 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 05:28:42.746589 kubelet[2711]: I0516 05:28:42.746502 2711 server.go:479] "Adding debug handlers to kubelet server" May 16 05:28:42.747004 kubelet[2711]: I0516 05:28:42.746969 2711 factory.go:221] Registration of the containerd container factory successfully May 16 05:28:42.747831 kubelet[2711]: E0516 05:28:42.747801 2711 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 05:28:42.750610 kubelet[2711]: I0516 05:28:42.750561 2711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 05:28:42.752089 kubelet[2711]: I0516 05:28:42.752046 2711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 05:28:42.752089 kubelet[2711]: I0516 05:28:42.752077 2711 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 05:28:42.752278 kubelet[2711]: I0516 05:28:42.752100 2711 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 16 05:28:42.752278 kubelet[2711]: I0516 05:28:42.752108 2711 kubelet.go:2382] "Starting kubelet main sync loop" May 16 05:28:42.752278 kubelet[2711]: E0516 05:28:42.752234 2711 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 05:28:42.776612 kubelet[2711]: I0516 05:28:42.776582 2711 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 05:28:42.776612 kubelet[2711]: I0516 05:28:42.776603 2711 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 05:28:42.776612 kubelet[2711]: I0516 05:28:42.776623 2711 state_mem.go:36] "Initialized new in-memory state store" May 16 05:28:42.776807 kubelet[2711]: I0516 05:28:42.776777 2711 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 05:28:42.776807 kubelet[2711]: I0516 05:28:42.776787 2711 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 05:28:42.776807 kubelet[2711]: I0516 05:28:42.776803 2711 policy_none.go:49] "None policy: Start" May 16 05:28:42.776869 kubelet[2711]: I0516 05:28:42.776812 2711 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 05:28:42.776869 kubelet[2711]: I0516 05:28:42.776822 2711 state_mem.go:35] "Initializing new in-memory state store" May 16 05:28:42.776947 kubelet[2711]: I0516 05:28:42.776910 2711 state_mem.go:75] "Updated machine memory state" May 16 05:28:42.780806 kubelet[2711]: I0516 05:28:42.780784 2711 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 05:28:42.780959 kubelet[2711]: I0516 05:28:42.780935 2711 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 05:28:42.780996 kubelet[2711]: I0516 05:28:42.780952 2711 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 05:28:42.781244 kubelet[2711]: I0516 05:28:42.781127 2711 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 05:28:42.783529 kubelet[2711]: E0516 05:28:42.783462 2711 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 16 05:28:42.853395 kubelet[2711]: I0516 05:28:42.853362 2711 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 05:28:42.853577 kubelet[2711]: I0516 05:28:42.853466 2711 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 05:28:42.853577 kubelet[2711]: I0516 05:28:42.853486 2711 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 05:28:42.859054 kubelet[2711]: E0516 05:28:42.859012 2711 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 16 05:28:42.859054 kubelet[2711]: E0516 05:28:42.859012 2711 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 05:28:42.885132 kubelet[2711]: I0516 05:28:42.885106 2711 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 05:28:42.892170 kubelet[2711]: I0516 05:28:42.891342 2711 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 16 05:28:42.892170 kubelet[2711]: I0516 05:28:42.891422 2711 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 05:28:43.038786 kubelet[2711]: I0516 05:28:43.038667 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/882740d4a5122f84d91eae70aa36c969-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"882740d4a5122f84d91eae70aa36c969\") " pod="kube-system/kube-apiserver-localhost" May 16 05:28:43.038786 kubelet[2711]: I0516 05:28:43.038700 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:28:43.038786 kubelet[2711]: I0516 05:28:43.038722 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:28:43.038786 kubelet[2711]: I0516 05:28:43.038744 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:28:43.038786 kubelet[2711]: I0516 05:28:43.038765 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:28:43.038978 kubelet[2711]: I0516 05:28:43.038785 2711 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 05:28:43.038978 kubelet[2711]: I0516 05:28:43.038801 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/882740d4a5122f84d91eae70aa36c969-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"882740d4a5122f84d91eae70aa36c969\") " pod="kube-system/kube-apiserver-localhost" May 16 05:28:43.038978 kubelet[2711]: I0516 05:28:43.038820 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/882740d4a5122f84d91eae70aa36c969-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"882740d4a5122f84d91eae70aa36c969\") " pod="kube-system/kube-apiserver-localhost" May 16 05:28:43.038978 kubelet[2711]: I0516 05:28:43.038839 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:28:43.098954 sudo[2750]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 16 05:28:43.099302 sudo[2750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 16 05:28:43.158892 kubelet[2711]: E0516 05:28:43.158853 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:43.159299 kubelet[2711]: E0516 05:28:43.159277 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:43.159439 kubelet[2711]: E0516 05:28:43.159400 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:43.558830 sudo[2750]: pam_unix(sudo:session): session closed for user root May 16 05:28:43.727183 kubelet[2711]: I0516 05:28:43.727151 2711 apiserver.go:52] "Watching apiserver" May 16 05:28:43.737424 kubelet[2711]: I0516 05:28:43.737387 2711 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 05:28:43.765485 kubelet[2711]: I0516 05:28:43.765281 2711 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 05:28:43.765758 kubelet[2711]: E0516 05:28:43.765535 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:43.765758 kubelet[2711]: I0516 05:28:43.765654 2711 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 05:28:43.773604 kubelet[2711]: E0516 05:28:43.773556 2711 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" 
pod="kube-system/kube-scheduler-localhost" May 16 05:28:43.773998 kubelet[2711]: E0516 05:28:43.773775 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:43.774694 kubelet[2711]: E0516 05:28:43.774659 2711 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 05:28:43.774891 kubelet[2711]: E0516 05:28:43.774751 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:43.783798 kubelet[2711]: I0516 05:28:43.783595 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7835262969999999 podStartE2EDuration="1.783526297s" podCreationTimestamp="2025-05-16 05:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:28:43.783480921 +0000 UTC m=+1.112093230" watchObservedRunningTime="2025-05-16 05:28:43.783526297 +0000 UTC m=+1.112138595" May 16 05:28:43.798750 kubelet[2711]: I0516 05:28:43.798663 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.7986429409999998 podStartE2EDuration="2.798642941s" podCreationTimestamp="2025-05-16 05:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:28:43.798586856 +0000 UTC m=+1.127199154" watchObservedRunningTime="2025-05-16 05:28:43.798642941 +0000 UTC m=+1.127255239" May 16 05:28:43.798983 kubelet[2711]: I0516 05:28:43.798826 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.798820204 podStartE2EDuration="2.798820204s" podCreationTimestamp="2025-05-16 05:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:28:43.791032456 +0000 UTC m=+1.119644754" watchObservedRunningTime="2025-05-16 05:28:43.798820204 +0000 UTC m=+1.127432502" May 16 05:28:44.768392 kubelet[2711]: E0516 05:28:44.768358 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:44.768822 kubelet[2711]: E0516 05:28:44.768704 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:45.008392 sudo[1801]: pam_unix(sudo:session): session closed for user root May 16 05:28:45.010294 sshd[1800]: Connection closed by 10.0.0.1 port 35448 May 16 05:28:45.010761 sshd-session[1798]: pam_unix(sshd:session): session closed for user core May 16 05:28:45.015414 systemd[1]: sshd@6-10.0.0.134:22-10.0.0.1:35448.service: Deactivated successfully. May 16 05:28:45.017789 systemd[1]: session-7.scope: Deactivated successfully. May 16 05:28:45.018033 systemd[1]: session-7.scope: Consumed 4.599s CPU time, 261.5M memory peak. May 16 05:28:45.019702 systemd-logind[1573]: Session 7 logged out. Waiting for processes to exit. 
May 16 05:28:45.020979 systemd-logind[1573]: Removed session 7. May 16 05:28:45.769797 kubelet[2711]: E0516 05:28:45.769748 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:47.695972 kubelet[2711]: E0516 05:28:47.695923 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:48.227444 kubelet[2711]: I0516 05:28:48.227341 2711 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 05:28:48.229003 containerd[1584]: time="2025-05-16T05:28:48.228769141Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 05:28:48.229433 kubelet[2711]: I0516 05:28:48.228975 2711 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 05:28:48.886774 systemd[1]: Created slice kubepods-besteffort-pod103f0e49_6455_48c9_a05c_29573b480fca.slice - libcontainer container kubepods-besteffort-pod103f0e49_6455_48c9_a05c_29573b480fca.slice. May 16 05:28:48.903991 systemd[1]: Created slice kubepods-burstable-pod8520c209_0a41_4078_8256_47e643d3f48e.slice - libcontainer container kubepods-burstable-pod8520c209_0a41_4078_8256_47e643d3f48e.slice. May 16 05:28:48.978274 kubelet[2711]: I0516 05:28:48.978227 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8520c209-0a41-4078-8256-47e643d3f48e-clustermesh-secrets\") pod \"cilium-hqqcj\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " pod="kube-system/cilium-hqqcj" May 16 05:28:48.978274 kubelet[2711]: I0516 05:28:48.978264 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8520c209-0a41-4078-8256-47e643d3f48e-hubble-tls\") pod \"cilium-hqqcj\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " pod="kube-system/cilium-hqqcj" May 16 05:28:48.978274 kubelet[2711]: I0516 05:28:48.978282 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/103f0e49-6455-48c9-a05c-29573b480fca-kube-proxy\") pod \"kube-proxy-dpj75\" (UID: \"103f0e49-6455-48c9-a05c-29573b480fca\") " pod="kube-system/kube-proxy-dpj75" May 16 05:28:48.978743 kubelet[2711]: I0516 05:28:48.978296 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/103f0e49-6455-48c9-a05c-29573b480fca-xtables-lock\") pod \"kube-proxy-dpj75\" (UID: \"103f0e49-6455-48c9-a05c-29573b480fca\") " pod="kube-system/kube-proxy-dpj75" May 16 05:28:48.978743 kubelet[2711]: I0516 05:28:48.978313 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-host-proc-sys-kernel\") pod \"cilium-hqqcj\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " pod="kube-system/cilium-hqqcj" May 16 05:28:48.978743 kubelet[2711]: I0516 05:28:48.978432 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-cni-path\") pod \"cilium-hqqcj\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " pod="kube-system/cilium-hqqcj" May 16 05:28:48.978743 kubelet[2711]: I0516 05:28:48.978476 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-etc-cni-netd\") pod \"cilium-hqqcj\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " pod="kube-system/cilium-hqqcj" May 16 05:28:48.978743 kubelet[2711]: I0516 05:28:48.978542 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98vwj\" (UniqueName: \"kubernetes.io/projected/103f0e49-6455-48c9-a05c-29573b480fca-kube-api-access-98vwj\") pod \"kube-proxy-dpj75\" (UID: \"103f0e49-6455-48c9-a05c-29573b480fca\") " pod="kube-system/kube-proxy-dpj75" May 16 05:28:48.978743 kubelet[2711]: I0516 05:28:48.978595 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-cilium-run\") pod \"cilium-hqqcj\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " pod="kube-system/cilium-hqqcj" May 16 05:28:48.978876 kubelet[2711]: I0516 05:28:48.978626 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-bpf-maps\") pod \"cilium-hqqcj\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " pod="kube-system/cilium-hqqcj" May 16 05:28:48.978876 kubelet[2711]: I0516 05:28:48.978645 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-hostproc\") pod \"cilium-hqqcj\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " pod="kube-system/cilium-hqqcj" May 16 05:28:48.978876 kubelet[2711]: I0516 05:28:48.978671 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-xtables-lock\") pod \"cilium-hqqcj\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " pod="kube-system/cilium-hqqcj" May 16 05:28:48.978876 kubelet[2711]: I0516 05:28:48.978703 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-lib-modules\") pod \"cilium-hqqcj\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " pod="kube-system/cilium-hqqcj" May 16 05:28:48.978876 kubelet[2711]: I0516 05:28:48.978717 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8520c209-0a41-4078-8256-47e643d3f48e-cilium-config-path\") pod \"cilium-hqqcj\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " pod="kube-system/cilium-hqqcj" May 16 05:28:48.978876 kubelet[2711]: I0516 05:28:48.978740 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/103f0e49-6455-48c9-a05c-29573b480fca-lib-modules\") pod \"kube-proxy-dpj75\" (UID: \"103f0e49-6455-48c9-a05c-29573b480fca\") " pod="kube-system/kube-proxy-dpj75" May 16 
05:28:48.979094 kubelet[2711]: I0516 05:28:48.978757 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v676j\" (UniqueName: \"kubernetes.io/projected/8520c209-0a41-4078-8256-47e643d3f48e-kube-api-access-v676j\") pod \"cilium-hqqcj\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " pod="kube-system/cilium-hqqcj" May 16 05:28:48.979094 kubelet[2711]: I0516 05:28:48.978779 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-host-proc-sys-net\") pod \"cilium-hqqcj\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " pod="kube-system/cilium-hqqcj" May 16 05:28:48.979094 kubelet[2711]: I0516 05:28:48.978813 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-cilium-cgroup\") pod \"cilium-hqqcj\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " pod="kube-system/cilium-hqqcj" May 16 05:28:49.197445 kubelet[2711]: E0516 05:28:49.197305 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:49.198794 containerd[1584]: time="2025-05-16T05:28:49.198729718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dpj75,Uid:103f0e49-6455-48c9-a05c-29573b480fca,Namespace:kube-system,Attempt:0,}" May 16 05:28:49.208107 kubelet[2711]: E0516 05:28:49.208074 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:49.208657 containerd[1584]: time="2025-05-16T05:28:49.208616297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hqqcj,Uid:8520c209-0a41-4078-8256-47e643d3f48e,Namespace:kube-system,Attempt:0,}" May 16 05:28:49.374238 systemd[1]: Created slice kubepods-besteffort-pod9db568bf_2ba8_4d14_87b6_1e4c3322b82c.slice - libcontainer container kubepods-besteffort-pod9db568bf_2ba8_4d14_87b6_1e4c3322b82c.slice. 
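The two "Created slice" entries above show how the kubelet, with the systemd cgroup driver, derives a pod's cgroup slice name from its QoS class and UID: the UID's dashes become underscores and the result is prefixed with kubepods-<qos>-pod. A short sketch reproducing the names seen here:

#!/usr/bin/env python3
"""Reproduce the kubepods slice names from the 'Created slice' entries above."""

def kubepods_slice(qos_class: str, pod_uid: str) -> str:
    # '-' separates hierarchy levels in systemd slice names, so dashes in the UID become underscores
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(kubepods_slice("besteffort", "103f0e49-6455-48c9-a05c-29573b480fca"))  # kube-proxy-dpj75
print(kubepods_slice("burstable", "8520c209-0a41-4078-8256-47e643d3f48e"))   # cilium-hqqcj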
May 16 05:28:49.461163 kubelet[2711]: I0516 05:28:49.381397 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsrr6\" (UniqueName: \"kubernetes.io/projected/9db568bf-2ba8-4d14-87b6-1e4c3322b82c-kube-api-access-zsrr6\") pod \"cilium-operator-6c4d7847fc-p2t9v\" (UID: \"9db568bf-2ba8-4d14-87b6-1e4c3322b82c\") " pod="kube-system/cilium-operator-6c4d7847fc-p2t9v" May 16 05:28:49.461163 kubelet[2711]: I0516 05:28:49.381430 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9db568bf-2ba8-4d14-87b6-1e4c3322b82c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-p2t9v\" (UID: \"9db568bf-2ba8-4d14-87b6-1e4c3322b82c\") " pod="kube-system/cilium-operator-6c4d7847fc-p2t9v" May 16 05:28:49.505880 containerd[1584]: time="2025-05-16T05:28:49.505805201Z" level=info msg="connecting to shim e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236" address="unix:///run/containerd/s/08bc7371e4da49d04cb64ac8c06acec093403d3a33929b28d6a4a20147c5e56c" namespace=k8s.io protocol=ttrpc version=3 May 16 05:28:49.508628 containerd[1584]: time="2025-05-16T05:28:49.508580468Z" level=info msg="connecting to shim 2fc3c82a45c008bb2b4c35cc26b9d679c52af7ad5abaa25564de8837c5f7269a" address="unix:///run/containerd/s/ebfe5d3492628dae8a690b50787bf954e465a95a8b1242be9013edf41978e486" namespace=k8s.io protocol=ttrpc version=3 May 16 05:28:49.533298 systemd[1]: Started cri-containerd-e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236.scope - libcontainer container e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236. May 16 05:28:49.537403 systemd[1]: Started cri-containerd-2fc3c82a45c008bb2b4c35cc26b9d679c52af7ad5abaa25564de8837c5f7269a.scope - libcontainer container 2fc3c82a45c008bb2b4c35cc26b9d679c52af7ad5abaa25564de8837c5f7269a. 
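Each "connecting to shim" entry pairs a sandbox or container ID with its shim's ttrpc socket under /run/containerd/s/; containers started inside a sandbox reuse the sandbox's address (the kube-proxy container further below connects to the same ebfe5d34… socket as the 2fc3c82a… sandbox here). A throwaway sketch that groups IDs by shim address when the log is fed in as plain text; the pattern only targets the msg format visible in these entries:

#!/usr/bin/env python3
"""Group containerd 'connecting to shim' entries by shim socket address (read the log from stdin)."""
import re
import sys
from collections import defaultdict

PATTERN = re.compile(r'connecting to shim (\w+)" address="(unix://[^"]+)"')

by_address = defaultdict(list)
for line in sys.stdin:
    for shim_id, address in PATTERN.findall(line):
        by_address[address].append(shim_id[:12])

for address, ids in by_address.items():
    print(address, "->", ", ".join(ids))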
May 16 05:28:49.570516 containerd[1584]: time="2025-05-16T05:28:49.570473428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hqqcj,Uid:8520c209-0a41-4078-8256-47e643d3f48e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\"" May 16 05:28:49.571211 kubelet[2711]: E0516 05:28:49.571167 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:49.572455 containerd[1584]: time="2025-05-16T05:28:49.572417238Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 05:28:49.573917 containerd[1584]: time="2025-05-16T05:28:49.573862006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dpj75,Uid:103f0e49-6455-48c9-a05c-29573b480fca,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fc3c82a45c008bb2b4c35cc26b9d679c52af7ad5abaa25564de8837c5f7269a\"" May 16 05:28:49.574516 kubelet[2711]: E0516 05:28:49.574489 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:49.576603 containerd[1584]: time="2025-05-16T05:28:49.576559966Z" level=info msg="CreateContainer within sandbox \"2fc3c82a45c008bb2b4c35cc26b9d679c52af7ad5abaa25564de8837c5f7269a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 05:28:49.587913 containerd[1584]: time="2025-05-16T05:28:49.587865083Z" level=info msg="Container c8ff19de00daf4a6090f0779e8d8a45b37bc3df3964a786322a94341c2b15ece: CDI devices from CRI Config.CDIDevices: []" May 16 05:28:49.598900 containerd[1584]: time="2025-05-16T05:28:49.598859086Z" level=info msg="CreateContainer within sandbox \"2fc3c82a45c008bb2b4c35cc26b9d679c52af7ad5abaa25564de8837c5f7269a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c8ff19de00daf4a6090f0779e8d8a45b37bc3df3964a786322a94341c2b15ece\"" May 16 05:28:49.599508 containerd[1584]: time="2025-05-16T05:28:49.599482366Z" level=info msg="StartContainer for \"c8ff19de00daf4a6090f0779e8d8a45b37bc3df3964a786322a94341c2b15ece\"" May 16 05:28:49.601305 containerd[1584]: time="2025-05-16T05:28:49.601265189Z" level=info msg="connecting to shim c8ff19de00daf4a6090f0779e8d8a45b37bc3df3964a786322a94341c2b15ece" address="unix:///run/containerd/s/ebfe5d3492628dae8a690b50787bf954e465a95a8b1242be9013edf41978e486" protocol=ttrpc version=3 May 16 05:28:49.630351 systemd[1]: Started cri-containerd-c8ff19de00daf4a6090f0779e8d8a45b37bc3df3964a786322a94341c2b15ece.scope - libcontainer container c8ff19de00daf4a6090f0779e8d8a45b37bc3df3964a786322a94341c2b15ece. 
May 16 05:28:49.676706 containerd[1584]: time="2025-05-16T05:28:49.676657645Z" level=info msg="StartContainer for \"c8ff19de00daf4a6090f0779e8d8a45b37bc3df3964a786322a94341c2b15ece\" returns successfully" May 16 05:28:49.762853 kubelet[2711]: E0516 05:28:49.762491 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:49.763907 containerd[1584]: time="2025-05-16T05:28:49.763578585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-p2t9v,Uid:9db568bf-2ba8-4d14-87b6-1e4c3322b82c,Namespace:kube-system,Attempt:0,}" May 16 05:28:49.778352 kubelet[2711]: E0516 05:28:49.778316 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:49.787175 kubelet[2711]: I0516 05:28:49.786978 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dpj75" podStartSLOduration=1.786963589 podStartE2EDuration="1.786963589s" podCreationTimestamp="2025-05-16 05:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:28:49.786802841 +0000 UTC m=+7.115415129" watchObservedRunningTime="2025-05-16 05:28:49.786963589 +0000 UTC m=+7.115575887" May 16 05:28:49.807033 containerd[1584]: time="2025-05-16T05:28:49.806980904Z" level=info msg="connecting to shim 7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d" address="unix:///run/containerd/s/21a0a0536c6fd673174a96b1ace83710e16146fd1e3393d6649c9ecdc16b47f0" namespace=k8s.io protocol=ttrpc version=3 May 16 05:28:49.867332 systemd[1]: Started cri-containerd-7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d.scope - libcontainer container 7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d. May 16 05:28:49.911658 containerd[1584]: time="2025-05-16T05:28:49.911617376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-p2t9v,Uid:9db568bf-2ba8-4d14-87b6-1e4c3322b82c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d\"" May 16 05:28:49.912428 kubelet[2711]: E0516 05:28:49.912400 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:52.888032 kubelet[2711]: E0516 05:28:52.887892 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:52.890909 kubelet[2711]: E0516 05:28:52.890866 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:53.372641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181896638.mount: Deactivated successfully. 
May 16 05:28:53.785878 kubelet[2711]: E0516 05:28:53.785636 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:53.785878 kubelet[2711]: E0516 05:28:53.785737 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:54.786882 kubelet[2711]: E0516 05:28:54.786844 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:57.511891 update_engine[1575]: I20250516 05:28:57.511802 1575 update_attempter.cc:509] Updating boot flags... May 16 05:28:57.565103 containerd[1584]: time="2025-05-16T05:28:57.564245675Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:57.565551 containerd[1584]: time="2025-05-16T05:28:57.565515161Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 16 05:28:57.568289 containerd[1584]: time="2025-05-16T05:28:57.568256206Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:28:57.570273 containerd[1584]: time="2025-05-16T05:28:57.570239615Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.997785086s" May 16 05:28:57.570273 containerd[1584]: time="2025-05-16T05:28:57.570271206Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 16 05:28:57.574063 containerd[1584]: time="2025-05-16T05:28:57.572591704Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 05:28:57.575361 containerd[1584]: time="2025-05-16T05:28:57.575330554Z" level=info msg="CreateContainer within sandbox \"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 05:28:57.586733 containerd[1584]: time="2025-05-16T05:28:57.586688913Z" level=info msg="Container e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d: CDI devices from CRI Config.CDIDevices: []" May 16 05:28:57.592630 containerd[1584]: time="2025-05-16T05:28:57.592595037Z" level=info msg="CreateContainer within sandbox \"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d\"" May 16 05:28:57.595170 containerd[1584]: time="2025-05-16T05:28:57.593368483Z" level=info msg="StartContainer for 
\"e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d\"" May 16 05:28:57.595170 containerd[1584]: time="2025-05-16T05:28:57.594043052Z" level=info msg="connecting to shim e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d" address="unix:///run/containerd/s/08bc7371e4da49d04cb64ac8c06acec093403d3a33929b28d6a4a20147c5e56c" protocol=ttrpc version=3 May 16 05:28:57.703618 kubelet[2711]: E0516 05:28:57.703564 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:57.726277 systemd[1]: Started cri-containerd-e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d.scope - libcontainer container e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d. May 16 05:28:57.784085 containerd[1584]: time="2025-05-16T05:28:57.783589861Z" level=info msg="StartContainer for \"e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d\" returns successfully" May 16 05:28:57.793184 kubelet[2711]: E0516 05:28:57.793130 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:57.794430 kubelet[2711]: E0516 05:28:57.794395 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:57.796386 systemd[1]: cri-containerd-e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d.scope: Deactivated successfully. May 16 05:28:57.798679 containerd[1584]: time="2025-05-16T05:28:57.798377398Z" level=info msg="received exit event container_id:\"e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d\" id:\"e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d\" pid:3153 exited_at:{seconds:1747373337 nanos:797841432}" May 16 05:28:57.798679 containerd[1584]: time="2025-05-16T05:28:57.798456748Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d\" id:\"e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d\" pid:3153 exited_at:{seconds:1747373337 nanos:797841432}" May 16 05:28:57.823145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d-rootfs.mount: Deactivated successfully. 
May 16 05:28:58.796591 kubelet[2711]: E0516 05:28:58.796559 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:58.798280 containerd[1584]: time="2025-05-16T05:28:58.798236766Z" level=info msg="CreateContainer within sandbox \"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 05:28:58.813191 containerd[1584]: time="2025-05-16T05:28:58.812584673Z" level=info msg="Container a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85: CDI devices from CRI Config.CDIDevices: []" May 16 05:28:58.818916 containerd[1584]: time="2025-05-16T05:28:58.818866050Z" level=info msg="CreateContainer within sandbox \"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85\"" May 16 05:28:58.819377 containerd[1584]: time="2025-05-16T05:28:58.819344717Z" level=info msg="StartContainer for \"a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85\"" May 16 05:28:58.820063 containerd[1584]: time="2025-05-16T05:28:58.820034664Z" level=info msg="connecting to shim a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85" address="unix:///run/containerd/s/08bc7371e4da49d04cb64ac8c06acec093403d3a33929b28d6a4a20147c5e56c" protocol=ttrpc version=3 May 16 05:28:58.842285 systemd[1]: Started cri-containerd-a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85.scope - libcontainer container a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85. May 16 05:28:58.872894 containerd[1584]: time="2025-05-16T05:28:58.872851897Z" level=info msg="StartContainer for \"a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85\" returns successfully" May 16 05:28:58.886956 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 05:28:58.887234 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 05:28:58.888307 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 16 05:28:58.890202 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 05:28:58.890653 containerd[1584]: time="2025-05-16T05:28:58.890619020Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85\" id:\"a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85\" pid:3197 exited_at:{seconds:1747373338 nanos:890341675}" May 16 05:28:58.890781 containerd[1584]: time="2025-05-16T05:28:58.890725852Z" level=info msg="received exit event container_id:\"a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85\" id:\"a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85\" pid:3197 exited_at:{seconds:1747373338 nanos:890341675}" May 16 05:28:58.892082 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 05:28:58.892520 systemd[1]: cri-containerd-a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85.scope: Deactivated successfully. May 16 05:28:58.920032 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 16 05:28:59.800507 kubelet[2711]: E0516 05:28:59.800469 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:28:59.803000 containerd[1584]: time="2025-05-16T05:28:59.802939168Z" level=info msg="CreateContainer within sandbox \"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 05:28:59.811739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85-rootfs.mount: Deactivated successfully. May 16 05:28:59.819094 containerd[1584]: time="2025-05-16T05:28:59.818973752Z" level=info msg="Container 9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1: CDI devices from CRI Config.CDIDevices: []" May 16 05:28:59.829506 containerd[1584]: time="2025-05-16T05:28:59.829453209Z" level=info msg="CreateContainer within sandbox \"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1\"" May 16 05:28:59.830038 containerd[1584]: time="2025-05-16T05:28:59.829970979Z" level=info msg="StartContainer for \"9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1\"" May 16 05:28:59.831595 containerd[1584]: time="2025-05-16T05:28:59.831557662Z" level=info msg="connecting to shim 9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1" address="unix:///run/containerd/s/08bc7371e4da49d04cb64ac8c06acec093403d3a33929b28d6a4a20147c5e56c" protocol=ttrpc version=3 May 16 05:28:59.853339 systemd[1]: Started cri-containerd-9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1.scope - libcontainer container 9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1. May 16 05:28:59.896320 systemd[1]: cri-containerd-9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1.scope: Deactivated successfully. May 16 05:28:59.899157 containerd[1584]: time="2025-05-16T05:28:59.899056076Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1\" id:\"9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1\" pid:3245 exited_at:{seconds:1747373339 nanos:898734096}" May 16 05:28:59.899550 containerd[1584]: time="2025-05-16T05:28:59.899478245Z" level=info msg="received exit event container_id:\"9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1\" id:\"9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1\" pid:3245 exited_at:{seconds:1747373339 nanos:898734096}" May 16 05:28:59.899838 containerd[1584]: time="2025-05-16T05:28:59.899705845Z" level=info msg="StartContainer for \"9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1\" returns successfully" May 16 05:28:59.924715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1-rootfs.mount: Deactivated successfully. 
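The 9855e685… container is the mount-bpf-fs step of cilium-hqqcj; by convention this Cilium init container makes sure a BPF filesystem is mounted (typically at /sys/fs/bpf — an assumption here, since the mount point itself never appears in this log). One way to verify the outcome on the node is to scan /proc/mounts for the bpf filesystem type:

#!/usr/bin/env python3
"""List BPF filesystem mounts by scanning /proc/mounts."""

with open("/proc/mounts") as f:
    bpf_mounts = [fields[1] for fields in (line.split() for line in f) if fields[2] == "bpf"]

print("bpffs mounted at:", ", ".join(bpf_mounts) if bpf_mounts else "none")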
May 16 05:29:00.591856 containerd[1584]: time="2025-05-16T05:29:00.591801590Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:29:00.592539 containerd[1584]: time="2025-05-16T05:29:00.592499280Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 16 05:29:00.593622 containerd[1584]: time="2025-05-16T05:29:00.593574805Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:29:00.594767 containerd[1584]: time="2025-05-16T05:29:00.594731242Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.022106005s" May 16 05:29:00.594767 containerd[1584]: time="2025-05-16T05:29:00.594759556Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 16 05:29:00.596817 containerd[1584]: time="2025-05-16T05:29:00.596780600Z" level=info msg="CreateContainer within sandbox \"7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 16 05:29:00.604072 containerd[1584]: time="2025-05-16T05:29:00.604018954Z" level=info msg="Container 79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9: CDI devices from CRI Config.CDIDevices: []" May 16 05:29:00.610049 containerd[1584]: time="2025-05-16T05:29:00.610014847Z" level=info msg="CreateContainer within sandbox \"7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\"" May 16 05:29:00.610446 containerd[1584]: time="2025-05-16T05:29:00.610409865Z" level=info msg="StartContainer for \"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\"" May 16 05:29:00.611213 containerd[1584]: time="2025-05-16T05:29:00.611188878Z" level=info msg="connecting to shim 79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9" address="unix:///run/containerd/s/21a0a0536c6fd673174a96b1ace83710e16146fd1e3393d6649c9ecdc16b47f0" protocol=ttrpc version=3 May 16 05:29:00.638261 systemd[1]: Started cri-containerd-79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9.scope - libcontainer container 79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9. 
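The pull statistics in the surrounding entries allow a rough throughput estimate: the operator-generic image reports 18,904,197 bytes read over 3.022106005 s, and the cilium agent image earlier reported 166,730,503 bytes over 7.997785086 s:

#!/usr/bin/env python3
"""Rough pull throughput from the 'bytes read' and 'Pulled image ... in <duration>' figures above."""

pulls = {
    "quay.io/cilium/cilium:v1.12.5": (166_730_503, 7.997785086),
    "quay.io/cilium/operator-generic:v1.12.5": (18_904_197, 3.022106005),
}

for image, (bytes_read, seconds) in pulls.items():
    print(f"{image}: {bytes_read / seconds / 2**20:.1f} MiB/s")

Roughly 20 MiB/s for the agent image and 6 MiB/s for the operator image, consistent with the ~8 s and ~3 s pull times reported by containerd.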
May 16 05:29:00.667823 containerd[1584]: time="2025-05-16T05:29:00.667781827Z" level=info msg="StartContainer for \"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\" returns successfully" May 16 05:29:00.807679 kubelet[2711]: E0516 05:29:00.807638 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:00.812354 containerd[1584]: time="2025-05-16T05:29:00.812314407Z" level=info msg="CreateContainer within sandbox \"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 05:29:00.812665 kubelet[2711]: E0516 05:29:00.812629 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:00.899713 kubelet[2711]: I0516 05:29:00.899458 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-p2t9v" podStartSLOduration=1.21716991 podStartE2EDuration="11.899443036s" podCreationTimestamp="2025-05-16 05:28:49 +0000 UTC" firstStartedPulling="2025-05-16 05:28:49.913319536 +0000 UTC m=+7.241931834" lastFinishedPulling="2025-05-16 05:29:00.595592662 +0000 UTC m=+17.924204960" observedRunningTime="2025-05-16 05:29:00.898930757 +0000 UTC m=+18.227543055" watchObservedRunningTime="2025-05-16 05:29:00.899443036 +0000 UTC m=+18.228055334" May 16 05:29:00.911172 containerd[1584]: time="2025-05-16T05:29:00.911088829Z" level=info msg="Container 072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447: CDI devices from CRI Config.CDIDevices: []" May 16 05:29:00.918889 containerd[1584]: time="2025-05-16T05:29:00.918850573Z" level=info msg="CreateContainer within sandbox \"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447\"" May 16 05:29:00.919717 containerd[1584]: time="2025-05-16T05:29:00.919592527Z" level=info msg="StartContainer for \"072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447\"" May 16 05:29:00.920782 containerd[1584]: time="2025-05-16T05:29:00.920761980Z" level=info msg="connecting to shim 072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447" address="unix:///run/containerd/s/08bc7371e4da49d04cb64ac8c06acec093403d3a33929b28d6a4a20147c5e56c" protocol=ttrpc version=3 May 16 05:29:00.945276 systemd[1]: Started cri-containerd-072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447.scope - libcontainer container 072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447. May 16 05:29:00.978348 systemd[1]: cri-containerd-072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447.scope: Deactivated successfully. 
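The cilium-operator startup entry above makes the relationship between the two reported durations explicit: podStartSLOduration is podStartE2EDuration minus the time spent pulling images (lastFinishedPulling − firstStartedPulling). Recomputing it from the timestamps in that entry (truncated to microseconds, since Python's datetime does not carry nanoseconds):

#!/usr/bin/env python3
"""Recompute podStartSLOduration for cilium-operator-6c4d7847fc-p2t9v from the entry above."""
from datetime import datetime, timezone

def parse(ts: str) -> datetime:
    # log timestamps carry nanoseconds; keep microsecond precision for datetime
    return datetime.strptime(ts[:26], "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

first_started_pulling = parse("2025-05-16 05:28:49.913319536")
last_finished_pulling = parse("2025-05-16 05:29:00.595592662")
pod_start_e2e = 11.899443036  # podStartE2EDuration

pull_time = (last_finished_pulling - first_started_pulling).total_seconds()
print(pod_start_e2e - pull_time)  # ≈ 1.21717, matching podStartSLOduration=1.21716991

Pods that never pulled an image (firstStartedPulling at the zero value 0001-01-01) report identical SLO and E2E durations, which is what the earlier kube-apiserver and kube-proxy entries show.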
May 16 05:29:00.979440 containerd[1584]: time="2025-05-16T05:29:00.978668864Z" level=info msg="received exit event container_id:\"072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447\" id:\"072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447\" pid:3335 exited_at:{seconds:1747373340 nanos:978483172}" May 16 05:29:00.979440 containerd[1584]: time="2025-05-16T05:29:00.978824037Z" level=info msg="TaskExit event in podsandbox handler container_id:\"072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447\" id:\"072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447\" pid:3335 exited_at:{seconds:1747373340 nanos:978483172}" May 16 05:29:00.979922 containerd[1584]: time="2025-05-16T05:29:00.979805143Z" level=info msg="StartContainer for \"072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447\" returns successfully" May 16 05:29:01.000641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447-rootfs.mount: Deactivated successfully. May 16 05:29:01.817822 kubelet[2711]: E0516 05:29:01.817771 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:01.818359 kubelet[2711]: E0516 05:29:01.817903 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:01.819387 containerd[1584]: time="2025-05-16T05:29:01.819350430Z" level=info msg="CreateContainer within sandbox \"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 05:29:01.833530 containerd[1584]: time="2025-05-16T05:29:01.833469217Z" level=info msg="Container 4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924: CDI devices from CRI Config.CDIDevices: []" May 16 05:29:01.840677 containerd[1584]: time="2025-05-16T05:29:01.840625709Z" level=info msg="CreateContainer within sandbox \"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\"" May 16 05:29:01.841420 containerd[1584]: time="2025-05-16T05:29:01.841155771Z" level=info msg="StartContainer for \"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\"" May 16 05:29:01.842094 containerd[1584]: time="2025-05-16T05:29:01.842047598Z" level=info msg="connecting to shim 4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924" address="unix:///run/containerd/s/08bc7371e4da49d04cb64ac8c06acec093403d3a33929b28d6a4a20147c5e56c" protocol=ttrpc version=3 May 16 05:29:01.875276 systemd[1]: Started cri-containerd-4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924.scope - libcontainer container 4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924. 
May 16 05:29:01.910853 containerd[1584]: time="2025-05-16T05:29:01.910801483Z" level=info msg="StartContainer for \"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\" returns successfully" May 16 05:29:01.990805 containerd[1584]: time="2025-05-16T05:29:01.990748053Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\" id:\"19541cbfb8f363db05fe66ae61af602a03ecea6658ea5f094ce97925e416f92d\" pid:3402 exited_at:{seconds:1747373341 nanos:989782578}" May 16 05:29:02.058421 kubelet[2711]: I0516 05:29:02.058392 2711 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 16 05:29:02.141589 systemd[1]: Created slice kubepods-burstable-pod773b8091_2959_4920_85f3_d1e364a9468c.slice - libcontainer container kubepods-burstable-pod773b8091_2959_4920_85f3_d1e364a9468c.slice. May 16 05:29:02.148284 systemd[1]: Created slice kubepods-burstable-pod5632b72d_7968_44da_bf98_98ac2c4416a9.slice - libcontainer container kubepods-burstable-pod5632b72d_7968_44da_bf98_98ac2c4416a9.slice. May 16 05:29:02.162470 kubelet[2711]: I0516 05:29:02.162427 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/773b8091-2959-4920-85f3-d1e364a9468c-config-volume\") pod \"coredns-668d6bf9bc-77d6n\" (UID: \"773b8091-2959-4920-85f3-d1e364a9468c\") " pod="kube-system/coredns-668d6bf9bc-77d6n" May 16 05:29:02.162470 kubelet[2711]: I0516 05:29:02.162463 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv8kl\" (UniqueName: \"kubernetes.io/projected/773b8091-2959-4920-85f3-d1e364a9468c-kube-api-access-hv8kl\") pod \"coredns-668d6bf9bc-77d6n\" (UID: \"773b8091-2959-4920-85f3-d1e364a9468c\") " pod="kube-system/coredns-668d6bf9bc-77d6n" May 16 05:29:02.162470 kubelet[2711]: I0516 05:29:02.162483 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsnnf\" (UniqueName: \"kubernetes.io/projected/5632b72d-7968-44da-bf98-98ac2c4416a9-kube-api-access-dsnnf\") pod \"coredns-668d6bf9bc-7cx6b\" (UID: \"5632b72d-7968-44da-bf98-98ac2c4416a9\") " pod="kube-system/coredns-668d6bf9bc-7cx6b" May 16 05:29:02.162792 kubelet[2711]: I0516 05:29:02.162509 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5632b72d-7968-44da-bf98-98ac2c4416a9-config-volume\") pod \"coredns-668d6bf9bc-7cx6b\" (UID: \"5632b72d-7968-44da-bf98-98ac2c4416a9\") " pod="kube-system/coredns-668d6bf9bc-7cx6b" May 16 05:29:02.447046 kubelet[2711]: E0516 05:29:02.446902 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:02.447845 containerd[1584]: time="2025-05-16T05:29:02.447793862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-77d6n,Uid:773b8091-2959-4920-85f3-d1e364a9468c,Namespace:kube-system,Attempt:0,}" May 16 05:29:02.453091 kubelet[2711]: E0516 05:29:02.453052 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:02.453659 containerd[1584]: time="2025-05-16T05:29:02.453613461Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-7cx6b,Uid:5632b72d-7968-44da-bf98-98ac2c4416a9,Namespace:kube-system,Attempt:0,}" May 16 05:29:02.825749 kubelet[2711]: E0516 05:29:02.825713 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:02.846325 kubelet[2711]: I0516 05:29:02.845672 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hqqcj" podStartSLOduration=6.845495923 podStartE2EDuration="14.845649404s" podCreationTimestamp="2025-05-16 05:28:48 +0000 UTC" firstStartedPulling="2025-05-16 05:28:49.571962681 +0000 UTC m=+6.900574979" lastFinishedPulling="2025-05-16 05:28:57.572116162 +0000 UTC m=+14.900728460" observedRunningTime="2025-05-16 05:29:02.844719797 +0000 UTC m=+20.173332085" watchObservedRunningTime="2025-05-16 05:29:02.845649404 +0000 UTC m=+20.174261702" May 16 05:29:03.827385 kubelet[2711]: E0516 05:29:03.827349 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:04.104934 systemd-networkd[1488]: cilium_host: Link UP May 16 05:29:04.105089 systemd-networkd[1488]: cilium_net: Link UP May 16 05:29:04.105320 systemd-networkd[1488]: cilium_net: Gained carrier May 16 05:29:04.105507 systemd-networkd[1488]: cilium_host: Gained carrier May 16 05:29:04.204077 systemd-networkd[1488]: cilium_vxlan: Link UP May 16 05:29:04.204086 systemd-networkd[1488]: cilium_vxlan: Gained carrier May 16 05:29:04.405174 kernel: NET: Registered PF_ALG protocol family May 16 05:29:04.543354 systemd-networkd[1488]: cilium_host: Gained IPv6LL May 16 05:29:04.695397 systemd-networkd[1488]: cilium_net: Gained IPv6LL May 16 05:29:04.829364 kubelet[2711]: E0516 05:29:04.829333 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:05.034816 systemd-networkd[1488]: lxc_health: Link UP May 16 05:29:05.035607 systemd-networkd[1488]: lxc_health: Gained carrier May 16 05:29:05.527414 systemd-networkd[1488]: cilium_vxlan: Gained IPv6LL May 16 05:29:05.565907 systemd-networkd[1488]: lxc2fb1faa3022c: Link UP May 16 05:29:05.567173 kernel: eth0: renamed from tmpa2800 May 16 05:29:05.567611 systemd-networkd[1488]: lxc2fb1faa3022c: Gained carrier May 16 05:29:05.586598 kernel: eth0: renamed from tmp5ebc3 May 16 05:29:05.585969 systemd-networkd[1488]: lxc4d9242c64bbc: Link UP May 16 05:29:05.588305 systemd-networkd[1488]: lxc4d9242c64bbc: Gained carrier May 16 05:29:05.830815 kubelet[2711]: E0516 05:29:05.830779 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:06.807381 systemd-networkd[1488]: lxc_health: Gained IPv6LL May 16 05:29:06.831818 kubelet[2711]: E0516 05:29:06.831791 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:07.127342 systemd-networkd[1488]: lxc4d9242c64bbc: Gained IPv6LL May 16 05:29:07.511363 systemd-networkd[1488]: lxc2fb1faa3022c: Gained IPv6LL May 16 05:29:07.833938 kubelet[2711]: E0516 05:29:07.833895 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:09.188224 containerd[1584]: time="2025-05-16T05:29:09.188170304Z" level=info msg="connecting to shim a28001e67c10da4af78ab10d75745b99748b11354c6f32e48fdf03eb029d4a70" address="unix:///run/containerd/s/f92cee26bd7775e729e23489e8ff6299166d4666d38331022b48e979ea0401f3" namespace=k8s.io protocol=ttrpc version=3 May 16 05:29:09.191300 containerd[1584]: time="2025-05-16T05:29:09.191264947Z" level=info msg="connecting to shim 5ebc36afea0735e6c35db5151331c51f582a3c74fc05df51c41dbd45a7e143a4" address="unix:///run/containerd/s/a85d985527082c89421ce058b64a40028cb5c1bca5e267835c1a48cb989ab5ad" namespace=k8s.io protocol=ttrpc version=3 May 16 05:29:09.214275 systemd[1]: Started cri-containerd-a28001e67c10da4af78ab10d75745b99748b11354c6f32e48fdf03eb029d4a70.scope - libcontainer container a28001e67c10da4af78ab10d75745b99748b11354c6f32e48fdf03eb029d4a70. May 16 05:29:09.218723 systemd[1]: Started cri-containerd-5ebc36afea0735e6c35db5151331c51f582a3c74fc05df51c41dbd45a7e143a4.scope - libcontainer container 5ebc36afea0735e6c35db5151331c51f582a3c74fc05df51c41dbd45a7e143a4. May 16 05:29:09.228071 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 05:29:09.231067 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 05:29:09.262192 containerd[1584]: time="2025-05-16T05:29:09.261675213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7cx6b,Uid:5632b72d-7968-44da-bf98-98ac2c4416a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ebc36afea0735e6c35db5151331c51f582a3c74fc05df51c41dbd45a7e143a4\"" May 16 05:29:09.264433 containerd[1584]: time="2025-05-16T05:29:09.264369471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-77d6n,Uid:773b8091-2959-4920-85f3-d1e364a9468c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a28001e67c10da4af78ab10d75745b99748b11354c6f32e48fdf03eb029d4a70\"" May 16 05:29:09.265124 kubelet[2711]: E0516 05:29:09.264911 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:09.266294 kubelet[2711]: E0516 05:29:09.266071 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:09.268307 containerd[1584]: time="2025-05-16T05:29:09.268278739Z" level=info msg="CreateContainer within sandbox \"a28001e67c10da4af78ab10d75745b99748b11354c6f32e48fdf03eb029d4a70\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 05:29:09.268639 containerd[1584]: time="2025-05-16T05:29:09.268590136Z" level=info msg="CreateContainer within sandbox \"5ebc36afea0735e6c35db5151331c51f582a3c74fc05df51c41dbd45a7e143a4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 05:29:09.282526 containerd[1584]: time="2025-05-16T05:29:09.281983921Z" level=info msg="Container 65d6ca1fa3f370fccb1740fae056204bc1bbe701567eb4e95133de9accdefce9: CDI devices from CRI Config.CDIDevices: []" May 16 05:29:09.290557 containerd[1584]: time="2025-05-16T05:29:09.290513086Z" level=info msg="CreateContainer within sandbox \"a28001e67c10da4af78ab10d75745b99748b11354c6f32e48fdf03eb029d4a70\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65d6ca1fa3f370fccb1740fae056204bc1bbe701567eb4e95133de9accdefce9\"" May 16 05:29:09.292312 containerd[1584]: time="2025-05-16T05:29:09.292280597Z" level=info msg="StartContainer for \"65d6ca1fa3f370fccb1740fae056204bc1bbe701567eb4e95133de9accdefce9\"" May 16 05:29:09.293187 containerd[1584]: time="2025-05-16T05:29:09.293161758Z" level=info msg="connecting to shim 65d6ca1fa3f370fccb1740fae056204bc1bbe701567eb4e95133de9accdefce9" address="unix:///run/containerd/s/f92cee26bd7775e729e23489e8ff6299166d4666d38331022b48e979ea0401f3" protocol=ttrpc version=3 May 16 05:29:09.323461 systemd[1]: Started cri-containerd-65d6ca1fa3f370fccb1740fae056204bc1bbe701567eb4e95133de9accdefce9.scope - libcontainer container 65d6ca1fa3f370fccb1740fae056204bc1bbe701567eb4e95133de9accdefce9. May 16 05:29:09.339504 containerd[1584]: time="2025-05-16T05:29:09.339441184Z" level=info msg="Container 5bd7f377011d652539fd3b366f2d37bb63a7343df774f020bb9e76e4f97d4435: CDI devices from CRI Config.CDIDevices: []" May 16 05:29:09.347695 containerd[1584]: time="2025-05-16T05:29:09.347538736Z" level=info msg="CreateContainer within sandbox \"5ebc36afea0735e6c35db5151331c51f582a3c74fc05df51c41dbd45a7e143a4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5bd7f377011d652539fd3b366f2d37bb63a7343df774f020bb9e76e4f97d4435\"" May 16 05:29:09.348812 containerd[1584]: time="2025-05-16T05:29:09.348783252Z" level=info msg="StartContainer for \"5bd7f377011d652539fd3b366f2d37bb63a7343df774f020bb9e76e4f97d4435\"" May 16 05:29:09.350008 containerd[1584]: time="2025-05-16T05:29:09.349988793Z" level=info msg="connecting to shim 5bd7f377011d652539fd3b366f2d37bb63a7343df774f020bb9e76e4f97d4435" address="unix:///run/containerd/s/a85d985527082c89421ce058b64a40028cb5c1bca5e267835c1a48cb989ab5ad" protocol=ttrpc version=3 May 16 05:29:09.365638 containerd[1584]: time="2025-05-16T05:29:09.365518714Z" level=info msg="StartContainer for \"65d6ca1fa3f370fccb1740fae056204bc1bbe701567eb4e95133de9accdefce9\" returns successfully" May 16 05:29:09.374282 systemd[1]: Started cri-containerd-5bd7f377011d652539fd3b366f2d37bb63a7343df774f020bb9e76e4f97d4435.scope - libcontainer container 5bd7f377011d652539fd3b366f2d37bb63a7343df774f020bb9e76e4f97d4435. May 16 05:29:09.409604 containerd[1584]: time="2025-05-16T05:29:09.409484519Z" level=info msg="StartContainer for \"5bd7f377011d652539fd3b366f2d37bb63a7343df774f020bb9e76e4f97d4435\" returns successfully" May 16 05:29:09.697067 systemd[1]: Started sshd@7-10.0.0.134:22-10.0.0.1:45048.service - OpenSSH per-connection server daemon (10.0.0.1:45048). May 16 05:29:09.749980 sshd[4042]: Accepted publickey for core from 10.0.0.1 port 45048 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:09.752037 sshd-session[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:09.756909 systemd-logind[1573]: New session 8 of user core. May 16 05:29:09.766286 systemd[1]: Started session-8.scope - Session 8 of User core. 
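The kubelet dns.go:153 "Nameserver limits exceeded" warnings that repeat throughout this log mean the node's /etc/resolv.conf lists more nameservers than can be propagated to pods: the resolver (and kubelet) honours at most three, and the "applied nameserver line" in the message shows the three that survived (1.1.1.1 1.0.0.1 8.8.8.8). A minimal sketch of that trimming, for reference only and not taken from the kubelet's own code:

```go
// Minimal sketch of the resolv.conf trimming the kubelet warning describes:
// keep at most three "nameserver" entries and report the ones dropped.
// Illustration only; not the kubelet's implementation.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // classic glibc resolver limit, also enforced by kubelet

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded; applying: %s (dropped: %s)\n",
			strings.Join(servers[:maxNameservers], " "),
			strings.Join(servers[maxNameservers:], " "))
	} else {
		fmt.Printf("applying: %s\n", strings.Join(servers, " "))
	}
}
```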
May 16 05:29:09.844670 kubelet[2711]: E0516 05:29:09.844594 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:09.848129 kubelet[2711]: E0516 05:29:09.848099 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:09.869534 kubelet[2711]: I0516 05:29:09.869259 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-77d6n" podStartSLOduration=20.869240624 podStartE2EDuration="20.869240624s" podCreationTimestamp="2025-05-16 05:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:29:09.867669603 +0000 UTC m=+27.196281901" watchObservedRunningTime="2025-05-16 05:29:09.869240624 +0000 UTC m=+27.197852922" May 16 05:29:09.869534 kubelet[2711]: I0516 05:29:09.869347 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7cx6b" podStartSLOduration=20.869342185 podStartE2EDuration="20.869342185s" podCreationTimestamp="2025-05-16 05:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:29:09.856295094 +0000 UTC m=+27.184907392" watchObservedRunningTime="2025-05-16 05:29:09.869342185 +0000 UTC m=+27.197954483" May 16 05:29:09.915735 sshd[4044]: Connection closed by 10.0.0.1 port 45048 May 16 05:29:09.916126 sshd-session[4042]: pam_unix(sshd:session): session closed for user core May 16 05:29:09.920875 systemd[1]: sshd@7-10.0.0.134:22-10.0.0.1:45048.service: Deactivated successfully. May 16 05:29:09.923186 systemd[1]: session-8.scope: Deactivated successfully. May 16 05:29:09.923990 systemd-logind[1573]: Session 8 logged out. Waiting for processes to exit. May 16 05:29:09.925753 systemd-logind[1573]: Removed session 8. May 16 05:29:10.182717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2608564535.mount: Deactivated successfully. May 16 05:29:10.849508 kubelet[2711]: E0516 05:29:10.849428 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:10.849508 kubelet[2711]: E0516 05:29:10.849494 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:11.851238 kubelet[2711]: E0516 05:29:11.851198 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:29:14.931778 systemd[1]: Started sshd@8-10.0.0.134:22-10.0.0.1:54722.service - OpenSSH per-connection server daemon (10.0.0.1:54722). May 16 05:29:14.986737 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 54722 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:14.988215 sshd-session[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:14.992488 systemd-logind[1573]: New session 9 of user core. May 16 05:29:15.004261 systemd[1]: Started session-9.scope - Session 9 of User core. 
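The pod_startup_latency_tracker entries encode a simple relation that the log's own timestamps confirm: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the time spent pulling images. For cilium-hqqcj earlier in the log, 14.845649404s minus the 8.000153481s between firstStartedPulling and lastFinishedPulling gives the reported 6.845495923s; for the two coredns pods here the pull timestamps are zero values, so SLO and E2E durations are both 20.869s. A small sketch of the arithmetic, using the timestamps printed above:

```go
// Recomputing the cilium-hqqcj startup figures from the log's timestamps:
//   E2E = watchObservedRunningTime - podCreationTimestamp
//   SLO = E2E - (lastFinishedPulling - firstStartedPulling)
// Illustration of the relation, not kubelet code.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-05-16 05:28:48 +0000 UTC")
	firstPull := mustParse("2025-05-16 05:28:49.571962681 +0000 UTC")
	lastPull := mustParse("2025-05-16 05:28:57.572116162 +0000 UTC")
	running := mustParse("2025-05-16 05:29:02.845649404 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("podStartE2EDuration:", e2e) // 14.845649404s
	fmt.Println("podStartSLOduration:", slo) // 6.845495923s
}
```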
May 16 05:29:15.114485 sshd[4074]: Connection closed by 10.0.0.1 port 54722 May 16 05:29:15.114817 sshd-session[4072]: pam_unix(sshd:session): session closed for user core May 16 05:29:15.119331 systemd[1]: sshd@8-10.0.0.134:22-10.0.0.1:54722.service: Deactivated successfully. May 16 05:29:15.121462 systemd[1]: session-9.scope: Deactivated successfully. May 16 05:29:15.122252 systemd-logind[1573]: Session 9 logged out. Waiting for processes to exit. May 16 05:29:15.123642 systemd-logind[1573]: Removed session 9. May 16 05:29:20.139626 systemd[1]: Started sshd@9-10.0.0.134:22-10.0.0.1:54724.service - OpenSSH per-connection server daemon (10.0.0.1:54724). May 16 05:29:20.198784 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 54724 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:20.201099 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:20.207075 systemd-logind[1573]: New session 10 of user core. May 16 05:29:20.221403 systemd[1]: Started session-10.scope - Session 10 of User core. May 16 05:29:20.339344 sshd[4092]: Connection closed by 10.0.0.1 port 54724 May 16 05:29:20.339634 sshd-session[4090]: pam_unix(sshd:session): session closed for user core May 16 05:29:20.343704 systemd[1]: sshd@9-10.0.0.134:22-10.0.0.1:54724.service: Deactivated successfully. May 16 05:29:20.345691 systemd[1]: session-10.scope: Deactivated successfully. May 16 05:29:20.346624 systemd-logind[1573]: Session 10 logged out. Waiting for processes to exit. May 16 05:29:20.347975 systemd-logind[1573]: Removed session 10. May 16 05:29:25.362206 systemd[1]: Started sshd@10-10.0.0.134:22-10.0.0.1:50096.service - OpenSSH per-connection server daemon (10.0.0.1:50096). May 16 05:29:25.422795 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 50096 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:25.424071 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:25.428332 systemd-logind[1573]: New session 11 of user core. May 16 05:29:25.438266 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 05:29:25.540804 sshd[4109]: Connection closed by 10.0.0.1 port 50096 May 16 05:29:25.541166 sshd-session[4107]: pam_unix(sshd:session): session closed for user core May 16 05:29:25.558727 systemd[1]: sshd@10-10.0.0.134:22-10.0.0.1:50096.service: Deactivated successfully. May 16 05:29:25.560766 systemd[1]: session-11.scope: Deactivated successfully. May 16 05:29:25.561542 systemd-logind[1573]: Session 11 logged out. Waiting for processes to exit. May 16 05:29:25.564593 systemd[1]: Started sshd@11-10.0.0.134:22-10.0.0.1:50110.service - OpenSSH per-connection server daemon (10.0.0.1:50110). May 16 05:29:25.565463 systemd-logind[1573]: Removed session 11. May 16 05:29:25.614130 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 50110 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:25.615795 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:25.620314 systemd-logind[1573]: New session 12 of user core. May 16 05:29:25.634272 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 16 05:29:25.774873 sshd[4125]: Connection closed by 10.0.0.1 port 50110 May 16 05:29:25.775269 sshd-session[4123]: pam_unix(sshd:session): session closed for user core May 16 05:29:25.785898 systemd[1]: sshd@11-10.0.0.134:22-10.0.0.1:50110.service: Deactivated successfully. May 16 05:29:25.790522 systemd[1]: session-12.scope: Deactivated successfully. May 16 05:29:25.796404 systemd-logind[1573]: Session 12 logged out. Waiting for processes to exit. May 16 05:29:25.803208 systemd[1]: Started sshd@12-10.0.0.134:22-10.0.0.1:50112.service - OpenSSH per-connection server daemon (10.0.0.1:50112). May 16 05:29:25.805810 systemd-logind[1573]: Removed session 12. May 16 05:29:25.858916 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 50112 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:25.860444 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:25.864767 systemd-logind[1573]: New session 13 of user core. May 16 05:29:25.874270 systemd[1]: Started session-13.scope - Session 13 of User core. May 16 05:29:25.979949 sshd[4139]: Connection closed by 10.0.0.1 port 50112 May 16 05:29:25.980267 sshd-session[4137]: pam_unix(sshd:session): session closed for user core May 16 05:29:25.984779 systemd[1]: sshd@12-10.0.0.134:22-10.0.0.1:50112.service: Deactivated successfully. May 16 05:29:25.986997 systemd[1]: session-13.scope: Deactivated successfully. May 16 05:29:25.987787 systemd-logind[1573]: Session 13 logged out. Waiting for processes to exit. May 16 05:29:25.989287 systemd-logind[1573]: Removed session 13. May 16 05:29:30.994903 systemd[1]: Started sshd@13-10.0.0.134:22-10.0.0.1:50114.service - OpenSSH per-connection server daemon (10.0.0.1:50114). May 16 05:29:31.053682 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 50114 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:31.055231 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:31.059315 systemd-logind[1573]: New session 14 of user core. May 16 05:29:31.070256 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 05:29:31.178356 sshd[4155]: Connection closed by 10.0.0.1 port 50114 May 16 05:29:31.178682 sshd-session[4153]: pam_unix(sshd:session): session closed for user core May 16 05:29:31.182848 systemd[1]: sshd@13-10.0.0.134:22-10.0.0.1:50114.service: Deactivated successfully. May 16 05:29:31.184745 systemd[1]: session-14.scope: Deactivated successfully. May 16 05:29:31.185707 systemd-logind[1573]: Session 14 logged out. Waiting for processes to exit. May 16 05:29:31.187037 systemd-logind[1573]: Removed session 14. May 16 05:29:36.191009 systemd[1]: Started sshd@14-10.0.0.134:22-10.0.0.1:36138.service - OpenSSH per-connection server daemon (10.0.0.1:36138). May 16 05:29:36.241478 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 36138 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:36.243212 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:36.247947 systemd-logind[1573]: New session 15 of user core. May 16 05:29:36.256367 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 16 05:29:36.364871 sshd[4171]: Connection closed by 10.0.0.1 port 36138 May 16 05:29:36.365249 sshd-session[4169]: pam_unix(sshd:session): session closed for user core May 16 05:29:36.376814 systemd[1]: sshd@14-10.0.0.134:22-10.0.0.1:36138.service: Deactivated successfully. May 16 05:29:36.378766 systemd[1]: session-15.scope: Deactivated successfully. May 16 05:29:36.379538 systemd-logind[1573]: Session 15 logged out. Waiting for processes to exit. May 16 05:29:36.383411 systemd[1]: Started sshd@15-10.0.0.134:22-10.0.0.1:36154.service - OpenSSH per-connection server daemon (10.0.0.1:36154). May 16 05:29:36.384008 systemd-logind[1573]: Removed session 15. May 16 05:29:36.434278 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 36154 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:36.435823 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:36.440331 systemd-logind[1573]: New session 16 of user core. May 16 05:29:36.447269 systemd[1]: Started session-16.scope - Session 16 of User core. May 16 05:29:36.622360 sshd[4186]: Connection closed by 10.0.0.1 port 36154 May 16 05:29:36.622754 sshd-session[4184]: pam_unix(sshd:session): session closed for user core May 16 05:29:36.635825 systemd[1]: sshd@15-10.0.0.134:22-10.0.0.1:36154.service: Deactivated successfully. May 16 05:29:36.637794 systemd[1]: session-16.scope: Deactivated successfully. May 16 05:29:36.638556 systemd-logind[1573]: Session 16 logged out. Waiting for processes to exit. May 16 05:29:36.641812 systemd[1]: Started sshd@16-10.0.0.134:22-10.0.0.1:36166.service - OpenSSH per-connection server daemon (10.0.0.1:36166). May 16 05:29:36.642490 systemd-logind[1573]: Removed session 16. May 16 05:29:36.700780 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 36166 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:36.702090 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:36.706981 systemd-logind[1573]: New session 17 of user core. May 16 05:29:36.714269 systemd[1]: Started session-17.scope - Session 17 of User core. May 16 05:29:37.438720 sshd[4200]: Connection closed by 10.0.0.1 port 36166 May 16 05:29:37.439122 sshd-session[4198]: pam_unix(sshd:session): session closed for user core May 16 05:29:37.453323 systemd[1]: sshd@16-10.0.0.134:22-10.0.0.1:36166.service: Deactivated successfully. May 16 05:29:37.455676 systemd[1]: session-17.scope: Deactivated successfully. May 16 05:29:37.456590 systemd-logind[1573]: Session 17 logged out. Waiting for processes to exit. May 16 05:29:37.460444 systemd[1]: Started sshd@17-10.0.0.134:22-10.0.0.1:36178.service - OpenSSH per-connection server daemon (10.0.0.1:36178). May 16 05:29:37.461165 systemd-logind[1573]: Removed session 17. May 16 05:29:37.507013 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 36178 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:37.508871 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:37.514107 systemd-logind[1573]: New session 18 of user core. May 16 05:29:37.525275 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 16 05:29:37.836996 sshd[4220]: Connection closed by 10.0.0.1 port 36178 May 16 05:29:37.837422 sshd-session[4218]: pam_unix(sshd:session): session closed for user core May 16 05:29:37.846040 systemd[1]: sshd@17-10.0.0.134:22-10.0.0.1:36178.service: Deactivated successfully. May 16 05:29:37.848060 systemd[1]: session-18.scope: Deactivated successfully. May 16 05:29:37.848792 systemd-logind[1573]: Session 18 logged out. Waiting for processes to exit. May 16 05:29:37.852342 systemd[1]: Started sshd@18-10.0.0.134:22-10.0.0.1:36188.service - OpenSSH per-connection server daemon (10.0.0.1:36188). May 16 05:29:37.853024 systemd-logind[1573]: Removed session 18. May 16 05:29:37.911960 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 36188 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:37.913541 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:37.918381 systemd-logind[1573]: New session 19 of user core. May 16 05:29:37.933268 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 05:29:38.139724 sshd[4233]: Connection closed by 10.0.0.1 port 36188 May 16 05:29:38.139979 sshd-session[4231]: pam_unix(sshd:session): session closed for user core May 16 05:29:38.143842 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:36188.service: Deactivated successfully. May 16 05:29:38.146125 systemd[1]: session-19.scope: Deactivated successfully. May 16 05:29:38.147888 systemd-logind[1573]: Session 19 logged out. Waiting for processes to exit. May 16 05:29:38.149677 systemd-logind[1573]: Removed session 19. May 16 05:29:43.152408 systemd[1]: Started sshd@19-10.0.0.134:22-10.0.0.1:36202.service - OpenSSH per-connection server daemon (10.0.0.1:36202). May 16 05:29:43.207573 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 36202 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:43.208969 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:43.213342 systemd-logind[1573]: New session 20 of user core. May 16 05:29:43.224281 systemd[1]: Started session-20.scope - Session 20 of User core. May 16 05:29:43.327658 sshd[4252]: Connection closed by 10.0.0.1 port 36202 May 16 05:29:43.327976 sshd-session[4250]: pam_unix(sshd:session): session closed for user core May 16 05:29:43.332759 systemd[1]: sshd@19-10.0.0.134:22-10.0.0.1:36202.service: Deactivated successfully. May 16 05:29:43.335024 systemd[1]: session-20.scope: Deactivated successfully. May 16 05:29:43.335903 systemd-logind[1573]: Session 20 logged out. Waiting for processes to exit. May 16 05:29:43.337454 systemd-logind[1573]: Removed session 20. May 16 05:29:48.340177 systemd[1]: Started sshd@20-10.0.0.134:22-10.0.0.1:47282.service - OpenSSH per-connection server daemon (10.0.0.1:47282). May 16 05:29:48.380914 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 47282 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:48.382480 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:48.387168 systemd-logind[1573]: New session 21 of user core. May 16 05:29:48.395280 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 16 05:29:48.499835 sshd[4268]: Connection closed by 10.0.0.1 port 47282 May 16 05:29:48.500174 sshd-session[4266]: pam_unix(sshd:session): session closed for user core May 16 05:29:48.504446 systemd[1]: sshd@20-10.0.0.134:22-10.0.0.1:47282.service: Deactivated successfully. May 16 05:29:48.506535 systemd[1]: session-21.scope: Deactivated successfully. May 16 05:29:48.507287 systemd-logind[1573]: Session 21 logged out. Waiting for processes to exit. May 16 05:29:48.508568 systemd-logind[1573]: Removed session 21. May 16 05:29:53.511626 systemd[1]: Started sshd@21-10.0.0.134:22-10.0.0.1:33822.service - OpenSSH per-connection server daemon (10.0.0.1:33822). May 16 05:29:53.566729 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 33822 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:53.568610 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:53.573278 systemd-logind[1573]: New session 22 of user core. May 16 05:29:53.581303 systemd[1]: Started session-22.scope - Session 22 of User core. May 16 05:29:53.685305 sshd[4285]: Connection closed by 10.0.0.1 port 33822 May 16 05:29:53.685600 sshd-session[4283]: pam_unix(sshd:session): session closed for user core May 16 05:29:53.689521 systemd[1]: sshd@21-10.0.0.134:22-10.0.0.1:33822.service: Deactivated successfully. May 16 05:29:53.691417 systemd[1]: session-22.scope: Deactivated successfully. May 16 05:29:53.692220 systemd-logind[1573]: Session 22 logged out. Waiting for processes to exit. May 16 05:29:53.693427 systemd-logind[1573]: Removed session 22. May 16 05:29:58.709622 systemd[1]: Started sshd@22-10.0.0.134:22-10.0.0.1:33830.service - OpenSSH per-connection server daemon (10.0.0.1:33830). May 16 05:29:58.757659 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 33830 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:58.759076 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:58.763285 systemd-logind[1573]: New session 23 of user core. May 16 05:29:58.772277 systemd[1]: Started session-23.scope - Session 23 of User core. May 16 05:29:58.880295 sshd[4300]: Connection closed by 10.0.0.1 port 33830 May 16 05:29:58.880770 sshd-session[4298]: pam_unix(sshd:session): session closed for user core May 16 05:29:58.894207 systemd[1]: sshd@22-10.0.0.134:22-10.0.0.1:33830.service: Deactivated successfully. May 16 05:29:58.896094 systemd[1]: session-23.scope: Deactivated successfully. May 16 05:29:58.896976 systemd-logind[1573]: Session 23 logged out. Waiting for processes to exit. May 16 05:29:58.899780 systemd[1]: Started sshd@23-10.0.0.134:22-10.0.0.1:33832.service - OpenSSH per-connection server daemon (10.0.0.1:33832). May 16 05:29:58.900591 systemd-logind[1573]: Removed session 23. May 16 05:29:58.952683 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 33832 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg May 16 05:29:58.954083 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:29:58.958370 systemd-logind[1573]: New session 24 of user core. May 16 05:29:58.969262 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 16 05:30:00.295767 containerd[1584]: time="2025-05-16T05:30:00.295713230Z" level=info msg="StopContainer for \"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\" with timeout 30 (s)" May 16 05:30:00.305008 containerd[1584]: time="2025-05-16T05:30:00.304964999Z" level=info msg="Stop container \"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\" with signal terminated" May 16 05:30:00.317457 systemd[1]: cri-containerd-79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9.scope: Deactivated successfully. May 16 05:30:00.319213 containerd[1584]: time="2025-05-16T05:30:00.318983746Z" level=info msg="received exit event container_id:\"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\" id:\"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\" pid:3299 exited_at:{seconds:1747373400 nanos:318445564}" May 16 05:30:00.319756 containerd[1584]: time="2025-05-16T05:30:00.319491680Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\" id:\"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\" pid:3299 exited_at:{seconds:1747373400 nanos:318445564}" May 16 05:30:00.332858 containerd[1584]: time="2025-05-16T05:30:00.332800266Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 05:30:00.333684 containerd[1584]: time="2025-05-16T05:30:00.333645006Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\" id:\"3bc5a786d00aea5e957485fba2a5a894696c29d79b8675584cc7de492d3a1f42\" pid:4342 exited_at:{seconds:1747373400 nanos:333452918}" May 16 05:30:00.335758 containerd[1584]: time="2025-05-16T05:30:00.335734943Z" level=info msg="StopContainer for \"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\" with timeout 2 (s)" May 16 05:30:00.336092 containerd[1584]: time="2025-05-16T05:30:00.336072490Z" level=info msg="Stop container \"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\" with signal terminated" May 16 05:30:00.339999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9-rootfs.mount: Deactivated successfully. May 16 05:30:00.343257 systemd-networkd[1488]: lxc_health: Link DOWN May 16 05:30:00.343264 systemd-networkd[1488]: lxc_health: Lost carrier May 16 05:30:00.352605 containerd[1584]: time="2025-05-16T05:30:00.352562015Z" level=info msg="StopContainer for \"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\" returns successfully" May 16 05:30:00.353360 containerd[1584]: time="2025-05-16T05:30:00.353313095Z" level=info msg="StopPodSandbox for \"7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d\"" May 16 05:30:00.358974 containerd[1584]: time="2025-05-16T05:30:00.358939743Z" level=info msg="Container to stop \"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 05:30:00.360764 systemd[1]: cri-containerd-4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924.scope: Deactivated successfully. 
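The StopContainer entries above show the standard two-phase shutdown: "with timeout 30 (s)" (the Kubernetes default grace period) or "with timeout 2 (s)" for the cilium-agent container, followed by "Stop container ... with signal terminated", meaning SIGTERM is sent first and SIGKILL only follows if the container outlives the timeout. A generic sketch of that terminate-then-kill pattern against a local child process, for illustration rather than as containerd's implementation:

```go
// Generic terminate-then-kill pattern, the same shape as the
// "Stop container ... with signal terminated" + timeout sequence above.
// Illustration against a local child process, not containerd's code.
package main

import (
	"log"
	"os/exec"
	"syscall"
	"time"
)

func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
	// Ask politely first.
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(timeout):
		// Grace period expired; escalate.
		_ = cmd.Process.Kill()
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	if err := stopWithTimeout(cmd, 2*time.Second); err != nil {
		log.Printf("process stopped: %v", err)
	}
}
```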
May 16 05:30:00.361181 systemd[1]: cri-containerd-4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924.scope: Consumed 6.343s CPU time, 124.6M memory peak, 240K read from disk, 13.3M written to disk. May 16 05:30:00.361580 containerd[1584]: time="2025-05-16T05:30:00.361429607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\" id:\"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\" pid:3372 exited_at:{seconds:1747373400 nanos:361226547}" May 16 05:30:00.361638 containerd[1584]: time="2025-05-16T05:30:00.361585114Z" level=info msg="received exit event container_id:\"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\" id:\"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\" pid:3372 exited_at:{seconds:1747373400 nanos:361226547}" May 16 05:30:00.366908 systemd[1]: cri-containerd-7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d.scope: Deactivated successfully. May 16 05:30:00.367778 containerd[1584]: time="2025-05-16T05:30:00.367753251Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d\" id:\"7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d\" pid:2990 exit_status:137 exited_at:{seconds:1747373400 nanos:367570952}" May 16 05:30:00.387069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924-rootfs.mount: Deactivated successfully. May 16 05:30:00.395431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d-rootfs.mount: Deactivated successfully. May 16 05:30:00.398794 containerd[1584]: time="2025-05-16T05:30:00.398332709Z" level=info msg="shim disconnected" id=7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d namespace=k8s.io May 16 05:30:00.398794 containerd[1584]: time="2025-05-16T05:30:00.398361706Z" level=warning msg="cleaning up after shim disconnected" id=7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d namespace=k8s.io May 16 05:30:00.416156 containerd[1584]: time="2025-05-16T05:30:00.398369521Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 05:30:00.416332 containerd[1584]: time="2025-05-16T05:30:00.398569203Z" level=info msg="StopContainer for \"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\" returns successfully" May 16 05:30:00.416931 containerd[1584]: time="2025-05-16T05:30:00.416708101Z" level=info msg="StopPodSandbox for \"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\"" May 16 05:30:00.416931 containerd[1584]: time="2025-05-16T05:30:00.416786581Z" level=info msg="Container to stop \"e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 05:30:00.416931 containerd[1584]: time="2025-05-16T05:30:00.416798133Z" level=info msg="Container to stop \"a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 05:30:00.416931 containerd[1584]: time="2025-05-16T05:30:00.416806520Z" level=info msg="Container to stop \"072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 05:30:00.416931 containerd[1584]: 
time="2025-05-16T05:30:00.416816308Z" level=info msg="Container to stop \"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 05:30:00.416931 containerd[1584]: time="2025-05-16T05:30:00.416825266Z" level=info msg="Container to stop \"9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 05:30:00.423571 systemd[1]: cri-containerd-e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236.scope: Deactivated successfully. May 16 05:30:00.444206 containerd[1584]: time="2025-05-16T05:30:00.444169021Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\" id:\"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\" pid:2862 exit_status:137 exited_at:{seconds:1747373400 nanos:424541189}" May 16 05:30:00.445568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236-rootfs.mount: Deactivated successfully. May 16 05:30:00.448063 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d-shm.mount: Deactivated successfully. May 16 05:30:00.449955 containerd[1584]: time="2025-05-16T05:30:00.449919927Z" level=info msg="shim disconnected" id=e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236 namespace=k8s.io May 16 05:30:00.449955 containerd[1584]: time="2025-05-16T05:30:00.449954834Z" level=warning msg="cleaning up after shim disconnected" id=e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236 namespace=k8s.io May 16 05:30:00.450087 containerd[1584]: time="2025-05-16T05:30:00.449963610Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 05:30:00.467457 containerd[1584]: time="2025-05-16T05:30:00.467391414Z" level=info msg="TearDown network for sandbox \"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\" successfully" May 16 05:30:00.467457 containerd[1584]: time="2025-05-16T05:30:00.467450108Z" level=info msg="StopPodSandbox for \"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\" returns successfully" May 16 05:30:00.467635 containerd[1584]: time="2025-05-16T05:30:00.467408307Z" level=info msg="TearDown network for sandbox \"7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d\" successfully" May 16 05:30:00.467635 containerd[1584]: time="2025-05-16T05:30:00.467555249Z" level=info msg="StopPodSandbox for \"7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d\" returns successfully" May 16 05:30:00.468074 containerd[1584]: time="2025-05-16T05:30:00.468044547Z" level=info msg="received exit event sandbox_id:\"7cad26c2e212e7883de0e2844825ae2e02974c25b6055933d63e0c5642baa16d\" exit_status:137 exited_at:{seconds:1747373400 nanos:367570952}" May 16 05:30:00.468189 containerd[1584]: time="2025-05-16T05:30:00.468165929Z" level=info msg="received exit event sandbox_id:\"e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236\" exit_status:137 exited_at:{seconds:1747373400 nanos:424541189}" May 16 05:30:00.506086 kubelet[2711]: I0516 05:30:00.506039 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v676j\" (UniqueName: \"kubernetes.io/projected/8520c209-0a41-4078-8256-47e643d3f48e-kube-api-access-v676j\") pod 
\"8520c209-0a41-4078-8256-47e643d3f48e\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " May 16 05:30:00.506086 kubelet[2711]: I0516 05:30:00.506076 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-etc-cni-netd\") pod \"8520c209-0a41-4078-8256-47e643d3f48e\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " May 16 05:30:00.506086 kubelet[2711]: I0516 05:30:00.506091 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-cilium-run\") pod \"8520c209-0a41-4078-8256-47e643d3f48e\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " May 16 05:30:00.506086 kubelet[2711]: I0516 05:30:00.506105 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-host-proc-sys-kernel\") pod \"8520c209-0a41-4078-8256-47e643d3f48e\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " May 16 05:30:00.507081 kubelet[2711]: I0516 05:30:00.506121 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8520c209-0a41-4078-8256-47e643d3f48e-clustermesh-secrets\") pod \"8520c209-0a41-4078-8256-47e643d3f48e\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " May 16 05:30:00.507081 kubelet[2711]: I0516 05:30:00.506151 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-lib-modules\") pod \"8520c209-0a41-4078-8256-47e643d3f48e\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " May 16 05:30:00.507081 kubelet[2711]: I0516 05:30:00.506168 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8520c209-0a41-4078-8256-47e643d3f48e-cilium-config-path\") pod \"8520c209-0a41-4078-8256-47e643d3f48e\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " May 16 05:30:00.507081 kubelet[2711]: I0516 05:30:00.506183 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-host-proc-sys-net\") pod \"8520c209-0a41-4078-8256-47e643d3f48e\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " May 16 05:30:00.507081 kubelet[2711]: I0516 05:30:00.506190 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8520c209-0a41-4078-8256-47e643d3f48e" (UID: "8520c209-0a41-4078-8256-47e643d3f48e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:30:00.507081 kubelet[2711]: I0516 05:30:00.506199 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsrr6\" (UniqueName: \"kubernetes.io/projected/9db568bf-2ba8-4d14-87b6-1e4c3322b82c-kube-api-access-zsrr6\") pod \"9db568bf-2ba8-4d14-87b6-1e4c3322b82c\" (UID: \"9db568bf-2ba8-4d14-87b6-1e4c3322b82c\") " May 16 05:30:00.507249 kubelet[2711]: I0516 05:30:00.506246 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9db568bf-2ba8-4d14-87b6-1e4c3322b82c-cilium-config-path\") pod \"9db568bf-2ba8-4d14-87b6-1e4c3322b82c\" (UID: \"9db568bf-2ba8-4d14-87b6-1e4c3322b82c\") " May 16 05:30:00.507249 kubelet[2711]: I0516 05:30:00.506263 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-xtables-lock\") pod \"8520c209-0a41-4078-8256-47e643d3f48e\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " May 16 05:30:00.507249 kubelet[2711]: I0516 05:30:00.506280 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-cni-path\") pod \"8520c209-0a41-4078-8256-47e643d3f48e\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " May 16 05:30:00.507249 kubelet[2711]: I0516 05:30:00.506295 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8520c209-0a41-4078-8256-47e643d3f48e-hubble-tls\") pod \"8520c209-0a41-4078-8256-47e643d3f48e\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " May 16 05:30:00.507249 kubelet[2711]: I0516 05:30:00.506326 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-bpf-maps\") pod \"8520c209-0a41-4078-8256-47e643d3f48e\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " May 16 05:30:00.507249 kubelet[2711]: I0516 05:30:00.506340 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-hostproc\") pod \"8520c209-0a41-4078-8256-47e643d3f48e\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " May 16 05:30:00.507387 kubelet[2711]: I0516 05:30:00.506353 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-cilium-cgroup\") pod \"8520c209-0a41-4078-8256-47e643d3f48e\" (UID: \"8520c209-0a41-4078-8256-47e643d3f48e\") " May 16 05:30:00.507387 kubelet[2711]: I0516 05:30:00.506387 2711 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 05:30:00.507387 kubelet[2711]: I0516 05:30:00.506405 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8520c209-0a41-4078-8256-47e643d3f48e" (UID: "8520c209-0a41-4078-8256-47e643d3f48e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:30:00.507387 kubelet[2711]: I0516 05:30:00.507081 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8520c209-0a41-4078-8256-47e643d3f48e" (UID: "8520c209-0a41-4078-8256-47e643d3f48e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:30:00.507567 kubelet[2711]: I0516 05:30:00.507529 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8520c209-0a41-4078-8256-47e643d3f48e" (UID: "8520c209-0a41-4078-8256-47e643d3f48e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:30:00.509425 kubelet[2711]: I0516 05:30:00.509394 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9db568bf-2ba8-4d14-87b6-1e4c3322b82c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9db568bf-2ba8-4d14-87b6-1e4c3322b82c" (UID: "9db568bf-2ba8-4d14-87b6-1e4c3322b82c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 05:30:00.509838 kubelet[2711]: I0516 05:30:00.509526 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8520c209-0a41-4078-8256-47e643d3f48e" (UID: "8520c209-0a41-4078-8256-47e643d3f48e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:30:00.509838 kubelet[2711]: I0516 05:30:00.509503 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8520c209-0a41-4078-8256-47e643d3f48e" (UID: "8520c209-0a41-4078-8256-47e643d3f48e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:30:00.509838 kubelet[2711]: I0516 05:30:00.509543 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-cni-path" (OuterVolumeSpecName: "cni-path") pod "8520c209-0a41-4078-8256-47e643d3f48e" (UID: "8520c209-0a41-4078-8256-47e643d3f48e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:30:00.509838 kubelet[2711]: I0516 05:30:00.509618 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8520c209-0a41-4078-8256-47e643d3f48e" (UID: "8520c209-0a41-4078-8256-47e643d3f48e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:30:00.509838 kubelet[2711]: I0516 05:30:00.509636 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-hostproc" (OuterVolumeSpecName: "hostproc") pod "8520c209-0a41-4078-8256-47e643d3f48e" (UID: "8520c209-0a41-4078-8256-47e643d3f48e"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:30:00.509986 kubelet[2711]: I0516 05:30:00.509938 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8520c209-0a41-4078-8256-47e643d3f48e" (UID: "8520c209-0a41-4078-8256-47e643d3f48e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:30:00.510410 kubelet[2711]: I0516 05:30:00.510364 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8520c209-0a41-4078-8256-47e643d3f48e-kube-api-access-v676j" (OuterVolumeSpecName: "kube-api-access-v676j") pod "8520c209-0a41-4078-8256-47e643d3f48e" (UID: "8520c209-0a41-4078-8256-47e643d3f48e"). InnerVolumeSpecName "kube-api-access-v676j". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 05:30:00.511168 kubelet[2711]: I0516 05:30:00.511107 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8520c209-0a41-4078-8256-47e643d3f48e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8520c209-0a41-4078-8256-47e643d3f48e" (UID: "8520c209-0a41-4078-8256-47e643d3f48e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 05:30:00.511331 kubelet[2711]: I0516 05:30:00.511289 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8520c209-0a41-4078-8256-47e643d3f48e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8520c209-0a41-4078-8256-47e643d3f48e" (UID: "8520c209-0a41-4078-8256-47e643d3f48e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 05:30:00.511960 kubelet[2711]: I0516 05:30:00.511937 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9db568bf-2ba8-4d14-87b6-1e4c3322b82c-kube-api-access-zsrr6" (OuterVolumeSpecName: "kube-api-access-zsrr6") pod "9db568bf-2ba8-4d14-87b6-1e4c3322b82c" (UID: "9db568bf-2ba8-4d14-87b6-1e4c3322b82c"). InnerVolumeSpecName "kube-api-access-zsrr6". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 05:30:00.512904 kubelet[2711]: I0516 05:30:00.512864 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8520c209-0a41-4078-8256-47e643d3f48e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8520c209-0a41-4078-8256-47e643d3f48e" (UID: "8520c209-0a41-4078-8256-47e643d3f48e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 05:30:00.607262 kubelet[2711]: I0516 05:30:00.607229 2711 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8520c209-0a41-4078-8256-47e643d3f48e-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 05:30:00.607262 kubelet[2711]: I0516 05:30:00.607254 2711 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 05:30:00.607262 kubelet[2711]: I0516 05:30:00.607262 2711 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 05:30:00.607386 kubelet[2711]: I0516 05:30:00.607270 2711 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 16 05:30:00.607386 kubelet[2711]: I0516 05:30:00.607279 2711 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v676j\" (UniqueName: \"kubernetes.io/projected/8520c209-0a41-4078-8256-47e643d3f48e-kube-api-access-v676j\") on node \"localhost\" DevicePath \"\"" May 16 05:30:00.607386 kubelet[2711]: I0516 05:30:00.607288 2711 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 05:30:00.607386 kubelet[2711]: I0516 05:30:00.607295 2711 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 05:30:00.607386 kubelet[2711]: I0516 05:30:00.607303 2711 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8520c209-0a41-4078-8256-47e643d3f48e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 05:30:00.607386 kubelet[2711]: I0516 05:30:00.607312 2711 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 05:30:00.607386 kubelet[2711]: I0516 05:30:00.607320 2711 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 05:30:00.607386 kubelet[2711]: I0516 05:30:00.607327 2711 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8520c209-0a41-4078-8256-47e643d3f48e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 05:30:00.607563 kubelet[2711]: I0516 05:30:00.607335 2711 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9db568bf-2ba8-4d14-87b6-1e4c3322b82c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 05:30:00.607563 kubelet[2711]: I0516 05:30:00.607342 2711 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsrr6\" (UniqueName: 
\"kubernetes.io/projected/9db568bf-2ba8-4d14-87b6-1e4c3322b82c-kube-api-access-zsrr6\") on node \"localhost\" DevicePath \"\"" May 16 05:30:00.607563 kubelet[2711]: I0516 05:30:00.607350 2711 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 05:30:00.607563 kubelet[2711]: I0516 05:30:00.607357 2711 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8520c209-0a41-4078-8256-47e643d3f48e-cni-path\") on node \"localhost\" DevicePath \"\"" May 16 05:30:00.761879 systemd[1]: Removed slice kubepods-burstable-pod8520c209_0a41_4078_8256_47e643d3f48e.slice - libcontainer container kubepods-burstable-pod8520c209_0a41_4078_8256_47e643d3f48e.slice. May 16 05:30:00.762001 systemd[1]: kubepods-burstable-pod8520c209_0a41_4078_8256_47e643d3f48e.slice: Consumed 6.451s CPU time, 124.9M memory peak, 252K read from disk, 13.3M written to disk. May 16 05:30:00.763127 systemd[1]: Removed slice kubepods-besteffort-pod9db568bf_2ba8_4d14_87b6_1e4c3322b82c.slice - libcontainer container kubepods-besteffort-pod9db568bf_2ba8_4d14_87b6_1e4c3322b82c.slice. May 16 05:30:00.945422 kubelet[2711]: I0516 05:30:00.944792 2711 scope.go:117] "RemoveContainer" containerID="4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924" May 16 05:30:00.947335 containerd[1584]: time="2025-05-16T05:30:00.947274203Z" level=info msg="RemoveContainer for \"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\"" May 16 05:30:00.955729 containerd[1584]: time="2025-05-16T05:30:00.955684668Z" level=info msg="RemoveContainer for \"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\" returns successfully" May 16 05:30:00.958892 kubelet[2711]: I0516 05:30:00.958860 2711 scope.go:117] "RemoveContainer" containerID="072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447" May 16 05:30:00.960241 containerd[1584]: time="2025-05-16T05:30:00.960207840Z" level=info msg="RemoveContainer for \"072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447\"" May 16 05:30:00.965326 containerd[1584]: time="2025-05-16T05:30:00.965300874Z" level=info msg="RemoveContainer for \"072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447\" returns successfully" May 16 05:30:00.965717 kubelet[2711]: I0516 05:30:00.965567 2711 scope.go:117] "RemoveContainer" containerID="9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1" May 16 05:30:00.968108 containerd[1584]: time="2025-05-16T05:30:00.968072529Z" level=info msg="RemoveContainer for \"9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1\"" May 16 05:30:00.973753 containerd[1584]: time="2025-05-16T05:30:00.973701180Z" level=info msg="RemoveContainer for \"9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1\" returns successfully" May 16 05:30:00.974068 kubelet[2711]: I0516 05:30:00.973980 2711 scope.go:117] "RemoveContainer" containerID="a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85" May 16 05:30:00.977273 containerd[1584]: time="2025-05-16T05:30:00.977232080Z" level=info msg="RemoveContainer for \"a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85\"" May 16 05:30:00.984209 containerd[1584]: time="2025-05-16T05:30:00.983551766Z" level=info msg="RemoveContainer for \"a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85\" returns successfully" May 16 05:30:00.985451 
kubelet[2711]: I0516 05:30:00.985349 2711 scope.go:117] "RemoveContainer" containerID="e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d"
May 16 05:30:00.989317 containerd[1584]: time="2025-05-16T05:30:00.989283867Z" level=info msg="RemoveContainer for \"e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d\""
May 16 05:30:00.995737 containerd[1584]: time="2025-05-16T05:30:00.995704587Z" level=info msg="RemoveContainer for \"e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d\" returns successfully"
May 16 05:30:00.995950 kubelet[2711]: I0516 05:30:00.995921 2711 scope.go:117] "RemoveContainer" containerID="4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924"
May 16 05:30:00.996190 containerd[1584]: time="2025-05-16T05:30:00.996114493Z" level=error msg="ContainerStatus for \"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\": not found"
May 16 05:30:01.002282 kubelet[2711]: E0516 05:30:01.002224 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\": not found" containerID="4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924"
May 16 05:30:01.002474 kubelet[2711]: I0516 05:30:01.002261 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924"} err="failed to get container status \"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ac0e49ecf6ffd5dd54de770a2372a04b56111d40183e130b82c3929c82dd924\": not found"
May 16 05:30:01.002474 kubelet[2711]: I0516 05:30:01.002334 2711 scope.go:117] "RemoveContainer" containerID="072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447"
May 16 05:30:01.003432 containerd[1584]: time="2025-05-16T05:30:01.003373688Z" level=error msg="ContainerStatus for \"072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447\": not found"
May 16 05:30:01.003691 kubelet[2711]: E0516 05:30:01.003659 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447\": not found" containerID="072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447"
May 16 05:30:01.003725 kubelet[2711]: I0516 05:30:01.003700 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447"} err="failed to get container status \"072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447\": rpc error: code = NotFound desc = an error occurred when try to find container \"072079c1f6a533adeef45d472a357cb6adc3d2b8d26c293d29584e7928d32447\": not found"
May 16 05:30:01.003725 kubelet[2711]: I0516 05:30:01.003728 2711 scope.go:117] "RemoveContainer" containerID="9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1"
May 16 05:30:01.005000 containerd[1584]: time="2025-05-16T05:30:01.004901155Z" level=error msg="ContainerStatus for \"9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1\": not found"
May 16 05:30:01.005249 kubelet[2711]: E0516 05:30:01.005197 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1\": not found" containerID="9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1"
May 16 05:30:01.005436 kubelet[2711]: I0516 05:30:01.005254 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1"} err="failed to get container status \"9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"9855e685828f2591f96ba19657ac365cfcf888fa5206e2941db1c2d66d63e0f1\": not found"
May 16 05:30:01.005436 kubelet[2711]: I0516 05:30:01.005281 2711 scope.go:117] "RemoveContainer" containerID="a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85"
May 16 05:30:01.005528 containerd[1584]: time="2025-05-16T05:30:01.005482429Z" level=error msg="ContainerStatus for \"a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85\": not found"
May 16 05:30:01.005647 kubelet[2711]: E0516 05:30:01.005629 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85\": not found" containerID="a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85"
May 16 05:30:01.005726 kubelet[2711]: I0516 05:30:01.005708 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85"} err="failed to get container status \"a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85\": rpc error: code = NotFound desc = an error occurred when try to find container \"a448ceb364aec2b858d2ee26f5e447486c0266da66bf6fc0c8eed60a452f9f85\": not found"
May 16 05:30:01.005813 kubelet[2711]: I0516 05:30:01.005776 2711 scope.go:117] "RemoveContainer" containerID="e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d"
May 16 05:30:01.006060 containerd[1584]: time="2025-05-16T05:30:01.005964513Z" level=error msg="ContainerStatus for \"e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d\": not found"
May 16 05:30:01.006243 kubelet[2711]: E0516 05:30:01.006220 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d\": not found" containerID="e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d"
May 16 05:30:01.006286 kubelet[2711]: I0516 05:30:01.006253 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d"} err="failed to get container status \"e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d\": rpc error: code = NotFound desc = an error occurred when try to find container \"e57e04a51b16d70238dc9c5f036ae78d2c2686a8eb320cca7d570bdc34c2a25d\": not found"
May 16 05:30:01.006286 kubelet[2711]: I0516 05:30:01.006269 2711 scope.go:117] "RemoveContainer" containerID="79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9"
May 16 05:30:01.008376 containerd[1584]: time="2025-05-16T05:30:01.007834035Z" level=info msg="RemoveContainer for \"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\""
May 16 05:30:01.011599 containerd[1584]: time="2025-05-16T05:30:01.011548542Z" level=info msg="RemoveContainer for \"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\" returns successfully"
May 16 05:30:01.011780 kubelet[2711]: I0516 05:30:01.011745 2711 scope.go:117] "RemoveContainer" containerID="79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9"
May 16 05:30:01.012020 containerd[1584]: time="2025-05-16T05:30:01.011965792Z" level=error msg="ContainerStatus for \"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\": not found"
May 16 05:30:01.012171 kubelet[2711]: E0516 05:30:01.012117 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\": not found" containerID="79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9"
May 16 05:30:01.012213 kubelet[2711]: I0516 05:30:01.012182 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9"} err="failed to get container status \"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\": rpc error: code = NotFound desc = an error occurred when try to find container \"79c0bcd55277ad17c675fd89532303e2444dfb7e8465ec910c243098b6e06fd9\": not found"
May 16 05:30:01.340005 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e297786381cafcd849d6b47ea8c1e0531190fe879187a146c594f8d16c662236-shm.mount: Deactivated successfully.
May 16 05:30:01.340132 systemd[1]: var-lib-kubelet-pods-9db568bf\x2d2ba8\x2d4d14\x2d87b6\x2d1e4c3322b82c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzsrr6.mount: Deactivated successfully.
May 16 05:30:01.340227 systemd[1]: var-lib-kubelet-pods-8520c209\x2d0a41\x2d4078\x2d8256\x2d47e643d3f48e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv676j.mount: Deactivated successfully.
May 16 05:30:01.340302 systemd[1]: var-lib-kubelet-pods-8520c209\x2d0a41\x2d4078\x2d8256\x2d47e643d3f48e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 16 05:30:01.340379 systemd[1]: var-lib-kubelet-pods-8520c209\x2d0a41\x2d4078\x2d8256\x2d47e643d3f48e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 16 05:30:02.261096 sshd[4315]: Connection closed by 10.0.0.1 port 33832
May 16 05:30:02.261577 sshd-session[4313]: pam_unix(sshd:session): session closed for user core
May 16 05:30:02.278707 systemd[1]: sshd@23-10.0.0.134:22-10.0.0.1:33832.service: Deactivated successfully.
May 16 05:30:02.280454 systemd[1]: session-24.scope: Deactivated successfully.
May 16 05:30:02.281204 systemd-logind[1573]: Session 24 logged out. Waiting for processes to exit.
May 16 05:30:02.283944 systemd[1]: Started sshd@24-10.0.0.134:22-10.0.0.1:33840.service - OpenSSH per-connection server daemon (10.0.0.1:33840).
May 16 05:30:02.284819 systemd-logind[1573]: Removed session 24.
May 16 05:30:02.341475 sshd[4463]: Accepted publickey for core from 10.0.0.1 port 33840 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg
May 16 05:30:02.342821 sshd-session[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:30:02.346998 systemd-logind[1573]: New session 25 of user core.
May 16 05:30:02.356266 systemd[1]: Started session-25.scope - Session 25 of User core.
May 16 05:30:02.755647 kubelet[2711]: I0516 05:30:02.755603 2711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8520c209-0a41-4078-8256-47e643d3f48e" path="/var/lib/kubelet/pods/8520c209-0a41-4078-8256-47e643d3f48e/volumes"
May 16 05:30:02.756523 kubelet[2711]: I0516 05:30:02.756501 2711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9db568bf-2ba8-4d14-87b6-1e4c3322b82c" path="/var/lib/kubelet/pods/9db568bf-2ba8-4d14-87b6-1e4c3322b82c/volumes"
May 16 05:30:02.775985 sshd[4465]: Connection closed by 10.0.0.1 port 33840
May 16 05:30:02.777123 sshd-session[4463]: pam_unix(sshd:session): session closed for user core
May 16 05:30:02.789641 systemd[1]: sshd@24-10.0.0.134:22-10.0.0.1:33840.service: Deactivated successfully.
May 16 05:30:02.792441 systemd[1]: session-25.scope: Deactivated successfully.
May 16 05:30:02.795610 systemd-logind[1573]: Session 25 logged out. Waiting for processes to exit.
May 16 05:30:02.797973 kubelet[2711]: E0516 05:30:02.797916 2711 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 05:30:02.798637 kubelet[2711]: I0516 05:30:02.798602 2711 memory_manager.go:355] "RemoveStaleState removing state" podUID="8520c209-0a41-4078-8256-47e643d3f48e" containerName="cilium-agent"
May 16 05:30:02.798771 kubelet[2711]: I0516 05:30:02.798715 2711 memory_manager.go:355] "RemoveStaleState removing state" podUID="9db568bf-2ba8-4d14-87b6-1e4c3322b82c" containerName="cilium-operator"
May 16 05:30:02.802458 systemd[1]: Started sshd@25-10.0.0.134:22-10.0.0.1:33844.service - OpenSSH per-connection server daemon (10.0.0.1:33844).
May 16 05:30:02.806188 systemd-logind[1573]: Removed session 25.
May 16 05:30:02.818655 kubelet[2711]: I0516 05:30:02.818552 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/50c5f49e-5cdd-40f2-af34-1ad4335e3ed2-cilium-ipsec-secrets\") pod \"cilium-rpx7t\" (UID: \"50c5f49e-5cdd-40f2-af34-1ad4335e3ed2\") " pod="kube-system/cilium-rpx7t"
May 16 05:30:02.819033 kubelet[2711]: I0516 05:30:02.818977 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50c5f49e-5cdd-40f2-af34-1ad4335e3ed2-cni-path\") pod \"cilium-rpx7t\" (UID: \"50c5f49e-5cdd-40f2-af34-1ad4335e3ed2\") " pod="kube-system/cilium-rpx7t"
May 16 05:30:02.820170 systemd[1]: Created slice kubepods-burstable-pod50c5f49e_5cdd_40f2_af34_1ad4335e3ed2.slice - libcontainer container kubepods-burstable-pod50c5f49e_5cdd_40f2_af34_1ad4335e3ed2.slice.
May 16 05:30:02.821369 kubelet[2711]: I0516 05:30:02.819177 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50c5f49e-5cdd-40f2-af34-1ad4335e3ed2-lib-modules\") pod \"cilium-rpx7t\" (UID: \"50c5f49e-5cdd-40f2-af34-1ad4335e3ed2\") " pod="kube-system/cilium-rpx7t"
May 16 05:30:02.822023 kubelet[2711]: I0516 05:30:02.821657 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50c5f49e-5cdd-40f2-af34-1ad4335e3ed2-hubble-tls\") pod \"cilium-rpx7t\" (UID: \"50c5f49e-5cdd-40f2-af34-1ad4335e3ed2\") " pod="kube-system/cilium-rpx7t"
May 16 05:30:02.822023 kubelet[2711]: I0516 05:30:02.821732 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqjb2\" (UniqueName: \"kubernetes.io/projected/50c5f49e-5cdd-40f2-af34-1ad4335e3ed2-kube-api-access-xqjb2\") pod \"cilium-rpx7t\" (UID: \"50c5f49e-5cdd-40f2-af34-1ad4335e3ed2\") " pod="kube-system/cilium-rpx7t"
May 16 05:30:02.822023 kubelet[2711]: I0516 05:30:02.821754 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50c5f49e-5cdd-40f2-af34-1ad4335e3ed2-cilium-config-path\") pod \"cilium-rpx7t\" (UID: \"50c5f49e-5cdd-40f2-af34-1ad4335e3ed2\") " pod="kube-system/cilium-rpx7t"
May 16 05:30:02.822023 kubelet[2711]: I0516 05:30:02.821829 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50c5f49e-5cdd-40f2-af34-1ad4335e3ed2-xtables-lock\") pod \"cilium-rpx7t\" (UID: \"50c5f49e-5cdd-40f2-af34-1ad4335e3ed2\") " pod="kube-system/cilium-rpx7t"
May 16 05:30:02.822023 kubelet[2711]: I0516 05:30:02.821894 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50c5f49e-5cdd-40f2-af34-1ad4335e3ed2-clustermesh-secrets\") pod \"cilium-rpx7t\" (UID: \"50c5f49e-5cdd-40f2-af34-1ad4335e3ed2\") " pod="kube-system/cilium-rpx7t"
May 16 05:30:02.822190 kubelet[2711]: I0516 05:30:02.821909 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50c5f49e-5cdd-40f2-af34-1ad4335e3ed2-host-proc-sys-kernel\") pod \"cilium-rpx7t\" (UID: \"50c5f49e-5cdd-40f2-af34-1ad4335e3ed2\") " pod="kube-system/cilium-rpx7t"
May 16 05:30:02.822190 kubelet[2711]: I0516 05:30:02.821923 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50c5f49e-5cdd-40f2-af34-1ad4335e3ed2-hostproc\") pod \"cilium-rpx7t\" (UID: \"50c5f49e-5cdd-40f2-af34-1ad4335e3ed2\") " pod="kube-system/cilium-rpx7t"
May 16 05:30:02.822190 kubelet[2711]: I0516 05:30:02.821975 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50c5f49e-5cdd-40f2-af34-1ad4335e3ed2-cilium-cgroup\") pod \"cilium-rpx7t\" (UID: \"50c5f49e-5cdd-40f2-af34-1ad4335e3ed2\") " pod="kube-system/cilium-rpx7t"
May 16 05:30:02.822190 kubelet[2711]: I0516 05:30:02.821987 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50c5f49e-5cdd-40f2-af34-1ad4335e3ed2-host-proc-sys-net\") pod \"cilium-rpx7t\" (UID: \"50c5f49e-5cdd-40f2-af34-1ad4335e3ed2\") " pod="kube-system/cilium-rpx7t"
May 16 05:30:02.822383 kubelet[2711]: I0516 05:30:02.822003 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50c5f49e-5cdd-40f2-af34-1ad4335e3ed2-cilium-run\") pod \"cilium-rpx7t\" (UID: \"50c5f49e-5cdd-40f2-af34-1ad4335e3ed2\") " pod="kube-system/cilium-rpx7t"
May 16 05:30:02.822383 kubelet[2711]: I0516 05:30:02.822350 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50c5f49e-5cdd-40f2-af34-1ad4335e3ed2-etc-cni-netd\") pod \"cilium-rpx7t\" (UID: \"50c5f49e-5cdd-40f2-af34-1ad4335e3ed2\") " pod="kube-system/cilium-rpx7t"
May 16 05:30:02.822991 kubelet[2711]: I0516 05:30:02.822954 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50c5f49e-5cdd-40f2-af34-1ad4335e3ed2-bpf-maps\") pod \"cilium-rpx7t\" (UID: \"50c5f49e-5cdd-40f2-af34-1ad4335e3ed2\") " pod="kube-system/cilium-rpx7t"
May 16 05:30:02.860013 sshd[4477]: Accepted publickey for core from 10.0.0.1 port 33844 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg
May 16 05:30:02.861813 sshd-session[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:30:02.867249 systemd-logind[1573]: New session 26 of user core.
May 16 05:30:02.873282 systemd[1]: Started session-26.scope - Session 26 of User core.
May 16 05:30:02.927247 sshd[4479]: Connection closed by 10.0.0.1 port 33844
May 16 05:30:02.928544 sshd-session[4477]: pam_unix(sshd:session): session closed for user core
May 16 05:30:02.935721 systemd[1]: sshd@25-10.0.0.134:22-10.0.0.1:33844.service: Deactivated successfully.
May 16 05:30:02.937818 systemd[1]: session-26.scope: Deactivated successfully.
May 16 05:30:02.955380 systemd-logind[1573]: Session 26 logged out. Waiting for processes to exit.
May 16 05:30:02.958847 systemd[1]: Started sshd@26-10.0.0.134:22-10.0.0.1:33850.service - OpenSSH per-connection server daemon (10.0.0.1:33850).
May 16 05:30:02.959589 systemd-logind[1573]: Removed session 26.
May 16 05:30:03.007544 sshd[4489]: Accepted publickey for core from 10.0.0.1 port 33850 ssh2: RSA SHA256:iuInQ8i/7DutBmZnzLCWq9YRq8P/GlHPlsag3/cPgmg
May 16 05:30:03.009057 sshd-session[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:30:03.015238 systemd-logind[1573]: New session 27 of user core.
May 16 05:30:03.019263 systemd[1]: Started session-27.scope - Session 27 of User core.
May 16 05:30:03.123931 kubelet[2711]: E0516 05:30:03.123882 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:03.124694 containerd[1584]: time="2025-05-16T05:30:03.124629141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rpx7t,Uid:50c5f49e-5cdd-40f2-af34-1ad4335e3ed2,Namespace:kube-system,Attempt:0,}"
May 16 05:30:03.316271 containerd[1584]: time="2025-05-16T05:30:03.316218697Z" level=info msg="connecting to shim 65dff619797081d53f48ac0dee312383ffb27d7c1fb36bf0d24e9507d01093fb" address="unix:///run/containerd/s/04911098d2a2f73fb7cdb37cdafc372392ac01b018d6605c8ec251a65dd33001" namespace=k8s.io protocol=ttrpc version=3
May 16 05:30:03.348293 systemd[1]: Started cri-containerd-65dff619797081d53f48ac0dee312383ffb27d7c1fb36bf0d24e9507d01093fb.scope - libcontainer container 65dff619797081d53f48ac0dee312383ffb27d7c1fb36bf0d24e9507d01093fb.
May 16 05:30:03.420688 containerd[1584]: time="2025-05-16T05:30:03.420635537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rpx7t,Uid:50c5f49e-5cdd-40f2-af34-1ad4335e3ed2,Namespace:kube-system,Attempt:0,} returns sandbox id \"65dff619797081d53f48ac0dee312383ffb27d7c1fb36bf0d24e9507d01093fb\""
May 16 05:30:03.421471 kubelet[2711]: E0516 05:30:03.421443 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:03.423070 containerd[1584]: time="2025-05-16T05:30:03.423039539Z" level=info msg="CreateContainer within sandbox \"65dff619797081d53f48ac0dee312383ffb27d7c1fb36bf0d24e9507d01093fb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 16 05:30:03.553437 containerd[1584]: time="2025-05-16T05:30:03.553391211Z" level=info msg="Container 294aff8bac517e8e00278675e2c6db8fd84e6aac77c83eed960837425be21d62: CDI devices from CRI Config.CDIDevices: []"
May 16 05:30:03.581413 containerd[1584]: time="2025-05-16T05:30:03.581317755Z" level=info msg="CreateContainer within sandbox \"65dff619797081d53f48ac0dee312383ffb27d7c1fb36bf0d24e9507d01093fb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"294aff8bac517e8e00278675e2c6db8fd84e6aac77c83eed960837425be21d62\""
May 16 05:30:03.582016 containerd[1584]: time="2025-05-16T05:30:03.581950856Z" level=info msg="StartContainer for \"294aff8bac517e8e00278675e2c6db8fd84e6aac77c83eed960837425be21d62\""
May 16 05:30:03.582865 containerd[1584]: time="2025-05-16T05:30:03.582840930Z" level=info msg="connecting to shim 294aff8bac517e8e00278675e2c6db8fd84e6aac77c83eed960837425be21d62" address="unix:///run/containerd/s/04911098d2a2f73fb7cdb37cdafc372392ac01b018d6605c8ec251a65dd33001" protocol=ttrpc version=3
May 16 05:30:03.607380 systemd[1]: Started cri-containerd-294aff8bac517e8e00278675e2c6db8fd84e6aac77c83eed960837425be21d62.scope - libcontainer container 294aff8bac517e8e00278675e2c6db8fd84e6aac77c83eed960837425be21d62.
May 16 05:30:03.638630 containerd[1584]: time="2025-05-16T05:30:03.638592783Z" level=info msg="StartContainer for \"294aff8bac517e8e00278675e2c6db8fd84e6aac77c83eed960837425be21d62\" returns successfully"
May 16 05:30:03.647941 systemd[1]: cri-containerd-294aff8bac517e8e00278675e2c6db8fd84e6aac77c83eed960837425be21d62.scope: Deactivated successfully.
May 16 05:30:03.649174 containerd[1584]: time="2025-05-16T05:30:03.649117618Z" level=info msg="TaskExit event in podsandbox handler container_id:\"294aff8bac517e8e00278675e2c6db8fd84e6aac77c83eed960837425be21d62\" id:\"294aff8bac517e8e00278675e2c6db8fd84e6aac77c83eed960837425be21d62\" pid:4556 exited_at:{seconds:1747373403 nanos:648742931}"
May 16 05:30:03.649246 containerd[1584]: time="2025-05-16T05:30:03.649221928Z" level=info msg="received exit event container_id:\"294aff8bac517e8e00278675e2c6db8fd84e6aac77c83eed960837425be21d62\" id:\"294aff8bac517e8e00278675e2c6db8fd84e6aac77c83eed960837425be21d62\" pid:4556 exited_at:{seconds:1747373403 nanos:648742931}"
May 16 05:30:03.955410 kubelet[2711]: E0516 05:30:03.955312 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:03.957882 containerd[1584]: time="2025-05-16T05:30:03.957838441Z" level=info msg="CreateContainer within sandbox \"65dff619797081d53f48ac0dee312383ffb27d7c1fb36bf0d24e9507d01093fb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 16 05:30:03.965865 containerd[1584]: time="2025-05-16T05:30:03.965802384Z" level=info msg="Container 1fb747cbe15c668fc3234d28717d25ebcaf40bb2608ddf08386db02d0710bc26: CDI devices from CRI Config.CDIDevices: []"
May 16 05:30:03.973208 containerd[1584]: time="2025-05-16T05:30:03.973171610Z" level=info msg="CreateContainer within sandbox \"65dff619797081d53f48ac0dee312383ffb27d7c1fb36bf0d24e9507d01093fb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1fb747cbe15c668fc3234d28717d25ebcaf40bb2608ddf08386db02d0710bc26\""
May 16 05:30:03.973782 containerd[1584]: time="2025-05-16T05:30:03.973731782Z" level=info msg="StartContainer for \"1fb747cbe15c668fc3234d28717d25ebcaf40bb2608ddf08386db02d0710bc26\""
May 16 05:30:03.974846 containerd[1584]: time="2025-05-16T05:30:03.974802662Z" level=info msg="connecting to shim 1fb747cbe15c668fc3234d28717d25ebcaf40bb2608ddf08386db02d0710bc26" address="unix:///run/containerd/s/04911098d2a2f73fb7cdb37cdafc372392ac01b018d6605c8ec251a65dd33001" protocol=ttrpc version=3
May 16 05:30:03.993268 systemd[1]: Started cri-containerd-1fb747cbe15c668fc3234d28717d25ebcaf40bb2608ddf08386db02d0710bc26.scope - libcontainer container 1fb747cbe15c668fc3234d28717d25ebcaf40bb2608ddf08386db02d0710bc26.
May 16 05:30:04.021208 containerd[1584]: time="2025-05-16T05:30:04.021129224Z" level=info msg="StartContainer for \"1fb747cbe15c668fc3234d28717d25ebcaf40bb2608ddf08386db02d0710bc26\" returns successfully"
May 16 05:30:04.027259 systemd[1]: cri-containerd-1fb747cbe15c668fc3234d28717d25ebcaf40bb2608ddf08386db02d0710bc26.scope: Deactivated successfully.
May 16 05:30:04.027929 containerd[1584]: time="2025-05-16T05:30:04.027708250Z" level=info msg="received exit event container_id:\"1fb747cbe15c668fc3234d28717d25ebcaf40bb2608ddf08386db02d0710bc26\" id:\"1fb747cbe15c668fc3234d28717d25ebcaf40bb2608ddf08386db02d0710bc26\" pid:4601 exited_at:{seconds:1747373404 nanos:27451299}"
May 16 05:30:04.028129 containerd[1584]: time="2025-05-16T05:30:04.027975190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1fb747cbe15c668fc3234d28717d25ebcaf40bb2608ddf08386db02d0710bc26\" id:\"1fb747cbe15c668fc3234d28717d25ebcaf40bb2608ddf08386db02d0710bc26\" pid:4601 exited_at:{seconds:1747373404 nanos:27451299}"
May 16 05:30:04.046526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fb747cbe15c668fc3234d28717d25ebcaf40bb2608ddf08386db02d0710bc26-rootfs.mount: Deactivated successfully.
May 16 05:30:04.790423 kubelet[2711]: I0516 05:30:04.790367 2711 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-16T05:30:04Z","lastTransitionTime":"2025-05-16T05:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 16 05:30:04.958987 kubelet[2711]: E0516 05:30:04.958940 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:04.960844 containerd[1584]: time="2025-05-16T05:30:04.960747185Z" level=info msg="CreateContainer within sandbox \"65dff619797081d53f48ac0dee312383ffb27d7c1fb36bf0d24e9507d01093fb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 05:30:04.970373 containerd[1584]: time="2025-05-16T05:30:04.970323866Z" level=info msg="Container 5843bed78dbed0ac589aebe8a0c7fbac062659f1d0db3ad67a0d6e7c32484428: CDI devices from CRI Config.CDIDevices: []"
May 16 05:30:04.975599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount54376379.mount: Deactivated successfully.
May 16 05:30:04.980104 containerd[1584]: time="2025-05-16T05:30:04.980072546Z" level=info msg="CreateContainer within sandbox \"65dff619797081d53f48ac0dee312383ffb27d7c1fb36bf0d24e9507d01093fb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5843bed78dbed0ac589aebe8a0c7fbac062659f1d0db3ad67a0d6e7c32484428\""
May 16 05:30:04.980558 containerd[1584]: time="2025-05-16T05:30:04.980535120Z" level=info msg="StartContainer for \"5843bed78dbed0ac589aebe8a0c7fbac062659f1d0db3ad67a0d6e7c32484428\""
May 16 05:30:04.982157 containerd[1584]: time="2025-05-16T05:30:04.982103060Z" level=info msg="connecting to shim 5843bed78dbed0ac589aebe8a0c7fbac062659f1d0db3ad67a0d6e7c32484428" address="unix:///run/containerd/s/04911098d2a2f73fb7cdb37cdafc372392ac01b018d6605c8ec251a65dd33001" protocol=ttrpc version=3
May 16 05:30:05.008275 systemd[1]: Started cri-containerd-5843bed78dbed0ac589aebe8a0c7fbac062659f1d0db3ad67a0d6e7c32484428.scope - libcontainer container 5843bed78dbed0ac589aebe8a0c7fbac062659f1d0db3ad67a0d6e7c32484428.
May 16 05:30:05.048715 systemd[1]: cri-containerd-5843bed78dbed0ac589aebe8a0c7fbac062659f1d0db3ad67a0d6e7c32484428.scope: Deactivated successfully.
May 16 05:30:05.049589 containerd[1584]: time="2025-05-16T05:30:05.048754803Z" level=info msg="StartContainer for \"5843bed78dbed0ac589aebe8a0c7fbac062659f1d0db3ad67a0d6e7c32484428\" returns successfully"
May 16 05:30:05.051073 containerd[1584]: time="2025-05-16T05:30:05.051039391Z" level=info msg="received exit event container_id:\"5843bed78dbed0ac589aebe8a0c7fbac062659f1d0db3ad67a0d6e7c32484428\" id:\"5843bed78dbed0ac589aebe8a0c7fbac062659f1d0db3ad67a0d6e7c32484428\" pid:4644 exited_at:{seconds:1747373405 nanos:50830472}"
May 16 05:30:05.051127 containerd[1584]: time="2025-05-16T05:30:05.051092924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5843bed78dbed0ac589aebe8a0c7fbac062659f1d0db3ad67a0d6e7c32484428\" id:\"5843bed78dbed0ac589aebe8a0c7fbac062659f1d0db3ad67a0d6e7c32484428\" pid:4644 exited_at:{seconds:1747373405 nanos:50830472}"
May 16 05:30:05.072058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5843bed78dbed0ac589aebe8a0c7fbac062659f1d0db3ad67a0d6e7c32484428-rootfs.mount: Deactivated successfully.
May 16 05:30:05.963523 kubelet[2711]: E0516 05:30:05.963466 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:05.965304 containerd[1584]: time="2025-05-16T05:30:05.965240720Z" level=info msg="CreateContainer within sandbox \"65dff619797081d53f48ac0dee312383ffb27d7c1fb36bf0d24e9507d01093fb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 05:30:05.974064 containerd[1584]: time="2025-05-16T05:30:05.974016110Z" level=info msg="Container 931ca9bc6a21f1edde6aaf0f210170a103574ee8940e81d6e8476571d573134a: CDI devices from CRI Config.CDIDevices: []"
May 16 05:30:05.979309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1419648563.mount: Deactivated successfully.
May 16 05:30:05.987417 containerd[1584]: time="2025-05-16T05:30:05.987376496Z" level=info msg="CreateContainer within sandbox \"65dff619797081d53f48ac0dee312383ffb27d7c1fb36bf0d24e9507d01093fb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"931ca9bc6a21f1edde6aaf0f210170a103574ee8940e81d6e8476571d573134a\""
May 16 05:30:05.987842 containerd[1584]: time="2025-05-16T05:30:05.987810556Z" level=info msg="StartContainer for \"931ca9bc6a21f1edde6aaf0f210170a103574ee8940e81d6e8476571d573134a\""
May 16 05:30:05.988617 containerd[1584]: time="2025-05-16T05:30:05.988588574Z" level=info msg="connecting to shim 931ca9bc6a21f1edde6aaf0f210170a103574ee8940e81d6e8476571d573134a" address="unix:///run/containerd/s/04911098d2a2f73fb7cdb37cdafc372392ac01b018d6605c8ec251a65dd33001" protocol=ttrpc version=3
May 16 05:30:06.010288 systemd[1]: Started cri-containerd-931ca9bc6a21f1edde6aaf0f210170a103574ee8940e81d6e8476571d573134a.scope - libcontainer container 931ca9bc6a21f1edde6aaf0f210170a103574ee8940e81d6e8476571d573134a.
May 16 05:30:06.038217 systemd[1]: cri-containerd-931ca9bc6a21f1edde6aaf0f210170a103574ee8940e81d6e8476571d573134a.scope: Deactivated successfully.
May 16 05:30:06.039413 containerd[1584]: time="2025-05-16T05:30:06.039346361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"931ca9bc6a21f1edde6aaf0f210170a103574ee8940e81d6e8476571d573134a\" id:\"931ca9bc6a21f1edde6aaf0f210170a103574ee8940e81d6e8476571d573134a\" pid:4682 exited_at:{seconds:1747373406 nanos:39012815}"
May 16 05:30:06.040058 containerd[1584]: time="2025-05-16T05:30:06.040016693Z" level=info msg="received exit event container_id:\"931ca9bc6a21f1edde6aaf0f210170a103574ee8940e81d6e8476571d573134a\" id:\"931ca9bc6a21f1edde6aaf0f210170a103574ee8940e81d6e8476571d573134a\" pid:4682 exited_at:{seconds:1747373406 nanos:39012815}"
May 16 05:30:06.041686 containerd[1584]: time="2025-05-16T05:30:06.041659202Z" level=info msg="StartContainer for \"931ca9bc6a21f1edde6aaf0f210170a103574ee8940e81d6e8476571d573134a\" returns successfully"
May 16 05:30:06.062797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-931ca9bc6a21f1edde6aaf0f210170a103574ee8940e81d6e8476571d573134a-rootfs.mount: Deactivated successfully.
May 16 05:30:06.968330 kubelet[2711]: E0516 05:30:06.968277 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:06.970327 containerd[1584]: time="2025-05-16T05:30:06.970269999Z" level=info msg="CreateContainer within sandbox \"65dff619797081d53f48ac0dee312383ffb27d7c1fb36bf0d24e9507d01093fb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 05:30:06.984345 containerd[1584]: time="2025-05-16T05:30:06.984291046Z" level=info msg="Container 782bfcdba5aed1e1629487142b0d80c942d6fc4be45c23ca45b48569be013983: CDI devices from CRI Config.CDIDevices: []"
May 16 05:30:06.993751 containerd[1584]: time="2025-05-16T05:30:06.993692604Z" level=info msg="CreateContainer within sandbox \"65dff619797081d53f48ac0dee312383ffb27d7c1fb36bf0d24e9507d01093fb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"782bfcdba5aed1e1629487142b0d80c942d6fc4be45c23ca45b48569be013983\""
May 16 05:30:06.994302 containerd[1584]: time="2025-05-16T05:30:06.994264366Z" level=info msg="StartContainer for \"782bfcdba5aed1e1629487142b0d80c942d6fc4be45c23ca45b48569be013983\""
May 16 05:30:06.995439 containerd[1584]: time="2025-05-16T05:30:06.995387362Z" level=info msg="connecting to shim 782bfcdba5aed1e1629487142b0d80c942d6fc4be45c23ca45b48569be013983" address="unix:///run/containerd/s/04911098d2a2f73fb7cdb37cdafc372392ac01b018d6605c8ec251a65dd33001" protocol=ttrpc version=3
May 16 05:30:07.016298 systemd[1]: Started cri-containerd-782bfcdba5aed1e1629487142b0d80c942d6fc4be45c23ca45b48569be013983.scope - libcontainer container 782bfcdba5aed1e1629487142b0d80c942d6fc4be45c23ca45b48569be013983.
May 16 05:30:07.053506 containerd[1584]: time="2025-05-16T05:30:07.053452997Z" level=info msg="StartContainer for \"782bfcdba5aed1e1629487142b0d80c942d6fc4be45c23ca45b48569be013983\" returns successfully"
May 16 05:30:07.120016 containerd[1584]: time="2025-05-16T05:30:07.119958760Z" level=info msg="TaskExit event in podsandbox handler container_id:\"782bfcdba5aed1e1629487142b0d80c942d6fc4be45c23ca45b48569be013983\" id:\"04007868fb11aee1e128ac0e4278b21df4a825734682f08f84d4817ec6849628\" pid:4750 exited_at:{seconds:1747373407 nanos:119644980}"
May 16 05:30:07.484178 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 16 05:30:07.974173 kubelet[2711]: E0516 05:30:07.974116 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:07.987989 kubelet[2711]: I0516 05:30:07.987902 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rpx7t" podStartSLOduration=5.987879528 podStartE2EDuration="5.987879528s" podCreationTimestamp="2025-05-16 05:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:30:07.987230908 +0000 UTC m=+85.315843196" watchObservedRunningTime="2025-05-16 05:30:07.987879528 +0000 UTC m=+85.316491826"
May 16 05:30:09.125106 kubelet[2711]: E0516 05:30:09.125055 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:09.334473 containerd[1584]: time="2025-05-16T05:30:09.334419641Z" level=info msg="TaskExit event in podsandbox handler container_id:\"782bfcdba5aed1e1629487142b0d80c942d6fc4be45c23ca45b48569be013983\" id:\"9d39820096dc3c5cbcbbcf6c6e9c91450d9c332b45df87ea4f58db658eeb2b6a\" pid:4914 exit_status:1 exited_at:{seconds:1747373409 nanos:334055857}"
May 16 05:30:10.502638 systemd-networkd[1488]: lxc_health: Link UP
May 16 05:30:10.504511 systemd-networkd[1488]: lxc_health: Gained carrier
May 16 05:30:11.125949 kubelet[2711]: E0516 05:30:11.125387 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:11.439708 containerd[1584]: time="2025-05-16T05:30:11.439572855Z" level=info msg="TaskExit event in podsandbox handler container_id:\"782bfcdba5aed1e1629487142b0d80c942d6fc4be45c23ca45b48569be013983\" id:\"5b9cb029994a0cc12a1904541813ff60bf6c963b63d1786e39a39a602294cf61\" pid:5286 exited_at:{seconds:1747373411 nanos:438811083}"
May 16 05:30:11.442229 kubelet[2711]: E0516 05:30:11.442182 2711 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:33504->127.0.0.1:35167: write tcp 127.0.0.1:33504->127.0.0.1:35167: write: broken pipe
May 16 05:30:11.703431 systemd-networkd[1488]: lxc_health: Gained IPv6LL
May 16 05:30:11.982201 kubelet[2711]: E0516 05:30:11.982055 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:12.753124 kubelet[2711]: E0516 05:30:12.753086 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:12.987162 kubelet[2711]: E0516 05:30:12.985801 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:13.531196 containerd[1584]: time="2025-05-16T05:30:13.531042095Z" level=info msg="TaskExit event in podsandbox handler container_id:\"782bfcdba5aed1e1629487142b0d80c942d6fc4be45c23ca45b48569be013983\" id:\"8fdf35669b91ab90285226aa39759f601f26bb1a1c25470291286f823f677f4c\" pid:5317 exited_at:{seconds:1747373413 nanos:530631383}"
May 16 05:30:15.625122 containerd[1584]: time="2025-05-16T05:30:15.625067263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"782bfcdba5aed1e1629487142b0d80c942d6fc4be45c23ca45b48569be013983\" id:\"a2fd729d850ff66e3e49ada67652b4342e29246a68a4d9c623af56a3d85a2c7a\" pid:5349 exited_at:{seconds:1747373415 nanos:624524430}"
May 16 05:30:15.753229 kubelet[2711]: E0516 05:30:15.753185 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:16.753436 kubelet[2711]: E0516 05:30:16.753390 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:30:17.707961 containerd[1584]: time="2025-05-16T05:30:17.707831181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"782bfcdba5aed1e1629487142b0d80c942d6fc4be45c23ca45b48569be013983\" id:\"52d6c8986cf36c57fc40c644116e30eca5bce5a0b31492a54f7742f66a5c301e\" pid:5372 exited_at:{seconds:1747373417 nanos:706941017}"
May 16 05:30:17.714202 sshd[4492]: Connection closed by 10.0.0.1 port 33850
May 16 05:30:17.714641 sshd-session[4489]: pam_unix(sshd:session): session closed for user core
May 16 05:30:17.719605 systemd[1]: sshd@26-10.0.0.134:22-10.0.0.1:33850.service: Deactivated successfully.
May 16 05:30:17.721906 systemd[1]: session-27.scope: Deactivated successfully.
May 16 05:30:17.722782 systemd-logind[1573]: Session 27 logged out. Waiting for processes to exit.
May 16 05:30:17.723984 systemd-logind[1573]: Removed session 27.