Jun 21 04:36:47.811589 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 23:59:04 -00 2025 Jun 21 04:36:47.811617 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 04:36:47.811632 kernel: BIOS-provided physical RAM map: Jun 21 04:36:47.811641 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 21 04:36:47.811649 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jun 21 04:36:47.811658 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jun 21 04:36:47.811669 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jun 21 04:36:47.811678 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jun 21 04:36:47.811689 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jun 21 04:36:47.811698 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jun 21 04:36:47.811707 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jun 21 04:36:47.811715 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jun 21 04:36:47.811724 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jun 21 04:36:47.811742 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jun 21 04:36:47.811756 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jun 21 04:36:47.811766 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jun 21 04:36:47.811776 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jun 21 04:36:47.811785 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jun 21 04:36:47.811795 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jun 21 04:36:47.811805 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jun 21 04:36:47.811814 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jun 21 04:36:47.811824 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jun 21 04:36:47.811834 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jun 21 04:36:47.811843 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 21 04:36:47.811853 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jun 21 04:36:47.811867 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jun 21 04:36:47.811877 kernel: NX (Execute Disable) protection: active Jun 21 04:36:47.811889 kernel: APIC: Static calls initialized Jun 21 04:36:47.811898 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jun 21 04:36:47.811907 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jun 21 04:36:47.811917 kernel: extended physical RAM map: Jun 21 04:36:47.811926 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 21 04:36:47.811937 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jun 21 04:36:47.811947 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jun 21 04:36:47.811956 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jun 21 04:36:47.811966 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jun 21 04:36:47.811978 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jun 21 04:36:47.811988 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jun 21 04:36:47.811998 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jun 21 04:36:47.812022 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jun 21 04:36:47.812037 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jun 21 04:36:47.812047 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jun 21 04:36:47.812059 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jun 21 04:36:47.812070 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jun 21 04:36:47.812080 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jun 21 04:36:47.812090 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jun 21 04:36:47.812101 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jun 21 04:36:47.812112 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jun 21 04:36:47.812122 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jun 21 04:36:47.812132 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jun 21 04:36:47.812142 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jun 21 04:36:47.812155 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jun 21 04:36:47.812165 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jun 21 04:36:47.812175 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jun 21 04:36:47.812186 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jun 21 04:36:47.812196 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 21 04:36:47.812206 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jun 21 04:36:47.812217 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jun 21 04:36:47.812227 kernel: efi: EFI v2.7 by EDK II Jun 21 04:36:47.812238 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jun 21 04:36:47.812248 kernel: random: crng init done Jun 21 04:36:47.812258 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jun 21 04:36:47.812269 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jun 21 04:36:47.812281 kernel: secureboot: Secure boot disabled Jun 21 04:36:47.812291 kernel: SMBIOS 2.8 present. 
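
The BIOS-e820 and extended RAM map ranges logged above can be cross-checked from a running system. Below is a minimal sketch, assuming the kernel exposes /sys/firmware/memmap (one numbered directory per range, with start, end and type files); everything else in the snippet is illustrative.

    # Sketch: list firmware memory-map ranges in the same form the kernel logged them.
    # Assumes /sys/firmware/memmap is available (CONFIG_FIRMWARE_MEMMAP).
    from pathlib import Path

    def read_memmap(root="/sys/firmware/memmap"):
        entries = []
        for d in sorted(Path(root).iterdir(), key=lambda p: int(p.name)):
            start = int((d / "start").read_text(), 16)
            end = int((d / "end").read_text(), 16)
            kind = (d / "type").read_text().strip()
            entries.append((start, end, kind))
        return entries

    if __name__ == "__main__":
        for start, end, kind in read_memmap():
            print(f"[mem {start:#018x}-{end:#018x}] {kind}")
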
Jun 21 04:36:47.812301 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jun 21 04:36:47.812312 kernel: DMI: Memory slots populated: 1/1 Jun 21 04:36:47.812322 kernel: Hypervisor detected: KVM Jun 21 04:36:47.812332 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 21 04:36:47.812342 kernel: kvm-clock: using sched offset of 3594352778 cycles Jun 21 04:36:47.812353 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 21 04:36:47.812364 kernel: tsc: Detected 2794.746 MHz processor Jun 21 04:36:47.812375 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 21 04:36:47.812385 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 21 04:36:47.812398 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jun 21 04:36:47.812409 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jun 21 04:36:47.812420 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 21 04:36:47.812430 kernel: Using GB pages for direct mapping Jun 21 04:36:47.812446 kernel: ACPI: Early table checksum verification disabled Jun 21 04:36:47.812457 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jun 21 04:36:47.812468 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jun 21 04:36:47.812479 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 04:36:47.812489 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 04:36:47.812502 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jun 21 04:36:47.812513 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 04:36:47.812523 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 04:36:47.812534 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 04:36:47.812545 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 04:36:47.812555 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jun 21 04:36:47.812566 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jun 21 04:36:47.812576 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jun 21 04:36:47.812589 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jun 21 04:36:47.812599 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jun 21 04:36:47.812609 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jun 21 04:36:47.812619 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jun 21 04:36:47.812629 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jun 21 04:36:47.812638 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jun 21 04:36:47.812647 kernel: No NUMA configuration found Jun 21 04:36:47.812657 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jun 21 04:36:47.812666 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jun 21 04:36:47.812676 kernel: Zone ranges: Jun 21 04:36:47.812689 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 21 04:36:47.812698 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jun 21 04:36:47.812708 kernel: Normal empty Jun 21 04:36:47.812717 kernel: Device empty Jun 21 04:36:47.812735 kernel: Movable zone start for each node Jun 21 04:36:47.812745 
kernel: Early memory node ranges Jun 21 04:36:47.812754 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 21 04:36:47.812772 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jun 21 04:36:47.812782 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jun 21 04:36:47.812793 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jun 21 04:36:47.812802 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jun 21 04:36:47.812811 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jun 21 04:36:47.812819 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jun 21 04:36:47.812828 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jun 21 04:36:47.812837 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jun 21 04:36:47.812846 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 21 04:36:47.812855 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 21 04:36:47.812876 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jun 21 04:36:47.812887 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 21 04:36:47.812896 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jun 21 04:36:47.812906 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jun 21 04:36:47.812917 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jun 21 04:36:47.812926 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jun 21 04:36:47.812935 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jun 21 04:36:47.812945 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 21 04:36:47.812954 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 21 04:36:47.812965 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 21 04:36:47.812975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 21 04:36:47.812984 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 21 04:36:47.812993 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 21 04:36:47.813003 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 21 04:36:47.813034 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 21 04:36:47.813048 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 21 04:36:47.813066 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 21 04:36:47.813084 kernel: TSC deadline timer available Jun 21 04:36:47.813094 kernel: CPU topo: Max. logical packages: 1 Jun 21 04:36:47.813108 kernel: CPU topo: Max. logical dies: 1 Jun 21 04:36:47.813118 kernel: CPU topo: Max. dies per package: 1 Jun 21 04:36:47.813128 kernel: CPU topo: Max. threads per core: 1 Jun 21 04:36:47.813138 kernel: CPU topo: Num. cores per package: 4 Jun 21 04:36:47.813148 kernel: CPU topo: Num. 
threads per package: 4 Jun 21 04:36:47.813158 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jun 21 04:36:47.813172 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 21 04:36:47.813182 kernel: kvm-guest: KVM setup pv remote TLB flush Jun 21 04:36:47.813191 kernel: kvm-guest: setup PV sched yield Jun 21 04:36:47.813203 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jun 21 04:36:47.813212 kernel: Booting paravirtualized kernel on KVM Jun 21 04:36:47.813222 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 21 04:36:47.813231 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jun 21 04:36:47.813241 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jun 21 04:36:47.813250 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jun 21 04:36:47.813259 kernel: pcpu-alloc: [0] 0 1 2 3 Jun 21 04:36:47.813269 kernel: kvm-guest: PV spinlocks enabled Jun 21 04:36:47.813278 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 21 04:36:47.813290 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 04:36:47.813300 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 21 04:36:47.813310 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 21 04:36:47.813319 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 21 04:36:47.813328 kernel: Fallback order for Node 0: 0 Jun 21 04:36:47.813337 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jun 21 04:36:47.813347 kernel: Policy zone: DMA32 Jun 21 04:36:47.813356 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 21 04:36:47.813367 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jun 21 04:36:47.813377 kernel: ftrace: allocating 40093 entries in 157 pages Jun 21 04:36:47.813386 kernel: ftrace: allocated 157 pages with 5 groups Jun 21 04:36:47.813395 kernel: Dynamic Preempt: voluntary Jun 21 04:36:47.813404 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 21 04:36:47.813419 kernel: rcu: RCU event tracing is enabled. Jun 21 04:36:47.813429 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jun 21 04:36:47.813438 kernel: Trampoline variant of Tasks RCU enabled. Jun 21 04:36:47.813448 kernel: Rude variant of Tasks RCU enabled. Jun 21 04:36:47.813457 kernel: Tracing variant of Tasks RCU enabled. Jun 21 04:36:47.813468 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 21 04:36:47.813477 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jun 21 04:36:47.813487 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jun 21 04:36:47.813496 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jun 21 04:36:47.813505 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
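
The command line echoed above (root=LABEL=ROOT, mount.usr=/dev/mapper/usr, verity.usrhash=..., flatcar.first_boot=detected) is what the initrd acts on later in this log. A minimal sketch of splitting /proc/cmdline into key/value pairs follows; only the standard procfs interface is assumed, and the helper name is illustrative.

    # Sketch: parse the booted kernel command line into a dict,
    # e.g. to pull out verity.usrhash or root=LABEL=ROOT.
    # Repeated keys (rootflags appears twice above) keep the last occurrence.
    import shlex

    def parse_cmdline(path="/proc/cmdline"):
        params = {}
        with open(path) as f:
            for tok in shlex.split(f.read()):
                key, sep, value = tok.partition("=")
                params[key] = value if sep else True  # bare flags become True
        return params

    if __name__ == "__main__":
        params = parse_cmdline()
        print("root     =", params.get("root"))
        print("usr hash =", params.get("verity.usrhash"))
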
Jun 21 04:36:47.813515 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jun 21 04:36:47.813524 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 21 04:36:47.813533 kernel: Console: colour dummy device 80x25 Jun 21 04:36:47.813542 kernel: printk: legacy console [ttyS0] enabled Jun 21 04:36:47.813553 kernel: ACPI: Core revision 20240827 Jun 21 04:36:47.813563 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 21 04:36:47.813572 kernel: APIC: Switch to symmetric I/O mode setup Jun 21 04:36:47.813581 kernel: x2apic enabled Jun 21 04:36:47.813590 kernel: APIC: Switched APIC routing to: physical x2apic Jun 21 04:36:47.813600 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jun 21 04:36:47.813609 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jun 21 04:36:47.813618 kernel: kvm-guest: setup PV IPIs Jun 21 04:36:47.813628 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 21 04:36:47.813639 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns Jun 21 04:36:47.813648 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746) Jun 21 04:36:47.813658 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jun 21 04:36:47.813667 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jun 21 04:36:47.813676 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jun 21 04:36:47.813685 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 21 04:36:47.813695 kernel: Spectre V2 : Mitigation: Retpolines Jun 21 04:36:47.813704 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 21 04:36:47.813715 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jun 21 04:36:47.813724 kernel: RETBleed: Mitigation: untrained return thunk Jun 21 04:36:47.813742 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 21 04:36:47.813752 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 21 04:36:47.813761 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jun 21 04:36:47.813771 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jun 21 04:36:47.813780 kernel: x86/bugs: return thunk changed Jun 21 04:36:47.813789 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jun 21 04:36:47.813799 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 21 04:36:47.813810 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 21 04:36:47.813819 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 21 04:36:47.813829 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 21 04:36:47.813838 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jun 21 04:36:47.813847 kernel: Freeing SMP alternatives memory: 32K Jun 21 04:36:47.813857 kernel: pid_max: default: 32768 minimum: 301 Jun 21 04:36:47.813866 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 21 04:36:47.813875 kernel: landlock: Up and running. Jun 21 04:36:47.813884 kernel: SELinux: Initializing. 
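
The mitigation lines above (Spectre V1/V2, RETBleed, Speculative Return Stack Overflow) have runtime counterparts under sysfs. A minimal sketch that dumps them, assuming /sys/devices/system/cpu/vulnerabilities is present (it is on reasonably recent kernels):

    # Sketch: print the kernel's per-vulnerability mitigation status,
    # matching the "Spectre V2 : Mitigation: ..." style lines in the boot log.
    from pathlib import Path

    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    def mitigations():
        return {f.name: f.read_text().strip() for f in sorted(VULN_DIR.iterdir())}

    if __name__ == "__main__":
        for name, status in mitigations().items():
            print(f"{name:24s} {status}")
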
Jun 21 04:36:47.813896 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 21 04:36:47.813905 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 21 04:36:47.813914 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jun 21 04:36:47.813924 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jun 21 04:36:47.813933 kernel: ... version: 0 Jun 21 04:36:47.813942 kernel: ... bit width: 48 Jun 21 04:36:47.813951 kernel: ... generic registers: 6 Jun 21 04:36:47.813960 kernel: ... value mask: 0000ffffffffffff Jun 21 04:36:47.813969 kernel: ... max period: 00007fffffffffff Jun 21 04:36:47.813981 kernel: ... fixed-purpose events: 0 Jun 21 04:36:47.813990 kernel: ... event mask: 000000000000003f Jun 21 04:36:47.813999 kernel: signal: max sigframe size: 1776 Jun 21 04:36:47.814023 kernel: rcu: Hierarchical SRCU implementation. Jun 21 04:36:47.814032 kernel: rcu: Max phase no-delay instances is 400. Jun 21 04:36:47.814042 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 21 04:36:47.814051 kernel: smp: Bringing up secondary CPUs ... Jun 21 04:36:47.814060 kernel: smpboot: x86: Booting SMP configuration: Jun 21 04:36:47.814070 kernel: .... node #0, CPUs: #1 #2 #3 Jun 21 04:36:47.814081 kernel: smp: Brought up 1 node, 4 CPUs Jun 21 04:36:47.814090 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) Jun 21 04:36:47.814100 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 137196K reserved, 0K cma-reserved) Jun 21 04:36:47.814110 kernel: devtmpfs: initialized Jun 21 04:36:47.814119 kernel: x86/mm: Memory block size: 128MB Jun 21 04:36:47.814128 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jun 21 04:36:47.814138 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jun 21 04:36:47.814147 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jun 21 04:36:47.814156 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jun 21 04:36:47.814168 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jun 21 04:36:47.814178 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jun 21 04:36:47.814187 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 21 04:36:47.814196 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jun 21 04:36:47.814205 kernel: pinctrl core: initialized pinctrl subsystem Jun 21 04:36:47.814215 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 21 04:36:47.814224 kernel: audit: initializing netlink subsys (disabled) Jun 21 04:36:47.814234 kernel: audit: type=2000 audit(1750480606.287:1): state=initialized audit_enabled=0 res=1 Jun 21 04:36:47.814246 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 21 04:36:47.814257 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 21 04:36:47.814267 kernel: cpuidle: using governor menu Jun 21 04:36:47.814277 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 21 04:36:47.814287 kernel: dca service started, version 1.12.1 Jun 21 04:36:47.814299 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jun 21 04:36:47.814309 kernel: PCI: Using 
configuration type 1 for base access Jun 21 04:36:47.814320 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 21 04:36:47.814331 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 21 04:36:47.814345 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 21 04:36:47.814356 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 21 04:36:47.814367 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 21 04:36:47.814378 kernel: ACPI: Added _OSI(Module Device) Jun 21 04:36:47.814388 kernel: ACPI: Added _OSI(Processor Device) Jun 21 04:36:47.814399 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 21 04:36:47.814410 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 21 04:36:47.814421 kernel: ACPI: Interpreter enabled Jun 21 04:36:47.814432 kernel: ACPI: PM: (supports S0 S3 S5) Jun 21 04:36:47.814445 kernel: ACPI: Using IOAPIC for interrupt routing Jun 21 04:36:47.814456 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 21 04:36:47.814467 kernel: PCI: Using E820 reservations for host bridge windows Jun 21 04:36:47.814478 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jun 21 04:36:47.814489 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 21 04:36:47.814719 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 21 04:36:47.814880 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jun 21 04:36:47.815037 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jun 21 04:36:47.815056 kernel: PCI host bridge to bus 0000:00 Jun 21 04:36:47.815204 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 21 04:36:47.815385 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 21 04:36:47.815533 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 21 04:36:47.815664 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jun 21 04:36:47.815803 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jun 21 04:36:47.815934 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jun 21 04:36:47.816090 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 21 04:36:47.816255 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jun 21 04:36:47.816409 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jun 21 04:36:47.816553 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jun 21 04:36:47.816696 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jun 21 04:36:47.816848 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jun 21 04:36:47.816997 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 21 04:36:47.817171 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jun 21 04:36:47.817320 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jun 21 04:36:47.817466 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jun 21 04:36:47.817610 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jun 21 04:36:47.817777 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jun 21 04:36:47.817925 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jun 
21 04:36:47.818092 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jun 21 04:36:47.818240 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jun 21 04:36:47.818396 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jun 21 04:36:47.818543 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jun 21 04:36:47.818688 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jun 21 04:36:47.818881 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jun 21 04:36:47.819025 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jun 21 04:36:47.819159 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jun 21 04:36:47.819275 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jun 21 04:36:47.819403 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jun 21 04:36:47.819518 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jun 21 04:36:47.819631 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jun 21 04:36:47.819764 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jun 21 04:36:47.819885 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jun 21 04:36:47.819896 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 21 04:36:47.819905 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 21 04:36:47.819912 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 21 04:36:47.819920 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 21 04:36:47.819928 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jun 21 04:36:47.819935 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jun 21 04:36:47.819943 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jun 21 04:36:47.819951 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jun 21 04:36:47.819961 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jun 21 04:36:47.819968 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jun 21 04:36:47.819976 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jun 21 04:36:47.819984 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jun 21 04:36:47.819991 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jun 21 04:36:47.819999 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jun 21 04:36:47.820032 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jun 21 04:36:47.820040 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jun 21 04:36:47.820048 kernel: iommu: Default domain type: Translated Jun 21 04:36:47.820058 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 21 04:36:47.820066 kernel: efivars: Registered efivars operations Jun 21 04:36:47.820073 kernel: PCI: Using ACPI for IRQ routing Jun 21 04:36:47.820081 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 21 04:36:47.820089 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jun 21 04:36:47.820096 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jun 21 04:36:47.820104 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jun 21 04:36:47.820111 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jun 21 04:36:47.820118 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jun 21 04:36:47.820128 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jun 21 04:36:47.820136 kernel: e820: reserve 
RAM buffer [mem 0x9ce91000-0x9fffffff] Jun 21 04:36:47.820143 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jun 21 04:36:47.820264 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jun 21 04:36:47.820378 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jun 21 04:36:47.820490 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 21 04:36:47.820501 kernel: vgaarb: loaded Jun 21 04:36:47.820509 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 21 04:36:47.820520 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 21 04:36:47.820527 kernel: clocksource: Switched to clocksource kvm-clock Jun 21 04:36:47.820535 kernel: VFS: Disk quotas dquot_6.6.0 Jun 21 04:36:47.820543 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 21 04:36:47.820550 kernel: pnp: PnP ACPI init Jun 21 04:36:47.820689 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jun 21 04:36:47.820725 kernel: pnp: PnP ACPI: found 6 devices Jun 21 04:36:47.820750 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 21 04:36:47.820764 kernel: NET: Registered PF_INET protocol family Jun 21 04:36:47.820775 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 21 04:36:47.820786 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 21 04:36:47.820797 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 21 04:36:47.820807 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 21 04:36:47.820820 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 21 04:36:47.820831 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 21 04:36:47.820841 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 21 04:36:47.820851 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 21 04:36:47.820859 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 21 04:36:47.820867 kernel: NET: Registered PF_XDP protocol family Jun 21 04:36:47.820992 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jun 21 04:36:47.821136 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jun 21 04:36:47.821244 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 21 04:36:47.821348 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 21 04:36:47.821451 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 21 04:36:47.821559 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jun 21 04:36:47.821662 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jun 21 04:36:47.821775 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jun 21 04:36:47.821786 kernel: PCI: CLS 0 bytes, default 64 Jun 21 04:36:47.821795 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns Jun 21 04:36:47.821803 kernel: Initialise system trusted keyrings Jun 21 04:36:47.821811 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 21 04:36:47.821819 kernel: Key type asymmetric registered Jun 21 04:36:47.821826 kernel: Asymmetric key parser 'x509' registered Jun 21 04:36:47.821837 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 21 
04:36:47.821846 kernel: io scheduler mq-deadline registered Jun 21 04:36:47.821855 kernel: io scheduler kyber registered Jun 21 04:36:47.821864 kernel: io scheduler bfq registered Jun 21 04:36:47.821872 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 21 04:36:47.821881 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jun 21 04:36:47.821891 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jun 21 04:36:47.821899 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jun 21 04:36:47.821907 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 21 04:36:47.821915 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 21 04:36:47.821924 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 21 04:36:47.821931 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 21 04:36:47.821939 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 21 04:36:47.822072 kernel: rtc_cmos 00:04: RTC can wake from S4 Jun 21 04:36:47.822187 kernel: rtc_cmos 00:04: registered as rtc0 Jun 21 04:36:47.822322 kernel: rtc_cmos 00:04: setting system clock to 2025-06-21T04:36:47 UTC (1750480607) Jun 21 04:36:47.822430 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jun 21 04:36:47.822441 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jun 21 04:36:47.822449 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jun 21 04:36:47.822457 kernel: efifb: probing for efifb Jun 21 04:36:47.822466 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jun 21 04:36:47.822474 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jun 21 04:36:47.822482 kernel: efifb: scrolling: redraw Jun 21 04:36:47.822493 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 21 04:36:47.822501 kernel: Console: switching to colour frame buffer device 160x50 Jun 21 04:36:47.822510 kernel: fb0: EFI VGA frame buffer device Jun 21 04:36:47.822518 kernel: pstore: Using crash dump compression: deflate Jun 21 04:36:47.822526 kernel: pstore: Registered efi_pstore as persistent store backend Jun 21 04:36:47.822534 kernel: NET: Registered PF_INET6 protocol family Jun 21 04:36:47.822542 kernel: Segment Routing with IPv6 Jun 21 04:36:47.822549 kernel: In-situ OAM (IOAM) with IPv6 Jun 21 04:36:47.822557 kernel: NET: Registered PF_PACKET protocol family Jun 21 04:36:47.822567 kernel: Key type dns_resolver registered Jun 21 04:36:47.822575 kernel: IPI shorthand broadcast: enabled Jun 21 04:36:47.822583 kernel: sched_clock: Marking stable (2849003041, 158279380)->(3029512812, -22230391) Jun 21 04:36:47.822591 kernel: registered taskstats version 1 Jun 21 04:36:47.822599 kernel: Loading compiled-in X.509 certificates Jun 21 04:36:47.822607 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: ec4617d162e00e1890f71f252cdf44036a7b66f7' Jun 21 04:36:47.822615 kernel: Demotion targets for Node 0: null Jun 21 04:36:47.822623 kernel: Key type .fscrypt registered Jun 21 04:36:47.822631 kernel: Key type fscrypt-provisioning registered Jun 21 04:36:47.822641 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 21 04:36:47.822649 kernel: ima: Allocated hash algorithm: sha1 Jun 21 04:36:47.822656 kernel: ima: No architecture policies found Jun 21 04:36:47.822664 kernel: clk: Disabling unused clocks Jun 21 04:36:47.822672 kernel: Warning: unable to open an initial console. 
Jun 21 04:36:47.822680 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 21 04:36:47.822688 kernel: Write protecting the kernel read-only data: 24576k Jun 21 04:36:47.822696 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 21 04:36:47.822706 kernel: Run /init as init process Jun 21 04:36:47.822714 kernel: with arguments: Jun 21 04:36:47.822722 kernel: /init Jun 21 04:36:47.822738 kernel: with environment: Jun 21 04:36:47.822746 kernel: HOME=/ Jun 21 04:36:47.822754 kernel: TERM=linux Jun 21 04:36:47.822762 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 21 04:36:47.822771 systemd[1]: Successfully made /usr/ read-only. Jun 21 04:36:47.822782 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 04:36:47.822793 systemd[1]: Detected virtualization kvm. Jun 21 04:36:47.822801 systemd[1]: Detected architecture x86-64. Jun 21 04:36:47.822810 systemd[1]: Running in initrd. Jun 21 04:36:47.822818 systemd[1]: No hostname configured, using default hostname. Jun 21 04:36:47.822827 systemd[1]: Hostname set to . Jun 21 04:36:47.822835 systemd[1]: Initializing machine ID from VM UUID. Jun 21 04:36:47.822843 systemd[1]: Queued start job for default target initrd.target. Jun 21 04:36:47.822852 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 04:36:47.822862 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 04:36:47.822871 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 21 04:36:47.822880 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 04:36:47.822889 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 21 04:36:47.822898 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 21 04:36:47.822908 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 21 04:36:47.822919 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 21 04:36:47.822927 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 04:36:47.822936 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 04:36:47.822944 systemd[1]: Reached target paths.target - Path Units. Jun 21 04:36:47.822952 systemd[1]: Reached target slices.target - Slice Units. Jun 21 04:36:47.822961 systemd[1]: Reached target swap.target - Swaps. Jun 21 04:36:47.822969 systemd[1]: Reached target timers.target - Timer Units. Jun 21 04:36:47.822977 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 04:36:47.822986 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 04:36:47.822996 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 21 04:36:47.823022 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 21 04:36:47.823031 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
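
systemd reports above that it initialized the machine ID from the VM UUID. A minimal sketch for comparing the two on a running guest, assuming the hypervisor-provided UUID is exposed at /sys/class/dmi/id/product_uuid (usually readable only by root) and the machine ID at /etc/machine-id; on guests seeded this way the two normally agree once dashes and case are normalized, though that is an assumption to verify per system.

    # Sketch: compare /etc/machine-id with the hypervisor-provided VM UUID
    # (the source systemd refers to with "Initializing machine ID from VM UUID").
    # Reading product_uuid typically requires root.
    from pathlib import Path

    def read_machine_id():
        return Path("/etc/machine-id").read_text().strip()

    def read_vm_uuid():
        raw = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
        return raw.lower().replace("-", "")

    if __name__ == "__main__":
        mid, uuid = read_machine_id(), read_vm_uuid()
        print("machine-id:", mid)
        print("vm uuid   :", uuid)
        print("match     :", mid == uuid)
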
Jun 21 04:36:47.823040 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 04:36:47.823048 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 04:36:47.823057 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 04:36:47.823065 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 21 04:36:47.823074 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 04:36:47.823085 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 21 04:36:47.823094 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 21 04:36:47.823102 systemd[1]: Starting systemd-fsck-usr.service... Jun 21 04:36:47.823111 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 04:36:47.823119 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 04:36:47.823128 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:36:47.823136 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 21 04:36:47.823147 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 04:36:47.823156 systemd[1]: Finished systemd-fsck-usr.service. Jun 21 04:36:47.823165 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 21 04:36:47.823193 systemd-journald[219]: Collecting audit messages is disabled. Jun 21 04:36:47.823216 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:36:47.823225 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 21 04:36:47.823234 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 04:36:47.823242 systemd-journald[219]: Journal started Jun 21 04:36:47.823263 systemd-journald[219]: Runtime Journal (/run/log/journal/6ca68377df06416793a455d5eb86d5dc) is 6M, max 48.5M, 42.4M free. Jun 21 04:36:47.811282 systemd-modules-load[220]: Inserted module 'overlay' Jun 21 04:36:47.828033 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 04:36:47.828057 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 04:36:47.836203 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 04:36:47.842034 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 21 04:36:47.842843 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 04:36:47.847849 kernel: Bridge firewalling registered Jun 21 04:36:47.844750 systemd-modules-load[220]: Inserted module 'br_netfilter' Jun 21 04:36:47.845967 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 21 04:36:47.848204 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 04:36:47.848256 systemd-tmpfiles[248]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 21 04:36:47.850133 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jun 21 04:36:47.858183 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 04:36:47.862121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 04:36:47.871166 dracut-cmdline[254]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 04:36:47.882245 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 04:36:47.883809 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 04:36:47.926662 systemd-resolved[283]: Positive Trust Anchors: Jun 21 04:36:47.926677 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 04:36:47.926707 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 04:36:47.929162 systemd-resolved[283]: Defaulting to hostname 'linux'. Jun 21 04:36:47.930131 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 04:36:47.938378 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 04:36:47.981036 kernel: SCSI subsystem initialized Jun 21 04:36:47.990028 kernel: Loading iSCSI transport class v2.0-870. Jun 21 04:36:48.000030 kernel: iscsi: registered transport (tcp) Jun 21 04:36:48.021028 kernel: iscsi: registered transport (qla4xxx) Jun 21 04:36:48.021046 kernel: QLogic iSCSI HBA Driver Jun 21 04:36:48.040571 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 04:36:48.060475 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 04:36:48.064423 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 04:36:48.112927 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 21 04:36:48.116413 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 21 04:36:48.168037 kernel: raid6: avx2x4 gen() 30571 MB/s Jun 21 04:36:48.185032 kernel: raid6: avx2x2 gen() 31489 MB/s Jun 21 04:36:48.202118 kernel: raid6: avx2x1 gen() 26063 MB/s Jun 21 04:36:48.202129 kernel: raid6: using algorithm avx2x2 gen() 31489 MB/s Jun 21 04:36:48.220128 kernel: raid6: .... xor() 19852 MB/s, rmw enabled Jun 21 04:36:48.220151 kernel: raid6: using avx2x2 recovery algorithm Jun 21 04:36:48.240036 kernel: xor: automatically using best checksumming function avx Jun 21 04:36:48.402038 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 21 04:36:48.409500 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 21 04:36:48.412386 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 21 04:36:48.461893 systemd-udevd[473]: Using default interface naming scheme 'v255'. Jun 21 04:36:48.468848 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 04:36:48.472879 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 21 04:36:48.494886 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Jun 21 04:36:48.521334 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 04:36:48.522903 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 04:36:48.606524 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 04:36:48.610223 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 21 04:36:48.643024 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jun 21 04:36:48.647138 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 21 04:36:48.652548 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 21 04:36:48.652575 kernel: GPT:9289727 != 19775487 Jun 21 04:36:48.652596 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 21 04:36:48.652617 kernel: GPT:9289727 != 19775487 Jun 21 04:36:48.652644 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 21 04:36:48.652669 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 04:36:48.660079 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jun 21 04:36:48.665025 kernel: cryptd: max_cpu_qlen set to 1000 Jun 21 04:36:48.682033 kernel: libata version 3.00 loaded. Jun 21 04:36:48.684040 kernel: AES CTR mode by8 optimization enabled Jun 21 04:36:48.685663 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 04:36:48.686123 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:36:48.688912 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:36:48.694763 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:36:48.713055 kernel: ahci 0000:00:1f.2: version 3.0 Jun 21 04:36:48.713300 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jun 21 04:36:48.713313 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jun 21 04:36:48.714465 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jun 21 04:36:48.714618 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jun 21 04:36:48.715668 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 04:36:48.716912 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jun 21 04:36:48.721924 kernel: scsi host0: ahci Jun 21 04:36:48.725111 kernel: scsi host1: ahci Jun 21 04:36:48.725280 kernel: scsi host2: ahci Jun 21 04:36:48.725418 kernel: scsi host3: ahci Jun 21 04:36:48.726157 kernel: scsi host4: ahci Jun 21 04:36:48.727672 kernel: scsi host5: ahci Jun 21 04:36:48.727836 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jun 21 04:36:48.727854 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jun 21 04:36:48.729541 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jun 21 04:36:48.729557 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jun 21 04:36:48.730454 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jun 21 04:36:48.732333 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jun 21 04:36:48.738903 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 21 04:36:48.756812 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 21 04:36:48.764943 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 04:36:48.771523 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 21 04:36:48.771595 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 21 04:36:48.775814 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 21 04:36:48.780215 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:36:48.798586 disk-uuid[631]: Primary Header is updated. Jun 21 04:36:48.798586 disk-uuid[631]: Secondary Entries is updated. Jun 21 04:36:48.798586 disk-uuid[631]: Secondary Header is updated. Jun 21 04:36:48.803049 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 04:36:48.804861 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:36:48.808304 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 04:36:49.044032 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jun 21 04:36:49.044084 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jun 21 04:36:49.045021 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jun 21 04:36:49.045035 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jun 21 04:36:49.046031 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jun 21 04:36:49.047034 kernel: ata3.00: applying bridge limits Jun 21 04:36:49.047062 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jun 21 04:36:49.048035 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jun 21 04:36:49.049032 kernel: ata3.00: configured for UDMA/100 Jun 21 04:36:49.051028 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jun 21 04:36:49.098052 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jun 21 04:36:49.098356 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 21 04:36:49.119058 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jun 21 04:36:49.595811 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 21 04:36:49.597561 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
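
The GPT warnings earlier in the log ("Alternate GPT header not at the end of the disk") are typical when an image built for a smaller disk boots on a larger virtual disk; the disk-uuid step above rewrites the headers, which is why the vda partition table is re-read. Outside that service, one common manual repair is to push the backup header to the end of the device. A minimal sketch follows, assuming sgdisk (from gdisk) and partprobe (from parted) are installed and /dev/vda is the affected disk; both are assumptions to adjust for the actual system.

    # Sketch: relocate the backup GPT structures to the true end of the disk
    # (sgdisk -e), addressing "Alternate GPT header not at the end of the disk".
    # Run only against the intended device, as root.
    import subprocess

    def fix_backup_gpt(device="/dev/vda"):
        subprocess.run(["sgdisk", "-e", device], check=True)   # move backup header/entries
        subprocess.run(["partprobe", device], check=True)      # ask the kernel to re-read the table

    if __name__ == "__main__":
        fix_backup_gpt()
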
Jun 21 04:36:49.599391 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 04:36:49.600605 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 04:36:49.603631 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 21 04:36:49.629421 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 21 04:36:49.809038 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 04:36:49.809470 disk-uuid[634]: The operation has completed successfully. Jun 21 04:36:49.835323 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 21 04:36:49.835445 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 21 04:36:49.873084 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 21 04:36:49.897984 sh[665]: Success Jun 21 04:36:49.915034 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 21 04:36:49.915059 kernel: device-mapper: uevent: version 1.0.3 Jun 21 04:36:49.916680 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 21 04:36:49.926061 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jun 21 04:36:49.956126 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 21 04:36:49.959306 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 21 04:36:49.981457 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 21 04:36:49.988753 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 21 04:36:49.988781 kernel: BTRFS: device fsid bfb8168c-5be0-428c-83e7-820ccaf1f8e9 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (677) Jun 21 04:36:49.990033 kernel: BTRFS info (device dm-0): first mount of filesystem bfb8168c-5be0-428c-83e7-820ccaf1f8e9 Jun 21 04:36:49.990056 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 21 04:36:49.991509 kernel: BTRFS info (device dm-0): using free-space-tree Jun 21 04:36:49.995686 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 21 04:36:49.996443 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 21 04:36:49.998605 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 21 04:36:49.999515 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 21 04:36:50.002528 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 21 04:36:50.026044 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (710) Jun 21 04:36:50.028117 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:36:50.028139 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 04:36:50.028150 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 04:36:50.036150 kernel: BTRFS info (device vda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:36:50.036247 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 21 04:36:50.039707 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jun 21 04:36:50.121789 ignition[755]: Ignition 2.21.0 Jun 21 04:36:50.121803 ignition[755]: Stage: fetch-offline Jun 21 04:36:50.121835 ignition[755]: no configs at "/usr/lib/ignition/base.d" Jun 21 04:36:50.121844 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 04:36:50.121928 ignition[755]: parsed url from cmdline: "" Jun 21 04:36:50.121932 ignition[755]: no config URL provided Jun 21 04:36:50.121937 ignition[755]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 04:36:50.121945 ignition[755]: no config at "/usr/lib/ignition/user.ign" Jun 21 04:36:50.121967 ignition[755]: op(1): [started] loading QEMU firmware config module Jun 21 04:36:50.128953 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 04:36:50.121973 ignition[755]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 21 04:36:50.131675 ignition[755]: op(1): [finished] loading QEMU firmware config module Jun 21 04:36:50.133533 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 04:36:50.172945 ignition[755]: parsing config with SHA512: 0272088bcad8f230e94c5683948b851b25c93cbc9fd2cef2b6f4a3ae2a533ae30e86253f1a8dbad61dd443ba5c62006839e1b20c1d5982091c25eb802242ddb4 Jun 21 04:36:50.179246 unknown[755]: fetched base config from "system" Jun 21 04:36:50.179259 unknown[755]: fetched user config from "qemu" Jun 21 04:36:50.179615 ignition[755]: fetch-offline: fetch-offline passed Jun 21 04:36:50.179676 ignition[755]: Ignition finished successfully Jun 21 04:36:50.184946 systemd-networkd[855]: lo: Link UP Jun 21 04:36:50.184959 systemd-networkd[855]: lo: Gained carrier Jun 21 04:36:50.185336 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 04:36:50.186577 systemd-networkd[855]: Enumeration completed Jun 21 04:36:50.186651 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 04:36:50.186924 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:36:50.186929 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 04:36:50.188183 systemd-networkd[855]: eth0: Link UP Jun 21 04:36:50.188187 systemd-networkd[855]: eth0: Gained carrier Jun 21 04:36:50.188195 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:36:50.189238 systemd[1]: Reached target network.target - Network. Jun 21 04:36:50.190344 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 21 04:36:50.193283 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 21 04:36:50.210119 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 21 04:36:50.228598 ignition[859]: Ignition 2.21.0 Jun 21 04:36:50.228940 ignition[859]: Stage: kargs Jun 21 04:36:50.230414 ignition[859]: no configs at "/usr/lib/ignition/base.d" Jun 21 04:36:50.230427 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 04:36:50.231254 ignition[859]: kargs: kargs passed Jun 21 04:36:50.231298 ignition[859]: Ignition finished successfully Jun 21 04:36:50.235690 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jun 21 04:36:50.237748 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 21 04:36:50.274970 ignition[868]: Ignition 2.21.0 Jun 21 04:36:50.274984 ignition[868]: Stage: disks Jun 21 04:36:50.275157 ignition[868]: no configs at "/usr/lib/ignition/base.d" Jun 21 04:36:50.275168 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 04:36:50.277338 ignition[868]: disks: disks passed Jun 21 04:36:50.277427 ignition[868]: Ignition finished successfully Jun 21 04:36:50.279897 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 21 04:36:50.280648 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 21 04:36:50.282019 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 21 04:36:50.282511 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 04:36:50.282854 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 04:36:50.289128 systemd[1]: Reached target basic.target - Basic System. Jun 21 04:36:50.291895 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 21 04:36:50.321827 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jun 21 04:36:50.329625 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 21 04:36:50.333688 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 21 04:36:50.438056 kernel: EXT4-fs (vda9): mounted filesystem 6d18c974-0fd6-4e4a-98cf-62524fcf9e99 r/w with ordered data mode. Quota mode: none. Jun 21 04:36:50.438434 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 21 04:36:50.440752 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 21 04:36:50.444234 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 04:36:50.446273 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 21 04:36:50.447423 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 21 04:36:50.447471 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 21 04:36:50.447498 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 04:36:50.463078 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 21 04:36:50.465690 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 21 04:36:50.468615 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (886) Jun 21 04:36:50.470046 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:36:50.470092 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 04:36:50.471504 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 04:36:50.475559 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 04:36:50.502160 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Jun 21 04:36:50.506304 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory Jun 21 04:36:50.511442 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Jun 21 04:36:50.515545 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory Jun 21 04:36:50.603019 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Jun 21 04:36:50.606266 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 21 04:36:50.608812 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 21 04:36:50.648060 kernel: BTRFS info (device vda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:36:50.659479 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 21 04:36:50.671856 ignition[1000]: INFO : Ignition 2.21.0 Jun 21 04:36:50.671856 ignition[1000]: INFO : Stage: mount Jun 21 04:36:50.673765 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 04:36:50.673765 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 04:36:50.676340 ignition[1000]: INFO : mount: mount passed Jun 21 04:36:50.677163 ignition[1000]: INFO : Ignition finished successfully Jun 21 04:36:50.680714 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 21 04:36:50.682933 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 21 04:36:50.988444 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 21 04:36:50.990508 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 04:36:51.015034 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1012) Jun 21 04:36:51.017153 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:36:51.017166 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 04:36:51.017176 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 04:36:51.021202 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 04:36:51.049506 ignition[1029]: INFO : Ignition 2.21.0 Jun 21 04:36:51.049506 ignition[1029]: INFO : Stage: files Jun 21 04:36:51.051214 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 04:36:51.051214 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 04:36:51.051214 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping Jun 21 04:36:51.054760 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 21 04:36:51.054760 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 21 04:36:51.057748 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 21 04:36:51.057748 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 21 04:36:51.057748 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 21 04:36:51.057748 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 21 04:36:51.056259 unknown[1029]: wrote ssh authorized keys file for user: core Jun 21 04:36:51.065026 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 21 04:36:51.096330 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 21 04:36:51.238414 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 21 04:36:51.240658 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 21 04:36:51.240658 ignition[1029]: INFO 
: files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 21 04:36:51.389141 systemd-networkd[855]: eth0: Gained IPv6LL Jun 21 04:36:51.714584 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 21 04:36:51.778122 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 21 04:36:51.780056 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 21 04:36:51.780056 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 21 04:36:51.780056 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 21 04:36:51.780056 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 21 04:36:51.780056 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 04:36:51.780056 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 04:36:51.780056 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 04:36:51.780056 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 04:36:51.794272 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 04:36:51.794272 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 04:36:51.794272 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 21 04:36:51.794272 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 21 04:36:51.794272 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 21 04:36:51.794272 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jun 21 04:36:52.443643 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 21 04:36:52.716202 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 21 04:36:52.716202 ignition[1029]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 21 04:36:52.720133 ignition[1029]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 04:36:52.722235 ignition[1029]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 04:36:52.722235 ignition[1029]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 21 04:36:52.722235 ignition[1029]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jun 21 04:36:52.727344 ignition[1029]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 21 04:36:52.727344 ignition[1029]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 21 04:36:52.727344 ignition[1029]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jun 21 04:36:52.727344 ignition[1029]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jun 21 04:36:52.741265 ignition[1029]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 21 04:36:52.745592 ignition[1029]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 21 04:36:52.747436 ignition[1029]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jun 21 04:36:52.747436 ignition[1029]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jun 21 04:36:52.751068 ignition[1029]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jun 21 04:36:52.751068 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 21 04:36:52.751068 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 21 04:36:52.751068 ignition[1029]: INFO : files: files passed Jun 21 04:36:52.751068 ignition[1029]: INFO : Ignition finished successfully Jun 21 04:36:52.757968 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 21 04:36:52.762512 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 21 04:36:52.764986 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 21 04:36:52.780701 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 21 04:36:52.780856 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 21 04:36:52.784525 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory Jun 21 04:36:52.786249 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 04:36:52.786249 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 21 04:36:52.789537 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 04:36:52.790553 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 04:36:52.791557 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 21 04:36:52.794494 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 21 04:36:52.854322 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 21 04:36:52.854463 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jun 21 04:36:52.857215 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 21 04:36:52.859569 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 21 04:36:52.861782 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 21 04:36:52.864632 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 21 04:36:52.898061 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 04:36:52.900656 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 21 04:36:52.921171 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 21 04:36:52.921327 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 04:36:52.924665 systemd[1]: Stopped target timers.target - Timer Units. Jun 21 04:36:52.926679 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 21 04:36:52.926789 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 04:36:52.927913 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 21 04:36:52.928427 systemd[1]: Stopped target basic.target - Basic System. Jun 21 04:36:52.928771 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 21 04:36:52.929276 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 04:36:52.929612 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 21 04:36:52.929940 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 21 04:36:52.930448 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 21 04:36:52.930785 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 04:36:52.931302 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 21 04:36:52.931643 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 21 04:36:52.931970 systemd[1]: Stopped target swap.target - Swaps. Jun 21 04:36:52.932447 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 21 04:36:52.932552 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 21 04:36:52.951343 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 21 04:36:52.951735 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 04:36:52.952040 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 21 04:36:52.952140 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 04:36:52.952542 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 21 04:36:52.952650 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 21 04:36:52.953347 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 21 04:36:52.953449 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 04:36:52.953789 systemd[1]: Stopped target paths.target - Path Units. Jun 21 04:36:52.954052 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 21 04:36:52.958067 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 04:36:52.966199 systemd[1]: Stopped target slices.target - Slice Units. Jun 21 04:36:52.966522 systemd[1]: Stopped target sockets.target - Socket Units. 
Jun 21 04:36:52.966862 systemd[1]: iscsid.socket: Deactivated successfully. Jun 21 04:36:52.966948 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 04:36:52.971705 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 21 04:36:52.971788 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 04:36:52.974003 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 21 04:36:52.974133 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 04:36:52.975774 systemd[1]: ignition-files.service: Deactivated successfully. Jun 21 04:36:52.975875 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 21 04:36:52.982188 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 21 04:36:52.983839 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 21 04:36:52.987352 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 21 04:36:52.987517 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 04:36:52.990374 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 21 04:36:52.990517 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 04:36:52.994805 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 21 04:36:53.000148 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 21 04:36:53.017602 ignition[1084]: INFO : Ignition 2.21.0 Jun 21 04:36:53.017602 ignition[1084]: INFO : Stage: umount Jun 21 04:36:53.019865 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 04:36:53.019865 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 04:36:53.019865 ignition[1084]: INFO : umount: umount passed Jun 21 04:36:53.019865 ignition[1084]: INFO : Ignition finished successfully Jun 21 04:36:53.021514 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 21 04:36:53.022111 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 21 04:36:53.022222 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 21 04:36:53.023057 systemd[1]: Stopped target network.target - Network. Jun 21 04:36:53.026487 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 21 04:36:53.026546 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 21 04:36:53.027615 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 21 04:36:53.027662 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 21 04:36:53.029902 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 21 04:36:53.029950 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 21 04:36:53.032108 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 21 04:36:53.032151 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 21 04:36:53.034293 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 21 04:36:53.036577 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 21 04:36:53.046380 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 21 04:36:53.046517 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 21 04:36:53.052656 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 21 04:36:53.052913 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jun 21 04:36:53.052957 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 04:36:53.058687 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 21 04:36:53.058959 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 21 04:36:53.059087 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 21 04:36:53.065125 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 21 04:36:53.066785 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 21 04:36:53.069336 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 21 04:36:53.069397 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 21 04:36:53.070661 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 21 04:36:53.073124 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 21 04:36:53.073178 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 04:36:53.078090 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 21 04:36:53.078147 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 21 04:36:53.082716 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 21 04:36:53.082791 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 21 04:36:53.086273 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 04:36:53.088570 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 21 04:36:53.100532 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 21 04:36:53.100676 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 21 04:36:53.110942 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 21 04:36:53.111210 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 04:36:53.114974 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 21 04:36:53.115060 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 21 04:36:53.118321 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 21 04:36:53.118365 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 04:36:53.120525 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 21 04:36:53.120598 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 21 04:36:53.122106 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 21 04:36:53.122159 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 21 04:36:53.122862 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 21 04:36:53.122922 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 04:36:53.124531 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 21 04:36:53.130757 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 21 04:36:53.130813 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 04:36:53.135322 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 21 04:36:53.135374 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jun 21 04:36:53.139183 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 21 04:36:53.139227 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 04:36:53.142948 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 21 04:36:53.142994 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 04:36:53.144442 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 04:36:53.144487 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:36:53.158005 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 21 04:36:53.158164 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 21 04:36:53.255481 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 21 04:36:53.255623 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 21 04:36:53.257873 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 21 04:36:53.259731 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 21 04:36:53.259787 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 21 04:36:53.262761 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 21 04:36:53.292533 systemd[1]: Switching root. Jun 21 04:36:53.342347 systemd-journald[219]: Journal stopped Jun 21 04:36:54.553894 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). Jun 21 04:36:54.553956 kernel: SELinux: policy capability network_peer_controls=1 Jun 21 04:36:54.553970 kernel: SELinux: policy capability open_perms=1 Jun 21 04:36:54.553981 kernel: SELinux: policy capability extended_socket_class=1 Jun 21 04:36:54.554019 kernel: SELinux: policy capability always_check_network=0 Jun 21 04:36:54.554031 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 21 04:36:54.554042 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 21 04:36:54.554066 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 21 04:36:54.554077 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 21 04:36:54.554088 kernel: SELinux: policy capability userspace_initial_context=0 Jun 21 04:36:54.554099 kernel: audit: type=1403 audit(1750480613.790:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 21 04:36:54.554111 systemd[1]: Successfully loaded SELinux policy in 46.917ms. Jun 21 04:36:54.554131 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.162ms. Jun 21 04:36:54.554147 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 04:36:54.554159 systemd[1]: Detected virtualization kvm. Jun 21 04:36:54.554170 systemd[1]: Detected architecture x86-64. Jun 21 04:36:54.554187 systemd[1]: Detected first boot. Jun 21 04:36:54.554199 systemd[1]: Initializing machine ID from VM UUID. Jun 21 04:36:54.554210 zram_generator::config[1130]: No configuration found. 
Jun 21 04:36:54.554224 kernel: Guest personality initialized and is inactive Jun 21 04:36:54.554235 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 21 04:36:54.554248 kernel: Initialized host personality Jun 21 04:36:54.554259 kernel: NET: Registered PF_VSOCK protocol family Jun 21 04:36:54.554270 systemd[1]: Populated /etc with preset unit settings. Jun 21 04:36:54.554282 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 21 04:36:54.554294 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 21 04:36:54.554306 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 21 04:36:54.554317 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 21 04:36:54.554330 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 21 04:36:54.554342 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 21 04:36:54.554356 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 21 04:36:54.554368 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 21 04:36:54.554380 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 21 04:36:54.554392 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 21 04:36:54.554404 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 21 04:36:54.554415 systemd[1]: Created slice user.slice - User and Session Slice. Jun 21 04:36:54.554427 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 04:36:54.554439 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 04:36:54.554451 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 21 04:36:54.554465 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 21 04:36:54.554483 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 21 04:36:54.554495 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 04:36:54.554507 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 21 04:36:54.554519 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 04:36:54.554530 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 04:36:54.554542 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 21 04:36:54.554560 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 21 04:36:54.554574 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 21 04:36:54.554586 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 21 04:36:54.554599 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 04:36:54.554611 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 04:36:54.554623 systemd[1]: Reached target slices.target - Slice Units. Jun 21 04:36:54.554635 systemd[1]: Reached target swap.target - Swaps. Jun 21 04:36:54.554647 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jun 21 04:36:54.554659 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 21 04:36:54.554671 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 21 04:36:54.554684 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 04:36:54.554696 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 04:36:54.554708 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 04:36:54.554720 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 21 04:36:54.554732 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 21 04:36:54.554744 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 21 04:36:54.554756 systemd[1]: Mounting media.mount - External Media Directory... Jun 21 04:36:54.554768 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:36:54.554780 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 21 04:36:54.554794 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 21 04:36:54.554805 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 21 04:36:54.554818 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 21 04:36:54.554830 systemd[1]: Reached target machines.target - Containers. Jun 21 04:36:54.554841 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 21 04:36:54.554854 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 04:36:54.554866 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 04:36:54.554878 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 21 04:36:54.554892 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 04:36:54.554903 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 04:36:54.554915 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 04:36:54.554926 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 21 04:36:54.554938 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 04:36:54.554950 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 21 04:36:54.554962 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 21 04:36:54.554973 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 21 04:36:54.554987 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 21 04:36:54.554999 systemd[1]: Stopped systemd-fsck-usr.service. Jun 21 04:36:54.555024 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 04:36:54.555036 kernel: fuse: init (API version 7.41) Jun 21 04:36:54.555047 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jun 21 04:36:54.555059 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 04:36:54.555073 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 04:36:54.555085 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 21 04:36:54.555097 kernel: ACPI: bus type drm_connector registered Jun 21 04:36:54.555108 kernel: loop: module loaded Jun 21 04:36:54.555121 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 21 04:36:54.555133 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 04:36:54.555145 systemd[1]: verity-setup.service: Deactivated successfully. Jun 21 04:36:54.555156 systemd[1]: Stopped verity-setup.service. Jun 21 04:36:54.555171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:36:54.555183 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 21 04:36:54.555195 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 21 04:36:54.555209 systemd[1]: Mounted media.mount - External Media Directory. Jun 21 04:36:54.555221 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 21 04:36:54.555235 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 21 04:36:54.555247 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 21 04:36:54.555277 systemd-journald[1205]: Collecting audit messages is disabled. Jun 21 04:36:54.555299 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 21 04:36:54.555311 systemd-journald[1205]: Journal started Jun 21 04:36:54.555333 systemd-journald[1205]: Runtime Journal (/run/log/journal/6ca68377df06416793a455d5eb86d5dc) is 6M, max 48.5M, 42.4M free. Jun 21 04:36:54.295889 systemd[1]: Queued start job for default target multi-user.target. Jun 21 04:36:54.321821 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 21 04:36:54.322295 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 21 04:36:54.557028 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 04:36:54.558754 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 04:36:54.560307 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 21 04:36:54.560531 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 21 04:36:54.562073 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 04:36:54.562282 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 04:36:54.563733 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 04:36:54.563946 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 04:36:54.565323 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 04:36:54.565533 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 04:36:54.567164 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 21 04:36:54.567394 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 21 04:36:54.568800 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 04:36:54.569071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jun 21 04:36:54.570528 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 04:36:54.571970 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 04:36:54.573527 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 21 04:36:54.575220 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 21 04:36:54.589848 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 04:36:54.592586 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 21 04:36:54.594960 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 21 04:36:54.596306 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 21 04:36:54.596340 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 04:36:54.598449 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 21 04:36:54.604110 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 21 04:36:54.605424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 04:36:54.606787 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 21 04:36:54.610791 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 21 04:36:54.612134 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 04:36:54.613106 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 21 04:36:54.613199 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 04:36:54.614408 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 04:36:54.618126 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 21 04:36:54.619443 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 21 04:36:54.624169 systemd-journald[1205]: Time spent on flushing to /var/log/journal/6ca68377df06416793a455d5eb86d5dc is 23.085ms for 1067 entries. Jun 21 04:36:54.624169 systemd-journald[1205]: System Journal (/var/log/journal/6ca68377df06416793a455d5eb86d5dc) is 8M, max 195.6M, 187.6M free. Jun 21 04:36:54.654487 systemd-journald[1205]: Received client request to flush runtime journal. Jun 21 04:36:54.654527 kernel: loop0: detected capacity change from 0 to 146240 Jun 21 04:36:54.622342 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 21 04:36:54.622484 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 21 04:36:54.629813 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 04:36:54.641052 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 21 04:36:54.643853 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 21 04:36:54.647218 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Jun 21 04:36:54.658475 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 21 04:36:54.661475 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 04:36:54.662931 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jun 21 04:36:54.662951 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jun 21 04:36:54.671067 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 04:36:54.674912 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 21 04:36:54.682040 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 21 04:36:54.686641 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 21 04:36:54.700542 kernel: loop1: detected capacity change from 0 to 113872 Jun 21 04:36:54.716437 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 21 04:36:54.719053 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 04:36:54.721103 kernel: loop2: detected capacity change from 0 to 221472 Jun 21 04:36:54.746422 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Jun 21 04:36:54.746441 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Jun 21 04:36:54.750907 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 04:36:54.755030 kernel: loop3: detected capacity change from 0 to 146240 Jun 21 04:36:54.766041 kernel: loop4: detected capacity change from 0 to 113872 Jun 21 04:36:54.775039 kernel: loop5: detected capacity change from 0 to 221472 Jun 21 04:36:54.781235 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 21 04:36:54.781779 (sd-merge)[1274]: Merged extensions into '/usr'. Jun 21 04:36:54.786256 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)... Jun 21 04:36:54.786399 systemd[1]: Reloading... Jun 21 04:36:54.842040 zram_generator::config[1297]: No configuration found. Jun 21 04:36:54.933196 ldconfig[1244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 21 04:36:54.947978 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 04:36:55.027638 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 21 04:36:55.028139 systemd[1]: Reloading finished in 241 ms. Jun 21 04:36:55.060269 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 21 04:36:55.062200 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 21 04:36:55.086386 systemd[1]: Starting ensure-sysext.service... Jun 21 04:36:55.088173 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 04:36:55.099962 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)... Jun 21 04:36:55.099981 systemd[1]: Reloading... Jun 21 04:36:55.111222 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 21 04:36:55.111259 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Jun 21 04:36:55.111545 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 21 04:36:55.111792 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 21 04:36:55.113000 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 21 04:36:55.113369 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Jun 21 04:36:55.113491 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Jun 21 04:36:55.117574 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 04:36:55.117586 systemd-tmpfiles[1339]: Skipping /boot Jun 21 04:36:55.130385 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 04:36:55.130398 systemd-tmpfiles[1339]: Skipping /boot Jun 21 04:36:55.154042 zram_generator::config[1366]: No configuration found. Jun 21 04:36:55.242633 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 04:36:55.323059 systemd[1]: Reloading finished in 222 ms. Jun 21 04:36:55.350671 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 21 04:36:55.374669 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 04:36:55.383583 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 04:36:55.385928 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 21 04:36:55.388549 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 21 04:36:55.401347 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 04:36:55.404336 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 04:36:55.408617 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 21 04:36:55.414033 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:36:55.414352 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 04:36:55.419911 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 04:36:55.423479 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 04:36:55.426982 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 04:36:55.429427 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 04:36:55.429552 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 04:36:55.432678 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 21 04:36:55.433795 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:36:55.435561 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jun 21 04:36:55.437403 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 04:36:55.437700 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 04:36:55.439626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 04:36:55.439842 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 04:36:55.441946 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 04:36:55.442210 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 04:36:55.453957 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:36:55.455092 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 04:36:55.456915 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 04:36:55.457443 systemd-udevd[1409]: Using default interface naming scheme 'v255'. Jun 21 04:36:55.462040 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 04:36:55.469301 augenrules[1441]: No rules Jun 21 04:36:55.472707 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 04:36:55.474095 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 04:36:55.474205 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 04:36:55.477203 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 21 04:36:55.478300 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:36:55.479959 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 04:36:55.480327 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 04:36:55.482288 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 21 04:36:55.484513 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 21 04:36:55.486462 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 21 04:36:55.488586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 04:36:55.488802 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 04:36:55.490615 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 04:36:55.490820 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 04:36:55.492821 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 04:36:55.497165 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 04:36:55.498779 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 04:36:55.500827 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 21 04:36:55.525894 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:36:55.528445 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jun 21 04:36:55.529632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 04:36:55.534113 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 04:36:55.545022 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 04:36:55.549129 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 04:36:55.552230 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 04:36:55.553382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 04:36:55.553424 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 04:36:55.556875 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 04:36:55.559093 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 21 04:36:55.559134 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:36:55.560039 systemd[1]: Finished ensure-sysext.service. Jun 21 04:36:55.561430 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 04:36:55.561677 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 04:36:55.577180 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 21 04:36:55.578764 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 04:36:55.578987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 04:36:55.580535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 04:36:55.580746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 04:36:55.583789 augenrules[1486]: /sbin/augenrules: No change Jun 21 04:36:55.584831 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 04:36:55.584895 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 04:36:55.588275 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 04:36:55.592236 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 04:36:55.598198 augenrules[1514]: No rules Jun 21 04:36:55.602398 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 04:36:55.603242 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 04:36:55.604831 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 21 04:36:55.612439 systemd-resolved[1408]: Positive Trust Anchors: Jun 21 04:36:55.612456 systemd-resolved[1408]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 04:36:55.612487 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 04:36:55.618385 systemd-resolved[1408]: Defaulting to hostname 'linux'. Jun 21 04:36:55.621177 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 04:36:55.623660 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 04:36:55.662040 kernel: mousedev: PS/2 mouse device common for all mice Jun 21 04:36:55.662823 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 04:36:55.678544 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 21 04:36:55.681035 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jun 21 04:36:55.686035 kernel: ACPI: button: Power Button [PWRF] Jun 21 04:36:55.702573 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 21 04:36:55.720337 systemd-networkd[1492]: lo: Link UP Jun 21 04:36:55.720350 systemd-networkd[1492]: lo: Gained carrier Jun 21 04:36:55.721944 systemd-networkd[1492]: Enumeration completed Jun 21 04:36:55.722040 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 04:36:55.722479 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:36:55.722492 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 04:36:55.723037 systemd-networkd[1492]: eth0: Link UP Jun 21 04:36:55.723213 systemd-networkd[1492]: eth0: Gained carrier Jun 21 04:36:55.723235 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:36:55.723458 systemd[1]: Reached target network.target - Network. Jun 21 04:36:55.727906 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 21 04:36:55.734443 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jun 21 04:36:55.751218 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jun 21 04:36:55.751403 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jun 21 04:36:55.736288 systemd-networkd[1492]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 21 04:36:55.741573 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 21 04:36:55.752991 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 21 04:36:55.754411 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 04:36:55.755666 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jun 21 04:36:55.756997 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 21 04:36:55.758342 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 21 04:36:55.759547 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 21 04:36:55.760836 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 21 04:36:55.760869 systemd[1]: Reached target paths.target - Path Units. Jun 21 04:36:55.761819 systemd[1]: Reached target time-set.target - System Time Set. Jun 21 04:36:55.763083 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 21 04:36:55.764628 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 21 04:36:57.446231 systemd-resolved[1408]: Clock change detected. Flushing caches. Jun 21 04:36:57.446336 systemd-timesyncd[1503]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 21 04:36:57.446392 systemd-timesyncd[1503]: Initial clock synchronization to Sat 2025-06-21 04:36:57.446184 UTC. Jun 21 04:36:57.446552 systemd[1]: Reached target timers.target - Timer Units. Jun 21 04:36:57.449671 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 21 04:36:57.452582 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 21 04:36:57.456161 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 21 04:36:57.460328 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 21 04:36:57.462487 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 21 04:36:57.468332 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 21 04:36:57.470065 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 21 04:36:57.472633 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 21 04:36:57.475822 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 21 04:36:57.480136 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 04:36:57.481243 systemd[1]: Reached target basic.target - Basic System. Jun 21 04:36:57.482539 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 21 04:36:57.482563 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 21 04:36:57.485571 systemd[1]: Starting containerd.service - containerd container runtime... Jun 21 04:36:57.489521 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 21 04:36:57.495159 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 21 04:36:57.498590 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 21 04:36:57.502039 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 21 04:36:57.503129 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 21 04:36:57.510692 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Jun 21 04:36:57.513630 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 21 04:36:57.516605 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 21 04:36:57.520561 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 21 04:36:57.522543 jq[1558]: false Jun 21 04:36:57.526521 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 21 04:36:57.532748 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 21 04:36:57.534671 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 21 04:36:57.535199 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 21 04:36:57.539616 systemd[1]: Starting update-engine.service - Update Engine... Jun 21 04:36:57.541974 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing passwd entry cache Jun 21 04:36:57.542208 extend-filesystems[1559]: Found /dev/vda6 Jun 21 04:36:57.541136 oslogin_cache_refresh[1562]: Refreshing passwd entry cache Jun 21 04:36:57.547962 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 21 04:36:57.553990 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting users, quitting Jun 21 04:36:57.553990 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 04:36:57.553990 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing group entry cache Jun 21 04:36:57.553488 oslogin_cache_refresh[1562]: Failure getting users, quitting Jun 21 04:36:57.553510 oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 04:36:57.553567 oslogin_cache_refresh[1562]: Refreshing group entry cache Jun 21 04:36:57.558363 extend-filesystems[1559]: Found /dev/vda9 Jun 21 04:36:57.556943 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 21 04:36:57.559362 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 21 04:36:57.559658 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 21 04:36:57.562276 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 21 04:36:57.563519 extend-filesystems[1559]: Checking size of /dev/vda9 Jun 21 04:36:57.567573 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting groups, quitting Jun 21 04:36:57.567573 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 04:36:57.567552 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 21 04:36:57.565151 oslogin_cache_refresh[1562]: Failure getting groups, quitting Jun 21 04:36:57.565163 oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 04:36:57.569836 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 21 04:36:57.570102 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Jun 21 04:36:57.576220 jq[1574]: true Jun 21 04:36:57.581440 update_engine[1571]: I20250621 04:36:57.579497 1571 main.cc:92] Flatcar Update Engine starting Jun 21 04:36:57.590894 systemd[1]: motdgen.service: Deactivated successfully. Jun 21 04:36:57.591190 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 21 04:36:57.593716 extend-filesystems[1559]: Resized partition /dev/vda9 Jun 21 04:36:57.596783 extend-filesystems[1599]: resize2fs 1.47.2 (1-Jan-2025) Jun 21 04:36:57.601440 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 21 04:36:57.605754 tar[1582]: linux-amd64/helm Jun 21 04:36:57.626439 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 21 04:36:57.643208 (ntainerd)[1587]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 21 04:36:57.647736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:36:57.649949 extend-filesystems[1599]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 21 04:36:57.649949 extend-filesystems[1599]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 21 04:36:57.649949 extend-filesystems[1599]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 21 04:36:57.660887 extend-filesystems[1559]: Resized filesystem in /dev/vda9 Jun 21 04:36:57.661891 jq[1592]: true Jun 21 04:36:57.652894 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 21 04:36:57.653162 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 21 04:36:57.673207 kernel: kvm_amd: TSC scaling supported Jun 21 04:36:57.673267 kernel: kvm_amd: Nested Virtualization enabled Jun 21 04:36:57.673281 kernel: kvm_amd: Nested Paging enabled Jun 21 04:36:57.673293 kernel: kvm_amd: LBR virtualization supported Jun 21 04:36:57.678448 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jun 21 04:36:57.678495 kernel: kvm_amd: Virtual GIF supported Jun 21 04:36:57.692839 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 04:36:57.693106 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:36:57.698672 dbus-daemon[1556]: [system] SELinux support is enabled Jun 21 04:36:57.703862 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:36:57.705084 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 21 04:36:57.709566 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 21 04:36:57.709587 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 21 04:36:57.711003 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 21 04:36:57.711019 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 21 04:36:57.714089 update_engine[1571]: I20250621 04:36:57.714049 1571 update_check_scheduler.cc:74] Next update check in 5m31s Jun 21 04:36:57.714262 systemd[1]: Started update-engine.service - Update Engine. Jun 21 04:36:57.717316 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jun 21 04:36:57.759686 systemd-logind[1570]: Watching system buttons on /dev/input/event2 (Power Button) Jun 21 04:36:57.761262 bash[1629]: Updated "/home/core/.ssh/authorized_keys" Jun 21 04:36:57.761599 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 21 04:36:57.764948 systemd-logind[1570]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 21 04:36:57.767155 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 21 04:36:57.769547 systemd-logind[1570]: New seat seat0. Jun 21 04:36:57.770613 systemd[1]: Started systemd-logind.service - User Login Management. Jun 21 04:36:57.775367 sshd_keygen[1586]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 21 04:36:57.805252 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 21 04:36:57.808032 kernel: EDAC MC: Ver: 3.0.0 Jun 21 04:36:57.809692 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 21 04:36:57.816131 locksmithd[1619]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 21 04:36:57.828142 systemd[1]: issuegen.service: Deactivated successfully. Jun 21 04:36:57.829347 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 21 04:36:57.832911 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 21 04:36:57.834632 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:36:57.851501 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 21 04:36:57.854841 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 21 04:36:57.857908 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 21 04:36:57.859842 systemd[1]: Reached target getty.target - Login Prompts. 
Jun 21 04:36:57.864749 containerd[1587]: time="2025-06-21T04:36:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 21 04:36:57.866172 containerd[1587]: time="2025-06-21T04:36:57.866123150Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 21 04:36:57.873714 containerd[1587]: time="2025-06-21T04:36:57.873670469Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.836µs" Jun 21 04:36:57.873714 containerd[1587]: time="2025-06-21T04:36:57.873697339Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 21 04:36:57.873714 containerd[1587]: time="2025-06-21T04:36:57.873715373Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 21 04:36:57.873890 containerd[1587]: time="2025-06-21T04:36:57.873861848Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 21 04:36:57.873890 containerd[1587]: time="2025-06-21T04:36:57.873882948Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 21 04:36:57.873938 containerd[1587]: time="2025-06-21T04:36:57.873904308Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 04:36:57.873990 containerd[1587]: time="2025-06-21T04:36:57.873969540Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 04:36:57.873990 containerd[1587]: time="2025-06-21T04:36:57.873983947Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 04:36:57.874284 containerd[1587]: time="2025-06-21T04:36:57.874253844Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 04:36:57.874284 containerd[1587]: time="2025-06-21T04:36:57.874271597Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 04:36:57.874284 containerd[1587]: time="2025-06-21T04:36:57.874281155Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 04:36:57.874347 containerd[1587]: time="2025-06-21T04:36:57.874289661Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 21 04:36:57.874403 containerd[1587]: time="2025-06-21T04:36:57.874376985Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 21 04:36:57.874635 containerd[1587]: time="2025-06-21T04:36:57.874606084Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 04:36:57.874658 containerd[1587]: time="2025-06-21T04:36:57.874639237Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jun 21 04:36:57.874658 containerd[1587]: time="2025-06-21T04:36:57.874650618Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 21 04:36:57.874706 containerd[1587]: time="2025-06-21T04:36:57.874682929Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 21 04:36:57.875333 containerd[1587]: time="2025-06-21T04:36:57.875298032Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 21 04:36:57.875431 containerd[1587]: time="2025-06-21T04:36:57.875376009Z" level=info msg="metadata content store policy set" policy=shared Jun 21 04:36:57.881679 containerd[1587]: time="2025-06-21T04:36:57.881643677Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 21 04:36:57.881716 containerd[1587]: time="2025-06-21T04:36:57.881688532Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 21 04:36:57.881716 containerd[1587]: time="2025-06-21T04:36:57.881702127Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 21 04:36:57.881716 containerd[1587]: time="2025-06-21T04:36:57.881714029Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 21 04:36:57.881786 containerd[1587]: time="2025-06-21T04:36:57.881726332Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 21 04:36:57.881786 containerd[1587]: time="2025-06-21T04:36:57.881736652Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 21 04:36:57.881786 containerd[1587]: time="2025-06-21T04:36:57.881748143Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 21 04:36:57.881786 containerd[1587]: time="2025-06-21T04:36:57.881759905Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 21 04:36:57.881786 containerd[1587]: time="2025-06-21T04:36:57.881774032Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 21 04:36:57.881786 containerd[1587]: time="2025-06-21T04:36:57.881783700Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 21 04:36:57.881892 containerd[1587]: time="2025-06-21T04:36:57.881792587Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 21 04:36:57.881892 containerd[1587]: time="2025-06-21T04:36:57.881805381Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 21 04:36:57.881940 containerd[1587]: time="2025-06-21T04:36:57.881903575Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 21 04:36:57.881940 containerd[1587]: time="2025-06-21T04:36:57.881929113Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 21 04:36:57.881975 containerd[1587]: time="2025-06-21T04:36:57.881942779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 21 04:36:57.881975 containerd[1587]: time="2025-06-21T04:36:57.881954531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Jun 21 04:36:57.881975 containerd[1587]: time="2025-06-21T04:36:57.881968887Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 21 04:36:57.882031 containerd[1587]: time="2025-06-21T04:36:57.881983144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 21 04:36:57.882031 containerd[1587]: time="2025-06-21T04:36:57.881993474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 21 04:36:57.882031 containerd[1587]: time="2025-06-21T04:36:57.882002440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 21 04:36:57.882031 containerd[1587]: time="2025-06-21T04:36:57.882012569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 21 04:36:57.882031 containerd[1587]: time="2025-06-21T04:36:57.882022799Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 21 04:36:57.882031 containerd[1587]: time="2025-06-21T04:36:57.882031986Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 21 04:36:57.882141 containerd[1587]: time="2025-06-21T04:36:57.882087119Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 21 04:36:57.882141 containerd[1587]: time="2025-06-21T04:36:57.882103881Z" level=info msg="Start snapshots syncer" Jun 21 04:36:57.882141 containerd[1587]: time="2025-06-21T04:36:57.882128627Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 21 04:36:57.882396 containerd[1587]: time="2025-06-21T04:36:57.882354281Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 21 04:36:57.882518 containerd[1587]: time="2025-06-21T04:36:57.882401088Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 21 04:36:57.882518 containerd[1587]: time="2025-06-21T04:36:57.882486248Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 21 04:36:57.882615 containerd[1587]: time="2025-06-21T04:36:57.882587889Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 21 04:36:57.882639 containerd[1587]: time="2025-06-21T04:36:57.882615170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 21 04:36:57.882639 containerd[1587]: time="2025-06-21T04:36:57.882625920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 21 04:36:57.882639 containerd[1587]: time="2025-06-21T04:36:57.882635749Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 21 04:36:57.882709 containerd[1587]: time="2025-06-21T04:36:57.882649965Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 21 04:36:57.882709 containerd[1587]: time="2025-06-21T04:36:57.882660214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 21 04:36:57.882709 containerd[1587]: time="2025-06-21T04:36:57.882671035Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 21 04:36:57.882709 containerd[1587]: time="2025-06-21T04:36:57.882691984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 21 04:36:57.882709 containerd[1587]: 
time="2025-06-21T04:36:57.882702243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 21 04:36:57.882812 containerd[1587]: time="2025-06-21T04:36:57.882717151Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 21 04:36:57.883346 containerd[1587]: time="2025-06-21T04:36:57.883313921Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 04:36:57.883346 containerd[1587]: time="2025-06-21T04:36:57.883334800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 04:36:57.883346 containerd[1587]: time="2025-06-21T04:36:57.883343737Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 04:36:57.883410 containerd[1587]: time="2025-06-21T04:36:57.883353595Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 04:36:57.883410 containerd[1587]: time="2025-06-21T04:36:57.883362282Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 21 04:36:57.883410 containerd[1587]: time="2025-06-21T04:36:57.883382970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 21 04:36:57.883410 containerd[1587]: time="2025-06-21T04:36:57.883394181Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 21 04:36:57.883507 containerd[1587]: time="2025-06-21T04:36:57.883411033Z" level=info msg="runtime interface created" Jun 21 04:36:57.883507 containerd[1587]: time="2025-06-21T04:36:57.883439707Z" level=info msg="created NRI interface" Jun 21 04:36:57.883507 containerd[1587]: time="2025-06-21T04:36:57.883447542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 21 04:36:57.883507 containerd[1587]: time="2025-06-21T04:36:57.883458572Z" level=info msg="Connect containerd service" Jun 21 04:36:57.883507 containerd[1587]: time="2025-06-21T04:36:57.883479181Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 21 04:36:57.884159 containerd[1587]: time="2025-06-21T04:36:57.884128539Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 04:36:57.970136 containerd[1587]: time="2025-06-21T04:36:57.969866169Z" level=info msg="Start subscribing containerd event" Jun 21 04:36:57.970136 containerd[1587]: time="2025-06-21T04:36:57.969951800Z" level=info msg="Start recovering state" Jun 21 04:36:57.970136 containerd[1587]: time="2025-06-21T04:36:57.970034244Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jun 21 04:36:57.970136 containerd[1587]: time="2025-06-21T04:36:57.970117290Z" level=info msg="Start event monitor" Jun 21 04:36:57.970136 containerd[1587]: time="2025-06-21T04:36:57.970136015Z" level=info msg="Start cni network conf syncer for default" Jun 21 04:36:57.970355 containerd[1587]: time="2025-06-21T04:36:57.970147978Z" level=info msg="Start streaming server" Jun 21 04:36:57.970355 containerd[1587]: time="2025-06-21T04:36:57.970182252Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 21 04:36:57.970355 containerd[1587]: time="2025-06-21T04:36:57.970191760Z" level=info msg="runtime interface starting up..." Jun 21 04:36:57.970355 containerd[1587]: time="2025-06-21T04:36:57.970197621Z" level=info msg="starting plugins..." Jun 21 04:36:57.970355 containerd[1587]: time="2025-06-21T04:36:57.970213831Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 21 04:36:57.971429 containerd[1587]: time="2025-06-21T04:36:57.970547597Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 21 04:36:57.971529 containerd[1587]: time="2025-06-21T04:36:57.971498722Z" level=info msg="containerd successfully booted in 0.107391s" Jun 21 04:36:57.971634 systemd[1]: Started containerd.service - containerd container runtime. Jun 21 04:36:58.104484 tar[1582]: linux-amd64/LICENSE Jun 21 04:36:58.104608 tar[1582]: linux-amd64/README.md Jun 21 04:36:58.124767 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 21 04:36:59.085656 systemd-networkd[1492]: eth0: Gained IPv6LL Jun 21 04:36:59.088814 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 21 04:36:59.091034 systemd[1]: Reached target network-online.target - Network is Online. Jun 21 04:36:59.094133 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 21 04:36:59.097020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:36:59.105738 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 21 04:36:59.127831 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 21 04:36:59.128131 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 21 04:36:59.130069 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 21 04:36:59.132493 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 21 04:36:59.839498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:36:59.841212 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 21 04:36:59.843107 systemd[1]: Startup finished in 2.907s (kernel) + 6.165s (initrd) + 4.417s (userspace) = 13.490s. Jun 21 04:36:59.845275 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 04:37:00.269044 kubelet[1700]: E0621 04:37:00.268968 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 04:37:00.272787 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 04:37:00.273016 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 21 04:37:00.273467 systemd[1]: kubelet.service: Consumed 997ms CPU time, 265.8M memory peak. Jun 21 04:37:02.803522 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 21 04:37:02.804930 systemd[1]: Started sshd@0-10.0.0.30:22-10.0.0.1:36300.service - OpenSSH per-connection server daemon (10.0.0.1:36300). Jun 21 04:37:02.876155 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 36300 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:37:02.877926 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:37:02.884520 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 21 04:37:02.885662 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 21 04:37:02.892262 systemd-logind[1570]: New session 1 of user core. Jun 21 04:37:03.026751 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 21 04:37:03.029010 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 21 04:37:03.052904 (systemd)[1717]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 21 04:37:03.055186 systemd-logind[1570]: New session c1 of user core. Jun 21 04:37:03.211807 systemd[1717]: Queued start job for default target default.target. Jun 21 04:37:03.226711 systemd[1717]: Created slice app.slice - User Application Slice. Jun 21 04:37:03.226736 systemd[1717]: Reached target paths.target - Paths. Jun 21 04:37:03.226782 systemd[1717]: Reached target timers.target - Timers. Jun 21 04:37:03.228251 systemd[1717]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 21 04:37:03.238672 systemd[1717]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 21 04:37:03.238794 systemd[1717]: Reached target sockets.target - Sockets. Jun 21 04:37:03.238844 systemd[1717]: Reached target basic.target - Basic System. Jun 21 04:37:03.238896 systemd[1717]: Reached target default.target - Main User Target. Jun 21 04:37:03.238950 systemd[1717]: Startup finished in 177ms. Jun 21 04:37:03.239314 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 21 04:37:03.258555 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 21 04:37:03.323158 systemd[1]: Started sshd@1-10.0.0.30:22-10.0.0.1:36304.service - OpenSSH per-connection server daemon (10.0.0.1:36304). Jun 21 04:37:03.376585 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 36304 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:37:03.378017 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:37:03.382192 systemd-logind[1570]: New session 2 of user core. Jun 21 04:37:03.391555 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 21 04:37:03.444575 sshd[1730]: Connection closed by 10.0.0.1 port 36304 Jun 21 04:37:03.444953 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Jun 21 04:37:03.462142 systemd[1]: sshd@1-10.0.0.30:22-10.0.0.1:36304.service: Deactivated successfully. Jun 21 04:37:03.463861 systemd[1]: session-2.scope: Deactivated successfully. Jun 21 04:37:03.464720 systemd-logind[1570]: Session 2 logged out. Waiting for processes to exit. Jun 21 04:37:03.467692 systemd[1]: Started sshd@2-10.0.0.30:22-10.0.0.1:36320.service - OpenSSH per-connection server daemon (10.0.0.1:36320). Jun 21 04:37:03.468296 systemd-logind[1570]: Removed session 2. 
Jun 21 04:37:03.517754 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 36320 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:37:03.519288 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:37:03.523467 systemd-logind[1570]: New session 3 of user core. Jun 21 04:37:03.538541 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 21 04:37:03.588468 sshd[1738]: Connection closed by 10.0.0.1 port 36320 Jun 21 04:37:03.589000 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Jun 21 04:37:03.600839 systemd[1]: sshd@2-10.0.0.30:22-10.0.0.1:36320.service: Deactivated successfully. Jun 21 04:37:03.602347 systemd[1]: session-3.scope: Deactivated successfully. Jun 21 04:37:03.603120 systemd-logind[1570]: Session 3 logged out. Waiting for processes to exit. Jun 21 04:37:03.605740 systemd[1]: Started sshd@3-10.0.0.30:22-10.0.0.1:36334.service - OpenSSH per-connection server daemon (10.0.0.1:36334). Jun 21 04:37:03.606393 systemd-logind[1570]: Removed session 3. Jun 21 04:37:03.670252 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 36334 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:37:03.672133 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:37:03.676737 systemd-logind[1570]: New session 4 of user core. Jun 21 04:37:03.691555 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 21 04:37:03.745907 sshd[1746]: Connection closed by 10.0.0.1 port 36334 Jun 21 04:37:03.746286 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Jun 21 04:37:03.759458 systemd[1]: sshd@3-10.0.0.30:22-10.0.0.1:36334.service: Deactivated successfully. Jun 21 04:37:03.761349 systemd[1]: session-4.scope: Deactivated successfully. Jun 21 04:37:03.762151 systemd-logind[1570]: Session 4 logged out. Waiting for processes to exit. Jun 21 04:37:03.765640 systemd[1]: Started sshd@4-10.0.0.30:22-10.0.0.1:36338.service - OpenSSH per-connection server daemon (10.0.0.1:36338). Jun 21 04:37:03.766159 systemd-logind[1570]: Removed session 4. Jun 21 04:37:03.828671 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 36338 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:37:03.830125 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:37:03.834027 systemd-logind[1570]: New session 5 of user core. Jun 21 04:37:03.843529 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 21 04:37:03.900454 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 21 04:37:03.900765 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 04:37:03.921353 sudo[1755]: pam_unix(sudo:session): session closed for user root Jun 21 04:37:03.923186 sshd[1754]: Connection closed by 10.0.0.1 port 36338 Jun 21 04:37:03.923551 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Jun 21 04:37:03.942073 systemd[1]: sshd@4-10.0.0.30:22-10.0.0.1:36338.service: Deactivated successfully. Jun 21 04:37:03.943795 systemd[1]: session-5.scope: Deactivated successfully. Jun 21 04:37:03.944619 systemd-logind[1570]: Session 5 logged out. Waiting for processes to exit. Jun 21 04:37:03.947449 systemd[1]: Started sshd@5-10.0.0.30:22-10.0.0.1:36346.service - OpenSSH per-connection server daemon (10.0.0.1:36346). 
Jun 21 04:37:03.948174 systemd-logind[1570]: Removed session 5. Jun 21 04:37:03.997842 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 36346 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:37:03.999090 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:37:04.003397 systemd-logind[1570]: New session 6 of user core. Jun 21 04:37:04.013656 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 21 04:37:04.067090 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 21 04:37:04.067394 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 04:37:04.083992 sudo[1765]: pam_unix(sudo:session): session closed for user root Jun 21 04:37:04.090181 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 21 04:37:04.090492 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 04:37:04.101473 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 04:37:04.150967 augenrules[1787]: No rules Jun 21 04:37:04.153289 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 04:37:04.153650 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 04:37:04.154909 sudo[1764]: pam_unix(sudo:session): session closed for user root Jun 21 04:37:04.156434 sshd[1763]: Connection closed by 10.0.0.1 port 36346 Jun 21 04:37:04.156711 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Jun 21 04:37:04.178321 systemd[1]: sshd@5-10.0.0.30:22-10.0.0.1:36346.service: Deactivated successfully. Jun 21 04:37:04.180063 systemd[1]: session-6.scope: Deactivated successfully. Jun 21 04:37:04.180882 systemd-logind[1570]: Session 6 logged out. Waiting for processes to exit. Jun 21 04:37:04.183870 systemd[1]: Started sshd@6-10.0.0.30:22-10.0.0.1:36360.service - OpenSSH per-connection server daemon (10.0.0.1:36360). Jun 21 04:37:04.184474 systemd-logind[1570]: Removed session 6. Jun 21 04:37:04.241594 sshd[1796]: Accepted publickey for core from 10.0.0.1 port 36360 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:37:04.242849 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:37:04.247409 systemd-logind[1570]: New session 7 of user core. Jun 21 04:37:04.254537 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 21 04:37:04.309282 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 21 04:37:04.309717 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 04:37:04.663180 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 21 04:37:04.681896 (dockerd)[1820]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 21 04:37:04.959661 dockerd[1820]: time="2025-06-21T04:37:04.959598987Z" level=info msg="Starting up" Jun 21 04:37:04.960433 dockerd[1820]: time="2025-06-21T04:37:04.960385072Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 21 04:37:05.297197 dockerd[1820]: time="2025-06-21T04:37:05.297083973Z" level=info msg="Loading containers: start." 
Jun 21 04:37:05.307457 kernel: Initializing XFRM netlink socket Jun 21 04:37:05.543314 systemd-networkd[1492]: docker0: Link UP Jun 21 04:37:05.548957 dockerd[1820]: time="2025-06-21T04:37:05.548860634Z" level=info msg="Loading containers: done." Jun 21 04:37:05.561233 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2341673941-merged.mount: Deactivated successfully. Jun 21 04:37:05.563088 dockerd[1820]: time="2025-06-21T04:37:05.563031028Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 21 04:37:05.563198 dockerd[1820]: time="2025-06-21T04:37:05.563119835Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 21 04:37:05.563248 dockerd[1820]: time="2025-06-21T04:37:05.563223730Z" level=info msg="Initializing buildkit" Jun 21 04:37:05.593254 dockerd[1820]: time="2025-06-21T04:37:05.593219023Z" level=info msg="Completed buildkit initialization" Jun 21 04:37:05.600001 dockerd[1820]: time="2025-06-21T04:37:05.599959659Z" level=info msg="Daemon has completed initialization" Jun 21 04:37:05.600093 dockerd[1820]: time="2025-06-21T04:37:05.600043907Z" level=info msg="API listen on /run/docker.sock" Jun 21 04:37:05.600157 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 21 04:37:06.519757 containerd[1587]: time="2025-06-21T04:37:06.519718082Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jun 21 04:37:07.103310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521547012.mount: Deactivated successfully. Jun 21 04:37:08.275813 containerd[1587]: time="2025-06-21T04:37:08.275754722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:08.276723 containerd[1587]: time="2025-06-21T04:37:08.276675008Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jun 21 04:37:08.278067 containerd[1587]: time="2025-06-21T04:37:08.278014541Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:08.280797 containerd[1587]: time="2025-06-21T04:37:08.280745304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:08.281682 containerd[1587]: time="2025-06-21T04:37:08.281658197Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.761901864s" Jun 21 04:37:08.281720 containerd[1587]: time="2025-06-21T04:37:08.281686119Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jun 21 04:37:08.282395 containerd[1587]: time="2025-06-21T04:37:08.282359322Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jun 
21 04:37:09.662296 containerd[1587]: time="2025-06-21T04:37:09.662233405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:09.663356 containerd[1587]: time="2025-06-21T04:37:09.663049195Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jun 21 04:37:09.664481 containerd[1587]: time="2025-06-21T04:37:09.664441407Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:09.666767 containerd[1587]: time="2025-06-21T04:37:09.666726073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:09.667854 containerd[1587]: time="2025-06-21T04:37:09.667625150Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.385237725s" Jun 21 04:37:09.667854 containerd[1587]: time="2025-06-21T04:37:09.667851575Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jun 21 04:37:09.669101 containerd[1587]: time="2025-06-21T04:37:09.668709224Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jun 21 04:37:10.419025 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 21 04:37:10.420496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:37:10.627447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:37:10.631229 (kubelet)[2095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 04:37:10.664454 kubelet[2095]: E0621 04:37:10.664384 2095 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 04:37:10.670996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 04:37:10.671185 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 04:37:10.671758 systemd[1]: kubelet.service: Consumed 208ms CPU time, 110.1M memory peak. 
Jun 21 04:37:11.648248 containerd[1587]: time="2025-06-21T04:37:11.648187496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:11.649180 containerd[1587]: time="2025-06-21T04:37:11.649117281Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jun 21 04:37:11.650323 containerd[1587]: time="2025-06-21T04:37:11.650285643Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:11.652731 containerd[1587]: time="2025-06-21T04:37:11.652700263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:11.653846 containerd[1587]: time="2025-06-21T04:37:11.653815956Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.985069191s" Jun 21 04:37:11.653895 containerd[1587]: time="2025-06-21T04:37:11.653850270Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jun 21 04:37:11.654464 containerd[1587]: time="2025-06-21T04:37:11.654410721Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jun 21 04:37:12.637284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4138955543.mount: Deactivated successfully. 
Jun 21 04:37:12.950931 containerd[1587]: time="2025-06-21T04:37:12.950883597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:12.951685 containerd[1587]: time="2025-06-21T04:37:12.951660074Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jun 21 04:37:12.952874 containerd[1587]: time="2025-06-21T04:37:12.952847572Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:12.954701 containerd[1587]: time="2025-06-21T04:37:12.954659451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:12.955115 containerd[1587]: time="2025-06-21T04:37:12.955090319Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.300620296s" Jun 21 04:37:12.955148 containerd[1587]: time="2025-06-21T04:37:12.955116679Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jun 21 04:37:12.955635 containerd[1587]: time="2025-06-21T04:37:12.955606357Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 21 04:37:13.487528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount44475419.mount: Deactivated successfully. 
Jun 21 04:37:14.519030 containerd[1587]: time="2025-06-21T04:37:14.518980479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:14.519831 containerd[1587]: time="2025-06-21T04:37:14.519804845Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jun 21 04:37:14.521031 containerd[1587]: time="2025-06-21T04:37:14.521005328Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:14.523343 containerd[1587]: time="2025-06-21T04:37:14.523317225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:14.524159 containerd[1587]: time="2025-06-21T04:37:14.524138906Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.568504998s" Jun 21 04:37:14.524215 containerd[1587]: time="2025-06-21T04:37:14.524163533Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 21 04:37:14.524761 containerd[1587]: time="2025-06-21T04:37:14.524562812Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 21 04:37:15.024364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2206734594.mount: Deactivated successfully. 
Jun 21 04:37:15.030125 containerd[1587]: time="2025-06-21T04:37:15.030082032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 04:37:15.030922 containerd[1587]: time="2025-06-21T04:37:15.030892653Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jun 21 04:37:15.031952 containerd[1587]: time="2025-06-21T04:37:15.031918748Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 04:37:15.034012 containerd[1587]: time="2025-06-21T04:37:15.033982149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 04:37:15.034615 containerd[1587]: time="2025-06-21T04:37:15.034580251Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 509.9595ms" Jun 21 04:37:15.034615 containerd[1587]: time="2025-06-21T04:37:15.034612341Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 21 04:37:15.035142 containerd[1587]: time="2025-06-21T04:37:15.035105646Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jun 21 04:37:15.575706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1776552641.mount: Deactivated successfully. 
Jun 21 04:37:17.148444 containerd[1587]: time="2025-06-21T04:37:17.148376824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:17.149100 containerd[1587]: time="2025-06-21T04:37:17.149083149Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jun 21 04:37:17.150332 containerd[1587]: time="2025-06-21T04:37:17.150285034Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:17.152720 containerd[1587]: time="2025-06-21T04:37:17.152686569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:17.153610 containerd[1587]: time="2025-06-21T04:37:17.153583773Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.118445004s" Jun 21 04:37:17.153639 containerd[1587]: time="2025-06-21T04:37:17.153609771Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jun 21 04:37:19.585991 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:37:19.586234 systemd[1]: kubelet.service: Consumed 208ms CPU time, 110.1M memory peak. Jun 21 04:37:19.588676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:37:19.612463 systemd[1]: Reload requested from client PID 2256 ('systemctl') (unit session-7.scope)... Jun 21 04:37:19.612482 systemd[1]: Reloading... Jun 21 04:37:19.709438 zram_generator::config[2305]: No configuration found. Jun 21 04:37:19.872926 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 04:37:19.988167 systemd[1]: Reloading finished in 375 ms. Jun 21 04:37:20.058997 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 21 04:37:20.059091 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 21 04:37:20.059386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:37:20.059450 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.3M memory peak. Jun 21 04:37:20.060931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:37:20.219348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:37:20.223260 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 04:37:20.261239 kubelet[2347]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 04:37:20.261239 kubelet[2347]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jun 21 04:37:20.261239 kubelet[2347]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 04:37:20.261665 kubelet[2347]: I0621 04:37:20.261310 2347 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 04:37:20.357131 kubelet[2347]: I0621 04:37:20.357090 2347 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 21 04:37:20.357131 kubelet[2347]: I0621 04:37:20.357122 2347 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 04:37:20.357357 kubelet[2347]: I0621 04:37:20.357334 2347 server.go:934] "Client rotation is on, will bootstrap in background" Jun 21 04:37:20.379684 kubelet[2347]: E0621 04:37:20.379374 2347 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:37:20.381215 kubelet[2347]: I0621 04:37:20.381177 2347 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 04:37:20.388271 kubelet[2347]: I0621 04:37:20.388239 2347 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 04:37:20.393898 kubelet[2347]: I0621 04:37:20.393874 2347 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 21 04:37:20.393986 kubelet[2347]: I0621 04:37:20.393964 2347 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 21 04:37:20.394118 kubelet[2347]: I0621 04:37:20.394088 2347 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 04:37:20.394264 kubelet[2347]: I0621 04:37:20.394108 2347 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 04:37:20.394362 kubelet[2347]: I0621 04:37:20.394267 2347 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 04:37:20.394362 kubelet[2347]: I0621 04:37:20.394275 2347 container_manager_linux.go:300] "Creating device plugin manager" Jun 21 04:37:20.394408 kubelet[2347]: I0621 04:37:20.394389 2347 state_mem.go:36] "Initialized new in-memory state store" Jun 21 04:37:20.396126 kubelet[2347]: I0621 04:37:20.396109 2347 kubelet.go:408] "Attempting to sync node with API server" Jun 21 04:37:20.396126 kubelet[2347]: I0621 04:37:20.396126 2347 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 04:37:20.396185 kubelet[2347]: I0621 04:37:20.396155 2347 kubelet.go:314] "Adding apiserver pod source" Jun 21 04:37:20.396185 kubelet[2347]: I0621 04:37:20.396182 2347 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 04:37:20.398087 kubelet[2347]: I0621 04:37:20.398046 2347 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 04:37:20.398727 kubelet[2347]: I0621 04:37:20.398373 2347 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 04:37:20.398727 kubelet[2347]: W0621 04:37:20.398442 2347 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
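The container manager config logged above spells out the hard eviction thresholds in effect: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A rough Python sketch of how such thresholds are evaluated, using made-up node stats purely for illustration (this is not the kubelet's actual eviction code):

```python
# Thresholds as logged in HardEvictionThresholds (quantity or percentage form).
THRESHOLDS = {
    "memory.available": ("quantity", 100 * 1024 * 1024),   # 100Mi in bytes
    "nodefs.available": ("percentage", 0.10),
    "nodefs.inodesFree": ("percentage", 0.05),
    "imagefs.available": ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def breached(signal: str, available: float, capacity: float) -> bool:
    """True if the observed value is below the configured hard threshold."""
    kind, value = THRESHOLDS[signal]
    limit = value if kind == "quantity" else value * capacity
    return available < limit

# Hypothetical stats: 80Mi free memory on a 4Gi node would trip eviction.
print(breached("memory.available", 80 * 1024**2, 4 * 1024**3))    # True
print(breached("nodefs.available", 30 * 1024**3, 100 * 1024**3))  # False (30% free)
```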
Jun 21 04:37:20.399497 kubelet[2347]: W0621 04:37:20.399434 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Jun 21 04:37:20.399497 kubelet[2347]: E0621 04:37:20.399490 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:37:20.399595 kubelet[2347]: W0621 04:37:20.399508 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Jun 21 04:37:20.399595 kubelet[2347]: E0621 04:37:20.399557 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:37:20.400271 kubelet[2347]: I0621 04:37:20.399974 2347 server.go:1274] "Started kubelet" Jun 21 04:37:20.405232 kubelet[2347]: I0621 04:37:20.405200 2347 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 04:37:20.406277 kubelet[2347]: I0621 04:37:20.406250 2347 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 04:37:20.406323 kubelet[2347]: I0621 04:37:20.406301 2347 server.go:449] "Adding debug handlers to kubelet server" Jun 21 04:37:20.408380 kubelet[2347]: E0621 04:37:20.407503 2347 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.30:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.30:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184af4de353ccf6f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-06-21 04:37:20.399953775 +0000 UTC m=+0.173237077,LastTimestamp:2025-06-21 04:37:20.399953775 +0000 UTC m=+0.173237077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 21 04:37:20.408621 kubelet[2347]: I0621 04:37:20.408595 2347 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 04:37:20.409244 kubelet[2347]: I0621 04:37:20.409205 2347 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 04:37:20.409452 kubelet[2347]: I0621 04:37:20.409431 2347 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 04:37:20.409809 kubelet[2347]: I0621 04:37:20.409788 2347 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 21 04:37:20.409969 kubelet[2347]: I0621 04:37:20.409949 2347 
desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 21 04:37:20.410039 kubelet[2347]: I0621 04:37:20.410020 2347 reconciler.go:26] "Reconciler: start to sync state" Jun 21 04:37:20.410473 kubelet[2347]: W0621 04:37:20.410399 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Jun 21 04:37:20.410511 kubelet[2347]: E0621 04:37:20.410480 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:37:20.411104 kubelet[2347]: E0621 04:37:20.411080 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 04:37:20.411532 kubelet[2347]: E0621 04:37:20.411479 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="200ms" Jun 21 04:37:20.411869 kubelet[2347]: I0621 04:37:20.411827 2347 factory.go:221] Registration of the systemd container factory successfully Jun 21 04:37:20.411964 kubelet[2347]: I0621 04:37:20.411936 2347 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 04:37:20.412683 kubelet[2347]: E0621 04:37:20.412657 2347 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 04:37:20.413956 kubelet[2347]: I0621 04:37:20.413825 2347 factory.go:221] Registration of the containerd container factory successfully Jun 21 04:37:20.428152 kubelet[2347]: I0621 04:37:20.428010 2347 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 21 04:37:20.428152 kubelet[2347]: I0621 04:37:20.428027 2347 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 21 04:37:20.428152 kubelet[2347]: I0621 04:37:20.428040 2347 state_mem.go:36] "Initialized new in-memory state store" Jun 21 04:37:20.428152 kubelet[2347]: I0621 04:37:20.428034 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 04:37:20.429407 kubelet[2347]: I0621 04:37:20.429387 2347 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 21 04:37:20.429483 kubelet[2347]: I0621 04:37:20.429411 2347 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 21 04:37:20.429483 kubelet[2347]: I0621 04:37:20.429459 2347 kubelet.go:2321] "Starting kubelet main sync loop" Jun 21 04:37:20.429539 kubelet[2347]: E0621 04:37:20.429492 2347 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 04:37:20.511375 kubelet[2347]: E0621 04:37:20.511283 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 04:37:20.530556 kubelet[2347]: E0621 04:37:20.530520 2347 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 21 04:37:20.611815 kubelet[2347]: E0621 04:37:20.611784 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 04:37:20.612135 kubelet[2347]: E0621 04:37:20.612104 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="400ms" Jun 21 04:37:20.712467 kubelet[2347]: E0621 04:37:20.712435 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 04:37:20.731602 kubelet[2347]: E0621 04:37:20.731560 2347 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 21 04:37:20.813325 kubelet[2347]: E0621 04:37:20.813202 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 04:37:20.838737 kubelet[2347]: W0621 04:37:20.838677 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Jun 21 04:37:20.838782 kubelet[2347]: E0621 04:37:20.838733 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:37:20.838932 kubelet[2347]: I0621 04:37:20.838902 2347 policy_none.go:49] "None policy: Start" Jun 21 04:37:20.839551 kubelet[2347]: I0621 04:37:20.839529 2347 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 21 04:37:20.839551 kubelet[2347]: I0621 04:37:20.839551 2347 state_mem.go:35] "Initializing new in-memory state store" Jun 21 04:37:20.845913 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 21 04:37:20.863224 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 21 04:37:20.866642 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 21 04:37:20.882233 kubelet[2347]: I0621 04:37:20.882204 2347 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 04:37:20.882428 kubelet[2347]: I0621 04:37:20.882382 2347 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 04:37:20.882461 kubelet[2347]: I0621 04:37:20.882403 2347 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 04:37:20.882626 kubelet[2347]: I0621 04:37:20.882577 2347 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 04:37:20.883539 kubelet[2347]: E0621 04:37:20.883498 2347 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 21 04:37:20.984710 kubelet[2347]: I0621 04:37:20.984692 2347 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 21 04:37:20.985036 kubelet[2347]: E0621 04:37:20.984995 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost" Jun 21 04:37:21.013479 kubelet[2347]: E0621 04:37:21.013406 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="800ms" Jun 21 04:37:21.139178 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jun 21 04:37:21.152638 systemd[1]: Created slice kubepods-burstable-pod205a5f719c9af629970c649e79280f75.slice - libcontainer container kubepods-burstable-pod205a5f719c9af629970c649e79280f75.slice. Jun 21 04:37:21.167008 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
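While the API server at 10.0.0.30:6443 is refusing connections, the lease controller's "will retry" interval doubles across these records: 200ms, then 400ms, then 800ms. A small Python sketch of that doubling pattern; the cap value is an assumption for illustration and is not taken from the log:

```python
def lease_retry_intervals(start_ms: int = 200, cap_ms: int = 7000):
    """Yield doubling retry intervals, matching the 200ms/400ms/800ms seen above."""
    interval = start_ms
    while True:
        yield interval
        interval = min(interval * 2, cap_ms)

gen = lease_retry_intervals()
print([next(gen) for _ in range(4)])  # [200, 400, 800, 1600]
```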
Jun 21 04:37:21.186347 kubelet[2347]: I0621 04:37:21.186309 2347 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 21 04:37:21.186588 kubelet[2347]: E0621 04:37:21.186558 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost" Jun 21 04:37:21.214900 kubelet[2347]: I0621 04:37:21.214871 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jun 21 04:37:21.214900 kubelet[2347]: I0621 04:37:21.214895 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/205a5f719c9af629970c649e79280f75-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"205a5f719c9af629970c649e79280f75\") " pod="kube-system/kube-apiserver-localhost" Jun 21 04:37:21.214975 kubelet[2347]: I0621 04:37:21.214927 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 04:37:21.214975 kubelet[2347]: I0621 04:37:21.214945 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 04:37:21.214975 kubelet[2347]: I0621 04:37:21.214961 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 04:37:21.214975 kubelet[2347]: I0621 04:37:21.214975 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 04:37:21.215073 kubelet[2347]: I0621 04:37:21.214990 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/205a5f719c9af629970c649e79280f75-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"205a5f719c9af629970c649e79280f75\") " pod="kube-system/kube-apiserver-localhost" Jun 21 04:37:21.215073 kubelet[2347]: I0621 04:37:21.215005 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/205a5f719c9af629970c649e79280f75-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"205a5f719c9af629970c649e79280f75\") " 
pod="kube-system/kube-apiserver-localhost" Jun 21 04:37:21.215073 kubelet[2347]: I0621 04:37:21.215018 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 04:37:21.451502 kubelet[2347]: E0621 04:37:21.451474 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:21.451969 containerd[1587]: time="2025-06-21T04:37:21.451930598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jun 21 04:37:21.465184 kubelet[2347]: E0621 04:37:21.465145 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:21.465591 containerd[1587]: time="2025-06-21T04:37:21.465439221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:205a5f719c9af629970c649e79280f75,Namespace:kube-system,Attempt:0,}" Jun 21 04:37:21.469743 kubelet[2347]: E0621 04:37:21.469717 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:21.470157 containerd[1587]: time="2025-06-21T04:37:21.470133288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jun 21 04:37:21.473571 containerd[1587]: time="2025-06-21T04:37:21.473551771Z" level=info msg="connecting to shim e925517528de7b073a7f3ec74097f7eb1eff85339ae0f0d1d53d664de0f58fca" address="unix:///run/containerd/s/98d1ec731cfbe0177c7dcb96595fd2dfe6b80be0e19df6e99d9ddcbb65158a22" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:37:21.499445 containerd[1587]: time="2025-06-21T04:37:21.499387821Z" level=info msg="connecting to shim c313cc2680673e685a11bfa5c7e2d14481b692b138823051d9105ef5a1bd006d" address="unix:///run/containerd/s/5256cc43ced411373b5b389c72c1acc324ed0cff932fa6d3492d410784c9a993" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:37:21.500714 systemd[1]: Started cri-containerd-e925517528de7b073a7f3ec74097f7eb1eff85339ae0f0d1d53d664de0f58fca.scope - libcontainer container e925517528de7b073a7f3ec74097f7eb1eff85339ae0f0d1d53d664de0f58fca. Jun 21 04:37:21.507295 containerd[1587]: time="2025-06-21T04:37:21.507241986Z" level=info msg="connecting to shim 902a3a51d17852f77c43850035a55f3fe2f0c0ef0b74b267cf89208dbedb3d5d" address="unix:///run/containerd/s/a51cdc328b7e3d4b802aff5f7faa59a934686607a670f2101ae42c9217022ab5" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:37:21.532537 systemd[1]: Started cri-containerd-c313cc2680673e685a11bfa5c7e2d14481b692b138823051d9105ef5a1bd006d.scope - libcontainer container c313cc2680673e685a11bfa5c7e2d14481b692b138823051d9105ef5a1bd006d. Jun 21 04:37:21.536153 systemd[1]: Started cri-containerd-902a3a51d17852f77c43850035a55f3fe2f0c0ef0b74b267cf89208dbedb3d5d.scope - libcontainer container 902a3a51d17852f77c43850035a55f3fe2f0c0ef0b74b267cf89208dbedb3d5d. 
Jun 21 04:37:21.552714 containerd[1587]: time="2025-06-21T04:37:21.552640808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"e925517528de7b073a7f3ec74097f7eb1eff85339ae0f0d1d53d664de0f58fca\"" Jun 21 04:37:21.554106 kubelet[2347]: E0621 04:37:21.554077 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:21.560780 containerd[1587]: time="2025-06-21T04:37:21.560744401Z" level=info msg="CreateContainer within sandbox \"e925517528de7b073a7f3ec74097f7eb1eff85339ae0f0d1d53d664de0f58fca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 21 04:37:21.573102 containerd[1587]: time="2025-06-21T04:37:21.573010292Z" level=info msg="Container 8c8577f3f943ddcb0b0c84a287cd876616fd94bc2eac4217c6e5b18bfdab506e: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:37:21.581028 containerd[1587]: time="2025-06-21T04:37:21.580995744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:205a5f719c9af629970c649e79280f75,Namespace:kube-system,Attempt:0,} returns sandbox id \"c313cc2680673e685a11bfa5c7e2d14481b692b138823051d9105ef5a1bd006d\"" Jun 21 04:37:21.581721 containerd[1587]: time="2025-06-21T04:37:21.581699454Z" level=info msg="CreateContainer within sandbox \"e925517528de7b073a7f3ec74097f7eb1eff85339ae0f0d1d53d664de0f58fca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8c8577f3f943ddcb0b0c84a287cd876616fd94bc2eac4217c6e5b18bfdab506e\"" Jun 21 04:37:21.582476 containerd[1587]: time="2025-06-21T04:37:21.582231111Z" level=info msg="StartContainer for \"8c8577f3f943ddcb0b0c84a287cd876616fd94bc2eac4217c6e5b18bfdab506e\"" Jun 21 04:37:21.583374 containerd[1587]: time="2025-06-21T04:37:21.583213103Z" level=info msg="connecting to shim 8c8577f3f943ddcb0b0c84a287cd876616fd94bc2eac4217c6e5b18bfdab506e" address="unix:///run/containerd/s/98d1ec731cfbe0177c7dcb96595fd2dfe6b80be0e19df6e99d9ddcbb65158a22" protocol=ttrpc version=3 Jun 21 04:37:21.583562 kubelet[2347]: E0621 04:37:21.583505 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:21.586490 containerd[1587]: time="2025-06-21T04:37:21.586470685Z" level=info msg="CreateContainer within sandbox \"c313cc2680673e685a11bfa5c7e2d14481b692b138823051d9105ef5a1bd006d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 21 04:37:21.588816 containerd[1587]: time="2025-06-21T04:37:21.588783484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"902a3a51d17852f77c43850035a55f3fe2f0c0ef0b74b267cf89208dbedb3d5d\"" Jun 21 04:37:21.589334 kubelet[2347]: I0621 04:37:21.589304 2347 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 21 04:37:21.589642 kubelet[2347]: E0621 04:37:21.589615 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:21.590145 kubelet[2347]: E0621 04:37:21.590112 2347 kubelet_node_status.go:95] "Unable to register node with API server" 
err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost" Jun 21 04:37:21.591234 containerd[1587]: time="2025-06-21T04:37:21.591161866Z" level=info msg="CreateContainer within sandbox \"902a3a51d17852f77c43850035a55f3fe2f0c0ef0b74b267cf89208dbedb3d5d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 21 04:37:21.595627 containerd[1587]: time="2025-06-21T04:37:21.595597167Z" level=info msg="Container e670438c7b21c4b70680e64d525e3e5212697229e00d1a350856c39171951085: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:37:21.603798 containerd[1587]: time="2025-06-21T04:37:21.603765662Z" level=info msg="Container f7b89d94eeb9c46879328111dba9349ac4233c22996d6d45c777a6a848ce660e: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:37:21.604583 systemd[1]: Started cri-containerd-8c8577f3f943ddcb0b0c84a287cd876616fd94bc2eac4217c6e5b18bfdab506e.scope - libcontainer container 8c8577f3f943ddcb0b0c84a287cd876616fd94bc2eac4217c6e5b18bfdab506e. Jun 21 04:37:21.611049 containerd[1587]: time="2025-06-21T04:37:21.611023368Z" level=info msg="CreateContainer within sandbox \"c313cc2680673e685a11bfa5c7e2d14481b692b138823051d9105ef5a1bd006d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e670438c7b21c4b70680e64d525e3e5212697229e00d1a350856c39171951085\"" Jun 21 04:37:21.611577 containerd[1587]: time="2025-06-21T04:37:21.611559243Z" level=info msg="StartContainer for \"e670438c7b21c4b70680e64d525e3e5212697229e00d1a350856c39171951085\"" Jun 21 04:37:21.612122 containerd[1587]: time="2025-06-21T04:37:21.612085751Z" level=info msg="CreateContainer within sandbox \"902a3a51d17852f77c43850035a55f3fe2f0c0ef0b74b267cf89208dbedb3d5d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f7b89d94eeb9c46879328111dba9349ac4233c22996d6d45c777a6a848ce660e\"" Jun 21 04:37:21.612479 containerd[1587]: time="2025-06-21T04:37:21.612461365Z" level=info msg="StartContainer for \"f7b89d94eeb9c46879328111dba9349ac4233c22996d6d45c777a6a848ce660e\"" Jun 21 04:37:21.612669 containerd[1587]: time="2025-06-21T04:37:21.612505809Z" level=info msg="connecting to shim e670438c7b21c4b70680e64d525e3e5212697229e00d1a350856c39171951085" address="unix:///run/containerd/s/5256cc43ced411373b5b389c72c1acc324ed0cff932fa6d3492d410784c9a993" protocol=ttrpc version=3 Jun 21 04:37:21.613362 containerd[1587]: time="2025-06-21T04:37:21.613339002Z" level=info msg="connecting to shim f7b89d94eeb9c46879328111dba9349ac4233c22996d6d45c777a6a848ce660e" address="unix:///run/containerd/s/a51cdc328b7e3d4b802aff5f7faa59a934686607a670f2101ae42c9217022ab5" protocol=ttrpc version=3 Jun 21 04:37:21.636588 systemd[1]: Started cri-containerd-e670438c7b21c4b70680e64d525e3e5212697229e00d1a350856c39171951085.scope - libcontainer container e670438c7b21c4b70680e64d525e3e5212697229e00d1a350856c39171951085. Jun 21 04:37:21.640222 systemd[1]: Started cri-containerd-f7b89d94eeb9c46879328111dba9349ac4233c22996d6d45c777a6a848ce660e.scope - libcontainer container f7b89d94eeb9c46879328111dba9349ac4233c22996d6d45c777a6a848ce660e. 
Jun 21 04:37:21.656140 containerd[1587]: time="2025-06-21T04:37:21.656018091Z" level=info msg="StartContainer for \"8c8577f3f943ddcb0b0c84a287cd876616fd94bc2eac4217c6e5b18bfdab506e\" returns successfully" Jun 21 04:37:21.692332 containerd[1587]: time="2025-06-21T04:37:21.692202626Z" level=info msg="StartContainer for \"e670438c7b21c4b70680e64d525e3e5212697229e00d1a350856c39171951085\" returns successfully" Jun 21 04:37:21.695700 kubelet[2347]: W0621 04:37:21.695655 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Jun 21 04:37:21.696005 kubelet[2347]: E0621 04:37:21.695807 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:37:21.699310 containerd[1587]: time="2025-06-21T04:37:21.699285413Z" level=info msg="StartContainer for \"f7b89d94eeb9c46879328111dba9349ac4233c22996d6d45c777a6a848ce660e\" returns successfully" Jun 21 04:37:22.392249 kubelet[2347]: I0621 04:37:22.391856 2347 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 21 04:37:22.441451 kubelet[2347]: E0621 04:37:22.440651 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:22.442972 kubelet[2347]: E0621 04:37:22.442939 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:22.448054 kubelet[2347]: E0621 04:37:22.448019 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:22.565986 kubelet[2347]: E0621 04:37:22.565938 2347 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 21 04:37:22.666738 kubelet[2347]: I0621 04:37:22.666071 2347 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jun 21 04:37:23.398895 kubelet[2347]: I0621 04:37:23.398849 2347 apiserver.go:52] "Watching apiserver" Jun 21 04:37:23.410849 kubelet[2347]: I0621 04:37:23.410825 2347 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 21 04:37:23.452110 kubelet[2347]: E0621 04:37:23.452057 2347 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jun 21 04:37:23.452282 kubelet[2347]: E0621 04:37:23.452255 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:24.657815 systemd[1]: Reload requested from client PID 2627 ('systemctl') (unit session-7.scope)... Jun 21 04:37:24.657830 systemd[1]: Reloading... Jun 21 04:37:24.737449 zram_generator::config[2670]: No configuration found. 
Jun 21 04:37:25.096077 kubelet[2347]: E0621 04:37:25.096044 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:25.145280 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 04:37:25.272785 systemd[1]: Reloading finished in 614 ms. Jun 21 04:37:25.303091 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:37:25.326665 systemd[1]: kubelet.service: Deactivated successfully. Jun 21 04:37:25.326936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:37:25.326985 systemd[1]: kubelet.service: Consumed 611ms CPU time, 132.2M memory peak. Jun 21 04:37:25.328776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:37:25.530569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:37:25.534481 (kubelet)[2715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 04:37:25.570643 kubelet[2715]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 04:37:25.570643 kubelet[2715]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 21 04:37:25.570643 kubelet[2715]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 04:37:25.571018 kubelet[2715]: I0621 04:37:25.570683 2715 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 04:37:25.578361 kubelet[2715]: I0621 04:37:25.578331 2715 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 21 04:37:25.578491 kubelet[2715]: I0621 04:37:25.578473 2715 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 04:37:25.578739 kubelet[2715]: I0621 04:37:25.578719 2715 server.go:934] "Client rotation is on, will bootstrap in background" Jun 21 04:37:25.579974 kubelet[2715]: I0621 04:37:25.579952 2715 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 21 04:37:25.581748 kubelet[2715]: I0621 04:37:25.581732 2715 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 04:37:25.585572 kubelet[2715]: I0621 04:37:25.585234 2715 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 04:37:25.590126 kubelet[2715]: I0621 04:37:25.590109 2715 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 21 04:37:25.590290 kubelet[2715]: I0621 04:37:25.590278 2715 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 21 04:37:25.590475 kubelet[2715]: I0621 04:37:25.590454 2715 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 04:37:25.590682 kubelet[2715]: I0621 04:37:25.590530 2715 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 04:37:25.590796 kubelet[2715]: I0621 04:37:25.590786 2715 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 04:37:25.590841 kubelet[2715]: I0621 04:37:25.590834 2715 container_manager_linux.go:300] "Creating device plugin manager" Jun 21 04:37:25.590901 kubelet[2715]: I0621 04:37:25.590893 2715 state_mem.go:36] "Initialized new in-memory state store" Jun 21 04:37:25.591043 kubelet[2715]: I0621 04:37:25.591033 2715 kubelet.go:408] "Attempting to sync node with API server" Jun 21 04:37:25.591100 kubelet[2715]: I0621 04:37:25.591091 2715 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 04:37:25.591166 kubelet[2715]: I0621 04:37:25.591158 2715 kubelet.go:314] "Adding apiserver pod source" Jun 21 04:37:25.591229 kubelet[2715]: I0621 04:37:25.591220 2715 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 04:37:25.591691 kubelet[2715]: I0621 04:37:25.591677 2715 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 04:37:25.592113 kubelet[2715]: I0621 04:37:25.592100 2715 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 04:37:25.592540 kubelet[2715]: I0621 04:37:25.592527 2715 server.go:1274] "Started kubelet" Jun 21 04:37:25.594432 kubelet[2715]: I0621 04:37:25.592993 2715 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 04:37:25.594432 kubelet[2715]: I0621 
04:37:25.592995 2715 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 04:37:25.594432 kubelet[2715]: I0621 04:37:25.593408 2715 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 04:37:25.595289 kubelet[2715]: I0621 04:37:25.595255 2715 server.go:449] "Adding debug handlers to kubelet server" Jun 21 04:37:25.598606 kubelet[2715]: I0621 04:37:25.598583 2715 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 04:37:25.598746 kubelet[2715]: I0621 04:37:25.598728 2715 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 21 04:37:25.598829 kubelet[2715]: I0621 04:37:25.598817 2715 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 21 04:37:25.598951 kubelet[2715]: I0621 04:37:25.598935 2715 reconciler.go:26] "Reconciler: start to sync state" Jun 21 04:37:25.600050 kubelet[2715]: I0621 04:37:25.598974 2715 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 04:37:25.602869 kubelet[2715]: E0621 04:37:25.602619 2715 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 04:37:25.602869 kubelet[2715]: E0621 04:37:25.602751 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 04:37:25.603220 kubelet[2715]: I0621 04:37:25.603203 2715 factory.go:221] Registration of the systemd container factory successfully Jun 21 04:37:25.603450 kubelet[2715]: I0621 04:37:25.603344 2715 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 04:37:25.605056 kubelet[2715]: I0621 04:37:25.605034 2715 factory.go:221] Registration of the containerd container factory successfully Jun 21 04:37:25.611404 kubelet[2715]: I0621 04:37:25.611275 2715 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 04:37:25.613083 kubelet[2715]: I0621 04:37:25.613062 2715 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 21 04:37:25.613131 kubelet[2715]: I0621 04:37:25.613089 2715 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 21 04:37:25.613131 kubelet[2715]: I0621 04:37:25.613105 2715 kubelet.go:2321] "Starting kubelet main sync loop" Jun 21 04:37:25.613189 kubelet[2715]: E0621 04:37:25.613143 2715 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 04:37:25.634892 kubelet[2715]: I0621 04:37:25.634844 2715 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 21 04:37:25.634892 kubelet[2715]: I0621 04:37:25.634860 2715 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 21 04:37:25.634892 kubelet[2715]: I0621 04:37:25.634883 2715 state_mem.go:36] "Initialized new in-memory state store" Jun 21 04:37:25.635094 kubelet[2715]: I0621 04:37:25.635003 2715 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 21 04:37:25.635094 kubelet[2715]: I0621 04:37:25.635013 2715 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 21 04:37:25.635094 kubelet[2715]: I0621 04:37:25.635030 2715 policy_none.go:49] "None policy: Start" Jun 21 04:37:25.635513 kubelet[2715]: I0621 04:37:25.635493 2715 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 21 04:37:25.635513 kubelet[2715]: I0621 04:37:25.635513 2715 state_mem.go:35] "Initializing new in-memory state store" Jun 21 04:37:25.635640 kubelet[2715]: I0621 04:37:25.635624 2715 state_mem.go:75] "Updated machine memory state" Jun 21 04:37:25.640311 kubelet[2715]: I0621 04:37:25.640115 2715 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 04:37:25.640311 kubelet[2715]: I0621 04:37:25.640295 2715 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 04:37:25.640311 kubelet[2715]: I0621 04:37:25.640305 2715 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 04:37:25.640615 kubelet[2715]: I0621 04:37:25.640576 2715 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 04:37:25.659985 sudo[2750]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 21 04:37:25.660311 sudo[2750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 21 04:37:25.722938 kubelet[2715]: E0621 04:37:25.722877 2715 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 21 04:37:25.744923 kubelet[2715]: I0621 04:37:25.744882 2715 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 21 04:37:25.750443 kubelet[2715]: I0621 04:37:25.749949 2715 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jun 21 04:37:25.750443 kubelet[2715]: I0621 04:37:25.750004 2715 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jun 21 04:37:25.899719 kubelet[2715]: I0621 04:37:25.899603 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/205a5f719c9af629970c649e79280f75-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"205a5f719c9af629970c649e79280f75\") " pod="kube-system/kube-apiserver-localhost" Jun 21 04:37:25.899719 kubelet[2715]: I0621 04:37:25.899636 2715 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 04:37:25.899719 kubelet[2715]: I0621 04:37:25.899656 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 04:37:25.899719 kubelet[2715]: I0621 04:37:25.899671 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 04:37:25.899719 kubelet[2715]: I0621 04:37:25.899686 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/205a5f719c9af629970c649e79280f75-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"205a5f719c9af629970c649e79280f75\") " pod="kube-system/kube-apiserver-localhost" Jun 21 04:37:25.899931 kubelet[2715]: I0621 04:37:25.899699 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/205a5f719c9af629970c649e79280f75-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"205a5f719c9af629970c649e79280f75\") " pod="kube-system/kube-apiserver-localhost" Jun 21 04:37:25.899931 kubelet[2715]: I0621 04:37:25.899711 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jun 21 04:37:25.899931 kubelet[2715]: I0621 04:37:25.899723 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 04:37:25.899931 kubelet[2715]: I0621 04:37:25.899739 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 04:37:26.019984 kubelet[2715]: E0621 04:37:26.019947 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:26.024192 kubelet[2715]: E0621 04:37:26.023932 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:26.024192 kubelet[2715]: E0621 04:37:26.024011 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:26.125688 sudo[2750]: pam_unix(sudo:session): session closed for user root Jun 21 04:37:26.592477 kubelet[2715]: I0621 04:37:26.592436 2715 apiserver.go:52] "Watching apiserver" Jun 21 04:37:26.598961 kubelet[2715]: I0621 04:37:26.598928 2715 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 21 04:37:26.623365 kubelet[2715]: E0621 04:37:26.623344 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:26.623547 kubelet[2715]: E0621 04:37:26.623511 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:26.623781 kubelet[2715]: E0621 04:37:26.623758 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:26.639783 kubelet[2715]: I0621 04:37:26.639727 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.639713279 podStartE2EDuration="1.639713279s" podCreationTimestamp="2025-06-21 04:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:37:26.639248797 +0000 UTC m=+1.101397481" watchObservedRunningTime="2025-06-21 04:37:26.639713279 +0000 UTC m=+1.101861963" Jun 21 04:37:26.650409 kubelet[2715]: I0621 04:37:26.650322 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.650311672 podStartE2EDuration="1.650311672s" podCreationTimestamp="2025-06-21 04:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:37:26.650123549 +0000 UTC m=+1.112272223" watchObservedRunningTime="2025-06-21 04:37:26.650311672 +0000 UTC m=+1.112460356" Jun 21 04:37:26.650507 kubelet[2715]: I0621 04:37:26.650428 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.650409486 podStartE2EDuration="1.650409486s" podCreationTimestamp="2025-06-21 04:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:37:26.64455379 +0000 UTC m=+1.106702474" watchObservedRunningTime="2025-06-21 04:37:26.650409486 +0000 UTC m=+1.112558170" Jun 21 04:37:27.417443 sudo[1799]: pam_unix(sudo:session): session closed for user root Jun 21 04:37:27.418690 sshd[1798]: Connection closed by 10.0.0.1 port 36360 Jun 21 04:37:27.419003 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Jun 21 04:37:27.422410 systemd[1]: sshd@6-10.0.0.30:22-10.0.0.1:36360.service: Deactivated successfully. Jun 21 04:37:27.424548 systemd[1]: session-7.scope: Deactivated successfully. 
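The pod_startup_latency_tracker records above report podStartSLOduration together with podCreationTimestamp and the observed running time; for these static pods the pull timestamps are zero, so the duration reduces to the running-time observation minus the creation timestamp. A short Python sketch reproducing the kube-scheduler figure from the timestamps in the log (both strings are UTC, so the offset is dropped during parsing):

```python
from datetime import datetime

def parse_k8s_time(ts: str):
    """Split '2025-06-21 04:37:26.639713279 +0000 UTC' into (datetime to the minute, seconds)."""
    date, clock = ts.replace(" +0000 UTC", "").split(" ")
    h, m, s = clock.split(":")
    return datetime.fromisoformat(f"{date}T{h}:{m}:00"), float(s)

d0, s0 = parse_k8s_time("2025-06-21 04:37:25 +0000 UTC")
d1, s1 = parse_k8s_time("2025-06-21 04:37:26.639713279 +0000 UTC")
print(f"{(d1 - d0).total_seconds() + (s1 - s0):.9f}s")  # 1.639713279s, matching podStartSLOduration
```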
Jun 21 04:37:27.424817 systemd[1]: session-7.scope: Consumed 4.232s CPU time, 262.8M memory peak. Jun 21 04:37:27.426111 systemd-logind[1570]: Session 7 logged out. Waiting for processes to exit. Jun 21 04:37:27.427287 systemd-logind[1570]: Removed session 7. Jun 21 04:37:27.624572 kubelet[2715]: E0621 04:37:27.624518 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:27.624572 kubelet[2715]: E0621 04:37:27.624564 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:28.625672 kubelet[2715]: E0621 04:37:28.625626 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:30.655171 kubelet[2715]: I0621 04:37:30.655136 2715 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 21 04:37:30.655618 kubelet[2715]: I0621 04:37:30.655585 2715 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 21 04:37:30.655648 containerd[1587]: time="2025-06-21T04:37:30.655400590Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 21 04:37:31.461449 systemd[1]: Created slice kubepods-besteffort-poddc6bf36d_f39e_4215_9137_99c6af3df7f8.slice - libcontainer container kubepods-besteffort-poddc6bf36d_f39e_4215_9137_99c6af3df7f8.slice. Jun 21 04:37:31.482709 systemd[1]: Created slice kubepods-burstable-podcdf68a01_59a3_42da_8481_9d5017e34364.slice - libcontainer container kubepods-burstable-podcdf68a01_59a3_42da_8481_9d5017e34364.slice. 
Jun 21 04:37:31.534492 kubelet[2715]: I0621 04:37:31.534436 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l4zl\" (UniqueName: \"kubernetes.io/projected/cdf68a01-59a3-42da-8481-9d5017e34364-kube-api-access-6l4zl\") pod \"cilium-l87bf\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " pod="kube-system/cilium-l87bf" Jun 21 04:37:31.534492 kubelet[2715]: I0621 04:37:31.534481 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbqcz\" (UniqueName: \"kubernetes.io/projected/dc6bf36d-f39e-4215-9137-99c6af3df7f8-kube-api-access-jbqcz\") pod \"kube-proxy-k7blc\" (UID: \"dc6bf36d-f39e-4215-9137-99c6af3df7f8\") " pod="kube-system/kube-proxy-k7blc" Jun 21 04:37:31.534492 kubelet[2715]: I0621 04:37:31.534506 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-hostproc\") pod \"cilium-l87bf\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " pod="kube-system/cilium-l87bf" Jun 21 04:37:31.534737 kubelet[2715]: I0621 04:37:31.534522 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-cilium-cgroup\") pod \"cilium-l87bf\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " pod="kube-system/cilium-l87bf" Jun 21 04:37:31.534737 kubelet[2715]: I0621 04:37:31.534537 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cdf68a01-59a3-42da-8481-9d5017e34364-hubble-tls\") pod \"cilium-l87bf\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " pod="kube-system/cilium-l87bf" Jun 21 04:37:31.534737 kubelet[2715]: I0621 04:37:31.534551 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-bpf-maps\") pod \"cilium-l87bf\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " pod="kube-system/cilium-l87bf" Jun 21 04:37:31.534737 kubelet[2715]: I0621 04:37:31.534565 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-host-proc-sys-net\") pod \"cilium-l87bf\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " pod="kube-system/cilium-l87bf" Jun 21 04:37:31.534737 kubelet[2715]: I0621 04:37:31.534582 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc6bf36d-f39e-4215-9137-99c6af3df7f8-xtables-lock\") pod \"kube-proxy-k7blc\" (UID: \"dc6bf36d-f39e-4215-9137-99c6af3df7f8\") " pod="kube-system/kube-proxy-k7blc" Jun 21 04:37:31.534737 kubelet[2715]: I0621 04:37:31.534598 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc6bf36d-f39e-4215-9137-99c6af3df7f8-lib-modules\") pod \"kube-proxy-k7blc\" (UID: \"dc6bf36d-f39e-4215-9137-99c6af3df7f8\") " pod="kube-system/kube-proxy-k7blc" Jun 21 04:37:31.534917 kubelet[2715]: I0621 04:37:31.534613 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-xtables-lock\") pod \"cilium-l87bf\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " pod="kube-system/cilium-l87bf" Jun 21 04:37:31.534917 kubelet[2715]: I0621 04:37:31.534629 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cdf68a01-59a3-42da-8481-9d5017e34364-cilium-config-path\") pod \"cilium-l87bf\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " pod="kube-system/cilium-l87bf" Jun 21 04:37:31.534917 kubelet[2715]: I0621 04:37:31.534644 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-lib-modules\") pod \"cilium-l87bf\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " pod="kube-system/cilium-l87bf" Jun 21 04:37:31.534917 kubelet[2715]: I0621 04:37:31.534660 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dc6bf36d-f39e-4215-9137-99c6af3df7f8-kube-proxy\") pod \"kube-proxy-k7blc\" (UID: \"dc6bf36d-f39e-4215-9137-99c6af3df7f8\") " pod="kube-system/kube-proxy-k7blc" Jun 21 04:37:31.534917 kubelet[2715]: I0621 04:37:31.534673 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-etc-cni-netd\") pod \"cilium-l87bf\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " pod="kube-system/cilium-l87bf" Jun 21 04:37:31.534917 kubelet[2715]: I0621 04:37:31.534689 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-host-proc-sys-kernel\") pod \"cilium-l87bf\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " pod="kube-system/cilium-l87bf" Jun 21 04:37:31.535116 kubelet[2715]: I0621 04:37:31.534732 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-cilium-run\") pod \"cilium-l87bf\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " pod="kube-system/cilium-l87bf" Jun 21 04:37:31.535116 kubelet[2715]: I0621 04:37:31.534763 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-cni-path\") pod \"cilium-l87bf\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " pod="kube-system/cilium-l87bf" Jun 21 04:37:31.535116 kubelet[2715]: I0621 04:37:31.534779 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cdf68a01-59a3-42da-8481-9d5017e34364-clustermesh-secrets\") pod \"cilium-l87bf\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " pod="kube-system/cilium-l87bf" Jun 21 04:37:31.673756 systemd[1]: Created slice kubepods-besteffort-podbc820e12_b1cf_4d89_b20e_50f0dc5643a5.slice - libcontainer container kubepods-besteffort-podbc820e12_b1cf_4d89_b20e_50f0dc5643a5.slice. 
Jun 21 04:37:31.736387 kubelet[2715]: I0621 04:37:31.736242 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc820e12-b1cf-4d89-b20e-50f0dc5643a5-cilium-config-path\") pod \"cilium-operator-5d85765b45-lhh2s\" (UID: \"bc820e12-b1cf-4d89-b20e-50f0dc5643a5\") " pod="kube-system/cilium-operator-5d85765b45-lhh2s" Jun 21 04:37:31.736387 kubelet[2715]: I0621 04:37:31.736289 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckhvt\" (UniqueName: \"kubernetes.io/projected/bc820e12-b1cf-4d89-b20e-50f0dc5643a5-kube-api-access-ckhvt\") pod \"cilium-operator-5d85765b45-lhh2s\" (UID: \"bc820e12-b1cf-4d89-b20e-50f0dc5643a5\") " pod="kube-system/cilium-operator-5d85765b45-lhh2s" Jun 21 04:37:31.780709 kubelet[2715]: E0621 04:37:31.780656 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:31.781572 containerd[1587]: time="2025-06-21T04:37:31.781524780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k7blc,Uid:dc6bf36d-f39e-4215-9137-99c6af3df7f8,Namespace:kube-system,Attempt:0,}" Jun 21 04:37:31.788653 kubelet[2715]: E0621 04:37:31.788618 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:31.789293 containerd[1587]: time="2025-06-21T04:37:31.789229808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l87bf,Uid:cdf68a01-59a3-42da-8481-9d5017e34364,Namespace:kube-system,Attempt:0,}" Jun 21 04:37:31.966938 containerd[1587]: time="2025-06-21T04:37:31.966827930Z" level=info msg="connecting to shim dc4e62d16ccbafc12c2aa73022d50540508b3c287b1935e482a9ee5782558034" address="unix:///run/containerd/s/3f69f36f7d0024f33d60fd1735b1310efa35e439b95ba0f985638f6169b26fd0" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:37:31.968166 containerd[1587]: time="2025-06-21T04:37:31.968130191Z" level=info msg="connecting to shim 69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1" address="unix:///run/containerd/s/ae780532330d0348cf8769754f0a2ea42ac79fe6c1108583bfd1a42df70b5817" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:37:31.977549 kubelet[2715]: E0621 04:37:31.977509 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:31.978318 containerd[1587]: time="2025-06-21T04:37:31.978277329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lhh2s,Uid:bc820e12-b1cf-4d89-b20e-50f0dc5643a5,Namespace:kube-system,Attempt:0,}" Jun 21 04:37:31.994557 systemd[1]: Started cri-containerd-69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1.scope - libcontainer container 69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1. Jun 21 04:37:31.997592 systemd[1]: Started cri-containerd-dc4e62d16ccbafc12c2aa73022d50540508b3c287b1935e482a9ee5782558034.scope - libcontainer container dc4e62d16ccbafc12c2aa73022d50540508b3c287b1935e482a9ee5782558034. 
Jun 21 04:37:31.999316 containerd[1587]: time="2025-06-21T04:37:31.999280405Z" level=info msg="connecting to shim f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79" address="unix:///run/containerd/s/d0a2a5845c78771ff86e8cc9afa91112d5ab5e477caa23648fe3b7a04d2d4f1f" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:37:32.025230 systemd[1]: Started cri-containerd-f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79.scope - libcontainer container f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79. Jun 21 04:37:32.029787 containerd[1587]: time="2025-06-21T04:37:32.029749288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l87bf,Uid:cdf68a01-59a3-42da-8481-9d5017e34364,Namespace:kube-system,Attempt:0,} returns sandbox id \"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\"" Jun 21 04:37:32.030492 kubelet[2715]: E0621 04:37:32.030470 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:32.031750 containerd[1587]: time="2025-06-21T04:37:32.031721381Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 21 04:37:32.033080 containerd[1587]: time="2025-06-21T04:37:32.033047073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k7blc,Uid:dc6bf36d-f39e-4215-9137-99c6af3df7f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc4e62d16ccbafc12c2aa73022d50540508b3c287b1935e482a9ee5782558034\"" Jun 21 04:37:32.034282 kubelet[2715]: E0621 04:37:32.034268 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:32.036242 containerd[1587]: time="2025-06-21T04:37:32.036217422Z" level=info msg="CreateContainer within sandbox \"dc4e62d16ccbafc12c2aa73022d50540508b3c287b1935e482a9ee5782558034\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 21 04:37:32.051461 containerd[1587]: time="2025-06-21T04:37:32.050588762Z" level=info msg="Container 092da2c955b9ce9de77b7bc8a16c6ae4f3745e324aeedc459cba26315d4481c0: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:37:32.059145 containerd[1587]: time="2025-06-21T04:37:32.059100777Z" level=info msg="CreateContainer within sandbox \"dc4e62d16ccbafc12c2aa73022d50540508b3c287b1935e482a9ee5782558034\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"092da2c955b9ce9de77b7bc8a16c6ae4f3745e324aeedc459cba26315d4481c0\"" Jun 21 04:37:32.060286 containerd[1587]: time="2025-06-21T04:37:32.060263636Z" level=info msg="StartContainer for \"092da2c955b9ce9de77b7bc8a16c6ae4f3745e324aeedc459cba26315d4481c0\"" Jun 21 04:37:32.061934 containerd[1587]: time="2025-06-21T04:37:32.061912178Z" level=info msg="connecting to shim 092da2c955b9ce9de77b7bc8a16c6ae4f3745e324aeedc459cba26315d4481c0" address="unix:///run/containerd/s/3f69f36f7d0024f33d60fd1735b1310efa35e439b95ba0f985638f6169b26fd0" protocol=ttrpc version=3 Jun 21 04:37:32.070312 containerd[1587]: time="2025-06-21T04:37:32.070275596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lhh2s,Uid:bc820e12-b1cf-4d89-b20e-50f0dc5643a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79\"" Jun 21 04:37:32.071297 kubelet[2715]: E0621 04:37:32.071277 
2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:32.095546 systemd[1]: Started cri-containerd-092da2c955b9ce9de77b7bc8a16c6ae4f3745e324aeedc459cba26315d4481c0.scope - libcontainer container 092da2c955b9ce9de77b7bc8a16c6ae4f3745e324aeedc459cba26315d4481c0. Jun 21 04:37:32.139138 containerd[1587]: time="2025-06-21T04:37:32.139102095Z" level=info msg="StartContainer for \"092da2c955b9ce9de77b7bc8a16c6ae4f3745e324aeedc459cba26315d4481c0\" returns successfully" Jun 21 04:37:32.632746 kubelet[2715]: E0621 04:37:32.632695 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:32.642238 kubelet[2715]: I0621 04:37:32.642092 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k7blc" podStartSLOduration=1.642073766 podStartE2EDuration="1.642073766s" podCreationTimestamp="2025-06-21 04:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:37:32.641749333 +0000 UTC m=+7.103898017" watchObservedRunningTime="2025-06-21 04:37:32.642073766 +0000 UTC m=+7.104222450" Jun 21 04:37:34.226442 kubelet[2715]: E0621 04:37:34.225178 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:34.638477 kubelet[2715]: E0621 04:37:34.638361 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:35.565066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1038271803.mount: Deactivated successfully. 
Jun 21 04:37:37.335882 kubelet[2715]: E0621 04:37:37.335847 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:38.199530 kubelet[2715]: E0621 04:37:38.199440 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:38.644368 kubelet[2715]: E0621 04:37:38.644252 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:38.883369 containerd[1587]: time="2025-06-21T04:37:38.883315575Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:38.884059 containerd[1587]: time="2025-06-21T04:37:38.884031709Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 21 04:37:38.885206 containerd[1587]: time="2025-06-21T04:37:38.885178081Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:38.886463 containerd[1587]: time="2025-06-21T04:37:38.886439734Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.854687155s" Jun 21 04:37:38.886512 containerd[1587]: time="2025-06-21T04:37:38.886463941Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 21 04:37:38.887267 containerd[1587]: time="2025-06-21T04:37:38.887203840Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 21 04:37:38.888127 containerd[1587]: time="2025-06-21T04:37:38.888094275Z" level=info msg="CreateContainer within sandbox \"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 21 04:37:38.897339 containerd[1587]: time="2025-06-21T04:37:38.897248878Z" level=info msg="Container df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:37:38.903510 containerd[1587]: time="2025-06-21T04:37:38.903472661Z" level=info msg="CreateContainer within sandbox \"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f\"" Jun 21 04:37:38.904136 containerd[1587]: time="2025-06-21T04:37:38.903916366Z" level=info msg="StartContainer for \"df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f\"" Jun 21 04:37:38.904840 containerd[1587]: 
time="2025-06-21T04:37:38.904812884Z" level=info msg="connecting to shim df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f" address="unix:///run/containerd/s/ae780532330d0348cf8769754f0a2ea42ac79fe6c1108583bfd1a42df70b5817" protocol=ttrpc version=3 Jun 21 04:37:38.959546 systemd[1]: Started cri-containerd-df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f.scope - libcontainer container df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f. Jun 21 04:37:38.998896 systemd[1]: cri-containerd-df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f.scope: Deactivated successfully. Jun 21 04:37:39.000530 containerd[1587]: time="2025-06-21T04:37:39.000493511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f\" id:\"df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f\" pid:3136 exited_at:{seconds:1750480658 nanos:999990373}" Jun 21 04:37:39.045511 containerd[1587]: time="2025-06-21T04:37:39.045458432Z" level=info msg="received exit event container_id:\"df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f\" id:\"df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f\" pid:3136 exited_at:{seconds:1750480658 nanos:999990373}" Jun 21 04:37:39.046393 containerd[1587]: time="2025-06-21T04:37:39.046358454Z" level=info msg="StartContainer for \"df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f\" returns successfully" Jun 21 04:37:39.067215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f-rootfs.mount: Deactivated successfully. Jun 21 04:37:39.646677 kubelet[2715]: E0621 04:37:39.646630 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:39.649056 containerd[1587]: time="2025-06-21T04:37:39.648969635Z" level=info msg="CreateContainer within sandbox \"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 21 04:37:39.666838 containerd[1587]: time="2025-06-21T04:37:39.666774922Z" level=info msg="Container f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:37:39.677040 containerd[1587]: time="2025-06-21T04:37:39.676955601Z" level=info msg="CreateContainer within sandbox \"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae\"" Jun 21 04:37:39.678985 containerd[1587]: time="2025-06-21T04:37:39.678961537Z" level=info msg="StartContainer for \"f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae\"" Jun 21 04:37:39.681679 containerd[1587]: time="2025-06-21T04:37:39.681481742Z" level=info msg="connecting to shim f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae" address="unix:///run/containerd/s/ae780532330d0348cf8769754f0a2ea42ac79fe6c1108583bfd1a42df70b5817" protocol=ttrpc version=3 Jun 21 04:37:39.705574 systemd[1]: Started cri-containerd-f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae.scope - libcontainer container f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae. 
Jun 21 04:37:39.736540 containerd[1587]: time="2025-06-21T04:37:39.736359160Z" level=info msg="StartContainer for \"f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae\" returns successfully" Jun 21 04:37:39.749848 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 21 04:37:39.750168 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 21 04:37:39.750372 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 21 04:37:39.751922 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 04:37:39.753895 systemd[1]: cri-containerd-f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae.scope: Deactivated successfully. Jun 21 04:37:39.754836 containerd[1587]: time="2025-06-21T04:37:39.754799064Z" level=info msg="received exit event container_id:\"f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae\" id:\"f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae\" pid:3182 exited_at:{seconds:1750480659 nanos:754275978}" Jun 21 04:37:39.754937 containerd[1587]: time="2025-06-21T04:37:39.754896398Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae\" id:\"f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae\" pid:3182 exited_at:{seconds:1750480659 nanos:754275978}" Jun 21 04:37:39.787322 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 04:37:40.650812 kubelet[2715]: E0621 04:37:40.650660 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:40.657382 containerd[1587]: time="2025-06-21T04:37:40.657345630Z" level=info msg="CreateContainer within sandbox \"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 21 04:37:40.658321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561038303.mount: Deactivated successfully. Jun 21 04:37:40.686289 containerd[1587]: time="2025-06-21T04:37:40.686241608Z" level=info msg="Container 4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:37:40.689960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4101069302.mount: Deactivated successfully. Jun 21 04:37:40.699131 containerd[1587]: time="2025-06-21T04:37:40.699092447Z" level=info msg="CreateContainer within sandbox \"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a\"" Jun 21 04:37:40.699650 containerd[1587]: time="2025-06-21T04:37:40.699608828Z" level=info msg="StartContainer for \"4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a\"" Jun 21 04:37:40.701739 containerd[1587]: time="2025-06-21T04:37:40.701707377Z" level=info msg="connecting to shim 4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a" address="unix:///run/containerd/s/ae780532330d0348cf8769754f0a2ea42ac79fe6c1108583bfd1a42df70b5817" protocol=ttrpc version=3 Jun 21 04:37:40.727659 systemd[1]: Started cri-containerd-4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a.scope - libcontainer container 4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a. 
Jun 21 04:37:40.765950 systemd[1]: cri-containerd-4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a.scope: Deactivated successfully. Jun 21 04:37:40.766745 containerd[1587]: time="2025-06-21T04:37:40.766655669Z" level=info msg="received exit event container_id:\"4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a\" id:\"4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a\" pid:3238 exited_at:{seconds:1750480660 nanos:766537073}" Jun 21 04:37:40.766848 containerd[1587]: time="2025-06-21T04:37:40.766815723Z" level=info msg="StartContainer for \"4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a\" returns successfully" Jun 21 04:37:40.766963 containerd[1587]: time="2025-06-21T04:37:40.766913639Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a\" id:\"4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a\" pid:3238 exited_at:{seconds:1750480660 nanos:766537073}" Jun 21 04:37:41.121038 containerd[1587]: time="2025-06-21T04:37:41.120983720Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:41.121767 containerd[1587]: time="2025-06-21T04:37:41.121714889Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 21 04:37:41.122761 containerd[1587]: time="2025-06-21T04:37:41.122729246Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:37:41.123831 containerd[1587]: time="2025-06-21T04:37:41.123786154Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.236539873s" Jun 21 04:37:41.123831 containerd[1587]: time="2025-06-21T04:37:41.123823925Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 21 04:37:41.125556 containerd[1587]: time="2025-06-21T04:37:41.125532831Z" level=info msg="CreateContainer within sandbox \"f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 21 04:37:41.132231 containerd[1587]: time="2025-06-21T04:37:41.132203541Z" level=info msg="Container 43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:37:41.141173 containerd[1587]: time="2025-06-21T04:37:41.141128763Z" level=info msg="CreateContainer within sandbox \"f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\"" Jun 21 04:37:41.142247 containerd[1587]: time="2025-06-21T04:37:41.141547919Z" level=info 
msg="StartContainer for \"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\"" Jun 21 04:37:41.142247 containerd[1587]: time="2025-06-21T04:37:41.142212020Z" level=info msg="connecting to shim 43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a" address="unix:///run/containerd/s/d0a2a5845c78771ff86e8cc9afa91112d5ab5e477caa23648fe3b7a04d2d4f1f" protocol=ttrpc version=3 Jun 21 04:37:41.166593 systemd[1]: Started cri-containerd-43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a.scope - libcontainer container 43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a. Jun 21 04:37:41.196973 containerd[1587]: time="2025-06-21T04:37:41.196927555Z" level=info msg="StartContainer for \"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\" returns successfully" Jun 21 04:37:41.659678 kubelet[2715]: E0621 04:37:41.659592 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:41.662700 kubelet[2715]: E0621 04:37:41.662653 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:41.663363 containerd[1587]: time="2025-06-21T04:37:41.662982152Z" level=info msg="CreateContainer within sandbox \"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 21 04:37:41.675737 containerd[1587]: time="2025-06-21T04:37:41.675616509Z" level=info msg="Container 32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:37:41.685082 containerd[1587]: time="2025-06-21T04:37:41.684958883Z" level=info msg="CreateContainer within sandbox \"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c\"" Jun 21 04:37:41.685879 containerd[1587]: time="2025-06-21T04:37:41.685846830Z" level=info msg="StartContainer for \"32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c\"" Jun 21 04:37:41.687516 containerd[1587]: time="2025-06-21T04:37:41.687182377Z" level=info msg="connecting to shim 32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c" address="unix:///run/containerd/s/ae780532330d0348cf8769754f0a2ea42ac79fe6c1108583bfd1a42df70b5817" protocol=ttrpc version=3 Jun 21 04:37:41.725697 systemd[1]: Started cri-containerd-32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c.scope - libcontainer container 32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c. Jun 21 04:37:41.784461 systemd[1]: cri-containerd-32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c.scope: Deactivated successfully. 
Jun 21 04:37:41.785789 containerd[1587]: time="2025-06-21T04:37:41.785742578Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c\" id:\"32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c\" pid:3321 exited_at:{seconds:1750480661 nanos:785182956}" Jun 21 04:37:41.788583 containerd[1587]: time="2025-06-21T04:37:41.788557956Z" level=info msg="received exit event container_id:\"32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c\" id:\"32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c\" pid:3321 exited_at:{seconds:1750480661 nanos:785182956}" Jun 21 04:37:41.797820 containerd[1587]: time="2025-06-21T04:37:41.797786194Z" level=info msg="StartContainer for \"32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c\" returns successfully" Jun 21 04:37:42.668787 kubelet[2715]: E0621 04:37:42.668731 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:42.669301 kubelet[2715]: E0621 04:37:42.668864 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:42.675081 containerd[1587]: time="2025-06-21T04:37:42.675039458Z" level=info msg="CreateContainer within sandbox \"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 21 04:37:42.689246 containerd[1587]: time="2025-06-21T04:37:42.689054016Z" level=info msg="Container 83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:37:42.690888 kubelet[2715]: I0621 04:37:42.690718 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-lhh2s" podStartSLOduration=2.638058662 podStartE2EDuration="11.690696384s" podCreationTimestamp="2025-06-21 04:37:31 +0000 UTC" firstStartedPulling="2025-06-21 04:37:32.071770483 +0000 UTC m=+6.533919167" lastFinishedPulling="2025-06-21 04:37:41.124408205 +0000 UTC m=+15.586556889" observedRunningTime="2025-06-21 04:37:41.707485069 +0000 UTC m=+16.169633773" watchObservedRunningTime="2025-06-21 04:37:42.690696384 +0000 UTC m=+17.152845068" Jun 21 04:37:42.697002 containerd[1587]: time="2025-06-21T04:37:42.696964189Z" level=info msg="CreateContainer within sandbox \"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\"" Jun 21 04:37:42.697452 containerd[1587]: time="2025-06-21T04:37:42.697430835Z" level=info msg="StartContainer for \"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\"" Jun 21 04:37:42.698388 containerd[1587]: time="2025-06-21T04:37:42.698357022Z" level=info msg="connecting to shim 83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772" address="unix:///run/containerd/s/ae780532330d0348cf8769754f0a2ea42ac79fe6c1108583bfd1a42df70b5817" protocol=ttrpc version=3 Jun 21 04:37:42.717538 systemd[1]: Started cri-containerd-83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772.scope - libcontainer container 83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772. 
Jun 21 04:37:42.754244 containerd[1587]: time="2025-06-21T04:37:42.754207261Z" level=info msg="StartContainer for \"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\" returns successfully" Jun 21 04:37:42.823532 containerd[1587]: time="2025-06-21T04:37:42.823400233Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\" id:\"2221ee55c01cb8409887833d54b7c113828385a5729d9467045273b2bc6dfadf\" pid:3389 exited_at:{seconds:1750480662 nanos:823105283}" Jun 21 04:37:42.871969 kubelet[2715]: I0621 04:37:42.871938 2715 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jun 21 04:37:42.902814 kubelet[2715]: I0621 04:37:42.902555 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63b634e2-e92e-4548-8f1a-5f758d0fbcb0-config-volume\") pod \"coredns-7c65d6cfc9-qrt45\" (UID: \"63b634e2-e92e-4548-8f1a-5f758d0fbcb0\") " pod="kube-system/coredns-7c65d6cfc9-qrt45" Jun 21 04:37:42.902814 kubelet[2715]: I0621 04:37:42.902735 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcm48\" (UniqueName: \"kubernetes.io/projected/63b634e2-e92e-4548-8f1a-5f758d0fbcb0-kube-api-access-pcm48\") pod \"coredns-7c65d6cfc9-qrt45\" (UID: \"63b634e2-e92e-4548-8f1a-5f758d0fbcb0\") " pod="kube-system/coredns-7c65d6cfc9-qrt45" Jun 21 04:37:42.902988 kubelet[2715]: I0621 04:37:42.902899 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckqt7\" (UniqueName: \"kubernetes.io/projected/772e6c0e-3469-45b0-90d6-5e2cb28cee46-kube-api-access-ckqt7\") pod \"coredns-7c65d6cfc9-sl29g\" (UID: \"772e6c0e-3469-45b0-90d6-5e2cb28cee46\") " pod="kube-system/coredns-7c65d6cfc9-sl29g" Jun 21 04:37:42.902988 kubelet[2715]: I0621 04:37:42.902925 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/772e6c0e-3469-45b0-90d6-5e2cb28cee46-config-volume\") pod \"coredns-7c65d6cfc9-sl29g\" (UID: \"772e6c0e-3469-45b0-90d6-5e2cb28cee46\") " pod="kube-system/coredns-7c65d6cfc9-sl29g" Jun 21 04:37:42.912288 systemd[1]: Created slice kubepods-burstable-pod772e6c0e_3469_45b0_90d6_5e2cb28cee46.slice - libcontainer container kubepods-burstable-pod772e6c0e_3469_45b0_90d6_5e2cb28cee46.slice. Jun 21 04:37:42.918459 systemd[1]: Created slice kubepods-burstable-pod63b634e2_e92e_4548_8f1a_5f758d0fbcb0.slice - libcontainer container kubepods-burstable-pod63b634e2_e92e_4548_8f1a_5f758d0fbcb0.slice. 
Jun 21 04:37:43.216312 kubelet[2715]: E0621 04:37:43.216278 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:43.221967 kubelet[2715]: E0621 04:37:43.221533 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:43.222540 containerd[1587]: time="2025-06-21T04:37:43.222503763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qrt45,Uid:63b634e2-e92e-4548-8f1a-5f758d0fbcb0,Namespace:kube-system,Attempt:0,}" Jun 21 04:37:43.222739 containerd[1587]: time="2025-06-21T04:37:43.222699514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sl29g,Uid:772e6c0e-3469-45b0-90d6-5e2cb28cee46,Namespace:kube-system,Attempt:0,}" Jun 21 04:37:43.353149 update_engine[1571]: I20250621 04:37:43.353073 1571 update_attempter.cc:509] Updating boot flags... Jun 21 04:37:43.679215 kubelet[2715]: E0621 04:37:43.679128 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:43.694010 kubelet[2715]: I0621 04:37:43.693940 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l87bf" podStartSLOduration=5.838163737 podStartE2EDuration="12.693920534s" podCreationTimestamp="2025-06-21 04:37:31 +0000 UTC" firstStartedPulling="2025-06-21 04:37:32.031284683 +0000 UTC m=+6.493433357" lastFinishedPulling="2025-06-21 04:37:38.88704147 +0000 UTC m=+13.349190154" observedRunningTime="2025-06-21 04:37:43.693887722 +0000 UTC m=+18.156036416" watchObservedRunningTime="2025-06-21 04:37:43.693920534 +0000 UTC m=+18.156069218" Jun 21 04:37:44.681276 kubelet[2715]: E0621 04:37:44.681233 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:44.960272 systemd-networkd[1492]: cilium_host: Link UP Jun 21 04:37:44.960505 systemd-networkd[1492]: cilium_net: Link UP Jun 21 04:37:44.960748 systemd-networkd[1492]: cilium_net: Gained carrier Jun 21 04:37:44.962209 systemd-networkd[1492]: cilium_host: Gained carrier Jun 21 04:37:45.056593 systemd-networkd[1492]: cilium_vxlan: Link UP Jun 21 04:37:45.056604 systemd-networkd[1492]: cilium_vxlan: Gained carrier Jun 21 04:37:45.257490 kernel: NET: Registered PF_ALG protocol family Jun 21 04:37:45.413631 systemd-networkd[1492]: cilium_net: Gained IPv6LL Jun 21 04:37:45.437641 systemd-networkd[1492]: cilium_host: Gained IPv6LL Jun 21 04:37:45.682845 kubelet[2715]: E0621 04:37:45.682809 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:45.857282 systemd-networkd[1492]: lxc_health: Link UP Jun 21 04:37:45.869575 systemd-networkd[1492]: lxc_health: Gained carrier Jun 21 04:37:46.279985 systemd-networkd[1492]: lxcb7321ece6d1c: Link UP Jun 21 04:37:46.280664 kernel: eth0: renamed from tmp422b9 Jun 21 04:37:46.282371 systemd-networkd[1492]: lxcb7321ece6d1c: Gained carrier Jun 21 04:37:46.306445 kernel: eth0: renamed from tmp1d631 Jun 21 04:37:46.307260 systemd-networkd[1492]: lxc6e4ea0fcb747: Link UP Jun 21 04:37:46.307654 
systemd-networkd[1492]: lxc6e4ea0fcb747: Gained carrier Jun 21 04:37:46.765641 systemd-networkd[1492]: cilium_vxlan: Gained IPv6LL Jun 21 04:37:46.958610 systemd-networkd[1492]: lxc_health: Gained IPv6LL Jun 21 04:37:47.790439 kubelet[2715]: E0621 04:37:47.790298 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:47.853586 systemd-networkd[1492]: lxc6e4ea0fcb747: Gained IPv6LL Jun 21 04:37:48.173734 systemd-networkd[1492]: lxcb7321ece6d1c: Gained IPv6LL Jun 21 04:37:48.687481 kubelet[2715]: E0621 04:37:48.687407 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:49.591762 containerd[1587]: time="2025-06-21T04:37:49.591709690Z" level=info msg="connecting to shim 1d631f32f13d73de74e489ab88f8725d60077be77ff691c68b9a7e4cf082d8a2" address="unix:///run/containerd/s/34ea370dff0185aeafa0da785c033960ec6d2c8cb744fcda1a13898ee3312324" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:37:49.593208 containerd[1587]: time="2025-06-21T04:37:49.593164139Z" level=info msg="connecting to shim 422b9017b55c9de76d75b0aecba6cf7ec4c9131555ed61e695d4f33ff156f15d" address="unix:///run/containerd/s/f1755300aa38e974ad23a1b431e91c87499ed72b8aca1bb355e9fa5cfa88b7a7" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:37:49.617551 systemd[1]: Started cri-containerd-1d631f32f13d73de74e489ab88f8725d60077be77ff691c68b9a7e4cf082d8a2.scope - libcontainer container 1d631f32f13d73de74e489ab88f8725d60077be77ff691c68b9a7e4cf082d8a2. Jun 21 04:37:49.620985 systemd[1]: Started cri-containerd-422b9017b55c9de76d75b0aecba6cf7ec4c9131555ed61e695d4f33ff156f15d.scope - libcontainer container 422b9017b55c9de76d75b0aecba6cf7ec4c9131555ed61e695d4f33ff156f15d. 
Jun 21 04:37:49.630572 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 04:37:49.635507 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 04:37:49.659179 containerd[1587]: time="2025-06-21T04:37:49.659143789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sl29g,Uid:772e6c0e-3469-45b0-90d6-5e2cb28cee46,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d631f32f13d73de74e489ab88f8725d60077be77ff691c68b9a7e4cf082d8a2\"" Jun 21 04:37:49.663034 kubelet[2715]: E0621 04:37:49.662999 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:49.664696 containerd[1587]: time="2025-06-21T04:37:49.664656788Z" level=info msg="CreateContainer within sandbox \"1d631f32f13d73de74e489ab88f8725d60077be77ff691c68b9a7e4cf082d8a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 04:37:49.671842 containerd[1587]: time="2025-06-21T04:37:49.671811199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qrt45,Uid:63b634e2-e92e-4548-8f1a-5f758d0fbcb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"422b9017b55c9de76d75b0aecba6cf7ec4c9131555ed61e695d4f33ff156f15d\"" Jun 21 04:37:49.672561 kubelet[2715]: E0621 04:37:49.672543 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:49.674535 containerd[1587]: time="2025-06-21T04:37:49.674290385Z" level=info msg="CreateContainer within sandbox \"422b9017b55c9de76d75b0aecba6cf7ec4c9131555ed61e695d4f33ff156f15d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 04:37:49.680890 containerd[1587]: time="2025-06-21T04:37:49.680851966Z" level=info msg="Container 26e94ae018bf7f7e5ea3b7e5442ee273fba6b2d313c77c2782c4a5b08e12b54c: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:37:49.688231 containerd[1587]: time="2025-06-21T04:37:49.688174315Z" level=info msg="CreateContainer within sandbox \"1d631f32f13d73de74e489ab88f8725d60077be77ff691c68b9a7e4cf082d8a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"26e94ae018bf7f7e5ea3b7e5442ee273fba6b2d313c77c2782c4a5b08e12b54c\"" Jun 21 04:37:49.688497 containerd[1587]: time="2025-06-21T04:37:49.688479412Z" level=info msg="StartContainer for \"26e94ae018bf7f7e5ea3b7e5442ee273fba6b2d313c77c2782c4a5b08e12b54c\"" Jun 21 04:37:49.689379 containerd[1587]: time="2025-06-21T04:37:49.689356711Z" level=info msg="connecting to shim 26e94ae018bf7f7e5ea3b7e5442ee273fba6b2d313c77c2782c4a5b08e12b54c" address="unix:///run/containerd/s/34ea370dff0185aeafa0da785c033960ec6d2c8cb744fcda1a13898ee3312324" protocol=ttrpc version=3 Jun 21 04:37:49.691790 kubelet[2715]: E0621 04:37:49.691767 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:49.692556 containerd[1587]: time="2025-06-21T04:37:49.692528004Z" level=info msg="Container 79fee59e5f3ebf2dbad1dc830ca97be414990afb9f525a0a7667095d1077c491: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:37:49.700237 containerd[1587]: time="2025-06-21T04:37:49.700196057Z" level=info msg="CreateContainer within sandbox 
\"422b9017b55c9de76d75b0aecba6cf7ec4c9131555ed61e695d4f33ff156f15d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79fee59e5f3ebf2dbad1dc830ca97be414990afb9f525a0a7667095d1077c491\"" Jun 21 04:37:49.701085 containerd[1587]: time="2025-06-21T04:37:49.701017839Z" level=info msg="StartContainer for \"79fee59e5f3ebf2dbad1dc830ca97be414990afb9f525a0a7667095d1077c491\"" Jun 21 04:37:49.702055 containerd[1587]: time="2025-06-21T04:37:49.702028549Z" level=info msg="connecting to shim 79fee59e5f3ebf2dbad1dc830ca97be414990afb9f525a0a7667095d1077c491" address="unix:///run/containerd/s/f1755300aa38e974ad23a1b431e91c87499ed72b8aca1bb355e9fa5cfa88b7a7" protocol=ttrpc version=3 Jun 21 04:37:49.719565 systemd[1]: Started cri-containerd-26e94ae018bf7f7e5ea3b7e5442ee273fba6b2d313c77c2782c4a5b08e12b54c.scope - libcontainer container 26e94ae018bf7f7e5ea3b7e5442ee273fba6b2d313c77c2782c4a5b08e12b54c. Jun 21 04:37:49.722511 systemd[1]: Started cri-containerd-79fee59e5f3ebf2dbad1dc830ca97be414990afb9f525a0a7667095d1077c491.scope - libcontainer container 79fee59e5f3ebf2dbad1dc830ca97be414990afb9f525a0a7667095d1077c491. Jun 21 04:37:49.753967 containerd[1587]: time="2025-06-21T04:37:49.753930002Z" level=info msg="StartContainer for \"26e94ae018bf7f7e5ea3b7e5442ee273fba6b2d313c77c2782c4a5b08e12b54c\" returns successfully" Jun 21 04:37:49.763190 containerd[1587]: time="2025-06-21T04:37:49.763157401Z" level=info msg="StartContainer for \"79fee59e5f3ebf2dbad1dc830ca97be414990afb9f525a0a7667095d1077c491\" returns successfully" Jun 21 04:37:50.694528 kubelet[2715]: E0621 04:37:50.694493 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:50.696496 kubelet[2715]: E0621 04:37:50.696473 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:50.705439 kubelet[2715]: I0621 04:37:50.704477 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sl29g" podStartSLOduration=19.704460824999998 podStartE2EDuration="19.704460825s" podCreationTimestamp="2025-06-21 04:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:37:50.703892492 +0000 UTC m=+25.166041216" watchObservedRunningTime="2025-06-21 04:37:50.704460825 +0000 UTC m=+25.166609499" Jun 21 04:37:50.714551 kubelet[2715]: I0621 04:37:50.714489 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qrt45" podStartSLOduration=19.714469573 podStartE2EDuration="19.714469573s" podCreationTimestamp="2025-06-21 04:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:37:50.71361041 +0000 UTC m=+25.175759094" watchObservedRunningTime="2025-06-21 04:37:50.714469573 +0000 UTC m=+25.176618267" Jun 21 04:37:51.576074 systemd[1]: Started sshd@7-10.0.0.30:22-10.0.0.1:51400.service - OpenSSH per-connection server daemon (10.0.0.1:51400). 
Jun 21 04:37:51.637710 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 51400 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:37:51.639236 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:37:51.643360 systemd-logind[1570]: New session 8 of user core. Jun 21 04:37:51.650557 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 21 04:37:51.709545 kubelet[2715]: E0621 04:37:51.709513 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:51.709904 kubelet[2715]: E0621 04:37:51.709586 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:51.772334 sshd[4049]: Connection closed by 10.0.0.1 port 51400 Jun 21 04:37:51.772627 sshd-session[4047]: pam_unix(sshd:session): session closed for user core Jun 21 04:37:51.776151 systemd[1]: sshd@7-10.0.0.30:22-10.0.0.1:51400.service: Deactivated successfully. Jun 21 04:37:51.777981 systemd[1]: session-8.scope: Deactivated successfully. Jun 21 04:37:51.778688 systemd-logind[1570]: Session 8 logged out. Waiting for processes to exit. Jun 21 04:37:51.779892 systemd-logind[1570]: Removed session 8. Jun 21 04:37:52.711787 kubelet[2715]: E0621 04:37:52.711515 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:52.711787 kubelet[2715]: E0621 04:37:52.711622 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:37:56.785203 systemd[1]: Started sshd@8-10.0.0.30:22-10.0.0.1:38560.service - OpenSSH per-connection server daemon (10.0.0.1:38560). Jun 21 04:37:56.845856 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 38560 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:37:56.847569 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:37:56.851891 systemd-logind[1570]: New session 9 of user core. Jun 21 04:37:56.861593 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 21 04:37:56.967842 sshd[4066]: Connection closed by 10.0.0.1 port 38560 Jun 21 04:37:56.968160 sshd-session[4064]: pam_unix(sshd:session): session closed for user core Jun 21 04:37:56.971804 systemd[1]: sshd@8-10.0.0.30:22-10.0.0.1:38560.service: Deactivated successfully. Jun 21 04:37:56.973525 systemd[1]: session-9.scope: Deactivated successfully. Jun 21 04:37:56.974288 systemd-logind[1570]: Session 9 logged out. Waiting for processes to exit. Jun 21 04:37:56.975406 systemd-logind[1570]: Removed session 9. Jun 21 04:38:01.978309 systemd[1]: Started sshd@9-10.0.0.30:22-10.0.0.1:38576.service - OpenSSH per-connection server daemon (10.0.0.1:38576). Jun 21 04:38:02.038920 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 38576 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:02.040449 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:02.045340 systemd-logind[1570]: New session 10 of user core. Jun 21 04:38:02.052543 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 21 04:38:02.164443 sshd[4082]: Connection closed by 10.0.0.1 port 38576 Jun 21 04:38:02.164732 sshd-session[4080]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:02.168942 systemd[1]: sshd@9-10.0.0.30:22-10.0.0.1:38576.service: Deactivated successfully. Jun 21 04:38:02.170704 systemd[1]: session-10.scope: Deactivated successfully. Jun 21 04:38:02.171452 systemd-logind[1570]: Session 10 logged out. Waiting for processes to exit. Jun 21 04:38:02.172502 systemd-logind[1570]: Removed session 10. Jun 21 04:38:07.179618 systemd[1]: Started sshd@10-10.0.0.30:22-10.0.0.1:48668.service - OpenSSH per-connection server daemon (10.0.0.1:48668). Jun 21 04:38:07.233068 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 48668 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:07.234478 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:07.239238 systemd-logind[1570]: New session 11 of user core. Jun 21 04:38:07.249548 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 21 04:38:07.359101 sshd[4102]: Connection closed by 10.0.0.1 port 48668 Jun 21 04:38:07.359560 sshd-session[4100]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:07.373016 systemd[1]: sshd@10-10.0.0.30:22-10.0.0.1:48668.service: Deactivated successfully. Jun 21 04:38:07.374923 systemd[1]: session-11.scope: Deactivated successfully. Jun 21 04:38:07.375819 systemd-logind[1570]: Session 11 logged out. Waiting for processes to exit. Jun 21 04:38:07.378351 systemd[1]: Started sshd@11-10.0.0.30:22-10.0.0.1:48672.service - OpenSSH per-connection server daemon (10.0.0.1:48672). Jun 21 04:38:07.379093 systemd-logind[1570]: Removed session 11. Jun 21 04:38:07.433938 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 48672 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:07.435288 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:07.439606 systemd-logind[1570]: New session 12 of user core. Jun 21 04:38:07.446543 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 21 04:38:07.586388 sshd[4118]: Connection closed by 10.0.0.1 port 48672 Jun 21 04:38:07.586723 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:07.597469 systemd[1]: sshd@11-10.0.0.30:22-10.0.0.1:48672.service: Deactivated successfully. Jun 21 04:38:07.599322 systemd[1]: session-12.scope: Deactivated successfully. Jun 21 04:38:07.601988 systemd-logind[1570]: Session 12 logged out. Waiting for processes to exit. Jun 21 04:38:07.607450 systemd[1]: Started sshd@12-10.0.0.30:22-10.0.0.1:48678.service - OpenSSH per-connection server daemon (10.0.0.1:48678). Jun 21 04:38:07.609359 systemd-logind[1570]: Removed session 12. Jun 21 04:38:07.659732 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 48678 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:07.661169 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:07.665384 systemd-logind[1570]: New session 13 of user core. Jun 21 04:38:07.678537 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jun 21 04:38:07.792593 sshd[4131]: Connection closed by 10.0.0.1 port 48678 Jun 21 04:38:07.792851 sshd-session[4129]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:07.796806 systemd[1]: sshd@12-10.0.0.30:22-10.0.0.1:48678.service: Deactivated successfully. Jun 21 04:38:07.799079 systemd[1]: session-13.scope: Deactivated successfully. Jun 21 04:38:07.799897 systemd-logind[1570]: Session 13 logged out. Waiting for processes to exit. Jun 21 04:38:07.801219 systemd-logind[1570]: Removed session 13. Jun 21 04:38:12.805132 systemd[1]: Started sshd@13-10.0.0.30:22-10.0.0.1:48686.service - OpenSSH per-connection server daemon (10.0.0.1:48686). Jun 21 04:38:12.861648 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 48686 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:12.863314 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:12.867830 systemd-logind[1570]: New session 14 of user core. Jun 21 04:38:12.878571 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 21 04:38:12.990742 sshd[4148]: Connection closed by 10.0.0.1 port 48686 Jun 21 04:38:12.991121 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:12.996554 systemd[1]: sshd@13-10.0.0.30:22-10.0.0.1:48686.service: Deactivated successfully. Jun 21 04:38:12.998840 systemd[1]: session-14.scope: Deactivated successfully. Jun 21 04:38:12.999868 systemd-logind[1570]: Session 14 logged out. Waiting for processes to exit. Jun 21 04:38:13.001221 systemd-logind[1570]: Removed session 14. Jun 21 04:38:18.007403 systemd[1]: Started sshd@14-10.0.0.30:22-10.0.0.1:52698.service - OpenSSH per-connection server daemon (10.0.0.1:52698). Jun 21 04:38:18.051048 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 52698 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:18.052408 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:18.056425 systemd-logind[1570]: New session 15 of user core. Jun 21 04:38:18.067536 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 21 04:38:18.177615 sshd[4164]: Connection closed by 10.0.0.1 port 52698 Jun 21 04:38:18.178037 sshd-session[4162]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:18.191673 systemd[1]: sshd@14-10.0.0.30:22-10.0.0.1:52698.service: Deactivated successfully. Jun 21 04:38:18.193722 systemd[1]: session-15.scope: Deactivated successfully. Jun 21 04:38:18.194462 systemd-logind[1570]: Session 15 logged out. Waiting for processes to exit. Jun 21 04:38:18.197689 systemd[1]: Started sshd@15-10.0.0.30:22-10.0.0.1:52712.service - OpenSSH per-connection server daemon (10.0.0.1:52712). Jun 21 04:38:18.198308 systemd-logind[1570]: Removed session 15. Jun 21 04:38:18.271578 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 52712 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:18.273337 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:18.279863 systemd-logind[1570]: New session 16 of user core. Jun 21 04:38:18.292574 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jun 21 04:38:18.519786 sshd[4180]: Connection closed by 10.0.0.1 port 52712 Jun 21 04:38:18.520110 sshd-session[4178]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:18.538398 systemd[1]: sshd@15-10.0.0.30:22-10.0.0.1:52712.service: Deactivated successfully. Jun 21 04:38:18.540152 systemd[1]: session-16.scope: Deactivated successfully. Jun 21 04:38:18.540990 systemd-logind[1570]: Session 16 logged out. Waiting for processes to exit. Jun 21 04:38:18.543716 systemd[1]: Started sshd@16-10.0.0.30:22-10.0.0.1:52720.service - OpenSSH per-connection server daemon (10.0.0.1:52720). Jun 21 04:38:18.544863 systemd-logind[1570]: Removed session 16. Jun 21 04:38:18.608900 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 52720 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:18.610681 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:18.614905 systemd-logind[1570]: New session 17 of user core. Jun 21 04:38:18.623543 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 21 04:38:20.746388 sshd[4193]: Connection closed by 10.0.0.1 port 52720 Jun 21 04:38:20.746868 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:20.759190 systemd[1]: sshd@16-10.0.0.30:22-10.0.0.1:52720.service: Deactivated successfully. Jun 21 04:38:20.761325 systemd[1]: session-17.scope: Deactivated successfully. Jun 21 04:38:20.762207 systemd-logind[1570]: Session 17 logged out. Waiting for processes to exit. Jun 21 04:38:20.765392 systemd[1]: Started sshd@17-10.0.0.30:22-10.0.0.1:52732.service - OpenSSH per-connection server daemon (10.0.0.1:52732). Jun 21 04:38:20.766179 systemd-logind[1570]: Removed session 17. Jun 21 04:38:20.823489 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 52732 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:20.824982 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:20.829606 systemd-logind[1570]: New session 18 of user core. Jun 21 04:38:20.839543 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 21 04:38:21.105063 sshd[4214]: Connection closed by 10.0.0.1 port 52732 Jun 21 04:38:21.106827 sshd-session[4212]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:21.116239 systemd[1]: sshd@17-10.0.0.30:22-10.0.0.1:52732.service: Deactivated successfully. Jun 21 04:38:21.118234 systemd[1]: session-18.scope: Deactivated successfully. Jun 21 04:38:21.119002 systemd-logind[1570]: Session 18 logged out. Waiting for processes to exit. Jun 21 04:38:21.122118 systemd[1]: Started sshd@18-10.0.0.30:22-10.0.0.1:52742.service - OpenSSH per-connection server daemon (10.0.0.1:52742). Jun 21 04:38:21.123252 systemd-logind[1570]: Removed session 18. Jun 21 04:38:21.175194 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 52742 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:21.177178 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:21.182048 systemd-logind[1570]: New session 19 of user core. Jun 21 04:38:21.191722 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jun 21 04:38:21.305223 sshd[4228]: Connection closed by 10.0.0.1 port 52742 Jun 21 04:38:21.305573 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:21.310234 systemd[1]: sshd@18-10.0.0.30:22-10.0.0.1:52742.service: Deactivated successfully. Jun 21 04:38:21.312286 systemd[1]: session-19.scope: Deactivated successfully. Jun 21 04:38:21.313015 systemd-logind[1570]: Session 19 logged out. Waiting for processes to exit. Jun 21 04:38:21.314225 systemd-logind[1570]: Removed session 19. Jun 21 04:38:26.318652 systemd[1]: Started sshd@19-10.0.0.30:22-10.0.0.1:39038.service - OpenSSH per-connection server daemon (10.0.0.1:39038). Jun 21 04:38:26.379870 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 39038 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:26.381407 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:26.386068 systemd-logind[1570]: New session 20 of user core. Jun 21 04:38:26.396588 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 21 04:38:26.511447 sshd[4246]: Connection closed by 10.0.0.1 port 39038 Jun 21 04:38:26.513131 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:26.518444 systemd[1]: sshd@19-10.0.0.30:22-10.0.0.1:39038.service: Deactivated successfully. Jun 21 04:38:26.520264 systemd[1]: session-20.scope: Deactivated successfully. Jun 21 04:38:26.521193 systemd-logind[1570]: Session 20 logged out. Waiting for processes to exit. Jun 21 04:38:26.522354 systemd-logind[1570]: Removed session 20. Jun 21 04:38:31.523738 systemd[1]: Started sshd@20-10.0.0.30:22-10.0.0.1:39048.service - OpenSSH per-connection server daemon (10.0.0.1:39048). Jun 21 04:38:31.575227 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 39048 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:31.576726 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:31.581281 systemd-logind[1570]: New session 21 of user core. Jun 21 04:38:31.585580 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 21 04:38:31.697402 sshd[4265]: Connection closed by 10.0.0.1 port 39048 Jun 21 04:38:31.697747 sshd-session[4263]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:31.702137 systemd[1]: sshd@20-10.0.0.30:22-10.0.0.1:39048.service: Deactivated successfully. Jun 21 04:38:31.704475 systemd[1]: session-21.scope: Deactivated successfully. Jun 21 04:38:31.705315 systemd-logind[1570]: Session 21 logged out. Waiting for processes to exit. Jun 21 04:38:31.706722 systemd-logind[1570]: Removed session 21. Jun 21 04:38:36.714515 systemd[1]: Started sshd@21-10.0.0.30:22-10.0.0.1:41078.service - OpenSSH per-connection server daemon (10.0.0.1:41078). Jun 21 04:38:36.756546 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 41078 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:36.757826 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:36.762044 systemd-logind[1570]: New session 22 of user core. Jun 21 04:38:36.771570 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jun 21 04:38:36.884667 sshd[4283]: Connection closed by 10.0.0.1 port 41078 Jun 21 04:38:36.884976 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:36.888821 systemd[1]: sshd@21-10.0.0.30:22-10.0.0.1:41078.service: Deactivated successfully. Jun 21 04:38:36.890954 systemd[1]: session-22.scope: Deactivated successfully. Jun 21 04:38:36.891813 systemd-logind[1570]: Session 22 logged out. Waiting for processes to exit. Jun 21 04:38:36.893522 systemd-logind[1570]: Removed session 22. Jun 21 04:38:41.901593 systemd[1]: Started sshd@22-10.0.0.30:22-10.0.0.1:41086.service - OpenSSH per-connection server daemon (10.0.0.1:41086). Jun 21 04:38:41.954301 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 41086 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:41.955704 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:41.959861 systemd-logind[1570]: New session 23 of user core. Jun 21 04:38:41.970559 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 21 04:38:42.070661 sshd[4298]: Connection closed by 10.0.0.1 port 41086 Jun 21 04:38:42.070942 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:42.082994 systemd[1]: sshd@22-10.0.0.30:22-10.0.0.1:41086.service: Deactivated successfully. Jun 21 04:38:42.084843 systemd[1]: session-23.scope: Deactivated successfully. Jun 21 04:38:42.085586 systemd-logind[1570]: Session 23 logged out. Waiting for processes to exit. Jun 21 04:38:42.088375 systemd[1]: Started sshd@23-10.0.0.30:22-10.0.0.1:41088.service - OpenSSH per-connection server daemon (10.0.0.1:41088). Jun 21 04:38:42.089173 systemd-logind[1570]: Removed session 23. Jun 21 04:38:42.145847 sshd[4312]: Accepted publickey for core from 10.0.0.1 port 41088 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:42.147117 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:42.151166 systemd-logind[1570]: New session 24 of user core. Jun 21 04:38:42.166532 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 21 04:38:43.491898 containerd[1587]: time="2025-06-21T04:38:43.491775042Z" level=info msg="StopContainer for \"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\" with timeout 30 (s)" Jun 21 04:38:43.499397 containerd[1587]: time="2025-06-21T04:38:43.499359803Z" level=info msg="Stop container \"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\" with signal terminated" Jun 21 04:38:43.510024 systemd[1]: cri-containerd-43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a.scope: Deactivated successfully. 
Jun 21 04:38:43.511232 containerd[1587]: time="2025-06-21T04:38:43.511181434Z" level=info msg="received exit event container_id:\"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\" id:\"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\" pid:3285 exited_at:{seconds:1750480723 nanos:510866959}" Jun 21 04:38:43.511338 containerd[1587]: time="2025-06-21T04:38:43.511313628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\" id:\"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\" pid:3285 exited_at:{seconds:1750480723 nanos:510866959}" Jun 21 04:38:43.522128 containerd[1587]: time="2025-06-21T04:38:43.522089557Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 04:38:43.528807 containerd[1587]: time="2025-06-21T04:38:43.528751523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\" id:\"853bfa541790195b8094f3b7a7599430d5f607c2a8f2ce8ca45832720e2a4c73\" pid:4341 exited_at:{seconds:1750480723 nanos:528322068}" Jun 21 04:38:43.530636 containerd[1587]: time="2025-06-21T04:38:43.530404724Z" level=info msg="StopContainer for \"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\" with timeout 2 (s)" Jun 21 04:38:43.530809 containerd[1587]: time="2025-06-21T04:38:43.530772490Z" level=info msg="Stop container \"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\" with signal terminated" Jun 21 04:38:43.535792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a-rootfs.mount: Deactivated successfully. Jun 21 04:38:43.538097 systemd-networkd[1492]: lxc_health: Link DOWN Jun 21 04:38:43.538105 systemd-networkd[1492]: lxc_health: Lost carrier Jun 21 04:38:43.552445 containerd[1587]: time="2025-06-21T04:38:43.552388744Z" level=info msg="StopContainer for \"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\" returns successfully" Jun 21 04:38:43.552977 containerd[1587]: time="2025-06-21T04:38:43.552954973Z" level=info msg="StopPodSandbox for \"f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79\"" Jun 21 04:38:43.553039 containerd[1587]: time="2025-06-21T04:38:43.553010229Z" level=info msg="Container to stop \"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 04:38:43.560010 systemd[1]: cri-containerd-83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772.scope: Deactivated successfully. Jun 21 04:38:43.560444 systemd[1]: cri-containerd-83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772.scope: Consumed 6.351s CPU time, 127.5M memory peak, 724K read from disk, 13.3M written to disk. 
Jun 21 04:38:43.561880 containerd[1587]: time="2025-06-21T04:38:43.561782765Z" level=info msg="received exit event container_id:\"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\" id:\"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\" pid:3358 exited_at:{seconds:1750480723 nanos:561607317}" Jun 21 04:38:43.561880 containerd[1587]: time="2025-06-21T04:38:43.561846718Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\" id:\"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\" pid:3358 exited_at:{seconds:1750480723 nanos:561607317}" Jun 21 04:38:43.565195 systemd[1]: cri-containerd-f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79.scope: Deactivated successfully. Jun 21 04:38:43.566398 containerd[1587]: time="2025-06-21T04:38:43.566363747Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79\" id:\"f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79\" pid:2919 exit_status:137 exited_at:{seconds:1750480723 nanos:566120269}" Jun 21 04:38:43.584236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772-rootfs.mount: Deactivated successfully. Jun 21 04:38:43.593273 containerd[1587]: time="2025-06-21T04:38:43.593229268Z" level=info msg="StopContainer for \"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\" returns successfully" Jun 21 04:38:43.593849 containerd[1587]: time="2025-06-21T04:38:43.593820555Z" level=info msg="StopPodSandbox for \"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\"" Jun 21 04:38:43.593901 containerd[1587]: time="2025-06-21T04:38:43.593870752Z" level=info msg="Container to stop \"df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 04:38:43.593901 containerd[1587]: time="2025-06-21T04:38:43.593881493Z" level=info msg="Container to stop \"f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 04:38:43.593901 containerd[1587]: time="2025-06-21T04:38:43.593890450Z" level=info msg="Container to stop \"4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 04:38:43.593901 containerd[1587]: time="2025-06-21T04:38:43.593899778Z" level=info msg="Container to stop \"32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 04:38:43.594031 containerd[1587]: time="2025-06-21T04:38:43.593908394Z" level=info msg="Container to stop \"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 04:38:43.596530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79-rootfs.mount: Deactivated successfully. 
Jun 21 04:38:43.599154 containerd[1587]: time="2025-06-21T04:38:43.599129288Z" level=info msg="shim disconnected" id=f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79 namespace=k8s.io Jun 21 04:38:43.599370 containerd[1587]: time="2025-06-21T04:38:43.599344972Z" level=warning msg="cleaning up after shim disconnected" id=f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79 namespace=k8s.io Jun 21 04:38:43.600713 systemd[1]: cri-containerd-69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1.scope: Deactivated successfully. Jun 21 04:38:43.609210 containerd[1587]: time="2025-06-21T04:38:43.599409957Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 21 04:38:43.622768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1-rootfs.mount: Deactivated successfully. Jun 21 04:38:43.625584 containerd[1587]: time="2025-06-21T04:38:43.625556185Z" level=info msg="shim disconnected" id=69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1 namespace=k8s.io Jun 21 04:38:43.625747 containerd[1587]: time="2025-06-21T04:38:43.625724409Z" level=warning msg="cleaning up after shim disconnected" id=69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1 namespace=k8s.io Jun 21 04:38:43.625790 containerd[1587]: time="2025-06-21T04:38:43.625740389Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 21 04:38:43.638440 containerd[1587]: time="2025-06-21T04:38:43.638373741Z" level=info msg="TearDown network for sandbox \"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\" successfully" Jun 21 04:38:43.638440 containerd[1587]: time="2025-06-21T04:38:43.638427375Z" level=info msg="StopPodSandbox for \"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\" returns successfully" Jun 21 04:38:43.638772 containerd[1587]: time="2025-06-21T04:38:43.638731820Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\" id:\"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\" pid:2866 exit_status:137 exited_at:{seconds:1750480723 nanos:602663577}" Jun 21 04:38:43.640116 containerd[1587]: time="2025-06-21T04:38:43.640083641Z" level=info msg="TearDown network for sandbox \"f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79\" successfully" Jun 21 04:38:43.640116 containerd[1587]: time="2025-06-21T04:38:43.640108288Z" level=info msg="StopPodSandbox for \"f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79\" returns successfully" Jun 21 04:38:43.641082 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1-shm.mount: Deactivated successfully. 
Jun 21 04:38:43.647123 containerd[1587]: time="2025-06-21T04:38:43.647092233Z" level=info msg="received exit event sandbox_id:\"69e1484abba755ae6346a7c27996b5a3a67378907900a55f988968ac2592c7f1\" exit_status:137 exited_at:{seconds:1750480723 nanos:602663577}" Jun 21 04:38:43.647306 containerd[1587]: time="2025-06-21T04:38:43.647247171Z" level=info msg="received exit event sandbox_id:\"f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79\" exit_status:137 exited_at:{seconds:1750480723 nanos:566120269}" Jun 21 04:38:43.799594 kubelet[2715]: I0621 04:38:43.799432 2715 scope.go:117] "RemoveContainer" containerID="43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a" Jun 21 04:38:43.801583 containerd[1587]: time="2025-06-21T04:38:43.801547176Z" level=info msg="RemoveContainer for \"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\"" Jun 21 04:38:43.826441 kubelet[2715]: I0621 04:38:43.826379 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-host-proc-sys-net\") pod \"cdf68a01-59a3-42da-8481-9d5017e34364\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " Jun 21 04:38:43.826441 kubelet[2715]: I0621 04:38:43.826432 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-cni-path\") pod \"cdf68a01-59a3-42da-8481-9d5017e34364\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " Jun 21 04:38:43.826441 kubelet[2715]: I0621 04:38:43.826453 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-lib-modules\") pod \"cdf68a01-59a3-42da-8481-9d5017e34364\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " Jun 21 04:38:43.826441 kubelet[2715]: I0621 04:38:43.826456 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cdf68a01-59a3-42da-8481-9d5017e34364" (UID: "cdf68a01-59a3-42da-8481-9d5017e34364"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 04:38:43.826723 kubelet[2715]: I0621 04:38:43.826475 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cdf68a01-59a3-42da-8481-9d5017e34364-hubble-tls\") pod \"cdf68a01-59a3-42da-8481-9d5017e34364\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " Jun 21 04:38:43.826723 kubelet[2715]: I0621 04:38:43.826492 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-xtables-lock\") pod \"cdf68a01-59a3-42da-8481-9d5017e34364\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " Jun 21 04:38:43.826723 kubelet[2715]: I0621 04:38:43.826493 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-cni-path" (OuterVolumeSpecName: "cni-path") pod "cdf68a01-59a3-42da-8481-9d5017e34364" (UID: "cdf68a01-59a3-42da-8481-9d5017e34364"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 04:38:43.826723 kubelet[2715]: I0621 04:38:43.826503 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cdf68a01-59a3-42da-8481-9d5017e34364" (UID: "cdf68a01-59a3-42da-8481-9d5017e34364"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 04:38:43.826723 kubelet[2715]: I0621 04:38:43.826510 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-host-proc-sys-kernel\") pod \"cdf68a01-59a3-42da-8481-9d5017e34364\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " Jun 21 04:38:43.826844 kubelet[2715]: I0621 04:38:43.826531 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cdf68a01-59a3-42da-8481-9d5017e34364" (UID: "cdf68a01-59a3-42da-8481-9d5017e34364"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 04:38:43.826844 kubelet[2715]: I0621 04:38:43.826554 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cdf68a01-59a3-42da-8481-9d5017e34364" (UID: "cdf68a01-59a3-42da-8481-9d5017e34364"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 04:38:43.826844 kubelet[2715]: I0621 04:38:43.826558 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc820e12-b1cf-4d89-b20e-50f0dc5643a5-cilium-config-path\") pod \"bc820e12-b1cf-4d89-b20e-50f0dc5643a5\" (UID: \"bc820e12-b1cf-4d89-b20e-50f0dc5643a5\") " Jun 21 04:38:43.826844 kubelet[2715]: I0621 04:38:43.826583 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-hostproc\") pod \"cdf68a01-59a3-42da-8481-9d5017e34364\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " Jun 21 04:38:43.826844 kubelet[2715]: I0621 04:38:43.826601 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-bpf-maps\") pod \"cdf68a01-59a3-42da-8481-9d5017e34364\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " Jun 21 04:38:43.826962 kubelet[2715]: I0621 04:38:43.826622 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-cilium-run\") pod \"cdf68a01-59a3-42da-8481-9d5017e34364\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " Jun 21 04:38:43.826962 kubelet[2715]: I0621 04:38:43.826642 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckhvt\" (UniqueName: \"kubernetes.io/projected/bc820e12-b1cf-4d89-b20e-50f0dc5643a5-kube-api-access-ckhvt\") pod \"bc820e12-b1cf-4d89-b20e-50f0dc5643a5\" (UID: \"bc820e12-b1cf-4d89-b20e-50f0dc5643a5\") " Jun 21 04:38:43.826962 kubelet[2715]: I0621 04:38:43.826662 2715 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l4zl\" (UniqueName: \"kubernetes.io/projected/cdf68a01-59a3-42da-8481-9d5017e34364-kube-api-access-6l4zl\") pod \"cdf68a01-59a3-42da-8481-9d5017e34364\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " Jun 21 04:38:43.826962 kubelet[2715]: I0621 04:38:43.826679 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-etc-cni-netd\") pod \"cdf68a01-59a3-42da-8481-9d5017e34364\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " Jun 21 04:38:43.826962 kubelet[2715]: I0621 04:38:43.826699 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cdf68a01-59a3-42da-8481-9d5017e34364-clustermesh-secrets\") pod \"cdf68a01-59a3-42da-8481-9d5017e34364\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " Jun 21 04:38:43.826962 kubelet[2715]: I0621 04:38:43.826719 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cdf68a01-59a3-42da-8481-9d5017e34364-cilium-config-path\") pod \"cdf68a01-59a3-42da-8481-9d5017e34364\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " Jun 21 04:38:43.827101 kubelet[2715]: I0621 04:38:43.826739 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-cilium-cgroup\") pod \"cdf68a01-59a3-42da-8481-9d5017e34364\" (UID: \"cdf68a01-59a3-42da-8481-9d5017e34364\") " Jun 21 04:38:43.827101 kubelet[2715]: I0621 04:38:43.826770 2715 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jun 21 04:38:43.827101 kubelet[2715]: I0621 04:38:43.826784 2715 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jun 21 04:38:43.827101 kubelet[2715]: I0621 04:38:43.826794 2715 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jun 21 04:38:43.827101 kubelet[2715]: I0621 04:38:43.826804 2715 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-lib-modules\") on node \"localhost\" DevicePath \"\"" Jun 21 04:38:43.827101 kubelet[2715]: I0621 04:38:43.826814 2715 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-cni-path\") on node \"localhost\" DevicePath \"\"" Jun 21 04:38:43.827101 kubelet[2715]: I0621 04:38:43.826834 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cdf68a01-59a3-42da-8481-9d5017e34364" (UID: "cdf68a01-59a3-42da-8481-9d5017e34364"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 04:38:43.827264 kubelet[2715]: I0621 04:38:43.826853 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-hostproc" (OuterVolumeSpecName: "hostproc") pod "cdf68a01-59a3-42da-8481-9d5017e34364" (UID: "cdf68a01-59a3-42da-8481-9d5017e34364"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 04:38:43.827264 kubelet[2715]: I0621 04:38:43.826871 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cdf68a01-59a3-42da-8481-9d5017e34364" (UID: "cdf68a01-59a3-42da-8481-9d5017e34364"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 04:38:43.827264 kubelet[2715]: I0621 04:38:43.826888 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cdf68a01-59a3-42da-8481-9d5017e34364" (UID: "cdf68a01-59a3-42da-8481-9d5017e34364"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 04:38:43.827840 kubelet[2715]: I0621 04:38:43.827457 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cdf68a01-59a3-42da-8481-9d5017e34364" (UID: "cdf68a01-59a3-42da-8481-9d5017e34364"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 04:38:43.830144 kubelet[2715]: I0621 04:38:43.830112 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc820e12-b1cf-4d89-b20e-50f0dc5643a5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bc820e12-b1cf-4d89-b20e-50f0dc5643a5" (UID: "bc820e12-b1cf-4d89-b20e-50f0dc5643a5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 21 04:38:43.844153 kubelet[2715]: I0621 04:38:43.844126 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdf68a01-59a3-42da-8481-9d5017e34364-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cdf68a01-59a3-42da-8481-9d5017e34364" (UID: "cdf68a01-59a3-42da-8481-9d5017e34364"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 21 04:38:43.844255 kubelet[2715]: I0621 04:38:43.844217 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdf68a01-59a3-42da-8481-9d5017e34364-kube-api-access-6l4zl" (OuterVolumeSpecName: "kube-api-access-6l4zl") pod "cdf68a01-59a3-42da-8481-9d5017e34364" (UID: "cdf68a01-59a3-42da-8481-9d5017e34364"). InnerVolumeSpecName "kube-api-access-6l4zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 21 04:38:43.844294 kubelet[2715]: I0621 04:38:43.844132 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdf68a01-59a3-42da-8481-9d5017e34364-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cdf68a01-59a3-42da-8481-9d5017e34364" (UID: "cdf68a01-59a3-42da-8481-9d5017e34364"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 21 04:38:43.844565 kubelet[2715]: I0621 04:38:43.844534 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc820e12-b1cf-4d89-b20e-50f0dc5643a5-kube-api-access-ckhvt" (OuterVolumeSpecName: "kube-api-access-ckhvt") pod "bc820e12-b1cf-4d89-b20e-50f0dc5643a5" (UID: "bc820e12-b1cf-4d89-b20e-50f0dc5643a5"). InnerVolumeSpecName "kube-api-access-ckhvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 21 04:38:43.846916 kubelet[2715]: I0621 04:38:43.846887 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdf68a01-59a3-42da-8481-9d5017e34364-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cdf68a01-59a3-42da-8481-9d5017e34364" (UID: "cdf68a01-59a3-42da-8481-9d5017e34364"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 21 04:38:43.856952 containerd[1587]: time="2025-06-21T04:38:43.856918152Z" level=info msg="RemoveContainer for \"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\" returns successfully" Jun 21 04:38:43.863754 kubelet[2715]: I0621 04:38:43.863703 2715 scope.go:117] "RemoveContainer" containerID="43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a" Jun 21 04:38:43.864029 containerd[1587]: time="2025-06-21T04:38:43.863942314Z" level=error msg="ContainerStatus for \"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\": not found" Jun 21 04:38:43.867996 kubelet[2715]: E0621 04:38:43.867964 2715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\": not found" containerID="43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a" Jun 21 04:38:43.869324 kubelet[2715]: I0621 04:38:43.869235 2715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a"} err="failed to get container status \"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\": rpc error: code = NotFound desc = an error occurred when try to find container \"43d2d3b4fd68e075ff4ea32dfb1f9727604c4337d42e3faee37bb70734bc562a\": not found" Jun 21 04:38:43.869324 kubelet[2715]: I0621 04:38:43.869321 2715 scope.go:117] "RemoveContainer" containerID="83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772" Jun 21 04:38:43.870884 containerd[1587]: time="2025-06-21T04:38:43.870851995Z" level=info msg="RemoveContainer for \"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\"" Jun 21 04:38:43.875984 containerd[1587]: time="2025-06-21T04:38:43.875948720Z" level=info msg="RemoveContainer for \"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\" returns successfully" Jun 21 04:38:43.876147 kubelet[2715]: I0621 04:38:43.876123 2715 scope.go:117] "RemoveContainer" containerID="32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c" Jun 21 04:38:43.877731 containerd[1587]: time="2025-06-21T04:38:43.877689368Z" level=info msg="RemoveContainer for \"32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c\"" Jun 21 04:38:43.881716 containerd[1587]: 
time="2025-06-21T04:38:43.881677521Z" level=info msg="RemoveContainer for \"32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c\" returns successfully" Jun 21 04:38:43.881811 kubelet[2715]: I0621 04:38:43.881784 2715 scope.go:117] "RemoveContainer" containerID="4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a" Jun 21 04:38:43.883680 containerd[1587]: time="2025-06-21T04:38:43.883628123Z" level=info msg="RemoveContainer for \"4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a\"" Jun 21 04:38:43.887819 containerd[1587]: time="2025-06-21T04:38:43.887785130Z" level=info msg="RemoveContainer for \"4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a\" returns successfully" Jun 21 04:38:43.887942 kubelet[2715]: I0621 04:38:43.887919 2715 scope.go:117] "RemoveContainer" containerID="f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae" Jun 21 04:38:43.888998 containerd[1587]: time="2025-06-21T04:38:43.888974548Z" level=info msg="RemoveContainer for \"f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae\"" Jun 21 04:38:43.893045 containerd[1587]: time="2025-06-21T04:38:43.893022755Z" level=info msg="RemoveContainer for \"f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae\" returns successfully" Jun 21 04:38:43.893187 kubelet[2715]: I0621 04:38:43.893146 2715 scope.go:117] "RemoveContainer" containerID="df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f" Jun 21 04:38:43.894429 containerd[1587]: time="2025-06-21T04:38:43.894383333Z" level=info msg="RemoveContainer for \"df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f\"" Jun 21 04:38:43.908837 containerd[1587]: time="2025-06-21T04:38:43.908801890Z" level=info msg="RemoveContainer for \"df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f\" returns successfully" Jun 21 04:38:43.908985 kubelet[2715]: I0621 04:38:43.908966 2715 scope.go:117] "RemoveContainer" containerID="83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772" Jun 21 04:38:43.909274 containerd[1587]: time="2025-06-21T04:38:43.909239581Z" level=error msg="ContainerStatus for \"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\": not found" Jun 21 04:38:43.909401 kubelet[2715]: E0621 04:38:43.909378 2715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\": not found" containerID="83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772" Jun 21 04:38:43.909445 kubelet[2715]: I0621 04:38:43.909409 2715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772"} err="failed to get container status \"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\": rpc error: code = NotFound desc = an error occurred when try to find container \"83d9260328e70313bd9d068cbbb7453703b252228f6441f46c166648fcdf4772\": not found" Jun 21 04:38:43.909469 kubelet[2715]: I0621 04:38:43.909446 2715 scope.go:117] "RemoveContainer" containerID="32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c" Jun 21 04:38:43.909614 containerd[1587]: time="2025-06-21T04:38:43.909582621Z" level=error msg="ContainerStatus 
for \"32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c\": not found" Jun 21 04:38:43.911136 kubelet[2715]: E0621 04:38:43.910478 2715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c\": not found" containerID="32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c" Jun 21 04:38:43.911136 kubelet[2715]: I0621 04:38:43.910500 2715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c"} err="failed to get container status \"32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c\": rpc error: code = NotFound desc = an error occurred when try to find container \"32f40fc71440f450a276a877cad0f422aadf78c277f65f74b966421a6dcaa97c\": not found" Jun 21 04:38:43.911136 kubelet[2715]: I0621 04:38:43.910518 2715 scope.go:117] "RemoveContainer" containerID="4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a" Jun 21 04:38:43.911136 kubelet[2715]: E0621 04:38:43.910847 2715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a\": not found" containerID="4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a" Jun 21 04:38:43.911136 kubelet[2715]: I0621 04:38:43.910860 2715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a"} err="failed to get container status \"4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a\": rpc error: code = NotFound desc = an error occurred when try to find container \"4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a\": not found" Jun 21 04:38:43.911136 kubelet[2715]: I0621 04:38:43.910871 2715 scope.go:117] "RemoveContainer" containerID="f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae" Jun 21 04:38:43.911527 containerd[1587]: time="2025-06-21T04:38:43.910779073Z" level=error msg="ContainerStatus for \"4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4130091dc9cb135d5ce607481464862967cb89fe710bff2849ae2efcb8cb087a\": not found" Jun 21 04:38:43.911527 containerd[1587]: time="2025-06-21T04:38:43.910984037Z" level=error msg="ContainerStatus for \"f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae\": not found" Jun 21 04:38:43.911527 containerd[1587]: time="2025-06-21T04:38:43.911217426Z" level=error msg="ContainerStatus for \"df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f\": not found" Jun 21 04:38:43.911602 kubelet[2715]: E0621 04:38:43.911057 2715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae\": not found" containerID="f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae" Jun 21 04:38:43.911602 kubelet[2715]: I0621 04:38:43.911071 2715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae"} err="failed to get container status \"f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8a87b1b84a0cf7d8a005a992a0350d6d82d1c8bc871f89b9f2c945a0bc162ae\": not found" Jun 21 04:38:43.911602 kubelet[2715]: I0621 04:38:43.911081 2715 scope.go:117] "RemoveContainer" containerID="df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f" Jun 21 04:38:43.911602 kubelet[2715]: E0621 04:38:43.911310 2715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f\": not found" containerID="df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f" Jun 21 04:38:43.911602 kubelet[2715]: I0621 04:38:43.911326 2715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f"} err="failed to get container status \"df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"df8148106e5a3b241154f9f82905bba8aae2bb848191869768eef08ab6379b3f\": not found" Jun 21 04:38:43.928559 kubelet[2715]: I0621 04:38:43.928508 2715 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-cilium-run\") on node \"localhost\" DevicePath \"\"" Jun 21 04:38:43.928559 kubelet[2715]: I0621 04:38:43.928554 2715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckhvt\" (UniqueName: \"kubernetes.io/projected/bc820e12-b1cf-4d89-b20e-50f0dc5643a5-kube-api-access-ckhvt\") on node \"localhost\" DevicePath \"\"" Jun 21 04:38:43.928559 kubelet[2715]: I0621 04:38:43.928566 2715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l4zl\" (UniqueName: \"kubernetes.io/projected/cdf68a01-59a3-42da-8481-9d5017e34364-kube-api-access-6l4zl\") on node \"localhost\" DevicePath \"\"" Jun 21 04:38:43.928559 kubelet[2715]: I0621 04:38:43.928574 2715 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jun 21 04:38:43.928559 kubelet[2715]: I0621 04:38:43.928582 2715 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cdf68a01-59a3-42da-8481-9d5017e34364-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jun 21 04:38:43.928808 kubelet[2715]: I0621 04:38:43.928589 2715 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jun 21 04:38:43.928808 kubelet[2715]: I0621 04:38:43.928596 2715 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/cdf68a01-59a3-42da-8481-9d5017e34364-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 21 04:38:43.928808 kubelet[2715]: I0621 04:38:43.928603 2715 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-hostproc\") on node \"localhost\" DevicePath \"\"" Jun 21 04:38:43.928808 kubelet[2715]: I0621 04:38:43.928610 2715 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cdf68a01-59a3-42da-8481-9d5017e34364-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jun 21 04:38:43.928808 kubelet[2715]: I0621 04:38:43.928617 2715 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc820e12-b1cf-4d89-b20e-50f0dc5643a5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 21 04:38:43.928808 kubelet[2715]: I0621 04:38:43.928624 2715 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cdf68a01-59a3-42da-8481-9d5017e34364-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jun 21 04:38:44.107803 systemd[1]: Removed slice kubepods-besteffort-podbc820e12_b1cf_4d89_b20e_50f0dc5643a5.slice - libcontainer container kubepods-besteffort-podbc820e12_b1cf_4d89_b20e_50f0dc5643a5.slice. Jun 21 04:38:44.111512 systemd[1]: Removed slice kubepods-burstable-podcdf68a01_59a3_42da_8481_9d5017e34364.slice - libcontainer container kubepods-burstable-podcdf68a01_59a3_42da_8481_9d5017e34364.slice. Jun 21 04:38:44.111776 systemd[1]: kubepods-burstable-podcdf68a01_59a3_42da_8481_9d5017e34364.slice: Consumed 6.460s CPU time, 127.8M memory peak, 732K read from disk, 13.3M written to disk. Jun 21 04:38:44.535657 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2690703c68ea952ba8c3cc240c9b5e515598e0359b26e1f8d6652e7c1e6ae79-shm.mount: Deactivated successfully. Jun 21 04:38:44.535775 systemd[1]: var-lib-kubelet-pods-bc820e12\x2db1cf\x2d4d89\x2db20e\x2d50f0dc5643a5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dckhvt.mount: Deactivated successfully. Jun 21 04:38:44.535859 systemd[1]: var-lib-kubelet-pods-cdf68a01\x2d59a3\x2d42da\x2d8481\x2d9d5017e34364-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6l4zl.mount: Deactivated successfully. Jun 21 04:38:44.535930 systemd[1]: var-lib-kubelet-pods-cdf68a01\x2d59a3\x2d42da\x2d8481\x2d9d5017e34364-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 21 04:38:44.536003 systemd[1]: var-lib-kubelet-pods-cdf68a01\x2d59a3\x2d42da\x2d8481\x2d9d5017e34364-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 21 04:38:45.462664 sshd[4314]: Connection closed by 10.0.0.1 port 41088 Jun 21 04:38:45.462980 sshd-session[4312]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:45.476153 systemd[1]: sshd@23-10.0.0.30:22-10.0.0.1:41088.service: Deactivated successfully. Jun 21 04:38:45.478105 systemd[1]: session-24.scope: Deactivated successfully. Jun 21 04:38:45.479036 systemd-logind[1570]: Session 24 logged out. Waiting for processes to exit. Jun 21 04:38:45.482278 systemd[1]: Started sshd@24-10.0.0.30:22-10.0.0.1:41094.service - OpenSSH per-connection server daemon (10.0.0.1:41094). Jun 21 04:38:45.483149 systemd-logind[1570]: Removed session 24. 
Jun 21 04:38:45.541756 sshd[4463]: Accepted publickey for core from 10.0.0.1 port 41094 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:45.543166 sshd-session[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:45.547546 systemd-logind[1570]: New session 25 of user core. Jun 21 04:38:45.557571 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 21 04:38:45.616862 kubelet[2715]: I0621 04:38:45.616816 2715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc820e12-b1cf-4d89-b20e-50f0dc5643a5" path="/var/lib/kubelet/pods/bc820e12-b1cf-4d89-b20e-50f0dc5643a5/volumes" Jun 21 04:38:45.617409 kubelet[2715]: I0621 04:38:45.617382 2715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdf68a01-59a3-42da-8481-9d5017e34364" path="/var/lib/kubelet/pods/cdf68a01-59a3-42da-8481-9d5017e34364/volumes" Jun 21 04:38:45.656607 kubelet[2715]: E0621 04:38:45.656061 2715 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 21 04:38:45.987388 sshd[4465]: Connection closed by 10.0.0.1 port 41094 Jun 21 04:38:45.988640 sshd-session[4463]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:45.998601 systemd[1]: sshd@24-10.0.0.30:22-10.0.0.1:41094.service: Deactivated successfully. Jun 21 04:38:46.001736 systemd[1]: session-25.scope: Deactivated successfully. Jun 21 04:38:46.002909 systemd-logind[1570]: Session 25 logged out. Waiting for processes to exit. Jun 21 04:38:46.006629 systemd-logind[1570]: Removed session 25. Jun 21 04:38:46.009482 systemd[1]: Started sshd@25-10.0.0.30:22-10.0.0.1:42170.service - OpenSSH per-connection server daemon (10.0.0.1:42170). 
Jun 21 04:38:46.017222 kubelet[2715]: E0621 04:38:46.017171 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdf68a01-59a3-42da-8481-9d5017e34364" containerName="cilium-agent" Jun 21 04:38:46.017222 kubelet[2715]: E0621 04:38:46.017206 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdf68a01-59a3-42da-8481-9d5017e34364" containerName="mount-bpf-fs" Jun 21 04:38:46.017222 kubelet[2715]: E0621 04:38:46.017215 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc820e12-b1cf-4d89-b20e-50f0dc5643a5" containerName="cilium-operator" Jun 21 04:38:46.017222 kubelet[2715]: E0621 04:38:46.017223 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdf68a01-59a3-42da-8481-9d5017e34364" containerName="clean-cilium-state" Jun 21 04:38:46.017222 kubelet[2715]: E0621 04:38:46.017229 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdf68a01-59a3-42da-8481-9d5017e34364" containerName="mount-cgroup" Jun 21 04:38:46.017222 kubelet[2715]: E0621 04:38:46.017235 2715 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdf68a01-59a3-42da-8481-9d5017e34364" containerName="apply-sysctl-overwrites" Jun 21 04:38:46.017474 kubelet[2715]: I0621 04:38:46.017256 2715 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdf68a01-59a3-42da-8481-9d5017e34364" containerName="cilium-agent" Jun 21 04:38:46.017474 kubelet[2715]: I0621 04:38:46.017262 2715 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc820e12-b1cf-4d89-b20e-50f0dc5643a5" containerName="cilium-operator" Jun 21 04:38:46.029843 systemd[1]: Created slice kubepods-burstable-pod99a11064_305c_476d_8cca_3b5dd1a56c32.slice - libcontainer container kubepods-burstable-pod99a11064_305c_476d_8cca_3b5dd1a56c32.slice. Jun 21 04:38:46.066911 sshd[4477]: Accepted publickey for core from 10.0.0.1 port 42170 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:46.068195 sshd-session[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:46.072277 systemd-logind[1570]: New session 26 of user core. Jun 21 04:38:46.085575 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jun 21 04:38:46.135715 sshd[4479]: Connection closed by 10.0.0.1 port 42170 Jun 21 04:38:46.137603 sshd-session[4477]: pam_unix(sshd:session): session closed for user core Jun 21 04:38:46.138947 kubelet[2715]: I0621 04:38:46.138916 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99a11064-305c-476d-8cca-3b5dd1a56c32-lib-modules\") pod \"cilium-g7qzp\" (UID: \"99a11064-305c-476d-8cca-3b5dd1a56c32\") " pod="kube-system/cilium-g7qzp" Jun 21 04:38:46.139081 kubelet[2715]: I0621 04:38:46.138951 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/99a11064-305c-476d-8cca-3b5dd1a56c32-cilium-ipsec-secrets\") pod \"cilium-g7qzp\" (UID: \"99a11064-305c-476d-8cca-3b5dd1a56c32\") " pod="kube-system/cilium-g7qzp" Jun 21 04:38:46.139081 kubelet[2715]: I0621 04:38:46.138973 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99a11064-305c-476d-8cca-3b5dd1a56c32-cilium-run\") pod \"cilium-g7qzp\" (UID: \"99a11064-305c-476d-8cca-3b5dd1a56c32\") " pod="kube-system/cilium-g7qzp" Jun 21 04:38:46.139081 kubelet[2715]: I0621 04:38:46.138991 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99a11064-305c-476d-8cca-3b5dd1a56c32-host-proc-sys-net\") pod \"cilium-g7qzp\" (UID: \"99a11064-305c-476d-8cca-3b5dd1a56c32\") " pod="kube-system/cilium-g7qzp" Jun 21 04:38:46.139081 kubelet[2715]: I0621 04:38:46.139007 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99a11064-305c-476d-8cca-3b5dd1a56c32-bpf-maps\") pod \"cilium-g7qzp\" (UID: \"99a11064-305c-476d-8cca-3b5dd1a56c32\") " pod="kube-system/cilium-g7qzp" Jun 21 04:38:46.139081 kubelet[2715]: I0621 04:38:46.139024 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99a11064-305c-476d-8cca-3b5dd1a56c32-cilium-cgroup\") pod \"cilium-g7qzp\" (UID: \"99a11064-305c-476d-8cca-3b5dd1a56c32\") " pod="kube-system/cilium-g7qzp" Jun 21 04:38:46.139081 kubelet[2715]: I0621 04:38:46.139039 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99a11064-305c-476d-8cca-3b5dd1a56c32-clustermesh-secrets\") pod \"cilium-g7qzp\" (UID: \"99a11064-305c-476d-8cca-3b5dd1a56c32\") " pod="kube-system/cilium-g7qzp" Jun 21 04:38:46.139470 kubelet[2715]: I0621 04:38:46.139056 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99a11064-305c-476d-8cca-3b5dd1a56c32-cilium-config-path\") pod \"cilium-g7qzp\" (UID: \"99a11064-305c-476d-8cca-3b5dd1a56c32\") " pod="kube-system/cilium-g7qzp" Jun 21 04:38:46.139470 kubelet[2715]: I0621 04:38:46.139072 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/99a11064-305c-476d-8cca-3b5dd1a56c32-host-proc-sys-kernel\") pod \"cilium-g7qzp\" (UID: \"99a11064-305c-476d-8cca-3b5dd1a56c32\") " 
pod="kube-system/cilium-g7qzp" Jun 21 04:38:46.139470 kubelet[2715]: I0621 04:38:46.139088 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99a11064-305c-476d-8cca-3b5dd1a56c32-xtables-lock\") pod \"cilium-g7qzp\" (UID: \"99a11064-305c-476d-8cca-3b5dd1a56c32\") " pod="kube-system/cilium-g7qzp" Jun 21 04:38:46.139470 kubelet[2715]: I0621 04:38:46.139103 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99a11064-305c-476d-8cca-3b5dd1a56c32-cni-path\") pod \"cilium-g7qzp\" (UID: \"99a11064-305c-476d-8cca-3b5dd1a56c32\") " pod="kube-system/cilium-g7qzp" Jun 21 04:38:46.139470 kubelet[2715]: I0621 04:38:46.139161 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99a11064-305c-476d-8cca-3b5dd1a56c32-etc-cni-netd\") pod \"cilium-g7qzp\" (UID: \"99a11064-305c-476d-8cca-3b5dd1a56c32\") " pod="kube-system/cilium-g7qzp" Jun 21 04:38:46.139625 kubelet[2715]: I0621 04:38:46.139212 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99a11064-305c-476d-8cca-3b5dd1a56c32-hubble-tls\") pod \"cilium-g7qzp\" (UID: \"99a11064-305c-476d-8cca-3b5dd1a56c32\") " pod="kube-system/cilium-g7qzp" Jun 21 04:38:46.139726 kubelet[2715]: I0621 04:38:46.139644 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxt8s\" (UniqueName: \"kubernetes.io/projected/99a11064-305c-476d-8cca-3b5dd1a56c32-kube-api-access-kxt8s\") pod \"cilium-g7qzp\" (UID: \"99a11064-305c-476d-8cca-3b5dd1a56c32\") " pod="kube-system/cilium-g7qzp" Jun 21 04:38:46.139726 kubelet[2715]: I0621 04:38:46.139667 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99a11064-305c-476d-8cca-3b5dd1a56c32-hostproc\") pod \"cilium-g7qzp\" (UID: \"99a11064-305c-476d-8cca-3b5dd1a56c32\") " pod="kube-system/cilium-g7qzp" Jun 21 04:38:46.149133 systemd[1]: sshd@25-10.0.0.30:22-10.0.0.1:42170.service: Deactivated successfully. Jun 21 04:38:46.151239 systemd[1]: session-26.scope: Deactivated successfully. Jun 21 04:38:46.152033 systemd-logind[1570]: Session 26 logged out. Waiting for processes to exit. Jun 21 04:38:46.155780 systemd[1]: Started sshd@26-10.0.0.30:22-10.0.0.1:42176.service - OpenSSH per-connection server daemon (10.0.0.1:42176). Jun 21 04:38:46.157290 systemd-logind[1570]: Removed session 26. Jun 21 04:38:46.204654 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 42176 ssh2: RSA SHA256:015yC5fRvb07MyWOgrdDHnl6DLRQb6q1XcuQXpFRy7c Jun 21 04:38:46.205885 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:38:46.209884 systemd-logind[1570]: New session 27 of user core. Jun 21 04:38:46.219527 systemd[1]: Started session-27.scope - Session 27 of User core. 
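
Each VerifyControllerAttachedVolume entry above names one volume of the cilium-g7qzp pod together with its plugin (host-path, secret, configmap, or projected). A hedged sketch that pulls those two fields out of entries in exactly this quoted shape, to build a quick volume inventory; the regex and helper are illustrative only:

    import re

    # Matches the quoted fragment: volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/<uid>-bpf-maps\")
    VOL_RE = re.compile(
        r'volume \\"(?P<name>[^"\\]+)\\" \(UniqueName: '
        r'\\"kubernetes\.io/(?P<plugin>[^/]+)/(?P<unique>[^"\\]+)\\"\)'
    )

    def volume_inventory(log_text: str) -> dict[str, str]:
        """Map volume name -> plugin type from reconciler_common entries."""
        return {m.group("name"): m.group("plugin") for m in VOL_RE.finditer(log_text)}

    sample = (r'... started for volume \"bpf-maps\" (UniqueName: '
              r'\"kubernetes.io/host-path/99a11064-305c-476d-8cca-3b5dd1a56c32-bpf-maps\") ...')
    print(volume_inventory(sample))   # {'bpf-maps': 'host-path'}
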
Jun 21 04:38:46.332940 kubelet[2715]: E0621 04:38:46.332809 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:38:46.334352 containerd[1587]: time="2025-06-21T04:38:46.334276009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g7qzp,Uid:99a11064-305c-476d-8cca-3b5dd1a56c32,Namespace:kube-system,Attempt:0,}" Jun 21 04:38:46.352532 containerd[1587]: time="2025-06-21T04:38:46.352495841Z" level=info msg="connecting to shim 4ee069bb82dfc4a0d56984d0402dc5272ee7ba92a16c9e8d47662c29bdb780fd" address="unix:///run/containerd/s/aa829b58476c7d3897135e66950b375c73aecf4b1699a968b258e766354d243a" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:38:46.379539 systemd[1]: Started cri-containerd-4ee069bb82dfc4a0d56984d0402dc5272ee7ba92a16c9e8d47662c29bdb780fd.scope - libcontainer container 4ee069bb82dfc4a0d56984d0402dc5272ee7ba92a16c9e8d47662c29bdb780fd. Jun 21 04:38:46.403839 containerd[1587]: time="2025-06-21T04:38:46.403807548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g7qzp,Uid:99a11064-305c-476d-8cca-3b5dd1a56c32,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ee069bb82dfc4a0d56984d0402dc5272ee7ba92a16c9e8d47662c29bdb780fd\"" Jun 21 04:38:46.404567 kubelet[2715]: E0621 04:38:46.404549 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:38:46.406182 containerd[1587]: time="2025-06-21T04:38:46.406141509Z" level=info msg="CreateContainer within sandbox \"4ee069bb82dfc4a0d56984d0402dc5272ee7ba92a16c9e8d47662c29bdb780fd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 21 04:38:46.413139 containerd[1587]: time="2025-06-21T04:38:46.413098426Z" level=info msg="Container 5a1bd948c590e2ec0404a51d59acb9fe02d4c6ffbc23fde2c6822290d7538052: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:38:46.420768 containerd[1587]: time="2025-06-21T04:38:46.420720110Z" level=info msg="CreateContainer within sandbox \"4ee069bb82dfc4a0d56984d0402dc5272ee7ba92a16c9e8d47662c29bdb780fd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5a1bd948c590e2ec0404a51d59acb9fe02d4c6ffbc23fde2c6822290d7538052\"" Jun 21 04:38:46.421196 containerd[1587]: time="2025-06-21T04:38:46.421174673Z" level=info msg="StartContainer for \"5a1bd948c590e2ec0404a51d59acb9fe02d4c6ffbc23fde2c6822290d7538052\"" Jun 21 04:38:46.421941 containerd[1587]: time="2025-06-21T04:38:46.421915735Z" level=info msg="connecting to shim 5a1bd948c590e2ec0404a51d59acb9fe02d4c6ffbc23fde2c6822290d7538052" address="unix:///run/containerd/s/aa829b58476c7d3897135e66950b375c73aecf4b1699a968b258e766354d243a" protocol=ttrpc version=3 Jun 21 04:38:46.445571 systemd[1]: Started cri-containerd-5a1bd948c590e2ec0404a51d59acb9fe02d4c6ffbc23fde2c6822290d7538052.scope - libcontainer container 5a1bd948c590e2ec0404a51d59acb9fe02d4c6ffbc23fde2c6822290d7538052. Jun 21 04:38:46.474533 containerd[1587]: time="2025-06-21T04:38:46.474485047Z" level=info msg="StartContainer for \"5a1bd948c590e2ec0404a51d59acb9fe02d4c6ffbc23fde2c6822290d7538052\" returns successfully" Jun 21 04:38:46.483855 systemd[1]: cri-containerd-5a1bd948c590e2ec0404a51d59acb9fe02d4c6ffbc23fde2c6822290d7538052.scope: Deactivated successfully. 
Jun 21 04:38:46.484740 containerd[1587]: time="2025-06-21T04:38:46.484666765Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a1bd948c590e2ec0404a51d59acb9fe02d4c6ffbc23fde2c6822290d7538052\" id:\"5a1bd948c590e2ec0404a51d59acb9fe02d4c6ffbc23fde2c6822290d7538052\" pid:4558 exited_at:{seconds:1750480726 nanos:484367020}" Jun 21 04:38:46.484740 containerd[1587]: time="2025-06-21T04:38:46.484681444Z" level=info msg="received exit event container_id:\"5a1bd948c590e2ec0404a51d59acb9fe02d4c6ffbc23fde2c6822290d7538052\" id:\"5a1bd948c590e2ec0404a51d59acb9fe02d4c6ffbc23fde2c6822290d7538052\" pid:4558 exited_at:{seconds:1750480726 nanos:484367020}" Jun 21 04:38:46.811438 kubelet[2715]: E0621 04:38:46.811384 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:38:46.813069 containerd[1587]: time="2025-06-21T04:38:46.813026868Z" level=info msg="CreateContainer within sandbox \"4ee069bb82dfc4a0d56984d0402dc5272ee7ba92a16c9e8d47662c29bdb780fd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 21 04:38:46.820649 containerd[1587]: time="2025-06-21T04:38:46.820611861Z" level=info msg="Container 9a1bd7f7b19cd6388b2c2dc5983bf4742cb55139c210f6665b0dcab0f4ade9db: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:38:46.827755 containerd[1587]: time="2025-06-21T04:38:46.827710320Z" level=info msg="CreateContainer within sandbox \"4ee069bb82dfc4a0d56984d0402dc5272ee7ba92a16c9e8d47662c29bdb780fd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9a1bd7f7b19cd6388b2c2dc5983bf4742cb55139c210f6665b0dcab0f4ade9db\"" Jun 21 04:38:46.828236 containerd[1587]: time="2025-06-21T04:38:46.828181785Z" level=info msg="StartContainer for \"9a1bd7f7b19cd6388b2c2dc5983bf4742cb55139c210f6665b0dcab0f4ade9db\"" Jun 21 04:38:46.829148 containerd[1587]: time="2025-06-21T04:38:46.829109106Z" level=info msg="connecting to shim 9a1bd7f7b19cd6388b2c2dc5983bf4742cb55139c210f6665b0dcab0f4ade9db" address="unix:///run/containerd/s/aa829b58476c7d3897135e66950b375c73aecf4b1699a968b258e766354d243a" protocol=ttrpc version=3 Jun 21 04:38:46.850542 systemd[1]: Started cri-containerd-9a1bd7f7b19cd6388b2c2dc5983bf4742cb55139c210f6665b0dcab0f4ade9db.scope - libcontainer container 9a1bd7f7b19cd6388b2c2dc5983bf4742cb55139c210f6665b0dcab0f4ade9db. Jun 21 04:38:46.876924 containerd[1587]: time="2025-06-21T04:38:46.876820951Z" level=info msg="StartContainer for \"9a1bd7f7b19cd6388b2c2dc5983bf4742cb55139c210f6665b0dcab0f4ade9db\" returns successfully" Jun 21 04:38:46.881403 systemd[1]: cri-containerd-9a1bd7f7b19cd6388b2c2dc5983bf4742cb55139c210f6665b0dcab0f4ade9db.scope: Deactivated successfully. 
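
The exited_at fields in these TaskExit events are protobuf-style timestamps: Unix epoch seconds plus a nanosecond remainder. A quick check, using the values from the mount-cgroup container's exit event above, that they agree with the wall-clock timestamps in the surrounding entries:

    from datetime import datetime, timezone

    def exited_at_to_utc(seconds: int, nanos: int) -> datetime:
        """Convert a containerd exited_at {seconds, nanos} pair to a UTC datetime."""
        return datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)

    # exited_at:{seconds:1750480726 nanos:484367020} from the event above.
    print(exited_at_to_utc(1750480726, 484367020).isoformat())
    # -> 2025-06-21T04:38:46.484367+00:00, within a millisecond of the
    #    Jun 21 04:38:46.484* entries that report the exit
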
Jun 21 04:38:46.881897 containerd[1587]: time="2025-06-21T04:38:46.881826541Z" level=info msg="received exit event container_id:\"9a1bd7f7b19cd6388b2c2dc5983bf4742cb55139c210f6665b0dcab0f4ade9db\" id:\"9a1bd7f7b19cd6388b2c2dc5983bf4742cb55139c210f6665b0dcab0f4ade9db\" pid:4605 exited_at:{seconds:1750480726 nanos:881612000}" Jun 21 04:38:46.882007 containerd[1587]: time="2025-06-21T04:38:46.881856009Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a1bd7f7b19cd6388b2c2dc5983bf4742cb55139c210f6665b0dcab0f4ade9db\" id:\"9a1bd7f7b19cd6388b2c2dc5983bf4742cb55139c210f6665b0dcab0f4ade9db\" pid:4605 exited_at:{seconds:1750480726 nanos:881612000}" Jun 21 04:38:47.137576 kubelet[2715]: I0621 04:38:47.137457 2715 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-21T04:38:47Z","lastTransitionTime":"2025-06-21T04:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 21 04:38:47.245023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1654929916.mount: Deactivated successfully. Jun 21 04:38:47.816077 kubelet[2715]: E0621 04:38:47.816038 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:38:47.818324 containerd[1587]: time="2025-06-21T04:38:47.818283707Z" level=info msg="CreateContainer within sandbox \"4ee069bb82dfc4a0d56984d0402dc5272ee7ba92a16c9e8d47662c29bdb780fd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 21 04:38:47.835490 containerd[1587]: time="2025-06-21T04:38:47.835444035Z" level=info msg="Container 0f016591cccb73cae4101f61fff4710a4a8dad56682eed1229a927ab260f24c9: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:38:47.844326 containerd[1587]: time="2025-06-21T04:38:47.844267656Z" level=info msg="CreateContainer within sandbox \"4ee069bb82dfc4a0d56984d0402dc5272ee7ba92a16c9e8d47662c29bdb780fd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0f016591cccb73cae4101f61fff4710a4a8dad56682eed1229a927ab260f24c9\"" Jun 21 04:38:47.844799 containerd[1587]: time="2025-06-21T04:38:47.844749791Z" level=info msg="StartContainer for \"0f016591cccb73cae4101f61fff4710a4a8dad56682eed1229a927ab260f24c9\"" Jun 21 04:38:47.846004 containerd[1587]: time="2025-06-21T04:38:47.845978458Z" level=info msg="connecting to shim 0f016591cccb73cae4101f61fff4710a4a8dad56682eed1229a927ab260f24c9" address="unix:///run/containerd/s/aa829b58476c7d3897135e66950b375c73aecf4b1699a968b258e766354d243a" protocol=ttrpc version=3 Jun 21 04:38:47.867568 systemd[1]: Started cri-containerd-0f016591cccb73cae4101f61fff4710a4a8dad56682eed1229a927ab260f24c9.scope - libcontainer container 0f016591cccb73cae4101f61fff4710a4a8dad56682eed1229a927ab260f24c9. Jun 21 04:38:47.909487 containerd[1587]: time="2025-06-21T04:38:47.909450169Z" level=info msg="StartContainer for \"0f016591cccb73cae4101f61fff4710a4a8dad56682eed1229a927ab260f24c9\" returns successfully" Jun 21 04:38:47.911525 systemd[1]: cri-containerd-0f016591cccb73cae4101f61fff4710a4a8dad56682eed1229a927ab260f24c9.scope: Deactivated successfully. 
Jun 21 04:38:47.912718 containerd[1587]: time="2025-06-21T04:38:47.912673895Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f016591cccb73cae4101f61fff4710a4a8dad56682eed1229a927ab260f24c9\" id:\"0f016591cccb73cae4101f61fff4710a4a8dad56682eed1229a927ab260f24c9\" pid:4649 exited_at:{seconds:1750480727 nanos:912412894}" Jun 21 04:38:47.912858 containerd[1587]: time="2025-06-21T04:38:47.912729361Z" level=info msg="received exit event container_id:\"0f016591cccb73cae4101f61fff4710a4a8dad56682eed1229a927ab260f24c9\" id:\"0f016591cccb73cae4101f61fff4710a4a8dad56682eed1229a927ab260f24c9\" pid:4649 exited_at:{seconds:1750480727 nanos:912412894}" Jun 21 04:38:47.933956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f016591cccb73cae4101f61fff4710a4a8dad56682eed1229a927ab260f24c9-rootfs.mount: Deactivated successfully. Jun 21 04:38:48.820325 kubelet[2715]: E0621 04:38:48.820294 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:38:48.822342 containerd[1587]: time="2025-06-21T04:38:48.821904366Z" level=info msg="CreateContainer within sandbox \"4ee069bb82dfc4a0d56984d0402dc5272ee7ba92a16c9e8d47662c29bdb780fd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 21 04:38:48.860186 containerd[1587]: time="2025-06-21T04:38:48.860129971Z" level=info msg="Container 760e56291a661d87b30dc44ed03eb9e99f028c781db12d234655b8711ed68273: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:38:48.868551 containerd[1587]: time="2025-06-21T04:38:48.868506244Z" level=info msg="CreateContainer within sandbox \"4ee069bb82dfc4a0d56984d0402dc5272ee7ba92a16c9e8d47662c29bdb780fd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"760e56291a661d87b30dc44ed03eb9e99f028c781db12d234655b8711ed68273\"" Jun 21 04:38:48.869258 containerd[1587]: time="2025-06-21T04:38:48.868978590Z" level=info msg="StartContainer for \"760e56291a661d87b30dc44ed03eb9e99f028c781db12d234655b8711ed68273\"" Jun 21 04:38:48.869878 containerd[1587]: time="2025-06-21T04:38:48.869847216Z" level=info msg="connecting to shim 760e56291a661d87b30dc44ed03eb9e99f028c781db12d234655b8711ed68273" address="unix:///run/containerd/s/aa829b58476c7d3897135e66950b375c73aecf4b1699a968b258e766354d243a" protocol=ttrpc version=3 Jun 21 04:38:48.891552 systemd[1]: Started cri-containerd-760e56291a661d87b30dc44ed03eb9e99f028c781db12d234655b8711ed68273.scope - libcontainer container 760e56291a661d87b30dc44ed03eb9e99f028c781db12d234655b8711ed68273. Jun 21 04:38:48.916640 systemd[1]: cri-containerd-760e56291a661d87b30dc44ed03eb9e99f028c781db12d234655b8711ed68273.scope: Deactivated successfully. 
Jun 21 04:38:48.917286 containerd[1587]: time="2025-06-21T04:38:48.917146763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"760e56291a661d87b30dc44ed03eb9e99f028c781db12d234655b8711ed68273\" id:\"760e56291a661d87b30dc44ed03eb9e99f028c781db12d234655b8711ed68273\" pid:4688 exited_at:{seconds:1750480728 nanos:916813914}" Jun 21 04:38:48.918555 containerd[1587]: time="2025-06-21T04:38:48.918519104Z" level=info msg="received exit event container_id:\"760e56291a661d87b30dc44ed03eb9e99f028c781db12d234655b8711ed68273\" id:\"760e56291a661d87b30dc44ed03eb9e99f028c781db12d234655b8711ed68273\" pid:4688 exited_at:{seconds:1750480728 nanos:916813914}" Jun 21 04:38:48.926278 containerd[1587]: time="2025-06-21T04:38:48.926243317Z" level=info msg="StartContainer for \"760e56291a661d87b30dc44ed03eb9e99f028c781db12d234655b8711ed68273\" returns successfully" Jun 21 04:38:48.938681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-760e56291a661d87b30dc44ed03eb9e99f028c781db12d234655b8711ed68273-rootfs.mount: Deactivated successfully. Jun 21 04:38:49.828617 kubelet[2715]: E0621 04:38:49.828579 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:38:49.831441 containerd[1587]: time="2025-06-21T04:38:49.831356246Z" level=info msg="CreateContainer within sandbox \"4ee069bb82dfc4a0d56984d0402dc5272ee7ba92a16c9e8d47662c29bdb780fd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 21 04:38:49.843132 containerd[1587]: time="2025-06-21T04:38:49.843079204Z" level=info msg="Container d97b3c95bbf16c0f5df040a040f7935fe0872a55e8b56e644a4b76da19ce5a3b: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:38:49.852696 containerd[1587]: time="2025-06-21T04:38:49.852651721Z" level=info msg="CreateContainer within sandbox \"4ee069bb82dfc4a0d56984d0402dc5272ee7ba92a16c9e8d47662c29bdb780fd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d97b3c95bbf16c0f5df040a040f7935fe0872a55e8b56e644a4b76da19ce5a3b\"" Jun 21 04:38:49.853221 containerd[1587]: time="2025-06-21T04:38:49.853190324Z" level=info msg="StartContainer for \"d97b3c95bbf16c0f5df040a040f7935fe0872a55e8b56e644a4b76da19ce5a3b\"" Jun 21 04:38:49.854067 containerd[1587]: time="2025-06-21T04:38:49.854046956Z" level=info msg="connecting to shim d97b3c95bbf16c0f5df040a040f7935fe0872a55e8b56e644a4b76da19ce5a3b" address="unix:///run/containerd/s/aa829b58476c7d3897135e66950b375c73aecf4b1699a968b258e766354d243a" protocol=ttrpc version=3 Jun 21 04:38:49.878547 systemd[1]: Started cri-containerd-d97b3c95bbf16c0f5df040a040f7935fe0872a55e8b56e644a4b76da19ce5a3b.scope - libcontainer container d97b3c95bbf16c0f5df040a040f7935fe0872a55e8b56e644a4b76da19ce5a3b. 
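
Taken together, the CreateContainer entries above trace Cilium's startup sequence inside the single sandbox created at 04:38:46: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and then the long-running cilium-agent. A sketch (regex and helper are illustrative) that recovers that order from containerd entries of this shape:

    import re

    # Matches: CreateContainer within sandbox \"<id>\" for container &ContainerMetadata{Name:mount-cgroup,...
    CREATE_RE = re.compile(
        r'CreateContainer within sandbox \\"(?P<sandbox>[0-9a-f]+)\\" '
        r'for container &ContainerMetadata\{Name:(?P<name>[^,]+),'
    )

    def container_sequence(log_text: str) -> dict[str, list[str]]:
        """Container names per sandbox, in the order the create requests appear."""
        seq: dict[str, list[str]] = {}
        for m in CREATE_RE.finditer(log_text):
            seq.setdefault(m.group("sandbox"), []).append(m.group("name"))
        return seq

    sample = (r'msg="CreateContainer within sandbox \"4ee069bb\" '
              r'for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"')
    print(container_sequence(sample))   # {'4ee069bb': ['mount-cgroup']}
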
Jun 21 04:38:49.911970 containerd[1587]: time="2025-06-21T04:38:49.911868136Z" level=info msg="StartContainer for \"d97b3c95bbf16c0f5df040a040f7935fe0872a55e8b56e644a4b76da19ce5a3b\" returns successfully" Jun 21 04:38:49.981721 containerd[1587]: time="2025-06-21T04:38:49.981677423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d97b3c95bbf16c0f5df040a040f7935fe0872a55e8b56e644a4b76da19ce5a3b\" id:\"3e74053ef6ed2ec713da37aacf4b2cd8a40990a8b07de1cb6c85b83e5bd6cac7\" pid:4756 exited_at:{seconds:1750480729 nanos:981348241}" Jun 21 04:38:50.332452 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jun 21 04:38:50.835700 kubelet[2715]: E0621 04:38:50.835620 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:38:51.613929 kubelet[2715]: E0621 04:38:51.613894 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:38:52.334287 kubelet[2715]: E0621 04:38:52.334192 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:38:52.509281 containerd[1587]: time="2025-06-21T04:38:52.509215152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d97b3c95bbf16c0f5df040a040f7935fe0872a55e8b56e644a4b76da19ce5a3b\" id:\"618b228dde810358d4dbc2069131d4e801647e95bf5b1edb1c5f75d18a343f99\" pid:5045 exit_status:1 exited_at:{seconds:1750480732 nanos:508752748}" Jun 21 04:38:53.302617 systemd-networkd[1492]: lxc_health: Link UP Jun 21 04:38:53.312826 systemd-networkd[1492]: lxc_health: Gained carrier Jun 21 04:38:54.335094 kubelet[2715]: E0621 04:38:54.335042 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:38:54.348166 kubelet[2715]: I0621 04:38:54.348084 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g7qzp" podStartSLOduration=8.34806658 podStartE2EDuration="8.34806658s" podCreationTimestamp="2025-06-21 04:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:38:50.852281708 +0000 UTC m=+85.314430392" watchObservedRunningTime="2025-06-21 04:38:54.34806658 +0000 UTC m=+88.810215264" Jun 21 04:38:54.608079 containerd[1587]: time="2025-06-21T04:38:54.607892185Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d97b3c95bbf16c0f5df040a040f7935fe0872a55e8b56e644a4b76da19ce5a3b\" id:\"e5cd9433fbf28b566bc8e74c53485586e3648c8f8e79f72a685b0297e762f629\" pid:5296 exited_at:{seconds:1750480734 nanos:607329380}" Jun 21 04:38:54.614540 kubelet[2715]: E0621 04:38:54.614487 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:38:54.842753 kubelet[2715]: E0621 04:38:54.842717 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:38:55.373643 systemd-networkd[1492]: lxc_health: Gained IPv6LL Jun 21 
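
The podStartSLOduration reported above is simply the gap between podCreationTimestamp and watchObservedRunningTime. Reproducing the arithmetic with the values from that pod_startup_latency_tracker entry (a sketch; the tracker itself is kubelet-internal):

    from datetime import datetime, timezone

    created = datetime(2025, 6, 21, 4, 38, 46, tzinfo=timezone.utc)            # podCreationTimestamp
    observed = datetime(2025, 6, 21, 4, 38, 54, 348066, tzinfo=timezone.utc)   # watchObservedRunningTime, to the microsecond

    print(f"{(observed - created).total_seconds():.6f}s")
    # -> 8.348066s, matching podStartSLOduration=8.34806658 up to the truncated nanoseconds
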
04:38:56.702835 containerd[1587]: time="2025-06-21T04:38:56.702769536Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d97b3c95bbf16c0f5df040a040f7935fe0872a55e8b56e644a4b76da19ce5a3b\" id:\"730a3c020c3812f502f198f8a9bdffbe5a7573cea5c030670613f13a68ff6d24\" pid:5328 exited_at:{seconds:1750480736 nanos:702443643}" Jun 21 04:38:58.781100 containerd[1587]: time="2025-06-21T04:38:58.781041364Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d97b3c95bbf16c0f5df040a040f7935fe0872a55e8b56e644a4b76da19ce5a3b\" id:\"2473c7d3fb695bb7da938785719a418d35f798de25b08a79fed0774ae5299433\" pid:5353 exited_at:{seconds:1750480738 nanos:780738416}" Jun 21 04:39:00.613701 kubelet[2715]: E0621 04:39:00.613655 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 04:39:00.873918 containerd[1587]: time="2025-06-21T04:39:00.873791441Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d97b3c95bbf16c0f5df040a040f7935fe0872a55e8b56e644a4b76da19ce5a3b\" id:\"7ec6c6b9992cc18f0ecf65e41d43c7640c231a17f503335123417cbc2ecf5f07\" pid:5376 exited_at:{seconds:1750480740 nanos:873381830}" Jun 21 04:39:00.885243 sshd[4488]: Connection closed by 10.0.0.1 port 42176 Jun 21 04:39:00.885672 sshd-session[4486]: pam_unix(sshd:session): session closed for user core Jun 21 04:39:00.888660 systemd[1]: sshd@26-10.0.0.30:22-10.0.0.1:42176.service: Deactivated successfully. Jun 21 04:39:00.890648 systemd[1]: session-27.scope: Deactivated successfully. Jun 21 04:39:00.891498 systemd-logind[1570]: Session 27 logged out. Waiting for processes to exit. Jun 21 04:39:00.893262 systemd-logind[1570]: Removed session 27. Jun 21 04:39:01.614499 kubelet[2715]: E0621 04:39:01.614449 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
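
The section is bracketed by three short SSH sessions (25, 26, 27) that systemd-logind opens and later removes. A last sketch pairing those "New session" / "Removed session" entries to compute each session's lifetime, assuming only the entry format shown in this transcript; names and regexes are illustrative:

    import re
    from datetime import datetime

    TS = r"(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d{6})"
    NEW_RE = re.compile(TS + r" systemd-logind\[\d+\]: New session (?P<id>\d+) of user")
    REMOVED_RE = re.compile(TS + r" systemd-logind\[\d+\]: Removed session (?P<id>\d+)\.")

    def parse_ts(ts: str, year: int = 2025) -> datetime:
        # The short journald format omits the year, so it has to be supplied.
        return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")

    def session_lifetimes(log_text: str) -> dict[str, float]:
        """Seconds between 'New session N' and 'Removed session N' per session id."""
        opened = {m.group("id"): parse_ts(m.group("ts")) for m in NEW_RE.finditer(log_text)}
        return {m.group("id"): (parse_ts(m.group("ts")) - opened[m.group("id")]).total_seconds()
                for m in REMOVED_RE.finditer(log_text) if m.group("id") in opened}

    sample = ("Jun 21 04:38:45.547546 systemd-logind[1570]: New session 25 of user core. "
              "Jun 21 04:38:46.006629 systemd-logind[1570]: Removed session 25.")
    print(session_lifetimes(sample))   # {'25': 0.459083}
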