Sep 9 21:52:13.088745 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 19:55:16 -00 2025 Sep 9 21:52:13.088802 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f0ebd120fc09fb344715b1492c3f1d02e1457be2c9792ea5ffb3fe4b15efa812 Sep 9 21:52:13.088823 kernel: BIOS-provided physical RAM map: Sep 9 21:52:13.088832 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 9 21:52:13.088840 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 9 21:52:13.088850 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 9 21:52:13.088860 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 9 21:52:13.088870 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 9 21:52:13.088884 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 9 21:52:13.088897 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 9 21:52:13.088907 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Sep 9 21:52:13.088916 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 9 21:52:13.088925 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 9 21:52:13.089060 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 9 21:52:13.089072 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 9 21:52:13.089090 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 9 21:52:13.089103 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Sep 9 21:52:13.089113 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Sep 9 21:52:13.089123 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Sep 9 21:52:13.089133 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Sep 9 21:52:13.089142 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 9 21:52:13.089151 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 9 21:52:13.089160 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 9 21:52:13.089170 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 9 21:52:13.089180 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 9 21:52:13.089193 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 9 21:52:13.089203 kernel: NX (Execute Disable) protection: active Sep 9 21:52:13.089213 kernel: APIC: Static calls initialized Sep 9 21:52:13.089222 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Sep 9 21:52:13.089232 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Sep 9 21:52:13.089242 kernel: extended physical RAM map: Sep 9 21:52:13.089252 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 9 21:52:13.089261 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 9 21:52:13.089272 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 9 21:52:13.089282 kernel: reserve setup_data: [mem 
0x0000000000808000-0x000000000080afff] usable Sep 9 21:52:13.089292 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 9 21:52:13.089307 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 9 21:52:13.089318 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 9 21:52:13.089328 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Sep 9 21:52:13.089340 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Sep 9 21:52:13.089356 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Sep 9 21:52:13.089366 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Sep 9 21:52:13.089380 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Sep 9 21:52:13.089391 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 9 21:52:13.089403 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 9 21:52:13.089414 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 9 21:52:13.089425 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 9 21:52:13.089436 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 9 21:52:13.089447 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Sep 9 21:52:13.089457 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Sep 9 21:52:13.089469 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Sep 9 21:52:13.089480 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Sep 9 21:52:13.089495 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 9 21:52:13.089505 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 9 21:52:13.089544 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 9 21:52:13.089556 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 9 21:52:13.089568 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 9 21:52:13.089578 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 9 21:52:13.089595 kernel: efi: EFI v2.7 by EDK II Sep 9 21:52:13.089607 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Sep 9 21:52:13.089618 kernel: random: crng init done Sep 9 21:52:13.089632 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Sep 9 21:52:13.089643 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Sep 9 21:52:13.089663 kernel: secureboot: Secure boot disabled Sep 9 21:52:13.089675 kernel: SMBIOS 2.8 present. 
Sep 9 21:52:13.089686 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 9 21:52:13.089696 kernel: DMI: Memory slots populated: 1/1 Sep 9 21:52:13.089706 kernel: Hypervisor detected: KVM Sep 9 21:52:13.089716 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 9 21:52:13.089726 kernel: kvm-clock: using sched offset of 10008398614 cycles Sep 9 21:52:13.089738 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 9 21:52:13.089749 kernel: tsc: Detected 2794.748 MHz processor Sep 9 21:52:13.089759 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 9 21:52:13.089770 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 9 21:52:13.089785 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Sep 9 21:52:13.089796 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 9 21:52:13.089806 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 9 21:52:13.089817 kernel: Using GB pages for direct mapping Sep 9 21:52:13.089827 kernel: ACPI: Early table checksum verification disabled Sep 9 21:52:13.089837 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 9 21:52:13.089848 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 9 21:52:13.089859 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 21:52:13.089870 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 21:52:13.089885 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 9 21:52:13.089896 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 21:52:13.089908 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 21:52:13.089920 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 21:52:13.089931 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 21:52:13.089951 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 9 21:52:13.089961 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 9 21:52:13.089971 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 9 21:52:13.089985 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 9 21:52:13.089995 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 9 21:52:13.090005 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 9 21:52:13.090016 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 9 21:52:13.090027 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 9 21:52:13.090037 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 9 21:52:13.090047 kernel: No NUMA configuration found Sep 9 21:52:13.090058 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Sep 9 21:52:13.090069 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Sep 9 21:52:13.090083 kernel: Zone ranges: Sep 9 21:52:13.090094 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 9 21:52:13.090105 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Sep 9 21:52:13.090115 kernel: Normal empty Sep 9 21:52:13.090125 kernel: Device empty Sep 9 21:52:13.090135 kernel: Movable zone start for each node Sep 9 21:52:13.090146 kernel: Early memory node ranges Sep 9 
21:52:13.090156 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 9 21:52:13.090167 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 9 21:52:13.090183 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 9 21:52:13.090199 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Sep 9 21:52:13.090211 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Sep 9 21:52:13.090223 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Sep 9 21:52:13.090234 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Sep 9 21:52:13.090245 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Sep 9 21:52:13.090257 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Sep 9 21:52:13.090268 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 21:52:13.090283 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 9 21:52:13.090309 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 9 21:52:13.090320 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 21:52:13.090332 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Sep 9 21:52:13.090344 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Sep 9 21:52:13.090366 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 9 21:52:13.090378 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 9 21:52:13.090390 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Sep 9 21:52:13.090403 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 9 21:52:13.090415 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 9 21:52:13.090432 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 9 21:52:13.090443 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 9 21:52:13.090455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 9 21:52:13.090467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 9 21:52:13.090479 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 9 21:52:13.090491 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 9 21:52:13.090502 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 9 21:52:13.090533 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 9 21:52:13.090546 kernel: TSC deadline timer available Sep 9 21:52:13.090561 kernel: CPU topo: Max. logical packages: 1 Sep 9 21:52:13.090570 kernel: CPU topo: Max. logical dies: 1 Sep 9 21:52:13.090579 kernel: CPU topo: Max. dies per package: 1 Sep 9 21:52:13.090588 kernel: CPU topo: Max. threads per core: 1 Sep 9 21:52:13.090600 kernel: CPU topo: Num. cores per package: 4 Sep 9 21:52:13.090612 kernel: CPU topo: Num. 
threads per package: 4 Sep 9 21:52:13.090623 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Sep 9 21:52:13.090637 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 9 21:52:13.090652 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 9 21:52:13.090670 kernel: kvm-guest: setup PV sched yield Sep 9 21:52:13.090683 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 9 21:52:13.090695 kernel: Booting paravirtualized kernel on KVM Sep 9 21:52:13.090707 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 9 21:52:13.090718 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 9 21:52:13.090730 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Sep 9 21:52:13.090742 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Sep 9 21:52:13.090753 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 9 21:52:13.090764 kernel: kvm-guest: PV spinlocks enabled Sep 9 21:52:13.090779 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 9 21:52:13.090791 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f0ebd120fc09fb344715b1492c3f1d02e1457be2c9792ea5ffb3fe4b15efa812 Sep 9 21:52:13.090806 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 21:52:13.090818 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 9 21:52:13.090829 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 21:52:13.090840 kernel: Fallback order for Node 0: 0 Sep 9 21:52:13.090851 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Sep 9 21:52:13.090862 kernel: Policy zone: DMA32 Sep 9 21:52:13.090877 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 21:52:13.090889 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 9 21:52:13.090901 kernel: ftrace: allocating 40102 entries in 157 pages Sep 9 21:52:13.090913 kernel: ftrace: allocated 157 pages with 5 groups Sep 9 21:52:13.090924 kernel: Dynamic Preempt: voluntary Sep 9 21:52:13.090936 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 21:52:13.111347 kernel: rcu: RCU event tracing is enabled. Sep 9 21:52:13.111366 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 9 21:52:13.111378 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 21:52:13.111404 kernel: Rude variant of Tasks RCU enabled. Sep 9 21:52:13.111415 kernel: Tracing variant of Tasks RCU enabled. Sep 9 21:52:13.111427 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 21:52:13.111442 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 9 21:52:13.111454 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 21:52:13.111465 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 21:52:13.111476 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Sep 9 21:52:13.111486 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 9 21:52:13.111497 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 21:52:13.111531 kernel: Console: colour dummy device 80x25 Sep 9 21:52:13.111544 kernel: printk: legacy console [ttyS0] enabled Sep 9 21:52:13.111555 kernel: ACPI: Core revision 20240827 Sep 9 21:52:13.111566 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 9 21:52:13.111578 kernel: APIC: Switch to symmetric I/O mode setup Sep 9 21:52:13.111590 kernel: x2apic enabled Sep 9 21:52:13.111601 kernel: APIC: Switched APIC routing to: physical x2apic Sep 9 21:52:13.111614 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 9 21:52:13.111626 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 9 21:52:13.111643 kernel: kvm-guest: setup PV IPIs Sep 9 21:52:13.111655 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 9 21:52:13.111667 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 9 21:52:13.111683 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Sep 9 21:52:13.111697 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 9 21:52:13.111711 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 9 21:52:13.111725 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 9 21:52:13.111739 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 9 21:52:13.111753 kernel: Spectre V2 : Mitigation: Retpolines Sep 9 21:52:13.111772 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 9 21:52:13.111787 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 9 21:52:13.111800 kernel: active return thunk: retbleed_return_thunk Sep 9 21:52:13.111814 kernel: RETBleed: Mitigation: untrained return thunk Sep 9 21:52:13.111832 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 9 21:52:13.111844 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 9 21:52:13.111855 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 9 21:52:13.111869 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 9 21:52:13.111885 kernel: active return thunk: srso_return_thunk Sep 9 21:52:13.111897 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 9 21:52:13.111908 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 9 21:52:13.111920 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 9 21:52:13.111932 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 9 21:52:13.116398 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 9 21:52:13.116424 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 9 21:52:13.116436 kernel: Freeing SMP alternatives memory: 32K Sep 9 21:52:13.116447 kernel: pid_max: default: 32768 minimum: 301 Sep 9 21:52:13.116458 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 9 21:52:13.116481 kernel: landlock: Up and running. Sep 9 21:52:13.116492 kernel: SELinux: Initializing. 
Sep 9 21:52:13.116503 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 21:52:13.116535 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 21:52:13.116547 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 9 21:52:13.116558 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 9 21:52:13.116569 kernel: ... version: 0 Sep 9 21:52:13.116579 kernel: ... bit width: 48 Sep 9 21:52:13.116589 kernel: ... generic registers: 6 Sep 9 21:52:13.116605 kernel: ... value mask: 0000ffffffffffff Sep 9 21:52:13.116616 kernel: ... max period: 00007fffffffffff Sep 9 21:52:13.116626 kernel: ... fixed-purpose events: 0 Sep 9 21:52:13.116637 kernel: ... event mask: 000000000000003f Sep 9 21:52:13.116648 kernel: signal: max sigframe size: 1776 Sep 9 21:52:13.116659 kernel: rcu: Hierarchical SRCU implementation. Sep 9 21:52:13.116673 kernel: rcu: Max phase no-delay instances is 400. Sep 9 21:52:13.116691 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 9 21:52:13.116703 kernel: smp: Bringing up secondary CPUs ... Sep 9 21:52:13.116720 kernel: smpboot: x86: Booting SMP configuration: Sep 9 21:52:13.116732 kernel: .... node #0, CPUs: #1 #2 #3 Sep 9 21:52:13.116745 kernel: smp: Brought up 1 node, 4 CPUs Sep 9 21:52:13.116758 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 9 21:52:13.116770 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54092K init, 2876K bss, 137200K reserved, 0K cma-reserved) Sep 9 21:52:13.116782 kernel: devtmpfs: initialized Sep 9 21:52:13.116793 kernel: x86/mm: Memory block size: 128MB Sep 9 21:52:13.116805 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 9 21:52:13.116817 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 9 21:52:13.116831 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Sep 9 21:52:13.116843 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 9 21:52:13.116854 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Sep 9 21:52:13.116864 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 9 21:52:13.116875 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 21:52:13.116887 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 9 21:52:13.116898 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 21:52:13.116910 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 21:52:13.116925 kernel: audit: initializing netlink subsys (disabled) Sep 9 21:52:13.120268 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 21:52:13.120314 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 9 21:52:13.120325 kernel: audit: type=2000 audit(1757454720.910:1): state=initialized audit_enabled=0 res=1 Sep 9 21:52:13.120335 kernel: cpuidle: using governor menu Sep 9 21:52:13.120344 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 21:52:13.120354 kernel: dca service started, version 1.12.1 Sep 9 21:52:13.120364 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Sep 9 21:52:13.120373 kernel: PCI: Using configuration type 1 for base access Sep 
9 21:52:13.120392 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 9 21:52:13.120401 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 21:52:13.120410 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 21:52:13.120420 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 21:52:13.120429 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 21:52:13.120439 kernel: ACPI: Added _OSI(Module Device) Sep 9 21:52:13.120448 kernel: ACPI: Added _OSI(Processor Device) Sep 9 21:52:13.120458 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 21:52:13.120467 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 21:52:13.120479 kernel: ACPI: Interpreter enabled Sep 9 21:52:13.120488 kernel: ACPI: PM: (supports S0 S3 S5) Sep 9 21:52:13.120497 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 21:52:13.120507 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 21:52:13.120528 kernel: PCI: Using E820 reservations for host bridge windows Sep 9 21:52:13.120537 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 9 21:52:13.120547 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 21:52:13.127049 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 21:52:13.127395 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 9 21:52:13.127602 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 9 21:52:13.127621 kernel: PCI host bridge to bus 0000:00 Sep 9 21:52:13.127844 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 9 21:52:13.128033 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 9 21:52:13.128199 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 9 21:52:13.128348 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 9 21:52:13.128533 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 9 21:52:13.128679 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 9 21:52:13.128834 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 21:52:13.138405 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 9 21:52:13.138700 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Sep 9 21:52:13.139265 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Sep 9 21:52:13.139467 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Sep 9 21:52:13.139683 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Sep 9 21:52:13.139880 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 9 21:52:13.143281 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 9 21:52:13.152414 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Sep 9 21:52:13.152618 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Sep 9 21:52:13.152780 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Sep 9 21:52:13.162499 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 9 21:52:13.162771 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Sep 9 21:52:13.162973 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] 
Sep 9 21:52:13.163158 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Sep 9 21:52:13.163373 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 9 21:52:13.163584 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Sep 9 21:52:13.163753 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Sep 9 21:52:13.163926 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 9 21:52:13.164115 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Sep 9 21:52:13.164325 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 9 21:52:13.164525 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 9 21:52:13.164847 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 9 21:52:13.175822 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Sep 9 21:52:13.176062 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Sep 9 21:52:13.176276 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 9 21:52:13.176441 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Sep 9 21:52:13.176458 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 9 21:52:13.176470 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 9 21:52:13.176482 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 9 21:52:13.176493 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 9 21:52:13.176505 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 9 21:52:13.176538 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 9 21:52:13.176554 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 9 21:52:13.176566 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 9 21:52:13.176578 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 9 21:52:13.176590 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 9 21:52:13.176601 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 9 21:52:13.176613 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 9 21:52:13.176625 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 9 21:52:13.176637 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 9 21:52:13.176648 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 9 21:52:13.176663 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 9 21:52:13.176675 kernel: iommu: Default domain type: Translated Sep 9 21:52:13.176686 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 21:52:13.176698 kernel: efivars: Registered efivars operations Sep 9 21:52:13.176710 kernel: PCI: Using ACPI for IRQ routing Sep 9 21:52:13.176722 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 9 21:52:13.176734 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 9 21:52:13.176746 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Sep 9 21:52:13.176757 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Sep 9 21:52:13.176772 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Sep 9 21:52:13.176784 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Sep 9 21:52:13.176795 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Sep 9 21:52:13.176807 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Sep 9 21:52:13.176819 kernel: e820: reserve RAM buffer [mem 
0x9cedc000-0x9fffffff] Sep 9 21:52:13.183172 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 9 21:52:13.183385 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 9 21:52:13.183602 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 9 21:52:13.183633 kernel: vgaarb: loaded Sep 9 21:52:13.183645 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 9 21:52:13.183656 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 9 21:52:13.183667 kernel: clocksource: Switched to clocksource kvm-clock Sep 9 21:52:13.183679 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 21:52:13.183690 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 21:52:13.183701 kernel: pnp: PnP ACPI init Sep 9 21:52:13.183955 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 9 21:52:13.183982 kernel: pnp: PnP ACPI: found 6 devices Sep 9 21:52:13.183994 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 9 21:52:13.184006 kernel: NET: Registered PF_INET protocol family Sep 9 21:52:13.184018 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 21:52:13.184030 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 9 21:52:13.184041 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 21:52:13.184053 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 21:52:13.184069 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 9 21:52:13.184082 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 9 21:52:13.184098 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 21:52:13.184111 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 21:52:13.184123 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 21:52:13.184135 kernel: NET: Registered PF_XDP protocol family Sep 9 21:52:13.184315 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Sep 9 21:52:13.184483 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Sep 9 21:52:13.184668 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 9 21:52:13.184829 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 9 21:52:13.187067 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 9 21:52:13.187248 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Sep 9 21:52:13.187452 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 9 21:52:13.187675 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 9 21:52:13.187696 kernel: PCI: CLS 0 bytes, default 64 Sep 9 21:52:13.187709 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 9 21:52:13.187722 kernel: Initialise system trusted keyrings Sep 9 21:52:13.187742 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 21:52:13.187754 kernel: Key type asymmetric registered Sep 9 21:52:13.187767 kernel: Asymmetric key parser 'x509' registered Sep 9 21:52:13.187779 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 9 21:52:13.187792 kernel: io scheduler mq-deadline registered Sep 9 21:52:13.187805 kernel: io scheduler kyber registered Sep 9 21:52:13.187817 
kernel: io scheduler bfq registered Sep 9 21:52:13.187833 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 21:52:13.187846 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 9 21:52:13.187859 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 9 21:52:13.187871 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 9 21:52:13.187883 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 21:52:13.187895 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 21:52:13.187907 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 9 21:52:13.187919 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 21:52:13.187931 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 21:52:13.188148 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 9 21:52:13.188299 kernel: rtc_cmos 00:04: registered as rtc0 Sep 9 21:52:13.188316 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 9 21:52:13.188457 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T21:52:11 UTC (1757454731) Sep 9 21:52:13.188623 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 9 21:52:13.188639 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 9 21:52:13.188651 kernel: efifb: probing for efifb Sep 9 21:52:13.188663 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 9 21:52:13.188679 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 9 21:52:13.188691 kernel: efifb: scrolling: redraw Sep 9 21:52:13.188704 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 9 21:52:13.188716 kernel: Console: switching to colour frame buffer device 160x50 Sep 9 21:52:13.188728 kernel: fb0: EFI VGA frame buffer device Sep 9 21:52:13.188740 kernel: pstore: Using crash dump compression: deflate Sep 9 21:52:13.188752 kernel: pstore: Registered efi_pstore as persistent store backend Sep 9 21:52:13.188764 kernel: NET: Registered PF_INET6 protocol family Sep 9 21:52:13.188776 kernel: Segment Routing with IPv6 Sep 9 21:52:13.188791 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 21:52:13.188803 kernel: NET: Registered PF_PACKET protocol family Sep 9 21:52:13.188815 kernel: Key type dns_resolver registered Sep 9 21:52:13.188827 kernel: IPI shorthand broadcast: enabled Sep 9 21:52:13.188839 kernel: sched_clock: Marking stable (11095005116, 337870635)->(11801148496, -368272745) Sep 9 21:52:13.188851 kernel: registered taskstats version 1 Sep 9 21:52:13.188863 kernel: Loading compiled-in X.509 certificates Sep 9 21:52:13.188875 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 003b39862f2a560eb5545d7d88a07fc5bdfce075' Sep 9 21:52:13.188887 kernel: Demotion targets for Node 0: null Sep 9 21:52:13.188903 kernel: Key type .fscrypt registered Sep 9 21:52:13.188915 kernel: Key type fscrypt-provisioning registered Sep 9 21:52:13.188929 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 21:52:13.191475 kernel: ima: Allocated hash algorithm: sha1 Sep 9 21:52:13.191492 kernel: ima: No architecture policies found Sep 9 21:52:13.191503 kernel: clk: Disabling unused clocks Sep 9 21:52:13.191542 kernel: Warning: unable to open an initial console. 
Sep 9 21:52:13.191554 kernel: Freeing unused kernel image (initmem) memory: 54092K Sep 9 21:52:13.191565 kernel: Write protecting the kernel read-only data: 24576k Sep 9 21:52:13.191585 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K Sep 9 21:52:13.191596 kernel: Run /init as init process Sep 9 21:52:13.191607 kernel: with arguments: Sep 9 21:52:13.191619 kernel: /init Sep 9 21:52:13.191630 kernel: with environment: Sep 9 21:52:13.191641 kernel: HOME=/ Sep 9 21:52:13.191652 kernel: TERM=linux Sep 9 21:52:13.191663 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 21:52:13.191676 systemd[1]: Successfully made /usr/ read-only. Sep 9 21:52:13.191697 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 21:52:13.191710 systemd[1]: Detected virtualization kvm. Sep 9 21:52:13.191723 systemd[1]: Detected architecture x86-64. Sep 9 21:52:13.191734 systemd[1]: Running in initrd. Sep 9 21:52:13.191746 systemd[1]: No hostname configured, using default hostname. Sep 9 21:52:13.191758 systemd[1]: Hostname set to . Sep 9 21:52:13.191771 systemd[1]: Initializing machine ID from VM UUID. Sep 9 21:52:13.191787 systemd[1]: Queued start job for default target initrd.target. Sep 9 21:52:13.191799 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 21:52:13.191811 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 21:52:13.191825 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 21:52:13.191837 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 21:52:13.191849 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 21:52:13.191862 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 21:52:13.191879 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 21:52:13.191892 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 21:52:13.191904 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 21:52:13.191917 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 21:52:13.191929 systemd[1]: Reached target paths.target - Path Units. Sep 9 21:52:13.191952 systemd[1]: Reached target slices.target - Slice Units. Sep 9 21:52:13.191964 systemd[1]: Reached target swap.target - Swaps. Sep 9 21:52:13.191977 systemd[1]: Reached target timers.target - Timer Units. Sep 9 21:52:13.191994 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 21:52:13.192007 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 21:52:13.192020 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 21:52:13.192033 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 21:52:13.192046 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 9 21:52:13.192060 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 21:52:13.192074 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 21:52:13.192087 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 21:52:13.192099 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 21:52:13.194028 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 21:52:13.194046 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 21:52:13.194062 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 21:52:13.194076 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 21:52:13.194089 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 21:52:13.194101 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 21:52:13.194114 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:52:13.194127 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 21:52:13.194149 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 21:52:13.194162 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 21:52:13.194176 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 21:52:13.194287 systemd-journald[222]: Collecting audit messages is disabled. Sep 9 21:52:13.194324 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 21:52:13.194336 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 21:52:13.194348 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:52:13.194360 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 21:52:13.194376 systemd-journald[222]: Journal started Sep 9 21:52:13.194404 systemd-journald[222]: Runtime Journal (/run/log/journal/6a7fe03d1feb4790912b55d0a046ac11) is 6M, max 48.4M, 42.4M free. Sep 9 21:52:13.069084 systemd-modules-load[223]: Inserted module 'overlay' Sep 9 21:52:13.217725 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 21:52:13.250999 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 21:52:13.261274 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 21:52:13.278179 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 21:52:13.292966 kernel: Bridge firewalling registered Sep 9 21:52:13.305496 systemd-modules-load[223]: Inserted module 'br_netfilter' Sep 9 21:52:13.307206 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 21:52:13.322684 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 21:52:13.322906 systemd-tmpfiles[236]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 21:52:13.334647 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 21:52:13.354747 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Sep 9 21:52:13.366223 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 21:52:13.397173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 21:52:13.406217 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 21:52:13.454698 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f0ebd120fc09fb344715b1492c3f1d02e1457be2c9792ea5ffb3fe4b15efa812 Sep 9 21:52:13.587046 systemd-resolved[265]: Positive Trust Anchors: Sep 9 21:52:13.587077 systemd-resolved[265]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 21:52:13.587115 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 21:52:13.590768 systemd-resolved[265]: Defaulting to hostname 'linux'. Sep 9 21:52:13.593113 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 21:52:13.620604 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 21:52:13.854966 kernel: SCSI subsystem initialized Sep 9 21:52:13.900602 kernel: Loading iSCSI transport class v2.0-870. Sep 9 21:52:13.940097 kernel: iscsi: registered transport (tcp) Sep 9 21:52:14.013057 kernel: iscsi: registered transport (qla4xxx) Sep 9 21:52:14.013155 kernel: QLogic iSCSI HBA Driver Sep 9 21:52:14.106812 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 21:52:14.176375 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 21:52:14.194073 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 21:52:14.432822 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 21:52:14.457236 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 21:52:14.612637 kernel: raid6: avx2x4 gen() 15106 MB/s Sep 9 21:52:14.630641 kernel: raid6: avx2x2 gen() 8050 MB/s Sep 9 21:52:14.649911 kernel: raid6: avx2x1 gen() 5969 MB/s Sep 9 21:52:14.650011 kernel: raid6: using algorithm avx2x4 gen() 15106 MB/s Sep 9 21:52:14.670820 kernel: raid6: .... xor() 1927 MB/s, rmw enabled Sep 9 21:52:14.670962 kernel: raid6: using avx2x2 recovery algorithm Sep 9 21:52:14.802943 kernel: xor: automatically using best checksumming function avx Sep 9 21:52:15.278450 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 21:52:15.319174 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 21:52:15.342132 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 21:52:15.427231 systemd-udevd[472]: Using default interface naming scheme 'v255'. 
Sep 9 21:52:15.440684 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 21:52:15.444230 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 21:52:15.508436 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation Sep 9 21:52:15.595476 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 21:52:15.599663 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 21:52:15.794253 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 21:52:15.818124 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 21:52:15.972012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 21:52:15.972194 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:52:15.979874 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:52:15.986906 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:52:16.006019 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 21:52:16.025621 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 21:52:16.027663 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 9 21:52:16.026125 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 21:52:16.036013 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 21:52:16.027156 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:52:16.046718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:52:16.066568 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 21:52:16.066596 kernel: GPT:9289727 != 19775487 Sep 9 21:52:16.066609 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 21:52:16.066622 kernel: GPT:9289727 != 19775487 Sep 9 21:52:16.066642 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 21:52:16.066654 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 21:52:16.107960 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:52:16.151936 kernel: libata version 3.00 loaded. Sep 9 21:52:16.278400 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 21:52:16.315555 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 9 21:52:16.319872 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 21:52:16.394333 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 9 21:52:16.415917 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 21:52:16.416294 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 21:52:16.416314 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 9 21:52:16.423933 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 9 21:52:16.424339 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 21:52:16.436905 kernel: scsi host0: ahci Sep 9 21:52:16.437272 kernel: scsi host1: ahci Sep 9 21:52:16.437470 kernel: scsi host2: ahci Sep 9 21:52:16.437693 kernel: scsi host3: ahci Sep 9 21:52:16.438127 kernel: scsi host4: ahci Sep 9 21:52:16.438360 kernel: scsi host5: ahci Sep 9 21:52:16.438595 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 9 21:52:16.438613 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 9 21:52:16.438627 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 9 21:52:16.438641 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 9 21:52:16.438654 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 9 21:52:16.438668 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 9 21:52:16.435679 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 21:52:16.464276 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 21:52:16.474278 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 21:52:16.481466 kernel: AES CTR mode by8 optimization enabled Sep 9 21:52:16.535969 disk-uuid[627]: Primary Header is updated. Sep 9 21:52:16.535969 disk-uuid[627]: Secondary Entries is updated. Sep 9 21:52:16.535969 disk-uuid[627]: Secondary Header is updated. Sep 9 21:52:16.549189 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 21:52:16.572407 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 21:52:16.774343 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 21:52:16.775399 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 21:52:16.775425 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 21:52:16.778895 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 21:52:16.788947 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 9 21:52:16.789075 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 21:52:16.790840 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 21:52:16.794275 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 9 21:52:16.794319 kernel: ata3.00: applying bridge limits Sep 9 21:52:16.806307 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 21:52:16.806374 kernel: ata3.00: configured for UDMA/100 Sep 9 21:52:16.813576 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 9 21:52:16.960916 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 9 21:52:16.961360 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 21:52:16.991562 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 21:52:17.579630 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 21:52:17.581479 disk-uuid[629]: The operation has completed successfully. Sep 9 21:52:17.772279 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Sep 9 21:52:17.774490 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 21:52:17.774684 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 21:52:17.895816 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 21:52:17.907987 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 21:52:17.926082 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 21:52:17.941592 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 21:52:17.963895 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 21:52:18.029358 sh[662]: Success Sep 9 21:52:18.062199 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 21:52:18.091905 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 21:52:18.092015 kernel: device-mapper: uevent: version 1.0.3 Sep 9 21:52:18.092047 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 21:52:18.172228 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 9 21:52:18.293202 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 21:52:18.297358 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 21:52:18.351366 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 21:52:18.368717 kernel: BTRFS: device fsid f72d0a81-8b28-47a3-b3ab-bf6ecd8938f0 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (682) Sep 9 21:52:18.374688 kernel: BTRFS info (device dm-0): first mount of filesystem f72d0a81-8b28-47a3-b3ab-bf6ecd8938f0 Sep 9 21:52:18.374782 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 21:52:18.423411 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 21:52:18.423507 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 21:52:18.431144 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 21:52:18.438039 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 21:52:18.454613 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 21:52:18.467184 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 21:52:18.485373 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 21:52:18.621071 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (723) Sep 9 21:52:18.637417 kernel: BTRFS info (device vda6): first mount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 21:52:18.637504 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 21:52:18.657364 kernel: BTRFS info (device vda6): turning on async discard Sep 9 21:52:18.657453 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 21:52:18.677984 kernel: BTRFS info (device vda6): last unmount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 21:52:18.700873 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 21:52:18.716473 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 21:52:19.177885 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 9 21:52:19.191600 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 21:52:19.639407 systemd-networkd[851]: lo: Link UP Sep 9 21:52:19.639420 systemd-networkd[851]: lo: Gained carrier Sep 9 21:52:19.647067 systemd-networkd[851]: Enumeration completed Sep 9 21:52:19.649694 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 21:52:19.650632 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 21:52:19.651815 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 21:52:19.659915 systemd[1]: Reached target network.target - Network. Sep 9 21:52:19.664632 systemd-networkd[851]: eth0: Link UP Sep 9 21:52:19.665902 systemd-networkd[851]: eth0: Gained carrier Sep 9 21:52:19.665923 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 21:52:19.870963 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 21:52:19.872676 ignition[780]: Ignition 2.22.0 Sep 9 21:52:19.872685 ignition[780]: Stage: fetch-offline Sep 9 21:52:19.872761 ignition[780]: no configs at "/usr/lib/ignition/base.d" Sep 9 21:52:19.872780 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 21:52:19.872966 ignition[780]: parsed url from cmdline: "" Sep 9 21:52:19.872972 ignition[780]: no config URL provided Sep 9 21:52:19.872984 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 21:52:19.872996 ignition[780]: no config at "/usr/lib/ignition/user.ign" Sep 9 21:52:19.873027 ignition[780]: op(1): [started] loading QEMU firmware config module Sep 9 21:52:19.873033 ignition[780]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 21:52:19.920305 ignition[780]: op(1): [finished] loading QEMU firmware config module Sep 9 21:52:19.985406 ignition[780]: parsing config with SHA512: c4c10795b7cd45d7ed206cf43c9661f69cea8aab6ca61edfce8b25f5306825ce039bbeab1f5167ad0ea0efba0ee30da3fdc2f888bd3410617ec7564b53cbe69b Sep 9 21:52:20.047851 unknown[780]: fetched base config from "system" Sep 9 21:52:20.047872 unknown[780]: fetched user config from "qemu" Sep 9 21:52:20.050059 ignition[780]: fetch-offline: fetch-offline passed Sep 9 21:52:20.061128 systemd-resolved[265]: Detected conflict on linux IN A 10.0.0.15 Sep 9 21:52:20.050226 ignition[780]: Ignition finished successfully Sep 9 21:52:20.061146 systemd-resolved[265]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Sep 9 21:52:20.070313 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 21:52:20.073017 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 21:52:20.080041 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 21:52:20.409668 ignition[864]: Ignition 2.22.0 Sep 9 21:52:20.409681 ignition[864]: Stage: kargs Sep 9 21:52:20.418978 ignition[864]: no configs at "/usr/lib/ignition/base.d" Sep 9 21:52:20.428366 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 21:52:20.419002 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 21:52:20.420419 ignition[864]: kargs: kargs passed Sep 9 21:52:20.459023 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
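The fetch-offline stage above found no config URL on the kernel command line, loaded the QEMU firmware config module, and then logged the SHA512 fingerprint of the config it parsed. If a copy of that rendered config were saved locally, the same fingerprint could be reproduced with a few lines of Python; the file name below is an assumption, not taken from the log.

# Minimal sketch (not Ignition's own code): reproduce the
# "parsing config with SHA512: ..." fingerprint logged by fetch-offline.
import hashlib

with open("user.ign", "rb") as f:  # assumption: local copy of the rendered config
    digest = hashlib.sha512(f.read()).hexdigest()

print("parsing config with SHA512:", digest)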
Sep 9 21:52:20.420490 ignition[864]: Ignition finished successfully Sep 9 21:52:20.598215 ignition[872]: Ignition 2.22.0 Sep 9 21:52:20.598242 ignition[872]: Stage: disks Sep 9 21:52:20.703739 ignition[872]: no configs at "/usr/lib/ignition/base.d" Sep 9 21:52:20.703776 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 21:52:20.705768 ignition[872]: disks: disks passed Sep 9 21:52:20.705857 ignition[872]: Ignition finished successfully Sep 9 21:52:20.721571 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 21:52:20.782303 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 21:52:20.790427 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 21:52:20.840047 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 21:52:20.861017 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 21:52:20.871810 systemd[1]: Reached target basic.target - Basic System. Sep 9 21:52:20.893059 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 21:52:20.991831 systemd-fsck[881]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 21:52:21.365203 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 21:52:21.372957 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 21:52:21.618635 systemd-networkd[851]: eth0: Gained IPv6LL Sep 9 21:52:21.882759 kernel: EXT4-fs (vda9): mounted filesystem b54acc07-9600-49db-baed-d5fd6f41a1a5 r/w with ordered data mode. Quota mode: none. Sep 9 21:52:21.894332 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 21:52:21.901752 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 21:52:21.921627 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 21:52:22.009892 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 21:52:22.023265 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 21:52:22.026097 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 21:52:22.031017 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 21:52:22.064807 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (890) Sep 9 21:52:22.079727 kernel: BTRFS info (device vda6): first mount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 21:52:22.079834 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 21:52:22.080802 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 21:52:22.105799 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 21:52:22.147961 kernel: BTRFS info (device vda6): turning on async discard Sep 9 21:52:22.148065 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 21:52:22.160258 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
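For scale, the fsck summary above ("15/553520 files, 52789/553472 blocks") works out to well under 10% utilisation. The sketch below does the arithmetic, assuming the common 4 KiB ext4 block size, which the log itself does not state.

# Worked numbers from the fsck summary above (4 KiB blocks are an assumption).
files_used, files_total = 15, 553520
blocks_used, blocks_total = 52789, 553472

print(f"inodes in use: {files_used / files_total:.4%}")
print(f"blocks in use: {blocks_used / blocks_total:.2%}")
print(f"approx. data: {blocks_used * 4096 / 2**20:.1f} MiB of "
      f"{blocks_total * 4096 / 2**30:.2f} GiB")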
Sep 9 21:52:22.372088 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 21:52:22.392689 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory Sep 9 21:52:22.432761 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 21:52:22.458895 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 21:52:22.931395 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 21:52:22.962430 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 21:52:22.972918 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 21:52:23.032942 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 21:52:23.039721 kernel: BTRFS info (device vda6): last unmount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 21:52:23.348564 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 21:52:23.368801 ignition[1003]: INFO : Ignition 2.22.0 Sep 9 21:52:23.368801 ignition[1003]: INFO : Stage: mount Sep 9 21:52:23.378877 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 21:52:23.378877 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 21:52:23.378877 ignition[1003]: INFO : mount: mount passed Sep 9 21:52:23.378877 ignition[1003]: INFO : Ignition finished successfully Sep 9 21:52:23.393088 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 21:52:23.420668 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 21:52:23.461832 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 21:52:23.545875 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1016) Sep 9 21:52:23.553394 kernel: BTRFS info (device vda6): first mount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 21:52:23.553493 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 21:52:23.572900 kernel: BTRFS info (device vda6): turning on async discard Sep 9 21:52:23.572989 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 21:52:23.584363 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 21:52:23.780896 ignition[1033]: INFO : Ignition 2.22.0 Sep 9 21:52:23.780896 ignition[1033]: INFO : Stage: files Sep 9 21:52:23.780896 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 21:52:23.780896 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 21:52:23.793101 ignition[1033]: DEBUG : files: compiled without relabeling support, skipping Sep 9 21:52:23.838265 ignition[1033]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 21:52:23.838265 ignition[1033]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 21:52:23.877665 ignition[1033]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 21:52:23.877665 ignition[1033]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 21:52:23.893734 ignition[1033]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 21:52:23.880939 unknown[1033]: wrote ssh authorized keys file for user: core Sep 9 21:52:23.909464 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 21:52:23.913837 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 9 21:52:23.969748 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 21:52:24.432234 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 21:52:24.432234 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 21:52:24.432234 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 9 21:52:24.629853 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 21:52:25.055637 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 21:52:25.055637 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 21:52:25.081685 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 21:52:25.081685 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 21:52:25.081685 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 21:52:25.081685 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 21:52:25.081685 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 21:52:25.081685 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 21:52:25.081685 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 21:52:25.081685 ignition[1033]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 21:52:25.081685 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 21:52:25.081685 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 21:52:25.135651 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 21:52:25.135651 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 21:52:25.135651 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 9 21:52:25.543493 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 21:52:28.013048 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 21:52:28.013048 ignition[1033]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 9 21:52:28.049423 ignition[1033]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 21:52:28.440086 ignition[1033]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 21:52:28.440086 ignition[1033]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 9 21:52:28.440086 ignition[1033]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 9 21:52:28.440086 ignition[1033]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 21:52:28.519633 ignition[1033]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 21:52:28.519633 ignition[1033]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 9 21:52:28.519633 ignition[1033]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 21:52:28.644875 ignition[1033]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 21:52:28.670111 ignition[1033]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 21:52:28.670111 ignition[1033]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 21:52:28.670111 ignition[1033]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 9 21:52:28.670111 ignition[1033]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 21:52:28.670111 ignition[1033]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 21:52:28.670111 ignition[1033]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Sep 9 21:52:28.670111 ignition[1033]: INFO : files: files passed Sep 9 21:52:28.670111 ignition[1033]: INFO : Ignition finished successfully Sep 9 21:52:28.684720 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 21:52:28.742870 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 21:52:28.757907 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 21:52:28.815167 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 21:52:28.815370 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 21:52:28.827075 initrd-setup-root-after-ignition[1062]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 21:52:28.835961 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 21:52:28.835961 initrd-setup-root-after-ignition[1064]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 21:52:28.852265 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 21:52:28.862582 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 21:52:28.866800 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 21:52:28.881640 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 21:52:29.050243 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 21:52:29.054339 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 21:52:29.123289 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 21:52:29.136148 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 21:52:29.142108 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 21:52:29.146860 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 21:52:29.239354 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 21:52:29.260080 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 21:52:29.347021 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 21:52:29.351536 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 21:52:29.353374 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 21:52:29.364827 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 21:52:29.365044 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 21:52:29.368969 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 21:52:29.370671 systemd[1]: Stopped target basic.target - Basic System. Sep 9 21:52:29.378328 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 21:52:29.379990 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 21:52:29.381755 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 21:52:29.387839 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 21:52:29.390795 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Sep 9 21:52:29.411470 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 21:52:29.452397 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 21:52:29.481955 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 21:52:29.483660 systemd[1]: Stopped target swap.target - Swaps. Sep 9 21:52:29.487767 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 21:52:29.487990 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 21:52:29.496789 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 21:52:29.505917 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 21:52:29.509894 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 21:52:29.510176 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 21:52:29.548441 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 21:52:29.548707 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 21:52:29.561128 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 21:52:29.561316 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 21:52:29.563441 systemd[1]: Stopped target paths.target - Path Units. Sep 9 21:52:29.566314 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 21:52:29.567888 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 21:52:29.569666 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 21:52:29.575075 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 21:52:29.592115 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 21:52:29.592267 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 21:52:29.592432 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 21:52:29.594671 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 21:52:29.595083 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 21:52:29.595240 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 21:52:29.595410 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 21:52:29.595556 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 21:52:29.604831 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 21:52:29.607592 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 21:52:29.607875 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 21:52:29.613119 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 21:52:29.613424 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 21:52:29.613708 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 21:52:29.613987 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 21:52:29.614186 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 21:52:29.716256 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 21:52:29.725705 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 21:52:29.820748 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 9 21:52:29.849055 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 21:52:29.850703 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 21:52:30.042278 ignition[1088]: INFO : Ignition 2.22.0 Sep 9 21:52:30.042278 ignition[1088]: INFO : Stage: umount Sep 9 21:52:30.042278 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 21:52:30.042278 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 21:52:30.088669 ignition[1088]: INFO : umount: umount passed Sep 9 21:52:30.088669 ignition[1088]: INFO : Ignition finished successfully Sep 9 21:52:30.056964 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 21:52:30.057193 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 21:52:30.060804 systemd[1]: Stopped target network.target - Network. Sep 9 21:52:30.060868 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 21:52:30.060932 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 21:52:30.061030 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 21:52:30.061097 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 21:52:30.061177 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 21:52:30.061255 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 21:52:30.061342 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 21:52:30.061400 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 21:52:30.061495 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 21:52:30.061629 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 21:52:30.061913 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 21:52:30.062096 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 21:52:30.088683 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 21:52:30.088926 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 21:52:30.110223 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 21:52:30.110835 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 21:52:30.111152 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 21:52:30.126358 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 21:52:30.127951 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 21:52:30.145534 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 21:52:30.145622 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 21:52:30.157086 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 21:52:30.173082 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 21:52:30.173215 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 21:52:30.179736 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 21:52:30.179836 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 21:52:30.200727 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 21:52:30.200881 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 21:52:30.205102 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Sep 9 21:52:30.205164 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 21:52:30.210115 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 21:52:30.217140 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 21:52:30.217262 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 21:52:30.263204 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 21:52:30.263621 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 21:52:30.284889 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 21:52:30.284972 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 21:52:30.298241 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 21:52:30.298326 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 21:52:30.302177 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 21:52:30.302330 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 21:52:30.308753 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 21:52:30.308870 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 21:52:30.323322 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 21:52:30.323474 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 21:52:30.334759 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 21:52:30.337680 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 21:52:30.337996 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 21:52:30.350105 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 21:52:30.350211 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 21:52:30.360038 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 21:52:30.360144 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:52:30.366180 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 9 21:52:30.366283 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 21:52:30.366354 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 21:52:30.366972 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 21:52:30.367215 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 21:52:30.393462 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 21:52:30.394175 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 21:52:30.401452 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 21:52:30.418304 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 21:52:30.468813 systemd[1]: Switching root. Sep 9 21:52:30.549121 systemd-journald[222]: Journal stopped Sep 9 21:52:34.968330 systemd-journald[222]: Received SIGTERM from PID 1 (systemd). 
Sep 9 21:52:34.968447 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 21:52:34.968484 kernel: SELinux: policy capability open_perms=1 Sep 9 21:52:34.968597 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 21:52:34.968620 kernel: SELinux: policy capability always_check_network=0 Sep 9 21:52:34.968643 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 21:52:34.968660 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 21:52:34.968690 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 21:52:34.968722 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 21:52:34.968740 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 21:52:34.968759 kernel: audit: type=1403 audit(1757454752.072:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 21:52:34.968778 systemd[1]: Successfully loaded SELinux policy in 153.026ms. Sep 9 21:52:34.968799 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 30.887ms. Sep 9 21:52:34.968827 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 21:52:34.968846 systemd[1]: Detected virtualization kvm. Sep 9 21:52:34.968865 systemd[1]: Detected architecture x86-64. Sep 9 21:52:34.968883 systemd[1]: Detected first boot. Sep 9 21:52:34.968905 systemd[1]: Initializing machine ID from VM UUID. Sep 9 21:52:34.968923 zram_generator::config[1133]: No configuration found. Sep 9 21:52:34.968942 kernel: Guest personality initialized and is inactive Sep 9 21:52:34.968960 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 9 21:52:34.968980 kernel: Initialized host personality Sep 9 21:52:34.968996 kernel: NET: Registered PF_VSOCK protocol family Sep 9 21:52:34.969013 systemd[1]: Populated /etc with preset unit settings. Sep 9 21:52:34.969034 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 21:52:34.969063 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 21:52:34.969081 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 21:52:34.969104 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 21:52:34.969124 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 21:52:34.969141 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 21:52:34.969157 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 21:52:34.969172 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 21:52:34.969188 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 21:52:34.969204 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 21:52:34.969224 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 21:52:34.969241 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 21:52:34.969257 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 21:52:34.969275 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 9 21:52:34.969291 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 21:52:34.972397 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 21:52:34.972425 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 21:52:34.972453 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 21:52:34.972470 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 21:52:34.972487 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 21:52:34.972503 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 21:52:34.972539 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 21:52:34.972561 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 21:52:34.972579 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 21:52:34.972595 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 21:52:34.972612 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 21:52:34.972634 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 21:52:34.972650 systemd[1]: Reached target slices.target - Slice Units. Sep 9 21:52:34.972667 systemd[1]: Reached target swap.target - Swaps. Sep 9 21:52:34.972684 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 21:52:34.972702 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 21:52:34.972720 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 21:52:34.972739 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 21:52:34.972756 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 21:52:34.972778 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 21:52:34.972795 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 21:52:34.972816 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 21:52:34.972832 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 21:52:34.972848 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 21:52:34.972865 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 21:52:34.972881 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 21:52:34.972897 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 21:52:34.972914 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 21:52:34.972931 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 21:52:34.972960 systemd[1]: Reached target machines.target - Containers. Sep 9 21:52:34.972977 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 21:52:34.972995 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 21:52:34.973011 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Sep 9 21:52:34.973031 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 21:52:34.973047 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 21:52:34.973063 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 21:52:34.973078 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 21:52:34.973094 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 21:52:34.973115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 21:52:34.973135 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 21:52:34.973157 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 21:52:34.973176 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 21:52:34.973194 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 21:52:34.973212 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 21:52:34.973231 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 21:52:34.973249 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 21:52:34.973271 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 21:52:34.973289 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 21:52:34.973317 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 21:52:34.973335 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 21:52:34.973352 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 21:52:34.973378 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 21:52:34.973397 systemd[1]: Stopped verity-setup.service. Sep 9 21:52:34.973416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 21:52:34.973439 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 21:52:34.973457 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 21:52:34.973483 kernel: fuse: init (API version 7.41) Sep 9 21:52:34.973503 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 21:52:34.973542 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 21:52:34.973562 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 21:52:34.973581 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 21:52:34.973603 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 21:52:34.973622 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 21:52:34.973640 kernel: loop: module loaded Sep 9 21:52:34.973657 kernel: ACPI: bus type drm_connector registered Sep 9 21:52:34.973685 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 21:52:34.973705 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 21:52:34.973728 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 9 21:52:34.973818 systemd-journald[1211]: Collecting audit messages is disabled. Sep 9 21:52:34.973848 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 21:52:34.973863 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 21:52:34.973877 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 21:52:34.973894 systemd-journald[1211]: Journal started Sep 9 21:52:34.973936 systemd-journald[1211]: Runtime Journal (/run/log/journal/6a7fe03d1feb4790912b55d0a046ac11) is 6M, max 48.4M, 42.4M free. Sep 9 21:52:33.754922 systemd[1]: Queued start job for default target multi-user.target. Sep 9 21:52:33.782087 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 21:52:33.786702 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 21:52:34.984658 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 21:52:34.985473 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 21:52:34.985790 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 21:52:34.994656 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 21:52:34.995021 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 21:52:34.998796 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 21:52:34.999191 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 21:52:35.007289 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 21:52:35.010417 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 21:52:35.018450 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 21:52:35.021854 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 21:52:35.076151 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 21:52:35.085487 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 21:52:35.092478 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 21:52:35.097678 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 21:52:35.097747 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 21:52:35.107408 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 21:52:35.152338 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 21:52:35.161645 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 21:52:35.167644 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 21:52:35.187679 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 21:52:35.201041 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 21:52:35.208083 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 21:52:35.212591 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 21:52:35.221902 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 9 21:52:35.253721 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 21:52:35.285632 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 21:52:35.303830 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 21:52:35.306526 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 21:52:35.318976 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 21:52:35.327506 systemd-journald[1211]: Time spent on flushing to /var/log/journal/6a7fe03d1feb4790912b55d0a046ac11 is 59.311ms for 1080 entries. Sep 9 21:52:35.327506 systemd-journald[1211]: System Journal (/var/log/journal/6a7fe03d1feb4790912b55d0a046ac11) is 8M, max 195.6M, 187.6M free. Sep 9 21:52:35.508505 systemd-journald[1211]: Received client request to flush runtime journal. Sep 9 21:52:35.508570 kernel: loop0: detected capacity change from 0 to 128016 Sep 9 21:52:35.443960 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 21:52:35.457933 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 21:52:35.478567 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 21:52:35.487068 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 21:52:35.510245 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 21:52:35.625612 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 21:52:35.692992 kernel: loop1: detected capacity change from 0 to 221472 Sep 9 21:52:35.759393 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 21:52:35.812025 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 21:52:35.876632 kernel: loop2: detected capacity change from 0 to 110984 Sep 9 21:52:35.991559 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 21:52:36.000681 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 21:52:36.056975 kernel: loop3: detected capacity change from 0 to 128016 Sep 9 21:52:36.101047 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Sep 9 21:52:36.101071 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Sep 9 21:52:36.118622 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 21:52:36.131828 kernel: loop4: detected capacity change from 0 to 221472 Sep 9 21:52:36.198860 kernel: loop5: detected capacity change from 0 to 110984 Sep 9 21:52:36.235344 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 21:52:36.236117 (sd-merge)[1273]: Merged extensions into '/usr'. Sep 9 21:52:36.274093 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 21:52:36.274978 systemd[1]: Reloading... Sep 9 21:52:36.655546 zram_generator::config[1300]: No configuration found. Sep 9 21:52:37.135399 systemd[1]: Reloading finished in 859 ms. Sep 9 21:52:37.194121 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 21:52:37.227496 systemd[1]: Starting ensure-sysext.service... Sep 9 21:52:37.256813 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
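Just before the reload above, sd-merge picked up the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extensions and merged them into /usr. A quick, illustrative way to see which images ended up in one of the directories systemd-sysext scans is to list the /etc/extensions symlinks created during the files stage; this is a convenience sketch, not part of the boot flow.

# List sysext symlinks and their targets (e.g. kubernetes.raw -> /opt/extensions/...).
import os

ext_dir = "/etc/extensions"  # one of the directories systemd-sysext considers
for name in sorted(os.listdir(ext_dir)):
    path = os.path.join(ext_dir, name)
    target = os.readlink(path) if os.path.islink(path) else path
    print(f"{name} -> {target}")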
Sep 9 21:52:37.342783 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)... Sep 9 21:52:37.342808 systemd[1]: Reloading... Sep 9 21:52:37.359370 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 21:52:37.359415 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 21:52:37.359802 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 21:52:37.360067 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 21:52:37.361036 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 21:52:37.361319 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Sep 9 21:52:37.361391 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Sep 9 21:52:37.366724 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 21:52:37.366744 systemd-tmpfiles[1337]: Skipping /boot Sep 9 21:52:37.453573 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 21:52:37.453615 systemd-tmpfiles[1337]: Skipping /boot Sep 9 21:52:37.542784 ldconfig[1247]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 21:52:37.545582 zram_generator::config[1365]: No configuration found. Sep 9 21:52:38.105850 systemd[1]: Reloading finished in 761 ms. Sep 9 21:52:38.149035 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 21:52:38.154852 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 21:52:38.188246 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 21:52:38.206767 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 21:52:38.225880 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 21:52:38.235627 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 21:52:38.253469 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 21:52:38.273643 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 21:52:38.281066 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 21:52:38.296089 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 21:52:38.296322 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 21:52:38.305682 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 21:52:38.314334 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 21:52:38.320089 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 21:52:38.324941 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
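The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") come from overlapping tmpfiles.d entries. A simplified check in the same spirit is sketched below; unlike the real parser it ignores specifiers, precedence between /etc, /run and /usr, and the remaining fields, so treat it only as an approximation.

# Flag repeated paths across /usr/lib/tmpfiles.d (simplified, illustrative).
import glob
import shlex

seen = {}
for conf in sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf")):
    with open(conf, errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = shlex.split(line)
            if len(fields) < 2:
                continue
            path = fields[1]
            if path in seen:
                print(f"{conf}:{lineno}: duplicate line for path {path!r} "
                      f"(first seen at {seen[path]})")
            else:
                seen[path] = f"{conf}:{lineno}"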
Sep 9 21:52:38.325152 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 21:52:38.335667 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 21:52:38.337430 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 21:52:38.345679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 21:52:38.358316 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 21:52:38.367384 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 21:52:38.367723 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 21:52:38.374142 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 21:52:38.374498 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 21:52:38.402776 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 21:52:38.421443 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 21:52:38.435718 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 21:52:38.436063 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 21:52:38.438494 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 21:52:38.445655 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 21:52:38.456108 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 21:52:38.465880 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 21:52:38.473669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 21:52:38.473906 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 21:52:38.479431 systemd-udevd[1415]: Using default interface naming scheme 'v255'. Sep 9 21:52:38.492731 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 21:52:38.494951 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 21:52:38.501729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 21:52:38.502108 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 21:52:38.552325 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 21:52:38.552688 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 21:52:38.563934 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 21:52:38.564271 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 21:52:38.567364 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 21:52:38.567700 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 21:52:38.580412 systemd[1]: Finished ensure-sysext.service. 
Sep 9 21:52:38.604810 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 21:52:38.604934 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 21:52:38.617670 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 21:52:38.676019 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 21:52:38.684702 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 21:52:38.743259 augenrules[1457]: No rules Sep 9 21:52:38.746048 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 21:52:38.746590 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 21:52:38.767095 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 21:52:38.781765 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 21:52:39.040596 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 21:52:39.080493 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 21:52:39.110466 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 21:52:39.160372 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 21:52:39.176049 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 21:52:39.188266 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 21:52:39.192540 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 9 21:52:39.204555 kernel: ACPI: button: Power Button [PWRF] Sep 9 21:52:39.269908 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 21:52:39.285557 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 9 21:52:39.286137 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 9 21:52:39.288550 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 21:52:39.451740 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:52:39.458431 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 21:52:39.465971 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 21:52:39.495353 systemd-networkd[1465]: lo: Link UP Sep 9 21:52:39.495378 systemd-networkd[1465]: lo: Gained carrier Sep 9 21:52:39.498687 systemd-resolved[1409]: Positive Trust Anchors: Sep 9 21:52:39.498714 systemd-resolved[1409]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 21:52:39.498759 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 21:52:39.501019 systemd-networkd[1465]: Enumeration completed Sep 9 21:52:39.501706 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 21:52:39.501720 systemd-networkd[1465]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 21:52:39.504284 systemd-networkd[1465]: eth0: Link UP Sep 9 21:52:39.504585 systemd-networkd[1465]: eth0: Gained carrier Sep 9 21:52:39.504623 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 21:52:39.506972 systemd-resolved[1409]: Defaulting to hostname 'linux'. Sep 9 21:52:39.521236 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 21:52:39.585366 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 21:52:39.602299 systemd[1]: Reached target network.target - Network. Sep 9 21:52:39.602937 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 21:52:39.616582 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 21:52:39.630336 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 21:52:39.632689 systemd-networkd[1465]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 21:52:39.635273 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 21:52:39.635707 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:52:39.636272 systemd-timesyncd[1446]: Network configuration changed, trying to establish connection. Sep 9 21:52:40.356862 systemd-timesyncd[1446]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 21:52:40.356967 systemd-timesyncd[1446]: Initial clock synchronization to Tue 2025-09-09 21:52:40.356660 UTC. Sep 9 21:52:40.357055 systemd-resolved[1409]: Clock change detected. Flushing caches. Sep 9 21:52:40.357565 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:52:40.415368 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 21:52:40.575155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:52:40.580900 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 21:52:40.590588 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 21:52:40.592645 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 21:52:40.594602 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
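The positive trust anchor printed by systemd-resolved is the DNS root zone's DS record; its fields have fixed meanings in DNSSEC (RFC 4034). The values below are copied from the log line, and the labelling is only an annotation:

    # Root trust anchor as logged by systemd-resolved: ". IN DS 20326 8 2 <digest>"
    key_tag = 20326       # identifies the signing key (the root KSK-2017)
    algorithm = 8         # DNSSEC algorithm 8 = RSA/SHA-256
    digest_type = 2       # digest type 2 = SHA-256
    digest = "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

    print(f"key tag {key_tag}, algorithm {algorithm} (RSA/SHA-256), "
          f"digest type {digest_type} (SHA-256), digest is {len(digest) // 2} bytes")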
Sep 9 21:52:40.604607 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 21:52:40.609268 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 21:52:40.618805 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 21:52:40.630442 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 21:52:40.630511 systemd[1]: Reached target paths.target - Path Units. Sep 9 21:52:40.632378 systemd[1]: Reached target timers.target - Timer Units. Sep 9 21:52:40.639787 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 21:52:40.648755 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 21:52:40.659660 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 21:52:40.672574 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 21:52:40.674161 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 21:52:40.688571 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 21:52:40.697705 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 21:52:40.700382 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 21:52:40.703190 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 21:52:40.704776 systemd[1]: Reached target basic.target - Basic System. Sep 9 21:52:40.706158 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 21:52:40.706195 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 21:52:40.713567 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 21:52:40.721389 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 21:52:40.725743 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 21:52:40.740565 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 21:52:40.744751 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 21:52:40.746181 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 21:52:40.749954 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 9 21:52:40.752022 kernel: kvm_amd: TSC scaling supported Sep 9 21:52:40.752092 kernel: kvm_amd: Nested Virtualization enabled Sep 9 21:52:40.752149 kernel: kvm_amd: Nested Paging enabled Sep 9 21:52:40.752173 kernel: kvm_amd: LBR virtualization supported Sep 9 21:52:40.752191 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 9 21:52:40.753608 kernel: kvm_amd: Virtual GIF supported Sep 9 21:52:40.772251 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 21:52:40.779252 jq[1539]: false Sep 9 21:52:40.781742 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 21:52:40.796294 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
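docker.socket, sshd.socket and systemd-hostnamed.socket in the list above are socket units, so the matching services can be started lazily on first connection instead of at boot. Under socket activation the service inherits the already-bound socket starting at file descriptor 3 and reads LISTEN_FDS/LISTEN_PID from its environment. A minimal Python sketch of the receiving side, assuming a single passed stream socket (not taken from any unit shipped here):

    import os
    import socket

    SD_LISTEN_FDS_START = 3  # first file descriptor systemd passes to an activated service

    def activated_sockets() -> list:
        """Sketch of sd_listen_fds(): return sockets handed over by systemd, if any."""
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []
        count = int(os.environ.get("LISTEN_FDS", "0"))
        return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]

    socks = activated_sockets()
    print(f"received {len(socks)} activated socket(s)" if socks else "not socket-activated")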
Sep 9 21:52:40.800272 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing passwd entry cache Sep 9 21:52:40.800300 oslogin_cache_refresh[1541]: Refreshing passwd entry cache Sep 9 21:52:40.801228 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 21:52:40.818980 extend-filesystems[1540]: Found /dev/vda6 Sep 9 21:52:40.827236 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting users, quitting Sep 9 21:52:40.827236 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 21:52:40.827236 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing group entry cache Sep 9 21:52:40.826732 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 21:52:40.819645 oslogin_cache_refresh[1541]: Failure getting users, quitting Sep 9 21:52:40.819690 oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 21:52:40.820686 oslogin_cache_refresh[1541]: Refreshing group entry cache Sep 9 21:52:40.833980 extend-filesystems[1540]: Found /dev/vda9 Sep 9 21:52:40.845118 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting groups, quitting Sep 9 21:52:40.845118 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 21:52:40.836096 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 21:52:40.835584 oslogin_cache_refresh[1541]: Failure getting groups, quitting Sep 9 21:52:40.837470 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 21:52:40.835609 oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 21:52:40.840610 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 21:52:40.848842 extend-filesystems[1540]: Checking size of /dev/vda9 Sep 9 21:52:40.865302 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 21:52:40.881676 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 21:52:40.882371 jq[1560]: true Sep 9 21:52:40.886924 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 21:52:40.888756 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 21:52:40.896553 extend-filesystems[1540]: Resized partition /dev/vda9 Sep 9 21:52:40.889386 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 9 21:52:40.889830 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 9 21:52:40.898885 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 21:52:40.904709 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 21:52:40.909612 extend-filesystems[1566]: resize2fs 1.47.3 (8-Jul-2025) Sep 9 21:52:40.912025 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 21:52:40.912467 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 9 21:52:40.924365 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 21:52:40.956763 (ntainerd)[1570]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 21:52:40.972997 tar[1568]: linux-amd64/helm Sep 9 21:52:40.972325 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 21:52:40.978800 jq[1569]: true Sep 9 21:52:41.007052 update_engine[1557]: I20250909 21:52:41.006820 1557 main.cc:92] Flatcar Update Engine starting Sep 9 21:52:41.084779 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 21:52:41.096005 extend-filesystems[1566]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 21:52:41.096005 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 21:52:41.096005 extend-filesystems[1566]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 21:52:41.090595 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 21:52:41.101457 dbus-daemon[1537]: [system] SELinux support is enabled Sep 9 21:52:41.105282 extend-filesystems[1540]: Resized filesystem in /dev/vda9 Sep 9 21:52:41.091143 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 21:52:41.128519 kernel: EDAC MC: Ver: 3.0.0 Sep 9 21:52:41.128639 update_engine[1557]: I20250909 21:52:41.105635 1557 update_check_scheduler.cc:74] Next update check in 7m4s Sep 9 21:52:41.102808 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 21:52:41.115552 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 21:52:41.115590 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 21:52:41.124800 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 21:52:41.124832 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 21:52:41.134171 systemd[1]: Started update-engine.service - Update Engine. Sep 9 21:52:41.141466 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 21:52:41.257591 systemd-logind[1552]: Watching system buttons on /dev/input/event2 (Power Button) Sep 9 21:52:41.257635 systemd-logind[1552]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 21:52:41.259156 systemd-logind[1552]: New seat seat0. Sep 9 21:52:41.274788 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 21:52:41.299301 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 21:52:41.334214 bash[1604]: Updated "/home/core/.ssh/authorized_keys" Sep 9 21:52:41.337022 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 21:52:41.356004 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 21:52:41.382415 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 21:52:41.390665 systemd[1]: Starting issuegen.service - Generate /run/issue... 
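The EXT4 messages above give the filesystem size before and after the online resize of /dev/vda9 in 4 KiB blocks; converting the block counts taken from the log is plain arithmetic:

    BLOCK_SIZE = 4096          # "(4k) blocks" per the resize messages
    before_blocks = 553_472
    after_blocks = 1_864_699

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(before_blocks):.2f} GiB, after: {gib(after_blocks):.2f} GiB, "
          f"gained: {gib(after_blocks - before_blocks):.2f} GiB")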
Sep 9 21:52:41.395107 systemd-networkd[1465]: eth0: Gained IPv6LL Sep 9 21:52:41.401730 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:33680.service - OpenSSH per-connection server daemon (10.0.0.1:33680). Sep 9 21:52:41.409533 locksmithd[1603]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 21:52:41.418597 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 21:52:41.429047 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 21:52:41.447720 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 21:52:41.538314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:52:41.555795 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 21:52:41.560281 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 21:52:41.560709 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 21:52:41.609502 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 21:52:41.813227 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 21:52:41.838111 sshd[1620]: Access denied for user core by PAM account configuration [preauth] Sep 9 21:52:41.847552 systemd[1]: sshd@0-10.0.0.15:22-10.0.0.1:33680.service: Deactivated successfully. Sep 9 21:52:41.878805 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 21:52:41.932667 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 21:52:41.956863 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 21:52:41.960696 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 21:52:41.965022 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 21:52:41.965425 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 21:52:41.972215 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 9 21:52:42.090142 containerd[1570]: time="2025-09-09T21:52:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 21:52:42.091352 containerd[1570]: time="2025-09-09T21:52:42.091213676Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 21:52:42.248742 containerd[1570]: time="2025-09-09T21:52:42.242480576Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.625µs" Sep 9 21:52:42.248742 containerd[1570]: time="2025-09-09T21:52:42.242539656Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 21:52:42.248742 containerd[1570]: time="2025-09-09T21:52:42.242571005Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 21:52:42.248742 containerd[1570]: time="2025-09-09T21:52:42.242904070Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 21:52:42.248742 containerd[1570]: time="2025-09-09T21:52:42.242944836Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 21:52:42.248742 containerd[1570]: time="2025-09-09T21:52:42.242990161Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 21:52:42.248742 containerd[1570]: time="2025-09-09T21:52:42.243099436Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 21:52:42.248742 containerd[1570]: time="2025-09-09T21:52:42.243122489Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 21:52:42.248742 containerd[1570]: time="2025-09-09T21:52:42.243585518Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 21:52:42.248742 containerd[1570]: time="2025-09-09T21:52:42.243608631Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 21:52:42.248742 containerd[1570]: time="2025-09-09T21:52:42.243624150Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 21:52:42.248742 containerd[1570]: time="2025-09-09T21:52:42.243640501Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 21:52:42.249301 containerd[1570]: time="2025-09-09T21:52:42.243787156Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 21:52:42.249301 containerd[1570]: time="2025-09-09T21:52:42.244150016Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 21:52:42.249301 containerd[1570]: time="2025-09-09T21:52:42.244196824Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Sep 9 21:52:42.249301 containerd[1570]: time="2025-09-09T21:52:42.244214908Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 21:52:42.249301 containerd[1570]: time="2025-09-09T21:52:42.244253731Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 21:52:42.249301 containerd[1570]: time="2025-09-09T21:52:42.244595612Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 21:52:42.249301 containerd[1570]: time="2025-09-09T21:52:42.244700068Z" level=info msg="metadata content store policy set" policy=shared Sep 9 21:52:42.278998 containerd[1570]: time="2025-09-09T21:52:42.277133122Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 21:52:42.278998 containerd[1570]: time="2025-09-09T21:52:42.277275779Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 21:52:42.278998 containerd[1570]: time="2025-09-09T21:52:42.277305916Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 21:52:42.278998 containerd[1570]: time="2025-09-09T21:52:42.277325462Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 21:52:42.278998 containerd[1570]: time="2025-09-09T21:52:42.277365798Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 21:52:42.278998 containerd[1570]: time="2025-09-09T21:52:42.277401866Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 21:52:42.278998 containerd[1570]: time="2025-09-09T21:52:42.277427804Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 21:52:42.278998 containerd[1570]: time="2025-09-09T21:52:42.277449835Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 21:52:42.278998 containerd[1570]: time="2025-09-09T21:52:42.277467909Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 21:52:42.278998 containerd[1570]: time="2025-09-09T21:52:42.277483829Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 21:52:42.278998 containerd[1570]: time="2025-09-09T21:52:42.277499589Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 21:52:42.278998 containerd[1570]: time="2025-09-09T21:52:42.277517432Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 21:52:42.278998 containerd[1570]: time="2025-09-09T21:52:42.278001510Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 21:52:42.278998 containerd[1570]: time="2025-09-09T21:52:42.278091479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 21:52:42.279498 containerd[1570]: time="2025-09-09T21:52:42.278117518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 21:52:42.279498 containerd[1570]: time="2025-09-09T21:52:42.278146101Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Sep 9 21:52:42.279498 containerd[1570]: time="2025-09-09T21:52:42.278166469Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 21:52:42.279498 containerd[1570]: time="2025-09-09T21:52:42.278181448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 21:52:42.279498 containerd[1570]: time="2025-09-09T21:52:42.278195704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 21:52:42.279498 containerd[1570]: time="2025-09-09T21:52:42.278208849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 21:52:42.279498 containerd[1570]: time="2025-09-09T21:52:42.278222454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 21:52:42.279498 containerd[1570]: time="2025-09-09T21:52:42.278235589Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 21:52:42.279498 containerd[1570]: time="2025-09-09T21:52:42.278250196Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 21:52:42.279498 containerd[1570]: time="2025-09-09T21:52:42.278364220Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 21:52:42.279498 containerd[1570]: time="2025-09-09T21:52:42.278386342Z" level=info msg="Start snapshots syncer" Sep 9 21:52:42.279498 containerd[1570]: time="2025-09-09T21:52:42.278433280Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 21:52:42.279823 containerd[1570]: time="2025-09-09T21:52:42.278812070Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 
21:52:42.279823 containerd[1570]: time="2025-09-09T21:52:42.278890948Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 21:52:42.280245 containerd[1570]: time="2025-09-09T21:52:42.279032043Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 21:52:42.280245 containerd[1570]: time="2025-09-09T21:52:42.279193175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 21:52:42.280245 containerd[1570]: time="2025-09-09T21:52:42.279223021Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 21:52:42.280245 containerd[1570]: time="2025-09-09T21:52:42.279239903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 21:52:42.280245 containerd[1570]: time="2025-09-09T21:52:42.279255121Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 21:52:42.280245 containerd[1570]: time="2025-09-09T21:52:42.279275239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 21:52:42.280245 containerd[1570]: time="2025-09-09T21:52:42.279290077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 21:52:42.280245 containerd[1570]: time="2025-09-09T21:52:42.279306598Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 21:52:42.280245 containerd[1570]: time="2025-09-09T21:52:42.279537290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 21:52:42.280245 containerd[1570]: time="2025-09-09T21:52:42.279614816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 21:52:42.280245 containerd[1570]: time="2025-09-09T21:52:42.279629163Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 21:52:42.280245 containerd[1570]: time="2025-09-09T21:52:42.279670129Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 21:52:42.280245 containerd[1570]: time="2025-09-09T21:52:42.279690347Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 21:52:42.280245 containerd[1570]: time="2025-09-09T21:52:42.279703662Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 21:52:42.280635 containerd[1570]: time="2025-09-09T21:52:42.279786127Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 21:52:42.280635 containerd[1570]: time="2025-09-09T21:52:42.279798510Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 21:52:42.280635 containerd[1570]: time="2025-09-09T21:52:42.279810272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 21:52:42.280635 containerd[1570]: time="2025-09-09T21:52:42.279827665Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 21:52:42.280635 
containerd[1570]: time="2025-09-09T21:52:42.279853233Z" level=info msg="runtime interface created" Sep 9 21:52:42.280635 containerd[1570]: time="2025-09-09T21:52:42.279860777Z" level=info msg="created NRI interface" Sep 9 21:52:42.280635 containerd[1570]: time="2025-09-09T21:52:42.279869814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 21:52:42.280635 containerd[1570]: time="2025-09-09T21:52:42.279882157Z" level=info msg="Connect containerd service" Sep 9 21:52:42.280635 containerd[1570]: time="2025-09-09T21:52:42.279906312Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 21:52:42.284983 containerd[1570]: time="2025-09-09T21:52:42.282664986Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 21:52:42.867464 containerd[1570]: time="2025-09-09T21:52:42.867349418Z" level=info msg="Start subscribing containerd event" Sep 9 21:52:42.867464 containerd[1570]: time="2025-09-09T21:52:42.867446640Z" level=info msg="Start recovering state" Sep 9 21:52:42.867722 containerd[1570]: time="2025-09-09T21:52:42.867635905Z" level=info msg="Start event monitor" Sep 9 21:52:42.867722 containerd[1570]: time="2025-09-09T21:52:42.867668236Z" level=info msg="Start cni network conf syncer for default" Sep 9 21:52:42.867722 containerd[1570]: time="2025-09-09T21:52:42.867694805Z" level=info msg="Start streaming server" Sep 9 21:52:42.867722 containerd[1570]: time="2025-09-09T21:52:42.867711998Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 21:52:42.867722 containerd[1570]: time="2025-09-09T21:52:42.867722808Z" level=info msg="runtime interface starting up..." Sep 9 21:52:42.867888 containerd[1570]: time="2025-09-09T21:52:42.867732426Z" level=info msg="starting plugins..." Sep 9 21:52:42.867888 containerd[1570]: time="2025-09-09T21:52:42.867757854Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 21:52:42.868320 containerd[1570]: time="2025-09-09T21:52:42.868224399Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 21:52:42.868394 containerd[1570]: time="2025-09-09T21:52:42.868325989Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 21:52:42.868688 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 21:52:42.871582 containerd[1570]: time="2025-09-09T21:52:42.871518356Z" level=info msg="containerd successfully booted in 0.790654s" Sep 9 21:52:42.942341 tar[1568]: linux-amd64/LICENSE Sep 9 21:52:42.942341 tar[1568]: linux-amd64/README.md Sep 9 21:52:43.007929 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 21:52:46.624427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:52:46.633146 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 21:52:46.639919 systemd[1]: Startup finished in 11.331s (kernel) + 20.137s (initrd) + 14.008s (userspace) = 45.477s. 
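The "Startup finished" line is the sum of the three phases systemd times; recomputing it from the printed figures shows they line up, with the last digit differing only through rounding of the individual phase values:

    kernel_s, initrd_s, userspace_s = 11.331, 20.137, 14.008
    total = kernel_s + initrd_s + userspace_s
    print(f"{kernel_s} + {initrd_s} + {userspace_s} = {total:.3f}s (log reports 45.477s)")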
Sep 9 21:52:46.654571 (kubelet)[1679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:52:49.703911 kubelet[1679]: E0909 21:52:49.703507 1679 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:52:49.719740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:52:49.721154 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:52:49.721974 systemd[1]: kubelet.service: Consumed 3.655s CPU time, 267.2M memory peak. Sep 9 21:52:51.908473 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:55128.service - OpenSSH per-connection server daemon (10.0.0.1:55128). Sep 9 21:52:52.049479 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 55128 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:52:52.062582 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:52:52.095677 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 21:52:52.102378 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 21:52:52.129418 systemd-logind[1552]: New session 1 of user core. Sep 9 21:52:52.196688 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 21:52:52.204212 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 21:52:52.325380 (systemd)[1698]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 21:52:52.334932 systemd-logind[1552]: New session c1 of user core. Sep 9 21:52:52.750201 systemd[1698]: Queued start job for default target default.target. Sep 9 21:52:52.778507 systemd[1698]: Created slice app.slice - User Application Slice. Sep 9 21:52:52.778543 systemd[1698]: Reached target paths.target - Paths. Sep 9 21:52:52.778595 systemd[1698]: Reached target timers.target - Timers. Sep 9 21:52:52.788410 systemd[1698]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 21:52:52.859995 systemd[1698]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 21:52:52.860116 systemd[1698]: Reached target sockets.target - Sockets. Sep 9 21:52:52.860205 systemd[1698]: Reached target basic.target - Basic System. Sep 9 21:52:52.860285 systemd[1698]: Reached target default.target - Main User Target. Sep 9 21:52:52.860361 systemd[1698]: Startup finished in 489ms. Sep 9 21:52:52.861536 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 21:52:52.880097 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 21:52:52.994249 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:55144.service - OpenSSH per-connection server daemon (10.0.0.1:55144). Sep 9 21:52:53.176990 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 55144 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:52:53.181249 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:52:53.204755 systemd-logind[1552]: New session 2 of user core. Sep 9 21:52:53.231726 systemd[1]: Started session-2.scope - Session 2 of User core. 
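The kubelet exit recorded above happens because /var/lib/kubelet/config.yaml has not been written yet (it is normally generated later, for example by kubeadm), and systemd keeps rescheduling the unit, as the "restart counter" lines further down show. Measuring the gap between each exit and the next scheduled restart in this log gives roughly ten seconds, which suggests a RestartSec on the order of 10s (an inference from the timestamps, not a value read from the unit file):

    from datetime import datetime

    # (main process exited, next scheduled restart) timestamp pairs copied from this log
    pairs = [
        ("21:52:49.719740", "21:52:59.759046"),  # -> restart counter 1
        ("21:53:01.925101", "21:53:11.997507"),  # -> restart counter 2
        ("21:53:13.004957", "21:53:23.245080"),  # -> restart counter 3
    ]

    fmt = "%H:%M:%S.%f"
    for exited, rescheduled in pairs:
        gap = datetime.strptime(rescheduled, fmt) - datetime.strptime(exited, fmt)
        print(f"{exited} -> {rescheduled}: {gap.total_seconds():.1f}s until restart")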
Sep 9 21:52:53.362245 sshd[1712]: Connection closed by 10.0.0.1 port 55144 Sep 9 21:52:53.361776 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Sep 9 21:52:53.402484 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:55144.service: Deactivated successfully. Sep 9 21:52:53.411512 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 21:52:53.421760 systemd-logind[1552]: Session 2 logged out. Waiting for processes to exit. Sep 9 21:52:53.434728 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:55156.service - OpenSSH per-connection server daemon (10.0.0.1:55156). Sep 9 21:52:53.449067 systemd-logind[1552]: Removed session 2. Sep 9 21:52:53.612734 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 55156 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:52:53.616630 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:52:53.651571 systemd-logind[1552]: New session 3 of user core. Sep 9 21:52:53.662674 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 21:52:53.737776 sshd[1721]: Connection closed by 10.0.0.1 port 55156 Sep 9 21:52:53.740811 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Sep 9 21:52:53.770829 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:55156.service: Deactivated successfully. Sep 9 21:52:53.780275 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 21:52:53.785130 systemd-logind[1552]: Session 3 logged out. Waiting for processes to exit. Sep 9 21:52:53.790880 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:55166.service - OpenSSH per-connection server daemon (10.0.0.1:55166). Sep 9 21:52:53.800167 systemd-logind[1552]: Removed session 3. Sep 9 21:52:53.890297 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 55166 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:52:53.892825 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:52:53.904216 systemd-logind[1552]: New session 4 of user core. Sep 9 21:52:53.918460 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 21:52:54.014520 sshd[1730]: Connection closed by 10.0.0.1 port 55166 Sep 9 21:52:54.015075 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Sep 9 21:52:54.029660 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:55166.service: Deactivated successfully. Sep 9 21:52:54.032362 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 21:52:54.035806 systemd-logind[1552]: Session 4 logged out. Waiting for processes to exit. Sep 9 21:52:54.041840 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:55168.service - OpenSSH per-connection server daemon (10.0.0.1:55168). Sep 9 21:52:54.046209 systemd-logind[1552]: Removed session 4. Sep 9 21:52:54.129940 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 55168 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:52:54.131582 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:52:54.148616 systemd-logind[1552]: New session 5 of user core. Sep 9 21:52:54.160738 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 9 21:52:54.267660 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 21:52:54.268086 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 21:52:54.293112 sudo[1740]: pam_unix(sudo:session): session closed for user root Sep 9 21:52:54.298434 sshd[1739]: Connection closed by 10.0.0.1 port 55168 Sep 9 21:52:54.298750 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Sep 9 21:52:54.331086 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:55168.service: Deactivated successfully. Sep 9 21:52:54.339128 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 21:52:54.355174 systemd-logind[1552]: Session 5 logged out. Waiting for processes to exit. Sep 9 21:52:54.371147 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:55172.service - OpenSSH per-connection server daemon (10.0.0.1:55172). Sep 9 21:52:54.377171 systemd-logind[1552]: Removed session 5. Sep 9 21:52:54.554931 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 55172 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:52:54.562961 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:52:54.583476 systemd-logind[1552]: New session 6 of user core. Sep 9 21:52:54.610197 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 21:52:54.709078 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 21:52:54.710720 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 21:52:54.748850 sudo[1751]: pam_unix(sudo:session): session closed for user root Sep 9 21:52:54.766549 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 21:52:54.770001 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 21:52:54.813109 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 21:52:54.986461 augenrules[1773]: No rules Sep 9 21:52:54.997811 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 21:52:55.001463 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 21:52:55.009826 sudo[1750]: pam_unix(sudo:session): session closed for user root Sep 9 21:52:55.017966 sshd[1749]: Connection closed by 10.0.0.1 port 55172 Sep 9 21:52:55.024808 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Sep 9 21:52:55.058987 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:55172.service: Deactivated successfully. Sep 9 21:52:55.072509 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 21:52:55.084387 systemd-logind[1552]: Session 6 logged out. Waiting for processes to exit. Sep 9 21:52:55.093401 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:55176.service - OpenSSH per-connection server daemon (10.0.0.1:55176). Sep 9 21:52:55.098714 systemd-logind[1552]: Removed session 6. Sep 9 21:52:55.277299 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 55176 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:52:55.280884 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:52:55.304991 systemd-logind[1552]: New session 7 of user core. Sep 9 21:52:55.331678 systemd[1]: Started session-7.scope - Session 7 of User core. 
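The sudo records above follow a fixed "user : PWD=... ; USER=... ; COMMAND=..." layout, so they split mechanically; the parser below is only an illustration, fed with one line copied from this log:

    def parse_sudo_entry(entry: str) -> dict:
        """Split 'user : PWD=... ; USER=... ; COMMAND=...' into a dict."""
        user, _, rest = entry.partition(" : ")
        fields = {"invoking_user": user.strip()}
        for part in rest.split(" ; "):
            key, _, value = part.partition("=")
            fields[key.strip()] = value
        return fields

    line = "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"
    print(parse_sudo_entry(line))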
Sep 9 21:52:55.471847 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 21:52:55.476676 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 21:52:58.769310 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 21:52:58.800285 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 21:52:59.759046 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 21:52:59.783038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:53:00.871732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:53:00.950101 (kubelet)[1819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:53:01.629840 dockerd[1806]: time="2025-09-09T21:53:01.628950847Z" level=info msg="Starting up" Sep 9 21:53:01.637812 dockerd[1806]: time="2025-09-09T21:53:01.636755274Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 21:53:01.735835 dockerd[1806]: time="2025-09-09T21:53:01.735707795Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 21:53:01.904824 kubelet[1819]: E0909 21:53:01.902417 1819 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:53:01.925101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:53:01.927672 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:53:01.930623 systemd[1]: kubelet.service: Consumed 1.185s CPU time, 110.4M memory peak. Sep 9 21:53:02.291654 dockerd[1806]: time="2025-09-09T21:53:02.289412529Z" level=info msg="Loading containers: start." Sep 9 21:53:02.338934 kernel: Initializing XFRM netlink socket Sep 9 21:53:03.320686 systemd-networkd[1465]: docker0: Link UP Sep 9 21:53:03.337387 dockerd[1806]: time="2025-09-09T21:53:03.334211460Z" level=info msg="Loading containers: done." Sep 9 21:53:03.441718 dockerd[1806]: time="2025-09-09T21:53:03.441127736Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 21:53:03.441718 dockerd[1806]: time="2025-09-09T21:53:03.441265424Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 21:53:03.441718 dockerd[1806]: time="2025-09-09T21:53:03.441425414Z" level=info msg="Initializing buildkit" Sep 9 21:53:03.574183 dockerd[1806]: time="2025-09-09T21:53:03.573710256Z" level=info msg="Completed buildkit initialization" Sep 9 21:53:03.606375 systemd[1]: Started docker.service - Docker Application Container Engine. 
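Taken together, the dockerd messages bracket how long daemon initialization took on this boot; the offsets below are computed from timestamps copied out of the log (truncated to microseconds):

    from datetime import datetime

    fmt = "%H:%M:%S.%f"
    events = {
        "Starting up":                       "21:53:01.628950",
        "Loading containers: start.":        "21:53:02.289412",
        "Loading containers: done.":         "21:53:03.334211",
        "Completed buildkit initialization": "21:53:03.573710",
    }
    t0 = datetime.strptime(events["Starting up"], fmt)
    for name, stamp in events.items():
        offset = (datetime.strptime(stamp, fmt) - t0).total_seconds()
        print(f"+{offset:6.3f}s  {name}")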
Sep 9 21:53:03.610906 dockerd[1806]: time="2025-09-09T21:53:03.608723217Z" level=info msg="Daemon has completed initialization" Sep 9 21:53:03.610906 dockerd[1806]: time="2025-09-09T21:53:03.608817654Z" level=info msg="API listen on /run/docker.sock" Sep 9 21:53:06.882401 containerd[1570]: time="2025-09-09T21:53:06.882040546Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 9 21:53:08.059605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142994359.mount: Deactivated successfully. Sep 9 21:53:11.997507 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 21:53:12.000938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:53:12.503088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:53:12.526798 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:53:12.994252 kubelet[2099]: E0909 21:53:12.994061 2099 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:53:13.004957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:53:13.008427 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:53:13.009779 systemd[1]: kubelet.service: Consumed 638ms CPU time, 107.4M memory peak. Sep 9 21:53:13.829989 containerd[1570]: time="2025-09-09T21:53:13.829245951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:13.835941 containerd[1570]: time="2025-09-09T21:53:13.835819285Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 9 21:53:13.837362 containerd[1570]: time="2025-09-09T21:53:13.836861512Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:13.851972 containerd[1570]: time="2025-09-09T21:53:13.851442199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:13.858789 containerd[1570]: time="2025-09-09T21:53:13.855584058Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 6.973471599s" Sep 9 21:53:13.858789 containerd[1570]: time="2025-09-09T21:53:13.855640279Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 9 21:53:13.859976 containerd[1570]: time="2025-09-09T21:53:13.859404291Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 9 21:53:19.549404 kernel: clocksource: Long readout interval, skipping watchdog 
check: cs_nsec: 1049931641 wd_nsec: 1049931156 Sep 9 21:53:19.826829 containerd[1570]: time="2025-09-09T21:53:19.826439677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:19.837575 containerd[1570]: time="2025-09-09T21:53:19.837468561Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Sep 9 21:53:19.848247 containerd[1570]: time="2025-09-09T21:53:19.848117153Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:19.863123 containerd[1570]: time="2025-09-09T21:53:19.862820366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:19.864566 containerd[1570]: time="2025-09-09T21:53:19.864460959Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 6.004999005s" Sep 9 21:53:19.864566 containerd[1570]: time="2025-09-09T21:53:19.864547857Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 9 21:53:19.876129 containerd[1570]: time="2025-09-09T21:53:19.870150995Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 9 21:53:23.245080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 21:53:23.256513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:53:24.182322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:53:24.262968 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:53:24.564707 kubelet[2124]: E0909 21:53:24.563496 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:53:24.569518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:53:24.569773 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:53:24.570304 systemd[1]: kubelet.service: Consumed 810ms CPU time, 110.4M memory peak. Sep 9 21:53:26.186559 update_engine[1557]: I20250909 21:53:26.186393 1557 update_attempter.cc:509] Updating boot flags... 
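The clocksource warning above records one unusually long interval between watchdog readings, so the kernel skips that comparison rather than judging it; for what it is worth, the two clocks in the message still agree to within half a microsecond over the roughly 1.05 s interval (interpretation hedged, arithmetic from the logged values):

    cs_nsec = 1_049_931_641   # interval measured by the current clocksource
    wd_nsec = 1_049_931_156   # interval measured by the watchdog clocksource

    print(f"interval ~{cs_nsec / 1e9:.3f}s, disagreement {cs_nsec - wd_nsec} ns")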
Sep 9 21:53:26.279020 containerd[1570]: time="2025-09-09T21:53:26.278213704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:26.284967 containerd[1570]: time="2025-09-09T21:53:26.284779678Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Sep 9 21:53:26.294410 containerd[1570]: time="2025-09-09T21:53:26.291552012Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:26.303406 containerd[1570]: time="2025-09-09T21:53:26.303141004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:26.304678 containerd[1570]: time="2025-09-09T21:53:26.303729106Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 6.433509405s" Sep 9 21:53:26.304678 containerd[1570]: time="2025-09-09T21:53:26.303791500Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 9 21:53:26.311621 containerd[1570]: time="2025-09-09T21:53:26.311145794Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 9 21:53:30.210151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2624382921.mount: Deactivated successfully. 
Sep 9 21:53:33.885364 containerd[1570]: time="2025-09-09T21:53:33.885237254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:33.889716 containerd[1570]: time="2025-09-09T21:53:33.889628377Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 9 21:53:33.897639 containerd[1570]: time="2025-09-09T21:53:33.896878486Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:33.901586 containerd[1570]: time="2025-09-09T21:53:33.900725692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:33.906019 containerd[1570]: time="2025-09-09T21:53:33.903595498Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 7.592395293s" Sep 9 21:53:33.906019 containerd[1570]: time="2025-09-09T21:53:33.904928076Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 9 21:53:33.910305 containerd[1570]: time="2025-09-09T21:53:33.907897136Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 21:53:34.744312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 9 21:53:34.750776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:53:34.889492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3945426334.mount: Deactivated successfully. Sep 9 21:53:35.427354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:53:35.457037 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:53:35.654435 kubelet[2173]: E0909 21:53:35.654259 2173 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:53:35.660802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:53:35.661608 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:53:35.662372 systemd[1]: kubelet.service: Consumed 593ms CPU time, 110.9M memory peak. 
Sep 9 21:53:39.988730 containerd[1570]: time="2025-09-09T21:53:39.983709614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:40.004833 containerd[1570]: time="2025-09-09T21:53:40.004706965Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 9 21:53:40.034653 containerd[1570]: time="2025-09-09T21:53:40.034481345Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:40.053918 containerd[1570]: time="2025-09-09T21:53:40.053785234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:40.059887 containerd[1570]: time="2025-09-09T21:53:40.058204195Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 6.14366085s" Sep 9 21:53:40.059887 containerd[1570]: time="2025-09-09T21:53:40.058269176Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 21:53:40.059887 containerd[1570]: time="2025-09-09T21:53:40.059487192Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 21:53:40.919688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1315020878.mount: Deactivated successfully. 
Sep 9 21:53:40.958145 containerd[1570]: time="2025-09-09T21:53:40.956128359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 21:53:40.960584 containerd[1570]: time="2025-09-09T21:53:40.960483192Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 21:53:40.966943 containerd[1570]: time="2025-09-09T21:53:40.966213283Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 21:53:40.970607 containerd[1570]: time="2025-09-09T21:53:40.969978119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 21:53:40.973502 containerd[1570]: time="2025-09-09T21:53:40.973417548Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 913.875244ms" Sep 9 21:53:40.973502 containerd[1570]: time="2025-09-09T21:53:40.973476247Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 21:53:40.974906 containerd[1570]: time="2025-09-09T21:53:40.974812544Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 9 21:53:41.931475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977823062.mount: Deactivated successfully. Sep 9 21:53:45.745627 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 9 21:53:45.750572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:53:46.213742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:53:46.250580 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:53:46.377305 kubelet[2290]: E0909 21:53:46.376468 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:53:46.389939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:53:46.391071 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:53:46.393173 systemd[1]: kubelet.service: Consumed 316ms CPU time, 109.8M memory peak. 
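The same config-file failure repeats at restart counter 5, roughly eleven seconds after counter 4 (21:53:34.744 vs 21:53:45.745). That cadence would be consistent with a unit that restarts about ten seconds after each exit (for example Restart=always with RestartSec=10s), but the drop-in itself is not shown in this log, so treat the spacing as an observation rather than a confirmed setting. The arithmetic, for reference:

```python
from datetime import datetime

# Timestamps of the two "Scheduled restart job" messages above (same day, times as logged).
t4 = datetime.fromisoformat("2025-09-09 21:53:34.744312")
t5 = datetime.fromisoformat("2025-09-09 21:53:45.745627")

gap = (t5 - t4).total_seconds()
print(f"restart 4 -> restart 5: {gap:.1f}s between scheduling events")  # ~11.0s
```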
Sep 9 21:53:50.193417 containerd[1570]: time="2025-09-09T21:53:50.186922101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:50.193417 containerd[1570]: time="2025-09-09T21:53:50.191081299Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 9 21:53:50.199135 containerd[1570]: time="2025-09-09T21:53:50.194414685Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:50.219836 containerd[1570]: time="2025-09-09T21:53:50.211373177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:53:50.219836 containerd[1570]: time="2025-09-09T21:53:50.219155932Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 9.244309176s" Sep 9 21:53:50.219836 containerd[1570]: time="2025-09-09T21:53:50.219218028Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 9 21:53:55.817693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:53:55.821684 systemd[1]: kubelet.service: Consumed 316ms CPU time, 109.8M memory peak. Sep 9 21:53:55.838683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:53:55.956783 systemd[1]: Reload requested from client PID 2331 ('systemctl') (unit session-7.scope)... Sep 9 21:53:55.956813 systemd[1]: Reloading... Sep 9 21:53:56.382444 zram_generator::config[2386]: No configuration found. Sep 9 21:53:57.068863 systemd[1]: Reloading finished in 1111 ms. Sep 9 21:53:57.219597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:53:57.223642 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:53:57.227717 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 21:53:57.228113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:53:57.228186 systemd[1]: kubelet.service: Consumed 331ms CPU time, 98.4M memory peak. Sep 9 21:53:57.257305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:53:57.743608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:53:57.784970 (kubelet)[2423]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 21:53:58.044229 kubelet[2423]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 21:53:58.044229 kubelet[2423]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
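Compared with the earlier attempts (PIDs 2173 and 2290), this kubelet start only warns that KUBELET_EXTRA_ARGS is unset: systemd prints that message when the unit's command line references a variable that no environment file defines, so KUBELET_KUBEADM_ARGS has evidently been populated by now (on kubeadm setups that variable conventionally lives in /var/lib/kubelet/kubeadm-flags.env, a path this log does not confirm). A rough sketch of the same check, with the env-file path treated as an assumption and the variable names taken from the warnings in this log:

```python
import os
import shlex

# Which of the variables referenced by kubelet.service are actually defined?
ENV_FILE = "/var/lib/kubelet/kubeadm-flags.env"   # assumed path, not shown in the log
REFERENCED = ["KUBELET_KUBEADM_ARGS", "KUBELET_EXTRA_ARGS"]

defined = {}
if os.path.isfile(ENV_FILE):
    with open(ENV_FILE) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                defined[key.strip()] = " ".join(shlex.split(value))

for name in REFERENCED:
    status = "set" if name in defined or name in os.environ else "unset (evaluates to empty string)"
    print(f"{name}: {status}")
```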
Sep 9 21:53:58.055561 kubelet[2423]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 21:53:58.055561 kubelet[2423]: I0909 21:53:58.051397 2423 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 21:53:58.587605 kubelet[2423]: I0909 21:53:58.587318 2423 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 21:53:58.587605 kubelet[2423]: I0909 21:53:58.587394 2423 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 21:53:58.599397 kubelet[2423]: I0909 21:53:58.597287 2423 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 21:53:58.704610 kubelet[2423]: I0909 21:53:58.702500 2423 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 21:53:58.704610 kubelet[2423]: E0909 21:53:58.704449 2423 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:53:58.762308 kubelet[2423]: I0909 21:53:58.759673 2423 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 21:53:58.802855 kubelet[2423]: I0909 21:53:58.802776 2423 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 21:53:58.803048 kubelet[2423]: I0909 21:53:58.802993 2423 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 21:53:58.803296 kubelet[2423]: I0909 21:53:58.803197 2423 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 21:53:58.804369 kubelet[2423]: I0909 21:53:58.803243 2423 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 21:53:58.804369 kubelet[2423]: I0909 21:53:58.803565 2423 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 21:53:58.804369 kubelet[2423]: I0909 21:53:58.803579 2423 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 21:53:58.804369 kubelet[2423]: I0909 21:53:58.803771 2423 state_mem.go:36] "Initialized new in-memory state store" Sep 9 21:53:58.810828 kubelet[2423]: I0909 21:53:58.810745 2423 kubelet.go:408] "Attempting to sync node with API server" Sep 9 21:53:58.810828 kubelet[2423]: I0909 21:53:58.810825 2423 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 21:53:58.811054 kubelet[2423]: I0909 21:53:58.810882 2423 kubelet.go:314] "Adding apiserver pod source" Sep 9 21:53:58.811054 kubelet[2423]: I0909 21:53:58.810909 2423 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 21:53:58.821740 kubelet[2423]: I0909 21:53:58.821698 2423 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 21:53:58.822517 kubelet[2423]: I0909 21:53:58.822495 2423 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 21:53:58.822751 kubelet[2423]: W0909 21:53:58.822734 2423 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
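The container-manager line above embeds the kubelet's effective node config as JSON, including the default hard eviction thresholds. The snippet below only re-parses that fragment (copied from the log and trimmed to the thresholds) into a readable form; it is not how the kubelet represents them internally.

```python
import json

# Trimmed from the nodeConfig logged at container_manager_linux.go:269 above.
hard_eviction = json.loads("""
[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}]
""")

for t in hard_eviction:
    v = t["Value"]
    threshold = v["Quantity"] if v["Quantity"] else f"{v['Percentage']:.0%}"
    print(f"evict when {t['Signal']} < {threshold}")
```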
Sep 9 21:53:58.828264 kubelet[2423]: W0909 21:53:58.828150 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 9 21:53:58.828264 kubelet[2423]: E0909 21:53:58.828249 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:53:58.828564 kubelet[2423]: W0909 21:53:58.828363 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 9 21:53:58.828564 kubelet[2423]: E0909 21:53:58.828420 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:53:58.837468 kubelet[2423]: I0909 21:53:58.836002 2423 server.go:1274] "Started kubelet" Sep 9 21:53:58.838984 kubelet[2423]: I0909 21:53:58.837789 2423 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 21:53:58.838984 kubelet[2423]: I0909 21:53:58.838211 2423 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 21:53:58.853012 kubelet[2423]: I0909 21:53:58.847241 2423 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 21:53:58.853012 kubelet[2423]: I0909 21:53:58.848969 2423 server.go:449] "Adding debug handlers to kubelet server" Sep 9 21:53:58.853012 kubelet[2423]: I0909 21:53:58.852300 2423 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 21:53:58.853895 kubelet[2423]: I0909 21:53:58.853870 2423 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 21:53:58.946813 kubelet[2423]: I0909 21:53:58.944634 2423 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 21:53:58.946813 kubelet[2423]: E0909 21:53:58.932075 2423 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863bbdd519bc223 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 21:53:58.835954211 +0000 UTC m=+1.040594998,LastTimestamp:2025-09-09 21:53:58.835954211 +0000 UTC m=+1.040594998,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 21:53:58.946813 kubelet[2423]: I0909 21:53:58.945180 2423 factory.go:221] Registration of the 
systemd container factory successfully Sep 9 21:53:58.946813 kubelet[2423]: I0909 21:53:58.945346 2423 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 21:53:58.949959 kubelet[2423]: E0909 21:53:58.949114 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:53:58.949959 kubelet[2423]: I0909 21:53:58.949938 2423 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 21:53:58.950121 kubelet[2423]: I0909 21:53:58.950012 2423 reconciler.go:26] "Reconciler: start to sync state" Sep 9 21:53:58.952583 kubelet[2423]: W0909 21:53:58.952488 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 9 21:53:58.952583 kubelet[2423]: E0909 21:53:58.952572 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:53:58.953804 kubelet[2423]: E0909 21:53:58.953523 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" Sep 9 21:53:58.953804 kubelet[2423]: I0909 21:53:58.953616 2423 factory.go:221] Registration of the containerd container factory successfully Sep 9 21:53:58.962802 kubelet[2423]: E0909 21:53:58.962661 2423 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 21:53:58.991976 kubelet[2423]: I0909 21:53:58.991520 2423 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 21:53:58.991976 kubelet[2423]: I0909 21:53:58.991558 2423 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 21:53:58.991976 kubelet[2423]: I0909 21:53:58.991600 2423 state_mem.go:36] "Initialized new in-memory state store" Sep 9 21:53:59.051805 kubelet[2423]: E0909 21:53:59.051611 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:53:59.155241 kubelet[2423]: E0909 21:53:59.152224 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:53:59.155241 kubelet[2423]: E0909 21:53:59.155038 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms" Sep 9 21:53:59.252882 kubelet[2423]: E0909 21:53:59.252701 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:53:59.352931 kubelet[2423]: E0909 21:53:59.352862 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:53:59.453791 kubelet[2423]: E0909 21:53:59.453535 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:53:59.555877 kubelet[2423]: E0909 21:53:59.555789 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:53:59.557820 kubelet[2423]: E0909 21:53:59.557719 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" Sep 9 21:53:59.656399 kubelet[2423]: E0909 21:53:59.656146 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:53:59.757964 kubelet[2423]: E0909 21:53:59.757735 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:53:59.859163 kubelet[2423]: E0909 21:53:59.858926 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:53:59.962540 kubelet[2423]: E0909 21:53:59.960808 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:54:00.055111 kubelet[2423]: I0909 21:54:00.054531 2423 policy_none.go:49] "None policy: Start" Sep 9 21:54:00.059269 kubelet[2423]: I0909 21:54:00.058735 2423 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 21:54:00.059269 kubelet[2423]: I0909 21:54:00.058785 2423 state_mem.go:35] "Initializing new in-memory state store" Sep 9 21:54:00.063105 kubelet[2423]: E0909 21:54:00.063062 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:54:00.094432 kubelet[2423]: I0909 21:54:00.089885 2423 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 9 21:54:00.103004 kubelet[2423]: I0909 21:54:00.101870 2423 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 21:54:00.103004 kubelet[2423]: I0909 21:54:00.101929 2423 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 21:54:00.103004 kubelet[2423]: I0909 21:54:00.101974 2423 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 21:54:00.103004 kubelet[2423]: E0909 21:54:00.102046 2423 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 21:54:00.104282 kubelet[2423]: W0909 21:54:00.104179 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 9 21:54:00.104282 kubelet[2423]: E0909 21:54:00.104275 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:54:00.143246 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 21:54:00.161029 kubelet[2423]: W0909 21:54:00.153035 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 9 21:54:00.161029 kubelet[2423]: E0909 21:54:00.156470 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:54:00.165562 kubelet[2423]: E0909 21:54:00.165481 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:54:00.183465 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 21:54:00.204550 kubelet[2423]: E0909 21:54:00.204496 2423 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 21:54:00.204805 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 9 21:54:00.231425 kubelet[2423]: I0909 21:54:00.231056 2423 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 21:54:00.239449 kubelet[2423]: I0909 21:54:00.231809 2423 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 21:54:00.239449 kubelet[2423]: I0909 21:54:00.232668 2423 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 21:54:00.243387 kubelet[2423]: I0909 21:54:00.240471 2423 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 21:54:00.243387 kubelet[2423]: E0909 21:54:00.241616 2423 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 21:54:00.305677 kubelet[2423]: W0909 21:54:00.305151 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 9 21:54:00.305677 kubelet[2423]: E0909 21:54:00.305219 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:54:00.356758 kubelet[2423]: W0909 21:54:00.351234 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 9 21:54:00.356758 kubelet[2423]: E0909 21:54:00.351343 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:54:00.364849 kubelet[2423]: E0909 21:54:00.358993 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s" Sep 9 21:54:00.364849 kubelet[2423]: I0909 21:54:00.361573 2423 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:54:00.365813 kubelet[2423]: E0909 21:54:00.365771 2423 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 9 21:54:00.447156 systemd[1]: Created slice kubepods-burstable-pod36670562302dee0ca0aba0be46dcc7cd.slice - libcontainer container kubepods-burstable-pod36670562302dee0ca0aba0be46dcc7cd.slice. 
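The kubepods-burstable-pod<UID>.slice unit created here, and the two created moments later for the controller-manager and scheduler, correspond one-to-one with the static control-plane pods whose host-path volumes are attached in the surrounding entries. The snippet below only restates that correspondence as it can be recovered from this log (the UID-to-pod pairing comes from the VerifyControllerAttachedVolume lines):

```python
# Pod UID -> static pod, as recoverable from this log: the slice names carry the
# UID, and the volume-attach entries tie each UID to a pod name.
static_pods = {
    "36670562302dee0ca0aba0be46dcc7cd": "kube-system/kube-apiserver-localhost",
    "fec3f691a145cb26ff55e4af388500b7": "kube-system/kube-controller-manager-localhost",
    "5dc878868de11c6196259ae42039f4ff": "kube-system/kube-scheduler-localhost",
}

for uid, pod in static_pods.items():
    print(f"kubepods-burstable-pod{uid}.slice -> {pod}")
```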
Sep 9 21:54:00.467228 kubelet[2423]: I0909 21:54:00.466801 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36670562302dee0ca0aba0be46dcc7cd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"36670562302dee0ca0aba0be46dcc7cd\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:54:00.467228 kubelet[2423]: I0909 21:54:00.466859 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36670562302dee0ca0aba0be46dcc7cd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"36670562302dee0ca0aba0be46dcc7cd\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:54:00.467228 kubelet[2423]: I0909 21:54:00.466899 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:54:00.467228 kubelet[2423]: I0909 21:54:00.466925 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:54:00.467228 kubelet[2423]: I0909 21:54:00.466958 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:54:00.467622 kubelet[2423]: I0909 21:54:00.466978 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:54:00.467622 kubelet[2423]: I0909 21:54:00.467005 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36670562302dee0ca0aba0be46dcc7cd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"36670562302dee0ca0aba0be46dcc7cd\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:54:00.467622 kubelet[2423]: I0909 21:54:00.467024 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:54:00.467622 kubelet[2423]: I0909 21:54:00.467044 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " 
pod="kube-system/kube-scheduler-localhost" Sep 9 21:54:00.483402 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 9 21:54:00.537764 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 9 21:54:00.569062 kubelet[2423]: I0909 21:54:00.568477 2423 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:54:00.569062 kubelet[2423]: E0909 21:54:00.568897 2423 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 9 21:54:00.784520 containerd[1570]: time="2025-09-09T21:54:00.784387107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:36670562302dee0ca0aba0be46dcc7cd,Namespace:kube-system,Attempt:0,}" Sep 9 21:54:00.812694 containerd[1570]: time="2025-09-09T21:54:00.811922574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 9 21:54:00.858884 containerd[1570]: time="2025-09-09T21:54:00.857834780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 9 21:54:00.868769 kubelet[2423]: E0909 21:54:00.868711 2423 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:54:00.998968 kubelet[2423]: I0909 21:54:00.995717 2423 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:54:00.999322 kubelet[2423]: W0909 21:54:00.999289 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 9 21:54:00.999524 kubelet[2423]: E0909 21:54:00.999491 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:54:01.010589 kubelet[2423]: E0909 21:54:01.007882 2423 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 9 21:54:01.420856 containerd[1570]: time="2025-09-09T21:54:01.420756715Z" level=info msg="connecting to shim cef3e62ccad55c64f66ce6b9e312938ea19138e3e6b5c6e759e4bccc5c5b9142" address="unix:///run/containerd/s/4adf5474fcdc6ba47d2c551029f3a3c612556afeee3eeea765585d14f7152278" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:54:01.421699 containerd[1570]: time="2025-09-09T21:54:01.421541765Z" level=info msg="connecting to shim 
801629b2a52d6bbb13418b6b96876775891d3358740e953ad95ae372a35cc4f9" address="unix:///run/containerd/s/58505b9a838a0b1e2b248d04eff77b2d0c45d86b8ec4c0c2ba4bf4e961e1aab0" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:54:01.506406 containerd[1570]: time="2025-09-09T21:54:01.504876822Z" level=info msg="connecting to shim ba48ed8748b0fd8707f92f7b76280847a88d5ea7edfd4864507fcf8cb517c874" address="unix:///run/containerd/s/582526095f8311b66630f50bc381e1249e1b4a6eb209ec825cfac8e6f53230e9" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:54:01.793056 systemd[1]: Started cri-containerd-801629b2a52d6bbb13418b6b96876775891d3358740e953ad95ae372a35cc4f9.scope - libcontainer container 801629b2a52d6bbb13418b6b96876775891d3358740e953ad95ae372a35cc4f9. Sep 9 21:54:01.808396 systemd[1]: Started cri-containerd-cef3e62ccad55c64f66ce6b9e312938ea19138e3e6b5c6e759e4bccc5c5b9142.scope - libcontainer container cef3e62ccad55c64f66ce6b9e312938ea19138e3e6b5c6e759e4bccc5c5b9142. Sep 9 21:54:01.819250 kubelet[2423]: I0909 21:54:01.819035 2423 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:54:01.819781 kubelet[2423]: E0909 21:54:01.819578 2423 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 9 21:54:01.915763 systemd[1]: Started cri-containerd-ba48ed8748b0fd8707f92f7b76280847a88d5ea7edfd4864507fcf8cb517c874.scope - libcontainer container ba48ed8748b0fd8707f92f7b76280847a88d5ea7edfd4864507fcf8cb517c874. Sep 9 21:54:01.967392 kubelet[2423]: E0909 21:54:01.966859 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="3.2s" Sep 9 21:54:02.038632 containerd[1570]: time="2025-09-09T21:54:02.038559329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"801629b2a52d6bbb13418b6b96876775891d3358740e953ad95ae372a35cc4f9\"" Sep 9 21:54:02.049524 containerd[1570]: time="2025-09-09T21:54:02.049371707Z" level=info msg="CreateContainer within sandbox \"801629b2a52d6bbb13418b6b96876775891d3358740e953ad95ae372a35cc4f9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 21:54:02.132368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2931148686.mount: Deactivated successfully. 
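By this point the lease-controller retry interval has doubled at every failure: 200ms, 400ms, 800ms, 1.6s and now 3.2s, i.e. plain exponential backoff while the API server stays unreachable (whether and where the interval caps is not visible in this log). A two-line reproduction of that schedule:

```python
# Retry intervals reported by "Failed to ensure lease exists, will retry" above:
# 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s, i.e. doubling from 200ms.
intervals = [0.2 * 2**i for i in range(5)]
print([f"{t:g}s" for t in intervals])   # ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s']
```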
Sep 9 21:54:02.144441 containerd[1570]: time="2025-09-09T21:54:02.139348515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:36670562302dee0ca0aba0be46dcc7cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"cef3e62ccad55c64f66ce6b9e312938ea19138e3e6b5c6e759e4bccc5c5b9142\"" Sep 9 21:54:02.144441 containerd[1570]: time="2025-09-09T21:54:02.142460039Z" level=info msg="Container e20ed1c03525ea9244a70b2cf5f6b83c89b3b76b063079667d37bb16fe604780: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:54:02.147837 containerd[1570]: time="2025-09-09T21:54:02.147751573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba48ed8748b0fd8707f92f7b76280847a88d5ea7edfd4864507fcf8cb517c874\"" Sep 9 21:54:02.149822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4213479638.mount: Deactivated successfully. Sep 9 21:54:02.154769 containerd[1570]: time="2025-09-09T21:54:02.151695966Z" level=info msg="CreateContainer within sandbox \"cef3e62ccad55c64f66ce6b9e312938ea19138e3e6b5c6e759e4bccc5c5b9142\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 21:54:02.154769 containerd[1570]: time="2025-09-09T21:54:02.152945856Z" level=info msg="CreateContainer within sandbox \"ba48ed8748b0fd8707f92f7b76280847a88d5ea7edfd4864507fcf8cb517c874\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 21:54:02.162253 containerd[1570]: time="2025-09-09T21:54:02.161791283Z" level=info msg="CreateContainer within sandbox \"801629b2a52d6bbb13418b6b96876775891d3358740e953ad95ae372a35cc4f9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e20ed1c03525ea9244a70b2cf5f6b83c89b3b76b063079667d37bb16fe604780\"" Sep 9 21:54:02.167261 containerd[1570]: time="2025-09-09T21:54:02.162834325Z" level=info msg="StartContainer for \"e20ed1c03525ea9244a70b2cf5f6b83c89b3b76b063079667d37bb16fe604780\"" Sep 9 21:54:02.170365 containerd[1570]: time="2025-09-09T21:54:02.167979556Z" level=info msg="connecting to shim e20ed1c03525ea9244a70b2cf5f6b83c89b3b76b063079667d37bb16fe604780" address="unix:///run/containerd/s/58505b9a838a0b1e2b248d04eff77b2d0c45d86b8ec4c0c2ba4bf4e961e1aab0" protocol=ttrpc version=3 Sep 9 21:54:02.217314 containerd[1570]: time="2025-09-09T21:54:02.216264897Z" level=info msg="Container c07675e20343c33c431e804a2e96544a7fd721d40988511e1b47efdb6e9f44df: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:54:02.257395 kubelet[2423]: W0909 21:54:02.256945 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 9 21:54:02.257395 kubelet[2423]: E0909 21:54:02.257039 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:54:02.257626 containerd[1570]: time="2025-09-09T21:54:02.257170423Z" level=info msg="Container 6056cdcfec63d4755cfc2fda994609ff32494116a7bfb020f1fe60cfd095d294: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:54:02.267888 systemd[1]: Started 
cri-containerd-e20ed1c03525ea9244a70b2cf5f6b83c89b3b76b063079667d37bb16fe604780.scope - libcontainer container e20ed1c03525ea9244a70b2cf5f6b83c89b3b76b063079667d37bb16fe604780. Sep 9 21:54:02.305990 containerd[1570]: time="2025-09-09T21:54:02.304441565Z" level=info msg="CreateContainer within sandbox \"cef3e62ccad55c64f66ce6b9e312938ea19138e3e6b5c6e759e4bccc5c5b9142\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c07675e20343c33c431e804a2e96544a7fd721d40988511e1b47efdb6e9f44df\"" Sep 9 21:54:02.305990 containerd[1570]: time="2025-09-09T21:54:02.305201348Z" level=info msg="StartContainer for \"c07675e20343c33c431e804a2e96544a7fd721d40988511e1b47efdb6e9f44df\"" Sep 9 21:54:02.307917 containerd[1570]: time="2025-09-09T21:54:02.307852820Z" level=info msg="connecting to shim c07675e20343c33c431e804a2e96544a7fd721d40988511e1b47efdb6e9f44df" address="unix:///run/containerd/s/4adf5474fcdc6ba47d2c551029f3a3c612556afeee3eeea765585d14f7152278" protocol=ttrpc version=3 Sep 9 21:54:02.339296 containerd[1570]: time="2025-09-09T21:54:02.339224061Z" level=info msg="CreateContainer within sandbox \"ba48ed8748b0fd8707f92f7b76280847a88d5ea7edfd4864507fcf8cb517c874\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6056cdcfec63d4755cfc2fda994609ff32494116a7bfb020f1fe60cfd095d294\"" Sep 9 21:54:02.342112 containerd[1570]: time="2025-09-09T21:54:02.340154393Z" level=info msg="StartContainer for \"6056cdcfec63d4755cfc2fda994609ff32494116a7bfb020f1fe60cfd095d294\"" Sep 9 21:54:02.355455 containerd[1570]: time="2025-09-09T21:54:02.353708523Z" level=info msg="connecting to shim 6056cdcfec63d4755cfc2fda994609ff32494116a7bfb020f1fe60cfd095d294" address="unix:///run/containerd/s/582526095f8311b66630f50bc381e1249e1b4a6eb209ec825cfac8e6f53230e9" protocol=ttrpc version=3 Sep 9 21:54:02.400961 systemd[1]: Started cri-containerd-c07675e20343c33c431e804a2e96544a7fd721d40988511e1b47efdb6e9f44df.scope - libcontainer container c07675e20343c33c431e804a2e96544a7fd721d40988511e1b47efdb6e9f44df. Sep 9 21:54:02.439321 containerd[1570]: time="2025-09-09T21:54:02.438493543Z" level=info msg="StartContainer for \"e20ed1c03525ea9244a70b2cf5f6b83c89b3b76b063079667d37bb16fe604780\" returns successfully" Sep 9 21:54:02.440599 systemd[1]: Started cri-containerd-6056cdcfec63d4755cfc2fda994609ff32494116a7bfb020f1fe60cfd095d294.scope - libcontainer container 6056cdcfec63d4755cfc2fda994609ff32494116a7bfb020f1fe60cfd095d294. 
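Each static pod has now gone through the CRI sequence visible above: RunPodSandbox returns a sandbox ID, CreateContainer inside that sandbox returns a container ID, and StartContainer is invoked on it. To cross-check the result on the node, crictl against the containerd socket is the usual tool; the sketch below assumes crictl is installed and that the CRI endpoint is the conventional /run/containerd/containerd.sock (the log only shows per-shim sockets under /run/containerd/s/).

```python
import subprocess

# Assumed containerd CRI socket; crictl must be installed for this to work.
ENDPOINT = "unix:///run/containerd/containerd.sock"

def crictl(*args: str) -> str:
    """Run a crictl subcommand against the assumed containerd endpoint."""
    cmd = ["crictl", "--runtime-endpoint", ENDPOINT, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# List pod sandboxes and containers; the kube-apiserver / controller-manager /
# scheduler entries created above should appear here once started.
print(crictl("pods"))
print(crictl("ps", "-a"))
```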
Sep 9 21:54:02.494002 kubelet[2423]: W0909 21:54:02.493870 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 9 21:54:02.494231 kubelet[2423]: E0909 21:54:02.494049 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:54:02.601103 containerd[1570]: time="2025-09-09T21:54:02.600772226Z" level=info msg="StartContainer for \"c07675e20343c33c431e804a2e96544a7fd721d40988511e1b47efdb6e9f44df\" returns successfully" Sep 9 21:54:02.601522 containerd[1570]: time="2025-09-09T21:54:02.601321915Z" level=info msg="StartContainer for \"6056cdcfec63d4755cfc2fda994609ff32494116a7bfb020f1fe60cfd095d294\" returns successfully" Sep 9 21:54:03.429360 kubelet[2423]: I0909 21:54:03.423293 2423 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:54:07.804150 kubelet[2423]: E0909 21:54:07.802538 2423 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 21:54:07.828240 kubelet[2423]: I0909 21:54:07.828039 2423 apiserver.go:52] "Watching apiserver" Sep 9 21:54:07.851236 kubelet[2423]: I0909 21:54:07.850794 2423 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 21:54:07.936679 kubelet[2423]: E0909 21:54:07.935705 2423 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1863bbdd519bc223 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 21:53:58.835954211 +0000 UTC m=+1.040594998,LastTimestamp:2025-09-09 21:53:58.835954211 +0000 UTC m=+1.040594998,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 21:54:08.028532 kubelet[2423]: I0909 21:54:08.026771 2423 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 21:54:08.028532 kubelet[2423]: E0909 21:54:08.026838 2423 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 21:54:08.086834 kubelet[2423]: E0909 21:54:08.073467 2423 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1863bbdd5927a6d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 21:53:58.962562774 +0000 UTC m=+1.167203571,LastTimestamp:2025-09-09 21:53:58.962562774 +0000 UTC m=+1.167203571,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 21:54:12.655002 kubelet[2423]: I0909 21:54:12.654630 2423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.654601586 podStartE2EDuration="1.654601586s" podCreationTimestamp="2025-09-09 21:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:54:12.612280167 +0000 UTC m=+14.816920954" watchObservedRunningTime="2025-09-09 21:54:12.654601586 +0000 UTC m=+14.859242373" Sep 9 21:54:12.655002 kubelet[2423]: I0909 21:54:12.654793 2423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.654786172 podStartE2EDuration="654.786172ms" podCreationTimestamp="2025-09-09 21:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:54:12.65362181 +0000 UTC m=+14.858262597" watchObservedRunningTime="2025-09-09 21:54:12.654786172 +0000 UTC m=+14.859426959" Sep 9 21:54:13.453901 systemd[1]: Reload requested from client PID 2702 ('systemctl') (unit session-7.scope)... Sep 9 21:54:13.453930 systemd[1]: Reloading... Sep 9 21:54:13.756415 zram_generator::config[2751]: No configuration found. Sep 9 21:54:14.372274 systemd[1]: Reloading finished in 917 ms. Sep 9 21:54:14.433751 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:54:14.482161 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 21:54:14.487816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:54:14.487894 systemd[1]: kubelet.service: Consumed 2.524s CPU time, 131.5M memory peak. Sep 9 21:54:14.494765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:54:15.058062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:54:15.097927 (kubelet)[2790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 21:54:15.264426 kubelet[2790]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 21:54:15.264426 kubelet[2790]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 21:54:15.264426 kubelet[2790]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
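The pod-startup-latency entries above can be reproduced with plain timestamp arithmetic: for both pods the reported podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp, which fits the zero-valued firstStartedPulling/lastFinishedPulling (no image pull time counted). The exact semantics live in the kubelet's pod_startup_latency_tracker; the check below only re-derives the scheduler entry's figure from the timestamps in the log (nanoseconds truncated to microseconds).

```python
from datetime import datetime, timezone

# Figures copied from the kube-scheduler-localhost entry above.
created = datetime(2025, 9, 9, 21, 54, 11, tzinfo=timezone.utc)
watch_observed_running = datetime(2025, 9, 9, 21, 54, 12, 654602, tzinfo=timezone.utc)

slo = (watch_observed_running - created).total_seconds()
print(f"podStartSLOduration ~= {slo:.6f}s")   # log reports 1.654601586s
```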
Sep 9 21:54:15.264426 kubelet[2790]: I0909 21:54:15.256277 2790 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 21:54:15.297005 kubelet[2790]: I0909 21:54:15.295875 2790 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 21:54:15.297005 kubelet[2790]: I0909 21:54:15.295949 2790 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 21:54:15.297005 kubelet[2790]: I0909 21:54:15.296500 2790 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 21:54:15.305935 kubelet[2790]: I0909 21:54:15.304888 2790 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 21:54:15.314064 kubelet[2790]: I0909 21:54:15.313917 2790 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 21:54:15.325241 kubelet[2790]: I0909 21:54:15.323688 2790 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 21:54:15.338966 kubelet[2790]: I0909 21:54:15.338588 2790 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 21:54:15.338966 kubelet[2790]: I0909 21:54:15.338910 2790 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 21:54:15.339371 kubelet[2790]: I0909 21:54:15.339156 2790 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 21:54:15.346300 kubelet[2790]: I0909 21:54:15.339221 2790 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 21:54:15.346300 kubelet[2790]: I0909 21:54:15.339690 2790 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 21:54:15.346300 kubelet[2790]: I0909 21:54:15.339708 2790 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 
21:54:15.346300 kubelet[2790]: I0909 21:54:15.339776 2790 state_mem.go:36] "Initialized new in-memory state store" Sep 9 21:54:15.346300 kubelet[2790]: I0909 21:54:15.339984 2790 kubelet.go:408] "Attempting to sync node with API server" Sep 9 21:54:15.353275 kubelet[2790]: I0909 21:54:15.340030 2790 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 21:54:15.353275 kubelet[2790]: I0909 21:54:15.340075 2790 kubelet.go:314] "Adding apiserver pod source" Sep 9 21:54:15.353275 kubelet[2790]: I0909 21:54:15.340117 2790 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 21:54:15.376715 kubelet[2790]: I0909 21:54:15.364652 2790 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 21:54:15.376715 kubelet[2790]: I0909 21:54:15.371095 2790 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 21:54:15.376715 kubelet[2790]: I0909 21:54:15.374802 2790 server.go:1274] "Started kubelet" Sep 9 21:54:15.378364 kubelet[2790]: I0909 21:54:15.377656 2790 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 21:54:15.379372 kubelet[2790]: I0909 21:54:15.379262 2790 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 21:54:15.380135 kubelet[2790]: I0909 21:54:15.380112 2790 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 21:54:15.386495 kubelet[2790]: E0909 21:54:15.386458 2790 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 21:54:15.394985 kubelet[2790]: I0909 21:54:15.390858 2790 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 21:54:15.394985 kubelet[2790]: I0909 21:54:15.394448 2790 server.go:449] "Adding debug handlers to kubelet server" Sep 9 21:54:15.398401 kubelet[2790]: I0909 21:54:15.398248 2790 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 21:54:15.399630 kubelet[2790]: I0909 21:54:15.398698 2790 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 21:54:15.399630 kubelet[2790]: I0909 21:54:15.398907 2790 reconciler.go:26] "Reconciler: start to sync state" Sep 9 21:54:15.399630 kubelet[2790]: I0909 21:54:15.395543 2790 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 21:54:15.404964 kubelet[2790]: I0909 21:54:15.403150 2790 factory.go:221] Registration of the systemd container factory successfully Sep 9 21:54:15.412498 kubelet[2790]: I0909 21:54:15.408669 2790 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 21:54:15.418000 kubelet[2790]: I0909 21:54:15.416419 2790 factory.go:221] Registration of the containerd container factory successfully Sep 9 21:54:15.437245 kubelet[2790]: I0909 21:54:15.437164 2790 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 21:54:15.446193 kubelet[2790]: I0909 21:54:15.446129 2790 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 21:54:15.446474 kubelet[2790]: I0909 21:54:15.446304 2790 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 21:54:15.446697 kubelet[2790]: I0909 21:54:15.446583 2790 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 21:54:15.446867 kubelet[2790]: E0909 21:54:15.446810 2790 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 21:54:15.510707 sudo[2824]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 21:54:15.511709 sudo[2824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 21:54:15.548807 kubelet[2790]: E0909 21:54:15.548748 2790 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 21:54:16.020600 kubelet[2790]: E0909 21:54:16.020555 2790 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 21:54:16.204616 kubelet[2790]: I0909 21:54:16.204299 2790 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 21:54:16.204616 kubelet[2790]: I0909 21:54:16.204354 2790 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 21:54:16.204616 kubelet[2790]: I0909 21:54:16.204393 2790 state_mem.go:36] "Initialized new in-memory state store" Sep 9 21:54:16.211724 kubelet[2790]: I0909 21:54:16.211395 2790 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 21:54:16.211724 kubelet[2790]: I0909 21:54:16.211699 2790 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 21:54:16.211724 kubelet[2790]: I0909 21:54:16.211737 2790 policy_none.go:49] "None policy: Start" Sep 9 21:54:16.221557 kubelet[2790]: I0909 21:54:16.219048 2790 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 21:54:16.221557 kubelet[2790]: I0909 21:54:16.219111 2790 state_mem.go:35] "Initializing new in-memory state store" Sep 9 21:54:16.221557 kubelet[2790]: I0909 21:54:16.219481 2790 state_mem.go:75] "Updated machine memory state" Sep 9 21:54:16.272438 kubelet[2790]: I0909 21:54:16.272205 2790 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 21:54:16.282493 kubelet[2790]: I0909 21:54:16.278678 2790 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 21:54:16.282493 kubelet[2790]: I0909 21:54:16.278717 2790 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 21:54:16.287396 kubelet[2790]: I0909 21:54:16.287288 2790 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 21:54:16.349135 kubelet[2790]: I0909 21:54:16.349090 2790 apiserver.go:52] "Watching apiserver" Sep 9 21:54:16.426771 kubelet[2790]: I0909 21:54:16.424535 2790 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:54:16.499776 kubelet[2790]: I0909 21:54:16.499727 2790 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 21:54:16.535051 kubelet[2790]: I0909 21:54:16.533121 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" 
Sep 9 21:54:16.535051 kubelet[2790]: I0909 21:54:16.533189 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:54:16.535051 kubelet[2790]: I0909 21:54:16.533225 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:54:16.535051 kubelet[2790]: I0909 21:54:16.533252 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:54:16.535051 kubelet[2790]: I0909 21:54:16.533284 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 21:54:16.535780 kubelet[2790]: I0909 21:54:16.533310 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36670562302dee0ca0aba0be46dcc7cd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"36670562302dee0ca0aba0be46dcc7cd\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:54:16.535780 kubelet[2790]: I0909 21:54:16.533350 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36670562302dee0ca0aba0be46dcc7cd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"36670562302dee0ca0aba0be46dcc7cd\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:54:16.535780 kubelet[2790]: I0909 21:54:16.533378 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:54:16.535780 kubelet[2790]: I0909 21:54:16.533402 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36670562302dee0ca0aba0be46dcc7cd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"36670562302dee0ca0aba0be46dcc7cd\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:54:16.548375 kubelet[2790]: I0909 21:54:16.547578 2790 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 9 21:54:16.548375 kubelet[2790]: I0909 21:54:16.547707 2790 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 21:54:16.889193 kubelet[2790]: I0909 21:54:16.888735 2790 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.88870944 podStartE2EDuration="888.70944ms" podCreationTimestamp="2025-09-09 21:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:54:16.749192149 +0000 UTC m=+1.632400885" watchObservedRunningTime="2025-09-09 21:54:16.88870944 +0000 UTC m=+1.771918146" Sep 9 21:54:17.961864 sudo[2824]: pam_unix(sudo:session): session closed for user root Sep 9 21:54:19.035272 kubelet[2790]: I0909 21:54:19.035042 2790 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 21:54:19.036356 containerd[1570]: time="2025-09-09T21:54:19.036246989Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 21:54:19.038271 kubelet[2790]: I0909 21:54:19.037460 2790 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 21:54:19.710591 systemd[1]: Created slice kubepods-besteffort-podbda3ba34_ce02_4b11_b61d_bb766fada9cb.slice - libcontainer container kubepods-besteffort-podbda3ba34_ce02_4b11_b61d_bb766fada9cb.slice. Sep 9 21:54:19.770384 kubelet[2790]: I0909 21:54:19.767550 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bda3ba34-ce02-4b11-b61d-bb766fada9cb-kube-proxy\") pod \"kube-proxy-bbph8\" (UID: \"bda3ba34-ce02-4b11-b61d-bb766fada9cb\") " pod="kube-system/kube-proxy-bbph8" Sep 9 21:54:19.770384 kubelet[2790]: I0909 21:54:19.767659 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bda3ba34-ce02-4b11-b61d-bb766fada9cb-xtables-lock\") pod \"kube-proxy-bbph8\" (UID: \"bda3ba34-ce02-4b11-b61d-bb766fada9cb\") " pod="kube-system/kube-proxy-bbph8" Sep 9 21:54:19.770384 kubelet[2790]: I0909 21:54:19.767758 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bda3ba34-ce02-4b11-b61d-bb766fada9cb-lib-modules\") pod \"kube-proxy-bbph8\" (UID: \"bda3ba34-ce02-4b11-b61d-bb766fada9cb\") " pod="kube-system/kube-proxy-bbph8" Sep 9 21:54:19.770384 kubelet[2790]: I0909 21:54:19.767833 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htdt9\" (UniqueName: \"kubernetes.io/projected/bda3ba34-ce02-4b11-b61d-bb766fada9cb-kube-api-access-htdt9\") pod \"kube-proxy-bbph8\" (UID: \"bda3ba34-ce02-4b11-b61d-bb766fada9cb\") " pod="kube-system/kube-proxy-bbph8" Sep 9 21:54:19.820628 kernel: hrtimer: interrupt took 19028812 ns Sep 9 21:54:20.093029 containerd[1570]: time="2025-09-09T21:54:20.091712996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bbph8,Uid:bda3ba34-ce02-4b11-b61d-bb766fada9cb,Namespace:kube-system,Attempt:0,}" Sep 9 21:54:20.703081 containerd[1570]: time="2025-09-09T21:54:20.703011503Z" level=info msg="connecting to shim e7c47a9cb52ee9e45be49cf08add54ae16d7e1d174075838db5b950a5b4cef5e" address="unix:///run/containerd/s/033a749874b82bbd6c4a41b8b8cb499152bc3bda83c863ee09e60008f62c2073" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:54:20.808691 systemd[1]: Started 
cri-containerd-e7c47a9cb52ee9e45be49cf08add54ae16d7e1d174075838db5b950a5b4cef5e.scope - libcontainer container e7c47a9cb52ee9e45be49cf08add54ae16d7e1d174075838db5b950a5b4cef5e. Sep 9 21:54:20.972372 containerd[1570]: time="2025-09-09T21:54:20.971850809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bbph8,Uid:bda3ba34-ce02-4b11-b61d-bb766fada9cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7c47a9cb52ee9e45be49cf08add54ae16d7e1d174075838db5b950a5b4cef5e\"" Sep 9 21:54:20.980601 containerd[1570]: time="2025-09-09T21:54:20.979914645Z" level=info msg="CreateContainer within sandbox \"e7c47a9cb52ee9e45be49cf08add54ae16d7e1d174075838db5b950a5b4cef5e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 21:54:21.053604 systemd[1]: Created slice kubepods-burstable-poda580a789_c1dc_4711_99dd_e16cd6835dae.slice - libcontainer container kubepods-burstable-poda580a789_c1dc_4711_99dd_e16cd6835dae.slice. Sep 9 21:54:21.069429 containerd[1570]: time="2025-09-09T21:54:21.066160446Z" level=info msg="Container 6d72a67d501cbfa18bb4067dd558c9444c1e802b4d9abb89d957c02a898179b8: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:54:21.101594 containerd[1570]: time="2025-09-09T21:54:21.101487836Z" level=info msg="CreateContainer within sandbox \"e7c47a9cb52ee9e45be49cf08add54ae16d7e1d174075838db5b950a5b4cef5e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d72a67d501cbfa18bb4067dd558c9444c1e802b4d9abb89d957c02a898179b8\"" Sep 9 21:54:21.102689 containerd[1570]: time="2025-09-09T21:54:21.102558691Z" level=info msg="StartContainer for \"6d72a67d501cbfa18bb4067dd558c9444c1e802b4d9abb89d957c02a898179b8\"" Sep 9 21:54:21.112307 containerd[1570]: time="2025-09-09T21:54:21.107929538Z" level=info msg="connecting to shim 6d72a67d501cbfa18bb4067dd558c9444c1e802b4d9abb89d957c02a898179b8" address="unix:///run/containerd/s/033a749874b82bbd6c4a41b8b8cb499152bc3bda83c863ee09e60008f62c2073" protocol=ttrpc version=3 Sep 9 21:54:21.173592 systemd[1]: Created slice kubepods-besteffort-pod42159c3f_8651_4b9a_97a8_f6ad18d81eac.slice - libcontainer container kubepods-besteffort-pod42159c3f_8651_4b9a_97a8_f6ad18d81eac.slice. 
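Editor's note: the "connecting to shim … address=unix:///run/containerd/s/… protocol=ttrpc" entries above describe containerd reaching its runtime shim over a Unix domain socket. Below is a minimal sketch of dialing such a socket; it is not containerd's actual ttrpc client, and the path is copied from the log, so it will differ (or not exist) on another host.

// shimdial.go - dial the kind of unix:// shim address reported above.
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func main() {
	address := "unix:///run/containerd/s/033a749874b82bbd6c4a41b8b8cb499152bc3bda83c863ee09e60008f62c2073"
	path := strings.TrimPrefix(address, "unix://")

	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Println("shim socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket at", path)
}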
Sep 9 21:54:21.202168 kubelet[2790]: I0909 21:54:21.202032 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a580a789-c1dc-4711-99dd-e16cd6835dae-hubble-tls\") pod \"cilium-psmpz\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " pod="kube-system/cilium-psmpz" Sep 9 21:54:21.202168 kubelet[2790]: I0909 21:54:21.202089 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-cilium-run\") pod \"cilium-psmpz\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " pod="kube-system/cilium-psmpz" Sep 9 21:54:21.202168 kubelet[2790]: I0909 21:54:21.202119 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-cilium-cgroup\") pod \"cilium-psmpz\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " pod="kube-system/cilium-psmpz" Sep 9 21:54:21.202168 kubelet[2790]: I0909 21:54:21.202141 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-etc-cni-netd\") pod \"cilium-psmpz\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " pod="kube-system/cilium-psmpz" Sep 9 21:54:21.202168 kubelet[2790]: I0909 21:54:21.202161 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a580a789-c1dc-4711-99dd-e16cd6835dae-clustermesh-secrets\") pod \"cilium-psmpz\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " pod="kube-system/cilium-psmpz" Sep 9 21:54:21.202168 kubelet[2790]: I0909 21:54:21.202183 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-host-proc-sys-net\") pod \"cilium-psmpz\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " pod="kube-system/cilium-psmpz" Sep 9 21:54:21.203084 kubelet[2790]: I0909 21:54:21.202201 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-bpf-maps\") pod \"cilium-psmpz\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " pod="kube-system/cilium-psmpz" Sep 9 21:54:21.203084 kubelet[2790]: I0909 21:54:21.202220 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-cni-path\") pod \"cilium-psmpz\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " pod="kube-system/cilium-psmpz" Sep 9 21:54:21.203084 kubelet[2790]: I0909 21:54:21.202246 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a580a789-c1dc-4711-99dd-e16cd6835dae-cilium-config-path\") pod \"cilium-psmpz\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " pod="kube-system/cilium-psmpz" Sep 9 21:54:21.203084 kubelet[2790]: I0909 21:54:21.202270 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bct6f\" (UniqueName: 
\"kubernetes.io/projected/a580a789-c1dc-4711-99dd-e16cd6835dae-kube-api-access-bct6f\") pod \"cilium-psmpz\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " pod="kube-system/cilium-psmpz" Sep 9 21:54:21.203084 kubelet[2790]: I0909 21:54:21.202296 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-host-proc-sys-kernel\") pod \"cilium-psmpz\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " pod="kube-system/cilium-psmpz" Sep 9 21:54:21.203225 kubelet[2790]: I0909 21:54:21.202316 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42159c3f-8651-4b9a-97a8-f6ad18d81eac-cilium-config-path\") pod \"cilium-operator-5d85765b45-24zbl\" (UID: \"42159c3f-8651-4b9a-97a8-f6ad18d81eac\") " pod="kube-system/cilium-operator-5d85765b45-24zbl" Sep 9 21:54:21.207379 kubelet[2790]: I0909 21:54:21.204366 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-hostproc\") pod \"cilium-psmpz\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " pod="kube-system/cilium-psmpz" Sep 9 21:54:21.208467 kubelet[2790]: I0909 21:54:21.208293 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-lib-modules\") pod \"cilium-psmpz\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " pod="kube-system/cilium-psmpz" Sep 9 21:54:21.214686 kubelet[2790]: I0909 21:54:21.208465 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-xtables-lock\") pod \"cilium-psmpz\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " pod="kube-system/cilium-psmpz" Sep 9 21:54:21.214686 kubelet[2790]: I0909 21:54:21.208502 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz9zv\" (UniqueName: \"kubernetes.io/projected/42159c3f-8651-4b9a-97a8-f6ad18d81eac-kube-api-access-qz9zv\") pod \"cilium-operator-5d85765b45-24zbl\" (UID: \"42159c3f-8651-4b9a-97a8-f6ad18d81eac\") " pod="kube-system/cilium-operator-5d85765b45-24zbl" Sep 9 21:54:21.296745 systemd[1]: Started cri-containerd-6d72a67d501cbfa18bb4067dd558c9444c1e802b4d9abb89d957c02a898179b8.scope - libcontainer container 6d72a67d501cbfa18bb4067dd558c9444c1e802b4d9abb89d957c02a898179b8. 
Sep 9 21:54:21.491529 containerd[1570]: time="2025-09-09T21:54:21.487959477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-24zbl,Uid:42159c3f-8651-4b9a-97a8-f6ad18d81eac,Namespace:kube-system,Attempt:0,}" Sep 9 21:54:21.575687 containerd[1570]: time="2025-09-09T21:54:21.575199146Z" level=info msg="StartContainer for \"6d72a67d501cbfa18bb4067dd558c9444c1e802b4d9abb89d957c02a898179b8\" returns successfully" Sep 9 21:54:21.619365 containerd[1570]: time="2025-09-09T21:54:21.617952001Z" level=info msg="connecting to shim ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4" address="unix:///run/containerd/s/81b90d4afef747e3113ff9f0cd4e5bfbda2a4b3986dfd0918b3ef261401b31af" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:54:21.672552 containerd[1570]: time="2025-09-09T21:54:21.671022229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-psmpz,Uid:a580a789-c1dc-4711-99dd-e16cd6835dae,Namespace:kube-system,Attempt:0,}" Sep 9 21:54:21.734187 systemd[1]: Started cri-containerd-ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4.scope - libcontainer container ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4. Sep 9 21:54:21.816622 containerd[1570]: time="2025-09-09T21:54:21.813695941Z" level=info msg="connecting to shim 5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5" address="unix:///run/containerd/s/f193a865929a924eaf5ff880eb67ae3545d9a203f6a1a0aa208dd483ce09a152" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:54:21.971324 systemd[1]: Started cri-containerd-5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5.scope - libcontainer container 5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5. Sep 9 21:54:22.072477 containerd[1570]: time="2025-09-09T21:54:22.071001801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-24zbl,Uid:42159c3f-8651-4b9a-97a8-f6ad18d81eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4\"" Sep 9 21:54:22.103853 containerd[1570]: time="2025-09-09T21:54:22.101129076Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 21:54:22.164654 containerd[1570]: time="2025-09-09T21:54:22.163388569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-psmpz,Uid:a580a789-c1dc-4711-99dd-e16cd6835dae,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\"" Sep 9 21:54:22.693588 kubelet[2790]: I0909 21:54:22.693143 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bbph8" podStartSLOduration=3.693114639 podStartE2EDuration="3.693114639s" podCreationTimestamp="2025-09-09 21:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:54:22.692727411 +0000 UTC m=+7.575936147" watchObservedRunningTime="2025-09-09 21:54:22.693114639 +0000 UTC m=+7.576323345" Sep 9 21:54:26.766997 containerd[1570]: time="2025-09-09T21:54:26.765376540Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:26.778832 containerd[1570]: time="2025-09-09T21:54:26.778695391Z" level=info 
msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 9 21:54:26.784814 containerd[1570]: time="2025-09-09T21:54:26.784740036Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:26.794747 containerd[1570]: time="2025-09-09T21:54:26.791133458Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.684969457s" Sep 9 21:54:26.794747 containerd[1570]: time="2025-09-09T21:54:26.791194248Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 9 21:54:26.813644 containerd[1570]: time="2025-09-09T21:54:26.813322777Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 21:54:26.817815 containerd[1570]: time="2025-09-09T21:54:26.815204100Z" level=info msg="CreateContainer within sandbox \"ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 21:54:26.855077 containerd[1570]: time="2025-09-09T21:54:26.855010762Z" level=info msg="Container 3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:54:26.881563 containerd[1570]: time="2025-09-09T21:54:26.881227726Z" level=info msg="CreateContainer within sandbox \"ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d\"" Sep 9 21:54:26.887810 containerd[1570]: time="2025-09-09T21:54:26.886568981Z" level=info msg="StartContainer for \"3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d\"" Sep 9 21:54:26.894912 containerd[1570]: time="2025-09-09T21:54:26.891813202Z" level=info msg="connecting to shim 3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d" address="unix:///run/containerd/s/81b90d4afef747e3113ff9f0cd4e5bfbda2a4b3986dfd0918b3ef261401b31af" protocol=ttrpc version=3 Sep 9 21:54:27.089766 systemd[1]: Started cri-containerd-3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d.scope - libcontainer container 3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d. 
Sep 9 21:54:27.197486 containerd[1570]: time="2025-09-09T21:54:27.194431437Z" level=info msg="StartContainer for \"3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d\" returns successfully" Sep 9 21:54:27.710983 kubelet[2790]: I0909 21:54:27.710494 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-24zbl" podStartSLOduration=1.996903412 podStartE2EDuration="6.710461904s" podCreationTimestamp="2025-09-09 21:54:21 +0000 UTC" firstStartedPulling="2025-09-09 21:54:22.082933131 +0000 UTC m=+6.966141837" lastFinishedPulling="2025-09-09 21:54:26.796491622 +0000 UTC m=+11.679700329" observedRunningTime="2025-09-09 21:54:27.702525023 +0000 UTC m=+12.585733739" watchObservedRunningTime="2025-09-09 21:54:27.710461904 +0000 UTC m=+12.593670610" Sep 9 21:54:44.392198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount545212523.mount: Deactivated successfully. Sep 9 21:54:54.010473 containerd[1570]: time="2025-09-09T21:54:54.008879364Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:54.011409 containerd[1570]: time="2025-09-09T21:54:54.011282047Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 9 21:54:54.012940 containerd[1570]: time="2025-09-09T21:54:54.012807411Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:54.017703 containerd[1570]: time="2025-09-09T21:54:54.016752173Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 27.20335092s" Sep 9 21:54:54.017703 containerd[1570]: time="2025-09-09T21:54:54.016799986Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 9 21:54:54.032900 containerd[1570]: time="2025-09-09T21:54:54.032836556Z" level=info msg="CreateContainer within sandbox \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 21:54:54.120742 containerd[1570]: time="2025-09-09T21:54:54.119690413Z" level=info msg="Container 001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:54:54.169623 containerd[1570]: time="2025-09-09T21:54:54.169041093Z" level=info msg="CreateContainer within sandbox \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234\"" Sep 9 21:54:54.174041 containerd[1570]: time="2025-09-09T21:54:54.173902593Z" level=info msg="StartContainer for \"001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234\"" Sep 9 21:54:54.195891 containerd[1570]: 
time="2025-09-09T21:54:54.195189745Z" level=info msg="connecting to shim 001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234" address="unix:///run/containerd/s/f193a865929a924eaf5ff880eb67ae3545d9a203f6a1a0aa208dd483ce09a152" protocol=ttrpc version=3 Sep 9 21:54:54.332664 systemd[1]: Started cri-containerd-001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234.scope - libcontainer container 001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234. Sep 9 21:54:54.427501 containerd[1570]: time="2025-09-09T21:54:54.426555470Z" level=info msg="StartContainer for \"001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234\" returns successfully" Sep 9 21:54:54.466127 systemd[1]: cri-containerd-001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234.scope: Deactivated successfully. Sep 9 21:54:54.467070 systemd[1]: cri-containerd-001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234.scope: Consumed 42ms CPU time, 6.8M memory peak, 4K read from disk, 3.2M written to disk. Sep 9 21:54:54.481664 containerd[1570]: time="2025-09-09T21:54:54.481354595Z" level=info msg="TaskExit event in podsandbox handler container_id:\"001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234\" id:\"001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234\" pid:3231 exited_at:{seconds:1757454894 nanos:480446530}" Sep 9 21:54:54.481664 containerd[1570]: time="2025-09-09T21:54:54.481466094Z" level=info msg="received exit event container_id:\"001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234\" id:\"001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234\" pid:3231 exited_at:{seconds:1757454894 nanos:480446530}" Sep 9 21:54:54.542612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234-rootfs.mount: Deactivated successfully. Sep 9 21:54:55.971475 containerd[1570]: time="2025-09-09T21:54:55.971422709Z" level=info msg="CreateContainer within sandbox \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 21:54:56.119307 containerd[1570]: time="2025-09-09T21:54:56.116739054Z" level=info msg="Container d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:54:56.174194 containerd[1570]: time="2025-09-09T21:54:56.170020215Z" level=info msg="CreateContainer within sandbox \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8\"" Sep 9 21:54:56.196286 containerd[1570]: time="2025-09-09T21:54:56.191866015Z" level=info msg="StartContainer for \"d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8\"" Sep 9 21:54:56.196286 containerd[1570]: time="2025-09-09T21:54:56.193882479Z" level=info msg="connecting to shim d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8" address="unix:///run/containerd/s/f193a865929a924eaf5ff880eb67ae3545d9a203f6a1a0aa208dd483ce09a152" protocol=ttrpc version=3 Sep 9 21:54:56.255891 systemd[1]: Started cri-containerd-d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8.scope - libcontainer container d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8. 
Sep 9 21:54:56.398212 containerd[1570]: time="2025-09-09T21:54:56.398089597Z" level=info msg="StartContainer for \"d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8\" returns successfully" Sep 9 21:54:56.452839 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 21:54:56.455962 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 21:54:56.471405 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 21:54:56.474013 containerd[1570]: time="2025-09-09T21:54:56.473961586Z" level=info msg="received exit event container_id:\"d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8\" id:\"d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8\" pid:3273 exited_at:{seconds:1757454896 nanos:473034598}" Sep 9 21:54:56.475708 containerd[1570]: time="2025-09-09T21:54:56.475442055Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8\" id:\"d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8\" pid:3273 exited_at:{seconds:1757454896 nanos:473034598}" Sep 9 21:54:56.492280 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 21:54:56.495043 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 21:54:56.504995 systemd[1]: cri-containerd-d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8.scope: Deactivated successfully. Sep 9 21:54:56.605302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8-rootfs.mount: Deactivated successfully. Sep 9 21:54:56.639196 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 21:54:56.977747 containerd[1570]: time="2025-09-09T21:54:56.977557446Z" level=info msg="CreateContainer within sandbox \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 21:54:57.059283 containerd[1570]: time="2025-09-09T21:54:57.059155432Z" level=info msg="Container 68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:54:57.115922 containerd[1570]: time="2025-09-09T21:54:57.113819854Z" level=info msg="CreateContainer within sandbox \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f\"" Sep 9 21:54:57.115922 containerd[1570]: time="2025-09-09T21:54:57.114648356Z" level=info msg="StartContainer for \"68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f\"" Sep 9 21:54:57.166641 containerd[1570]: time="2025-09-09T21:54:57.128303567Z" level=info msg="connecting to shim 68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f" address="unix:///run/containerd/s/f193a865929a924eaf5ff880eb67ae3545d9a203f6a1a0aa208dd483ce09a152" protocol=ttrpc version=3 Sep 9 21:54:57.221950 systemd[1]: Started cri-containerd-68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f.scope - libcontainer container 68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f. 
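Editor's note: the apply-sysctl-overwrites init container and the systemd-sysctl.service restart recorded above both adjust kernel parameters exposed under /proc/sys. As a reading aid only, here is a minimal sketch of inspecting one such key; net.ipv4.ip_forward is chosen merely as a familiar example, not necessarily a key touched in this boot.

// sysctlpeek.go - read a single kernel parameter the way these units see it, via /proc/sys.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// net.ipv4.ip_forward maps to this path; pick any key of interest.
	data, err := os.ReadFile("/proc/sys/net/ipv4/ip_forward")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("net.ipv4.ip_forward =", strings.TrimSpace(string(data)))
}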
Sep 9 21:54:57.388201 containerd[1570]: time="2025-09-09T21:54:57.384572471Z" level=info msg="StartContainer for \"68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f\" returns successfully" Sep 9 21:54:57.390527 systemd[1]: cri-containerd-68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f.scope: Deactivated successfully. Sep 9 21:54:57.402790 containerd[1570]: time="2025-09-09T21:54:57.398706803Z" level=info msg="received exit event container_id:\"68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f\" id:\"68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f\" pid:3322 exited_at:{seconds:1757454897 nanos:396696303}" Sep 9 21:54:57.404176 containerd[1570]: time="2025-09-09T21:54:57.404142821Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f\" id:\"68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f\" pid:3322 exited_at:{seconds:1757454897 nanos:396696303}" Sep 9 21:54:57.474628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f-rootfs.mount: Deactivated successfully. Sep 9 21:54:58.013601 containerd[1570]: time="2025-09-09T21:54:58.011850382Z" level=info msg="CreateContainer within sandbox \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 21:54:58.297701 containerd[1570]: time="2025-09-09T21:54:58.295048091Z" level=info msg="Container 59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:54:58.411266 containerd[1570]: time="2025-09-09T21:54:58.411166131Z" level=info msg="CreateContainer within sandbox \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65\"" Sep 9 21:54:58.417454 containerd[1570]: time="2025-09-09T21:54:58.415447044Z" level=info msg="StartContainer for \"59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65\"" Sep 9 21:54:58.419474 containerd[1570]: time="2025-09-09T21:54:58.419131068Z" level=info msg="connecting to shim 59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65" address="unix:///run/containerd/s/f193a865929a924eaf5ff880eb67ae3545d9a203f6a1a0aa208dd483ce09a152" protocol=ttrpc version=3 Sep 9 21:54:58.541936 systemd[1]: Started cri-containerd-59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65.scope - libcontainer container 59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65. Sep 9 21:54:58.742101 systemd[1]: cri-containerd-59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65.scope: Deactivated successfully. 
Sep 9 21:54:58.745442 containerd[1570]: time="2025-09-09T21:54:58.744477379Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65\" id:\"59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65\" pid:3363 exited_at:{seconds:1757454898 nanos:743959304}" Sep 9 21:54:58.752520 containerd[1570]: time="2025-09-09T21:54:58.752127156Z" level=info msg="received exit event container_id:\"59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65\" id:\"59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65\" pid:3363 exited_at:{seconds:1757454898 nanos:743959304}" Sep 9 21:54:58.753272 containerd[1570]: time="2025-09-09T21:54:58.753111335Z" level=info msg="StartContainer for \"59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65\" returns successfully" Sep 9 21:54:58.845513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65-rootfs.mount: Deactivated successfully. Sep 9 21:54:59.044662 containerd[1570]: time="2025-09-09T21:54:59.040059238Z" level=info msg="CreateContainer within sandbox \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 21:54:59.102717 containerd[1570]: time="2025-09-09T21:54:59.102541336Z" level=info msg="Container 6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:54:59.132236 containerd[1570]: time="2025-09-09T21:54:59.132178232Z" level=info msg="CreateContainer within sandbox \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\"" Sep 9 21:54:59.135364 containerd[1570]: time="2025-09-09T21:54:59.133572538Z" level=info msg="StartContainer for \"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\"" Sep 9 21:54:59.135364 containerd[1570]: time="2025-09-09T21:54:59.134918258Z" level=info msg="connecting to shim 6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96" address="unix:///run/containerd/s/f193a865929a924eaf5ff880eb67ae3545d9a203f6a1a0aa208dd483ce09a152" protocol=ttrpc version=3 Sep 9 21:54:59.212087 systemd[1]: Started cri-containerd-6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96.scope - libcontainer container 6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96. Sep 9 21:54:59.276125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2851881744.mount: Deactivated successfully. 
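Editor's note: the cilium pod's init containers run strictly in sequence above: mount-cgroup, then apply-sysctl-overwrites, then mount-bpf-fs, then clean-cilium-state, before cilium-agent itself starts. The sketch below lines up the exited_at stamps copied from their TaskExit events and prints the gap between successive exits; it is a summary of what the log already shows, nothing more.

// initchain.go - spacing between the cilium init-container exits recorded above.
package main

import (
	"fmt"
	"time"
)

func main() {
	steps := []struct {
		name     string
		exitedAt time.Time
	}{
		{"mount-cgroup", time.Unix(1757454894, 480446530)},
		{"apply-sysctl-overwrites", time.Unix(1757454896, 473034598)},
		{"mount-bpf-fs", time.Unix(1757454897, 396696303)},
		{"clean-cilium-state", time.Unix(1757454898, 743959304)},
	}
	for i := 1; i < len(steps); i++ {
		fmt.Printf("%-24s -> %-24s %v\n", steps[i-1].name, steps[i].name,
			steps[i].exitedAt.Sub(steps[i-1].exitedAt).Round(time.Millisecond))
	}
}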
Sep 9 21:54:59.424486 containerd[1570]: time="2025-09-09T21:54:59.424167892Z" level=info msg="StartContainer for \"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" returns successfully" Sep 9 21:54:59.723087 kubelet[2790]: I0909 21:54:59.721078 2790 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 21:54:59.731431 containerd[1570]: time="2025-09-09T21:54:59.726125640Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" id:\"5932620582830142e0334788d4f2ea3b1c5ec1d11d2c9ecc719576734f7f2774\" pid:3431 exited_at:{seconds:1757454899 nanos:724197611}" Sep 9 21:54:59.870889 kubelet[2790]: W0909 21:54:59.870819 2790 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 9 21:54:59.871461 kubelet[2790]: E0909 21:54:59.871407 2790 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 9 21:54:59.912358 systemd[1]: Created slice kubepods-burstable-podcf4b138b_aaa3_434e_b019_37adc951ac74.slice - libcontainer container kubepods-burstable-podcf4b138b_aaa3_434e_b019_37adc951ac74.slice. Sep 9 21:54:59.931512 systemd[1]: Created slice kubepods-burstable-podb1a42197_163f_47fd_b36c_298d754d4e3d.slice - libcontainer container kubepods-burstable-podb1a42197_163f_47fd_b36c_298d754d4e3d.slice. 
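Editor's note: the reflector warning above ("configmaps \"coredns\" is forbidden … no relationship found between node 'localhost' and this object") is the node authorizer at work: a node is only granted reads on ConfigMaps referenced by pods already bound to it, so the watch succeeds once the coredns pods are assigned here (the volume mounts do complete further down). For comparison, a minimal client-go sketch of the same read issued with an ordinary kubeconfig instead of the node's credentials; the "Corefile" key reflects the usual coredns layout and is assumed here.

// corednscm.go - fetch the kube-system/coredns ConfigMap with client-go (admin kubeconfig assumed).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		fmt.Println("get coredns configmap:", err)
		return
	}
	fmt.Println("Corefile bytes:", len(cm.Data["Corefile"])) // "Corefile" key assumed
}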
Sep 9 21:55:00.063565 kubelet[2790]: I0909 21:55:00.063504 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt6mr\" (UniqueName: \"kubernetes.io/projected/b1a42197-163f-47fd-b36c-298d754d4e3d-kube-api-access-bt6mr\") pod \"coredns-7c65d6cfc9-gsmhq\" (UID: \"b1a42197-163f-47fd-b36c-298d754d4e3d\") " pod="kube-system/coredns-7c65d6cfc9-gsmhq" Sep 9 21:55:00.063565 kubelet[2790]: I0909 21:55:00.063572 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf4b138b-aaa3-434e-b019-37adc951ac74-config-volume\") pod \"coredns-7c65d6cfc9-77b4l\" (UID: \"cf4b138b-aaa3-434e-b019-37adc951ac74\") " pod="kube-system/coredns-7c65d6cfc9-77b4l" Sep 9 21:55:00.063822 kubelet[2790]: I0909 21:55:00.063600 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1a42197-163f-47fd-b36c-298d754d4e3d-config-volume\") pod \"coredns-7c65d6cfc9-gsmhq\" (UID: \"b1a42197-163f-47fd-b36c-298d754d4e3d\") " pod="kube-system/coredns-7c65d6cfc9-gsmhq" Sep 9 21:55:00.063822 kubelet[2790]: I0909 21:55:00.063623 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppmkn\" (UniqueName: \"kubernetes.io/projected/cf4b138b-aaa3-434e-b019-37adc951ac74-kube-api-access-ppmkn\") pod \"coredns-7c65d6cfc9-77b4l\" (UID: \"cf4b138b-aaa3-434e-b019-37adc951ac74\") " pod="kube-system/coredns-7c65d6cfc9-77b4l" Sep 9 21:55:00.117850 kubelet[2790]: I0909 21:55:00.114794 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-psmpz" podStartSLOduration=8.262094545 podStartE2EDuration="40.114770614s" podCreationTimestamp="2025-09-09 21:54:20 +0000 UTC" firstStartedPulling="2025-09-09 21:54:22.169496404 +0000 UTC m=+7.052705110" lastFinishedPulling="2025-09-09 21:54:54.022172473 +0000 UTC m=+38.905381179" observedRunningTime="2025-09-09 21:55:00.104089863 +0000 UTC m=+44.987298579" watchObservedRunningTime="2025-09-09 21:55:00.114770614 +0000 UTC m=+44.997979320" Sep 9 21:55:00.739682 containerd[1570]: time="2025-09-09T21:55:00.739469024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" id:\"f03a452efbddce72736722c124c5671abf726eb38c814373b0a03a0df09612bc\" pid:3478 exit_status:1 exited_at:{seconds:1757454900 nanos:738857255}" Sep 9 21:55:01.170788 kubelet[2790]: E0909 21:55:01.170703 2790 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 9 21:55:01.172286 kubelet[2790]: E0909 21:55:01.171204 2790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b1a42197-163f-47fd-b36c-298d754d4e3d-config-volume podName:b1a42197-163f-47fd-b36c-298d754d4e3d nodeName:}" failed. No retries permitted until 2025-09-09 21:55:01.671170473 +0000 UTC m=+46.554379179 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b1a42197-163f-47fd-b36c-298d754d4e3d-config-volume") pod "coredns-7c65d6cfc9-gsmhq" (UID: "b1a42197-163f-47fd-b36c-298d754d4e3d") : failed to sync configmap cache: timed out waiting for the condition Sep 9 21:55:01.172286 kubelet[2790]: E0909 21:55:01.170703 2790 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 9 21:55:01.172286 kubelet[2790]: E0909 21:55:01.171631 2790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cf4b138b-aaa3-434e-b019-37adc951ac74-config-volume podName:cf4b138b-aaa3-434e-b019-37adc951ac74 nodeName:}" failed. No retries permitted until 2025-09-09 21:55:01.671616695 +0000 UTC m=+46.554825401 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/cf4b138b-aaa3-434e-b019-37adc951ac74-config-volume") pod "coredns-7c65d6cfc9-77b4l" (UID: "cf4b138b-aaa3-434e-b019-37adc951ac74") : failed to sync configmap cache: timed out waiting for the condition Sep 9 21:55:01.738285 containerd[1570]: time="2025-09-09T21:55:01.738193597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-77b4l,Uid:cf4b138b-aaa3-434e-b019-37adc951ac74,Namespace:kube-system,Attempt:0,}" Sep 9 21:55:01.753538 containerd[1570]: time="2025-09-09T21:55:01.751423249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gsmhq,Uid:b1a42197-163f-47fd-b36c-298d754d4e3d,Namespace:kube-system,Attempt:0,}" Sep 9 21:55:03.212073 containerd[1570]: time="2025-09-09T21:55:03.199789097Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" id:\"6dd32dc664735ff058b607ca015895d82075a7e0b12207eb9d71a60b656213ee\" pid:3566 exit_status:1 exited_at:{seconds:1757454903 nanos:190559417}" Sep 9 21:55:03.221664 kubelet[2790]: E0909 21:55:03.213289 2790 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56890->127.0.0.1:40737: write tcp 127.0.0.1:56890->127.0.0.1:40737: write: broken pipe Sep 9 21:55:03.364060 systemd-networkd[1465]: cilium_host: Link UP Sep 9 21:55:03.368929 systemd-networkd[1465]: cilium_net: Link UP Sep 9 21:55:03.369683 systemd-networkd[1465]: cilium_host: Gained carrier Sep 9 21:55:03.370147 systemd-networkd[1465]: cilium_net: Gained carrier Sep 9 21:55:03.525131 systemd-networkd[1465]: cilium_net: Gained IPv6LL Sep 9 21:55:04.006047 systemd-networkd[1465]: cilium_vxlan: Link UP Sep 9 21:55:04.006055 systemd-networkd[1465]: cilium_vxlan: Gained carrier Sep 9 21:55:04.373784 systemd-networkd[1465]: cilium_host: Gained IPv6LL Sep 9 21:55:04.817462 kernel: NET: Registered PF_ALG protocol family Sep 9 21:55:05.714095 containerd[1570]: time="2025-09-09T21:55:05.710842371Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" id:\"a945358f024bb547cf0df9eb0e7a8208ca931a1ba8079683dd1b8c50e601ede5\" pid:3705 exit_status:1 exited_at:{seconds:1757454905 nanos:710433617}" Sep 9 21:55:05.751290 kubelet[2790]: E0909 21:55:05.751231 2790 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60202->127.0.0.1:40737: write tcp 127.0.0.1:60202->127.0.0.1:40737: write: connection reset by peer Sep 9 21:55:05.911396 systemd-networkd[1465]: cilium_vxlan: Gained IPv6LL 
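Editor's note: the cilium-psmpz pod_startup_latency_tracker entry a few lines above satisfies a simple relation: podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The sketch below reproduces that arithmetic with the values copied from the entry; it is offered as a reading aid for these fields, not as a statement of how kubelet defines the metric internally.

// startupslo.go - reproduce the cilium-psmpz startup-latency arithmetic from the log above.
package main

import "fmt"

func main() {
	e2e := 40.114770614       // podStartE2EDuration, seconds
	pullStart := 22.169496404 // firstStartedPulling, seconds past 21:54
	pullEnd := 54.022172473   // lastFinishedPulling, seconds past 21:54
	fmt.Printf("≈ %.9fs\n", e2e-(pullEnd-pullStart)) // ≈ 8.262094545s, matching podStartSLOduration
}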
Sep 9 21:55:07.988737 systemd-networkd[1465]: lxc_health: Link UP Sep 9 21:55:07.989419 systemd-networkd[1465]: lxc_health: Gained carrier Sep 9 21:55:08.108397 containerd[1570]: time="2025-09-09T21:55:08.108317922Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" id:\"2a80b865b6b1d4d4ebb55d65aee04555084f0507d324855b8ec5a1b5dc3d7dd0\" pid:3945 exit_status:1 exited_at:{seconds:1757454908 nanos:107444221}" Sep 9 21:55:08.526401 systemd-networkd[1465]: lxc80f428a180a5: Link UP Sep 9 21:55:08.527455 systemd-networkd[1465]: lxcecd606c0b791: Link UP Sep 9 21:55:08.530527 kernel: eth0: renamed from tmp6b64b Sep 9 21:55:08.532681 systemd-networkd[1465]: lxc80f428a180a5: Gained carrier Sep 9 21:55:08.534471 kernel: eth0: renamed from tmp8be0c Sep 9 21:55:08.534923 systemd-networkd[1465]: lxcecd606c0b791: Gained carrier Sep 9 21:55:09.748101 systemd-networkd[1465]: lxc80f428a180a5: Gained IPv6LL Sep 9 21:55:09.752731 systemd-networkd[1465]: lxc_health: Gained IPv6LL Sep 9 21:55:09.809595 systemd-networkd[1465]: lxcecd606c0b791: Gained IPv6LL Sep 9 21:55:10.427576 containerd[1570]: time="2025-09-09T21:55:10.427517651Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" id:\"17f9f81f8d224a23987857031a089e69e91379275e4a8ab66826da32c43271c6\" pid:4003 exited_at:{seconds:1757454910 nanos:425062326}" Sep 9 21:55:12.756676 containerd[1570]: time="2025-09-09T21:55:12.756609917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" id:\"d51cf0c6f81fe14552bc2afdc91a921d43e21d82b8c268f82f7cdbfe58bf0ec2\" pid:4030 exited_at:{seconds:1757454912 nanos:756168857}" Sep 9 21:55:15.109729 containerd[1570]: time="2025-09-09T21:55:15.103195440Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" id:\"54a1d08387067bd68d8e5df700de22b391372d1c2fe4f5365f170956bc0a38cb\" pid:4057 exited_at:{seconds:1757454915 nanos:102680694}" Sep 9 21:55:15.699398 containerd[1570]: time="2025-09-09T21:55:15.699113658Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" id:\"17c6e70a1d202f3b6feaa27c81f23b225d8b04eb3e50307d73a8d746c7e59c1b\" pid:4091 exited_at:{seconds:1757454915 nanos:698609448}" Sep 9 21:55:18.030696 sudo[1786]: pam_unix(sudo:session): session closed for user root Sep 9 21:55:18.052471 sshd[1785]: Connection closed by 10.0.0.1 port 55176 Sep 9 21:55:18.063875 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Sep 9 21:55:18.072047 systemd-logind[1552]: Session 7 logged out. Waiting for processes to exit. Sep 9 21:55:18.073820 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:55176.service: Deactivated successfully. Sep 9 21:55:18.088563 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 21:55:18.092691 systemd[1]: session-7.scope: Consumed 11.163s CPU time, 233.7M memory peak. Sep 9 21:55:18.106125 systemd-logind[1552]: Removed session 7. 
Sep 9 21:55:19.513784 containerd[1570]: time="2025-09-09T21:55:19.513660176Z" level=info msg="connecting to shim 8be0cfee485c2cc7b9515b2461db0eb65299a304afd8e702a1220c8de9d885ae" address="unix:///run/containerd/s/b102dca0e8fd15f7bf260225d4e8c36d3f3ee38b86640bbf5d7b997e926e88c4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:55:19.514690 containerd[1570]: time="2025-09-09T21:55:19.514585813Z" level=info msg="connecting to shim 6b64b97e3b6f61d12869025d3b6084973f388c6d62fdfe3f4e2af4ae6129c2e9" address="unix:///run/containerd/s/fd873a67ed98d78931c83df9b72b8eee63d3a1ed18e912f772cd8eca4bdbd727" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:55:19.568251 systemd[1]: Started cri-containerd-8be0cfee485c2cc7b9515b2461db0eb65299a304afd8e702a1220c8de9d885ae.scope - libcontainer container 8be0cfee485c2cc7b9515b2461db0eb65299a304afd8e702a1220c8de9d885ae. Sep 9 21:55:19.589304 systemd[1]: Started cri-containerd-6b64b97e3b6f61d12869025d3b6084973f388c6d62fdfe3f4e2af4ae6129c2e9.scope - libcontainer container 6b64b97e3b6f61d12869025d3b6084973f388c6d62fdfe3f4e2af4ae6129c2e9. Sep 9 21:55:19.610311 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:55:19.615944 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:55:19.728041 containerd[1570]: time="2025-09-09T21:55:19.726000208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-77b4l,Uid:cf4b138b-aaa3-434e-b019-37adc951ac74,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b64b97e3b6f61d12869025d3b6084973f388c6d62fdfe3f4e2af4ae6129c2e9\"" Sep 9 21:55:19.736100 containerd[1570]: time="2025-09-09T21:55:19.736022937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gsmhq,Uid:b1a42197-163f-47fd-b36c-298d754d4e3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8be0cfee485c2cc7b9515b2461db0eb65299a304afd8e702a1220c8de9d885ae\"" Sep 9 21:55:19.788315 containerd[1570]: time="2025-09-09T21:55:19.783854108Z" level=info msg="CreateContainer within sandbox \"6b64b97e3b6f61d12869025d3b6084973f388c6d62fdfe3f4e2af4ae6129c2e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 21:55:19.792802 containerd[1570]: time="2025-09-09T21:55:19.790412992Z" level=info msg="CreateContainer within sandbox \"8be0cfee485c2cc7b9515b2461db0eb65299a304afd8e702a1220c8de9d885ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 21:55:19.886105 containerd[1570]: time="2025-09-09T21:55:19.884934590Z" level=info msg="Container 1b3b4d20d7d73d0499f8cdc80888b2958f9045e8f886649d502195999f24d590: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:55:19.886617 containerd[1570]: time="2025-09-09T21:55:19.886575015Z" level=info msg="Container 54743e3bf9c784118c87254a21806bf60cd39052f08439c1b88d1a72359952a5: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:55:19.917368 containerd[1570]: time="2025-09-09T21:55:19.917278796Z" level=info msg="CreateContainer within sandbox \"6b64b97e3b6f61d12869025d3b6084973f388c6d62fdfe3f4e2af4ae6129c2e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b3b4d20d7d73d0499f8cdc80888b2958f9045e8f886649d502195999f24d590\"" Sep 9 21:55:19.941873 containerd[1570]: time="2025-09-09T21:55:19.940027323Z" level=info msg="StartContainer for \"1b3b4d20d7d73d0499f8cdc80888b2958f9045e8f886649d502195999f24d590\"" Sep 9 21:55:19.941873 containerd[1570]: time="2025-09-09T21:55:19.941398661Z" 
level=info msg="connecting to shim 1b3b4d20d7d73d0499f8cdc80888b2958f9045e8f886649d502195999f24d590" address="unix:///run/containerd/s/fd873a67ed98d78931c83df9b72b8eee63d3a1ed18e912f772cd8eca4bdbd727" protocol=ttrpc version=3 Sep 9 21:55:19.950746 containerd[1570]: time="2025-09-09T21:55:19.943340186Z" level=info msg="CreateContainer within sandbox \"8be0cfee485c2cc7b9515b2461db0eb65299a304afd8e702a1220c8de9d885ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"54743e3bf9c784118c87254a21806bf60cd39052f08439c1b88d1a72359952a5\"" Sep 9 21:55:19.950746 containerd[1570]: time="2025-09-09T21:55:19.945712016Z" level=info msg="StartContainer for \"54743e3bf9c784118c87254a21806bf60cd39052f08439c1b88d1a72359952a5\"" Sep 9 21:55:19.971481 containerd[1570]: time="2025-09-09T21:55:19.970108981Z" level=info msg="connecting to shim 54743e3bf9c784118c87254a21806bf60cd39052f08439c1b88d1a72359952a5" address="unix:///run/containerd/s/b102dca0e8fd15f7bf260225d4e8c36d3f3ee38b86640bbf5d7b997e926e88c4" protocol=ttrpc version=3 Sep 9 21:55:20.011606 systemd[1]: Started cri-containerd-1b3b4d20d7d73d0499f8cdc80888b2958f9045e8f886649d502195999f24d590.scope - libcontainer container 1b3b4d20d7d73d0499f8cdc80888b2958f9045e8f886649d502195999f24d590. Sep 9 21:55:20.071554 systemd[1]: Started cri-containerd-54743e3bf9c784118c87254a21806bf60cd39052f08439c1b88d1a72359952a5.scope - libcontainer container 54743e3bf9c784118c87254a21806bf60cd39052f08439c1b88d1a72359952a5. Sep 9 21:55:20.192571 containerd[1570]: time="2025-09-09T21:55:20.191847479Z" level=info msg="StartContainer for \"1b3b4d20d7d73d0499f8cdc80888b2958f9045e8f886649d502195999f24d590\" returns successfully" Sep 9 21:55:20.237354 containerd[1570]: time="2025-09-09T21:55:20.237259152Z" level=info msg="StartContainer for \"54743e3bf9c784118c87254a21806bf60cd39052f08439c1b88d1a72359952a5\" returns successfully" Sep 9 21:55:20.342763 kubelet[2790]: I0909 21:55:20.341910 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gsmhq" podStartSLOduration=61.341871841 podStartE2EDuration="1m1.341871841s" podCreationTimestamp="2025-09-09 21:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:55:20.327569743 +0000 UTC m=+65.210778469" watchObservedRunningTime="2025-09-09 21:55:20.341871841 +0000 UTC m=+65.225080547" Sep 9 21:55:20.405287 kubelet[2790]: I0909 21:55:20.404373 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-77b4l" podStartSLOduration=61.404312791 podStartE2EDuration="1m1.404312791s" podCreationTimestamp="2025-09-09 21:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:55:20.394227332 +0000 UTC m=+65.277436058" watchObservedRunningTime="2025-09-09 21:55:20.404312791 +0000 UTC m=+65.287521497" Sep 9 21:55:20.437289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount573602404.mount: Deactivated successfully. Sep 9 21:56:35.749529 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:51216.service - OpenSSH per-connection server daemon (10.0.0.1:51216). 
Sep 9 21:56:35.959524 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 51216 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:56:35.959121 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:35.986403 systemd-logind[1552]: New session 8 of user core. Sep 9 21:56:36.010967 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 21:56:36.451712 sshd[4321]: Connection closed by 10.0.0.1 port 51216 Sep 9 21:56:36.453546 sshd-session[4318]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:36.489032 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:51216.service: Deactivated successfully. Sep 9 21:56:36.522141 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 21:56:36.534598 systemd-logind[1552]: Session 8 logged out. Waiting for processes to exit. Sep 9 21:56:36.544656 systemd-logind[1552]: Removed session 8. Sep 9 21:56:41.505662 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:55784.service - OpenSSH per-connection server daemon (10.0.0.1:55784). Sep 9 21:56:41.676115 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 55784 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:56:41.682624 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:41.713751 systemd-logind[1552]: New session 9 of user core. Sep 9 21:56:41.739666 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 21:56:42.203597 sshd[4338]: Connection closed by 10.0.0.1 port 55784 Sep 9 21:56:42.203046 sshd-session[4335]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:42.211536 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:55784.service: Deactivated successfully. Sep 9 21:56:42.224558 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 21:56:42.238420 systemd-logind[1552]: Session 9 logged out. Waiting for processes to exit. Sep 9 21:56:42.241910 systemd-logind[1552]: Removed session 9. Sep 9 21:56:47.238871 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:55788.service - OpenSSH per-connection server daemon (10.0.0.1:55788). Sep 9 21:56:47.380029 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 55788 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:56:47.383941 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:47.412474 systemd-logind[1552]: New session 10 of user core. Sep 9 21:56:47.437893 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 21:56:47.725511 sshd[4355]: Connection closed by 10.0.0.1 port 55788 Sep 9 21:56:47.724670 sshd-session[4352]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:47.736131 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:55788.service: Deactivated successfully. Sep 9 21:56:47.741896 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 21:56:47.747477 systemd-logind[1552]: Session 10 logged out. Waiting for processes to exit. Sep 9 21:56:47.752238 systemd-logind[1552]: Removed session 10. Sep 9 21:56:52.768559 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:55408.service - OpenSSH per-connection server daemon (10.0.0.1:55408). 
Sep 9 21:56:52.970814 sshd[4373]: Accepted publickey for core from 10.0.0.1 port 55408 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:56:52.979651 sshd-session[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:53.006128 systemd-logind[1552]: New session 11 of user core. Sep 9 21:56:53.020215 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 21:56:53.380533 sshd[4376]: Connection closed by 10.0.0.1 port 55408 Sep 9 21:56:53.380776 sshd-session[4373]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:53.405458 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:55408.service: Deactivated successfully. Sep 9 21:56:53.413561 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 21:56:53.416694 systemd-logind[1552]: Session 11 logged out. Waiting for processes to exit. Sep 9 21:56:53.418377 systemd-logind[1552]: Removed session 11. Sep 9 21:56:58.435892 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:55414.service - OpenSSH per-connection server daemon (10.0.0.1:55414). Sep 9 21:56:58.594043 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 55414 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:56:58.598659 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:58.628870 systemd-logind[1552]: New session 12 of user core. Sep 9 21:56:58.644292 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 21:56:59.061400 sshd[4393]: Connection closed by 10.0.0.1 port 55414 Sep 9 21:56:59.062454 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:59.082472 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:55414.service: Deactivated successfully. Sep 9 21:56:59.086959 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 21:56:59.088583 systemd-logind[1552]: Session 12 logged out. Waiting for processes to exit. Sep 9 21:56:59.114234 systemd-logind[1552]: Removed session 12. Sep 9 21:57:04.083066 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:58448.service - OpenSSH per-connection server daemon (10.0.0.1:58448). Sep 9 21:57:04.222710 sshd[4407]: Accepted publickey for core from 10.0.0.1 port 58448 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:57:04.228692 sshd-session[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:04.256843 systemd-logind[1552]: New session 13 of user core. Sep 9 21:57:04.271094 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 21:57:04.646306 sshd[4410]: Connection closed by 10.0.0.1 port 58448 Sep 9 21:57:04.648863 sshd-session[4407]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:04.666924 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:58448.service: Deactivated successfully. Sep 9 21:57:04.685118 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 21:57:04.692754 systemd-logind[1552]: Session 13 logged out. Waiting for processes to exit. Sep 9 21:57:04.702667 systemd-logind[1552]: Removed session 13. Sep 9 21:57:09.675901 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:58450.service - OpenSSH per-connection server daemon (10.0.0.1:58450). 
Sep 9 21:57:09.815197 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 58450 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:57:09.822210 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:09.844963 systemd-logind[1552]: New session 14 of user core. Sep 9 21:57:09.851680 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 21:57:10.298719 sshd[4427]: Connection closed by 10.0.0.1 port 58450 Sep 9 21:57:10.299665 sshd-session[4424]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:10.305414 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:58450.service: Deactivated successfully. Sep 9 21:57:10.312144 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 21:57:10.317294 systemd-logind[1552]: Session 14 logged out. Waiting for processes to exit. Sep 9 21:57:10.319638 systemd-logind[1552]: Removed session 14. Sep 9 21:57:15.354115 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:36318.service - OpenSSH per-connection server daemon (10.0.0.1:36318). Sep 9 21:57:15.548870 sshd[4441]: Accepted publickey for core from 10.0.0.1 port 36318 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:57:15.551201 sshd-session[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:15.582061 systemd-logind[1552]: New session 15 of user core. Sep 9 21:57:15.593882 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 21:57:15.935678 sshd[4446]: Connection closed by 10.0.0.1 port 36318 Sep 9 21:57:15.942991 sshd-session[4441]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:15.964911 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:36318.service: Deactivated successfully. Sep 9 21:57:15.975076 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 21:57:16.004888 systemd-logind[1552]: Session 15 logged out. Waiting for processes to exit. Sep 9 21:57:16.006491 systemd-logind[1552]: Removed session 15. Sep 9 21:57:20.979031 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:39898.service - OpenSSH per-connection server daemon (10.0.0.1:39898). Sep 9 21:57:21.130874 sshd[4461]: Accepted publickey for core from 10.0.0.1 port 39898 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:57:21.133945 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:21.150805 systemd-logind[1552]: New session 16 of user core. Sep 9 21:57:21.164110 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 21:57:21.481277 sshd[4464]: Connection closed by 10.0.0.1 port 39898 Sep 9 21:57:21.482842 sshd-session[4461]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:21.497246 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:39898.service: Deactivated successfully. Sep 9 21:57:21.502445 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 21:57:21.515117 systemd-logind[1552]: Session 16 logged out. Waiting for processes to exit. Sep 9 21:57:21.526184 systemd-logind[1552]: Removed session 16. Sep 9 21:57:26.514780 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:39902.service - OpenSSH per-connection server daemon (10.0.0.1:39902). 
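The sshd and systemd-logind entries above repeat a fixed open/close pattern for every SSH session. A minimal Python sketch, assuming only the line shapes visible in this log (it is illustrative tooling, not something sshd or systemd provide), pairs the "Accepted publickey ... port N" and "Connection closed by ... port N" entries by source port and reports how long each session lasted:

import re
from datetime import datetime

# Assumed line shapes, taken only from the sshd entries above; one entry per input line.
TS = r'(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+)'
OPENED = re.compile(TS + r' sshd\[\d+\]: Accepted publickey for \S+ from \S+ port (?P<port>\d+)')
CLOSED = re.compile(TS + r' sshd\[\d+\]: Connection closed by \S+ port (?P<port>\d+)')

def parse_ts(ts):
    # The log carries no year; that is irrelevant for same-day differences.
    return datetime.strptime(ts, "%b %d %H:%M:%S.%f")

def session_durations(lines):
    opened = {}
    for line in lines:
        if (m := OPENED.search(line)):
            opened[m.group("port")] = parse_ts(m.group("ts"))
        elif (m := CLOSED.search(line)) and m.group("port") in opened:
            yield m.group("port"), parse_ts(m.group("ts")) - opened.pop(m.group("port"))

log = [
    "Sep 9 21:56:35.959524 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 51216 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8",
    "Sep 9 21:56:36.451712 sshd[4321]: Connection closed by 10.0.0.1 port 51216",
]
for port, duration in session_durations(log):
    print(port, duration)   # 51216 0:00:00.492188
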
Sep 9 21:57:26.638643 sshd[4480]: Accepted publickey for core from 10.0.0.1 port 39902 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:57:26.649500 sshd-session[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:26.695864 systemd-logind[1552]: New session 17 of user core. Sep 9 21:57:26.716845 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 21:57:27.101694 sshd[4483]: Connection closed by 10.0.0.1 port 39902 Sep 9 21:57:27.100931 sshd-session[4480]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:27.144366 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:39902.service: Deactivated successfully. Sep 9 21:57:27.149201 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 21:57:27.158105 systemd-logind[1552]: Session 17 logged out. Waiting for processes to exit. Sep 9 21:57:27.178729 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:39912.service - OpenSSH per-connection server daemon (10.0.0.1:39912). Sep 9 21:57:27.187269 systemd-logind[1552]: Removed session 17. Sep 9 21:57:27.326502 sshd[4498]: Accepted publickey for core from 10.0.0.1 port 39912 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:57:27.331007 sshd-session[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:27.368623 systemd-logind[1552]: New session 18 of user core. Sep 9 21:57:27.391099 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 21:57:27.967239 sshd[4501]: Connection closed by 10.0.0.1 port 39912 Sep 9 21:57:27.970728 sshd-session[4498]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:28.052738 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:39912.service: Deactivated successfully. Sep 9 21:57:28.099961 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 21:57:28.104035 systemd-logind[1552]: Session 18 logged out. Waiting for processes to exit. Sep 9 21:57:28.123366 systemd-logind[1552]: Removed session 18. Sep 9 21:57:28.144055 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:39924.service - OpenSSH per-connection server daemon (10.0.0.1:39924). Sep 9 21:57:28.338180 sshd[4516]: Accepted publickey for core from 10.0.0.1 port 39924 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:57:28.343681 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:28.361866 systemd-logind[1552]: New session 19 of user core. Sep 9 21:57:28.383728 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 21:57:28.767178 sshd[4519]: Connection closed by 10.0.0.1 port 39924 Sep 9 21:57:28.766671 sshd-session[4516]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:28.787781 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:39924.service: Deactivated successfully. Sep 9 21:57:28.803933 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 21:57:28.812101 systemd-logind[1552]: Session 19 logged out. Waiting for processes to exit. Sep 9 21:57:28.817642 systemd-logind[1552]: Removed session 19. Sep 9 21:57:33.795521 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:42152.service - OpenSSH per-connection server daemon (10.0.0.1:42152). 
Sep 9 21:57:33.948620 sshd[4532]: Accepted publickey for core from 10.0.0.1 port 42152 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:57:33.942988 sshd-session[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:33.984370 systemd-logind[1552]: New session 20 of user core. Sep 9 21:57:33.999493 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 21:57:34.585948 sshd[4535]: Connection closed by 10.0.0.1 port 42152 Sep 9 21:57:34.586842 sshd-session[4532]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:34.625363 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:42152.service: Deactivated successfully. Sep 9 21:57:34.642870 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 21:57:34.648631 systemd-logind[1552]: Session 20 logged out. Waiting for processes to exit. Sep 9 21:57:34.661252 systemd-logind[1552]: Removed session 20. Sep 9 21:57:39.621721 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:42160.service - OpenSSH per-connection server daemon (10.0.0.1:42160). Sep 9 21:57:39.741778 sshd[4550]: Accepted publickey for core from 10.0.0.1 port 42160 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:57:39.743562 sshd-session[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:39.766356 systemd-logind[1552]: New session 21 of user core. Sep 9 21:57:39.777670 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 21:57:39.982704 sshd[4556]: Connection closed by 10.0.0.1 port 42160 Sep 9 21:57:39.983051 sshd-session[4550]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:39.989763 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:42160.service: Deactivated successfully. Sep 9 21:57:39.996312 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 21:57:40.004048 systemd-logind[1552]: Session 21 logged out. Waiting for processes to exit. Sep 9 21:57:40.005928 systemd-logind[1552]: Removed session 21. Sep 9 21:57:45.042140 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:37906.service - OpenSSH per-connection server daemon (10.0.0.1:37906). Sep 9 21:57:45.147730 sshd[4569]: Accepted publickey for core from 10.0.0.1 port 37906 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:57:45.149440 sshd-session[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:45.185269 systemd-logind[1552]: New session 22 of user core. Sep 9 21:57:45.189841 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 21:57:45.506629 sshd[4572]: Connection closed by 10.0.0.1 port 37906 Sep 9 21:57:45.507718 sshd-session[4569]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:45.525354 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:37906.service: Deactivated successfully. Sep 9 21:57:45.537583 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 21:57:45.549074 systemd-logind[1552]: Session 22 logged out. Waiting for processes to exit. Sep 9 21:57:45.556211 systemd-logind[1552]: Removed session 22. Sep 9 21:57:50.549574 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:44058.service - OpenSSH per-connection server daemon (10.0.0.1:44058). 
Sep 9 21:57:50.676896 sshd[4585]: Accepted publickey for core from 10.0.0.1 port 44058 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:57:50.683028 sshd-session[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:50.712246 systemd-logind[1552]: New session 23 of user core. Sep 9 21:57:50.719738 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 21:57:51.082620 sshd[4588]: Connection closed by 10.0.0.1 port 44058 Sep 9 21:57:51.084651 sshd-session[4585]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:51.106639 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:44058.service: Deactivated successfully. Sep 9 21:57:51.111235 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 21:57:51.118959 systemd-logind[1552]: Session 23 logged out. Waiting for processes to exit. Sep 9 21:57:51.125638 systemd-logind[1552]: Removed session 23. Sep 9 21:57:56.118147 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:44072.service - OpenSSH per-connection server daemon (10.0.0.1:44072). Sep 9 21:57:56.310778 sshd[4603]: Accepted publickey for core from 10.0.0.1 port 44072 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:57:56.327926 sshd-session[4603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:56.360683 systemd-logind[1552]: New session 24 of user core. Sep 9 21:57:56.382729 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 21:57:56.707364 sshd[4606]: Connection closed by 10.0.0.1 port 44072 Sep 9 21:57:56.707866 sshd-session[4603]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:56.716102 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:44072.service: Deactivated successfully. Sep 9 21:57:56.723833 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 21:57:56.730799 systemd-logind[1552]: Session 24 logged out. Waiting for processes to exit. Sep 9 21:57:56.746672 systemd-logind[1552]: Removed session 24. Sep 9 21:58:01.761200 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:59056.service - OpenSSH per-connection server daemon (10.0.0.1:59056). Sep 9 21:58:01.902369 sshd[4620]: Accepted publickey for core from 10.0.0.1 port 59056 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:58:01.915997 sshd-session[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:58:01.949135 systemd-logind[1552]: New session 25 of user core. Sep 9 21:58:01.967608 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 21:58:02.259064 sshd[4623]: Connection closed by 10.0.0.1 port 59056 Sep 9 21:58:02.257539 sshd-session[4620]: pam_unix(sshd:session): session closed for user core Sep 9 21:58:02.283242 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:59056.service: Deactivated successfully. Sep 9 21:58:02.290635 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 21:58:02.293280 systemd-logind[1552]: Session 25 logged out. Waiting for processes to exit. Sep 9 21:58:02.303065 systemd[1]: Started sshd@26-10.0.0.15:22-10.0.0.1:59068.service - OpenSSH per-connection server daemon (10.0.0.1:59068). Sep 9 21:58:02.307113 systemd-logind[1552]: Removed session 25. 
Sep 9 21:58:02.408855 sshd[4636]: Accepted publickey for core from 10.0.0.1 port 59068 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:58:02.414207 sshd-session[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:58:02.442433 systemd-logind[1552]: New session 26 of user core. Sep 9 21:58:02.459719 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 21:58:03.369486 sshd[4639]: Connection closed by 10.0.0.1 port 59068 Sep 9 21:58:03.372683 sshd-session[4636]: pam_unix(sshd:session): session closed for user core Sep 9 21:58:03.401765 systemd[1]: sshd@26-10.0.0.15:22-10.0.0.1:59068.service: Deactivated successfully. Sep 9 21:58:03.424126 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 21:58:03.434735 systemd[1]: Started sshd@27-10.0.0.15:22-10.0.0.1:59084.service - OpenSSH per-connection server daemon (10.0.0.1:59084). Sep 9 21:58:03.438655 systemd-logind[1552]: Session 26 logged out. Waiting for processes to exit. Sep 9 21:58:03.443434 systemd-logind[1552]: Removed session 26. Sep 9 21:58:03.609903 sshd[4650]: Accepted publickey for core from 10.0.0.1 port 59084 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:58:03.614570 sshd-session[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:58:03.628570 systemd-logind[1552]: New session 27 of user core. Sep 9 21:58:03.648358 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 9 21:58:07.329365 sshd[4653]: Connection closed by 10.0.0.1 port 59084 Sep 9 21:58:07.330558 sshd-session[4650]: pam_unix(sshd:session): session closed for user core Sep 9 21:58:07.368495 systemd[1]: sshd@27-10.0.0.15:22-10.0.0.1:59084.service: Deactivated successfully. Sep 9 21:58:07.372255 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 21:58:07.378264 systemd[1]: session-27.scope: Consumed 896ms CPU time, 64.8M memory peak. Sep 9 21:58:07.385042 systemd-logind[1552]: Session 27 logged out. Waiting for processes to exit. Sep 9 21:58:07.403363 systemd[1]: Started sshd@28-10.0.0.15:22-10.0.0.1:59086.service - OpenSSH per-connection server daemon (10.0.0.1:59086). Sep 9 21:58:07.413541 systemd-logind[1552]: Removed session 27. Sep 9 21:58:07.699230 sshd[4682]: Accepted publickey for core from 10.0.0.1 port 59086 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:58:07.706600 sshd-session[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:58:07.727142 systemd-logind[1552]: New session 28 of user core. Sep 9 21:58:07.743756 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 9 21:58:08.917556 sshd[4685]: Connection closed by 10.0.0.1 port 59086 Sep 9 21:58:08.918451 sshd-session[4682]: pam_unix(sshd:session): session closed for user core Sep 9 21:58:08.952221 systemd[1]: sshd@28-10.0.0.15:22-10.0.0.1:59086.service: Deactivated successfully. Sep 9 21:58:08.957084 systemd[1]: session-28.scope: Deactivated successfully. Sep 9 21:58:08.966525 systemd-logind[1552]: Session 28 logged out. Waiting for processes to exit. Sep 9 21:58:08.977317 systemd[1]: Started sshd@29-10.0.0.15:22-10.0.0.1:59088.service - OpenSSH per-connection server daemon (10.0.0.1:59088). Sep 9 21:58:08.983227 systemd-logind[1552]: Removed session 28. 
Sep 9 21:58:09.146515 sshd[4697]: Accepted publickey for core from 10.0.0.1 port 59088 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:58:09.149312 sshd-session[4697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:58:09.163627 systemd-logind[1552]: New session 29 of user core. Sep 9 21:58:09.192848 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 9 21:58:09.590570 sshd[4700]: Connection closed by 10.0.0.1 port 59088 Sep 9 21:58:09.594183 sshd-session[4697]: pam_unix(sshd:session): session closed for user core Sep 9 21:58:09.618982 systemd[1]: sshd@29-10.0.0.15:22-10.0.0.1:59088.service: Deactivated successfully. Sep 9 21:58:09.630124 systemd[1]: session-29.scope: Deactivated successfully. Sep 9 21:58:09.644386 systemd-logind[1552]: Session 29 logged out. Waiting for processes to exit. Sep 9 21:58:09.665038 systemd-logind[1552]: Removed session 29. Sep 9 21:58:14.642565 systemd[1]: Started sshd@30-10.0.0.15:22-10.0.0.1:47054.service - OpenSSH per-connection server daemon (10.0.0.1:47054). Sep 9 21:58:14.863296 sshd[4715]: Accepted publickey for core from 10.0.0.1 port 47054 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:58:14.877379 sshd-session[4715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:58:14.919444 systemd-logind[1552]: New session 30 of user core. Sep 9 21:58:14.950998 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 9 21:58:15.261586 sshd[4718]: Connection closed by 10.0.0.1 port 47054 Sep 9 21:58:15.262083 sshd-session[4715]: pam_unix(sshd:session): session closed for user core Sep 9 21:58:15.287752 systemd[1]: sshd@30-10.0.0.15:22-10.0.0.1:47054.service: Deactivated successfully. Sep 9 21:58:15.303944 systemd[1]: session-30.scope: Deactivated successfully. Sep 9 21:58:15.312296 systemd-logind[1552]: Session 30 logged out. Waiting for processes to exit. Sep 9 21:58:15.328102 systemd-logind[1552]: Removed session 30. Sep 9 21:58:20.328163 systemd[1]: Started sshd@31-10.0.0.15:22-10.0.0.1:37508.service - OpenSSH per-connection server daemon (10.0.0.1:37508). Sep 9 21:58:20.529624 sshd[4733]: Accepted publickey for core from 10.0.0.1 port 37508 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:58:20.542119 sshd-session[4733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:58:20.589816 systemd-logind[1552]: New session 31 of user core. Sep 9 21:58:20.609873 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 9 21:58:20.884139 sshd[4736]: Connection closed by 10.0.0.1 port 37508 Sep 9 21:58:20.883657 sshd-session[4733]: pam_unix(sshd:session): session closed for user core Sep 9 21:58:20.905096 systemd[1]: sshd@31-10.0.0.15:22-10.0.0.1:37508.service: Deactivated successfully. Sep 9 21:58:20.911641 systemd[1]: session-31.scope: Deactivated successfully. Sep 9 21:58:20.922638 systemd-logind[1552]: Session 31 logged out. Waiting for processes to exit. Sep 9 21:58:20.933222 systemd-logind[1552]: Removed session 31. Sep 9 21:58:25.923802 systemd[1]: Started sshd@32-10.0.0.15:22-10.0.0.1:37516.service - OpenSSH per-connection server daemon (10.0.0.1:37516). 
Sep 9 21:58:26.109947 sshd[4752]: Accepted publickey for core from 10.0.0.1 port 37516 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:58:26.112102 sshd-session[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:58:26.147917 systemd-logind[1552]: New session 32 of user core. Sep 9 21:58:26.163807 systemd[1]: Started session-32.scope - Session 32 of User core. Sep 9 21:58:26.573936 sshd[4755]: Connection closed by 10.0.0.1 port 37516 Sep 9 21:58:26.573544 sshd-session[4752]: pam_unix(sshd:session): session closed for user core Sep 9 21:58:26.594288 systemd[1]: sshd@32-10.0.0.15:22-10.0.0.1:37516.service: Deactivated successfully. Sep 9 21:58:26.601783 systemd[1]: session-32.scope: Deactivated successfully. Sep 9 21:58:26.608644 systemd-logind[1552]: Session 32 logged out. Waiting for processes to exit. Sep 9 21:58:26.617501 systemd-logind[1552]: Removed session 32. Sep 9 21:58:31.642075 systemd[1]: Started sshd@33-10.0.0.15:22-10.0.0.1:51142.service - OpenSSH per-connection server daemon (10.0.0.1:51142). Sep 9 21:58:31.853269 sshd[4771]: Accepted publickey for core from 10.0.0.1 port 51142 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:58:31.858960 sshd-session[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:58:31.889472 systemd-logind[1552]: New session 33 of user core. Sep 9 21:58:31.927311 systemd[1]: Started session-33.scope - Session 33 of User core. Sep 9 21:58:32.362290 sshd[4774]: Connection closed by 10.0.0.1 port 51142 Sep 9 21:58:32.363141 sshd-session[4771]: pam_unix(sshd:session): session closed for user core Sep 9 21:58:32.376669 systemd-logind[1552]: Session 33 logged out. Waiting for processes to exit. Sep 9 21:58:32.382155 systemd[1]: sshd@33-10.0.0.15:22-10.0.0.1:51142.service: Deactivated successfully. Sep 9 21:58:32.398724 systemd[1]: session-33.scope: Deactivated successfully. Sep 9 21:58:32.420776 systemd-logind[1552]: Removed session 33. Sep 9 21:58:37.383435 systemd[1]: Started sshd@34-10.0.0.15:22-10.0.0.1:51154.service - OpenSSH per-connection server daemon (10.0.0.1:51154). Sep 9 21:58:37.529320 sshd[4787]: Accepted publickey for core from 10.0.0.1 port 51154 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:58:37.531512 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:58:37.550399 systemd-logind[1552]: New session 34 of user core. Sep 9 21:58:37.566108 systemd[1]: Started session-34.scope - Session 34 of User core. Sep 9 21:58:38.034823 sshd[4790]: Connection closed by 10.0.0.1 port 51154 Sep 9 21:58:38.039673 sshd-session[4787]: pam_unix(sshd:session): session closed for user core Sep 9 21:58:38.067841 systemd-logind[1552]: Session 34 logged out. Waiting for processes to exit. Sep 9 21:58:38.079465 systemd[1]: sshd@34-10.0.0.15:22-10.0.0.1:51154.service: Deactivated successfully. Sep 9 21:58:38.088820 systemd[1]: session-34.scope: Deactivated successfully. Sep 9 21:58:38.092666 systemd-logind[1552]: Removed session 34. Sep 9 21:58:42.875797 systemd[1]: Started sshd@35-10.0.0.15:22-10.0.0.1:53948.service - OpenSSH per-connection server daemon (10.0.0.1:53948). 
Sep 9 21:58:43.028085 sshd[4803]: Accepted publickey for core from 10.0.0.1 port 53948 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:58:43.033676 sshd-session[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:58:43.044784 systemd-logind[1552]: New session 35 of user core. Sep 9 21:58:43.053716 systemd[1]: Started session-35.scope - Session 35 of User core. Sep 9 21:58:43.414270 sshd[4806]: Connection closed by 10.0.0.1 port 53948 Sep 9 21:58:43.419033 sshd-session[4803]: pam_unix(sshd:session): session closed for user core Sep 9 21:58:43.433736 systemd[1]: sshd@35-10.0.0.15:22-10.0.0.1:53948.service: Deactivated successfully. Sep 9 21:58:43.445756 systemd[1]: session-35.scope: Deactivated successfully. Sep 9 21:58:43.456136 systemd-logind[1552]: Session 35 logged out. Waiting for processes to exit. Sep 9 21:58:43.459997 systemd-logind[1552]: Removed session 35. Sep 9 21:58:48.442130 systemd[1]: Started sshd@36-10.0.0.15:22-10.0.0.1:53964.service - OpenSSH per-connection server daemon (10.0.0.1:53964). Sep 9 21:58:48.619103 sshd[4820]: Accepted publickey for core from 10.0.0.1 port 53964 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:58:48.624814 sshd-session[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:58:48.650746 systemd-logind[1552]: New session 36 of user core. Sep 9 21:58:48.662082 systemd[1]: Started session-36.scope - Session 36 of User core. Sep 9 21:58:48.899707 sshd[4823]: Connection closed by 10.0.0.1 port 53964 Sep 9 21:58:48.900416 sshd-session[4820]: pam_unix(sshd:session): session closed for user core Sep 9 21:58:48.918768 systemd[1]: sshd@36-10.0.0.15:22-10.0.0.1:53964.service: Deactivated successfully. Sep 9 21:58:48.922527 systemd[1]: session-36.scope: Deactivated successfully. Sep 9 21:58:48.934874 systemd-logind[1552]: Session 36 logged out. Waiting for processes to exit. Sep 9 21:58:48.951130 systemd[1]: Started sshd@37-10.0.0.15:22-10.0.0.1:53966.service - OpenSSH per-connection server daemon (10.0.0.1:53966). Sep 9 21:58:48.954470 systemd-logind[1552]: Removed session 36. Sep 9 21:58:49.072514 sshd[4837]: Accepted publickey for core from 10.0.0.1 port 53966 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:58:49.074227 sshd-session[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:58:49.097663 systemd-logind[1552]: New session 37 of user core. Sep 9 21:58:49.116224 systemd[1]: Started session-37.scope - Session 37 of User core. Sep 9 21:58:51.417319 containerd[1570]: time="2025-09-09T21:58:51.417241061Z" level=info msg="StopContainer for \"3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d\" with timeout 30 (s)" Sep 9 21:58:51.474392 containerd[1570]: time="2025-09-09T21:58:51.474311674Z" level=info msg="Stop container \"3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d\" with signal terminated" Sep 9 21:58:51.504930 systemd[1]: cri-containerd-3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d.scope: Deactivated successfully. 
Sep 9 21:58:51.513125 containerd[1570]: time="2025-09-09T21:58:51.511911227Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d\" id:\"3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d\" pid:3165 exited_at:{seconds:1757455131 nanos:511272124}" Sep 9 21:58:51.513125 containerd[1570]: time="2025-09-09T21:58:51.512025497Z" level=info msg="received exit event container_id:\"3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d\" id:\"3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d\" pid:3165 exited_at:{seconds:1757455131 nanos:511272124}" Sep 9 21:58:51.514350 systemd[1]: cri-containerd-3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d.scope: Consumed 1.311s CPU time, 25.1M memory peak, 584K read from disk, 4K written to disk. Sep 9 21:58:51.561758 containerd[1570]: time="2025-09-09T21:58:51.561677460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" id:\"bd5cc7bc2e3fb0c8a17584754a033dca405382a0519f30d172e8879320aaeae8\" pid:4867 exited_at:{seconds:1757455131 nanos:559058408}" Sep 9 21:58:51.562904 containerd[1570]: time="2025-09-09T21:58:51.562765098Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 21:58:51.578361 containerd[1570]: time="2025-09-09T21:58:51.578210282Z" level=info msg="StopContainer for \"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" with timeout 2 (s)" Sep 9 21:58:51.579181 containerd[1570]: time="2025-09-09T21:58:51.579140236Z" level=info msg="Stop container \"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" with signal terminated" Sep 9 21:58:51.594860 systemd-networkd[1465]: lxc_health: Link DOWN Sep 9 21:58:51.594875 systemd-networkd[1465]: lxc_health: Lost carrier Sep 9 21:58:51.601759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d-rootfs.mount: Deactivated successfully. Sep 9 21:58:51.654622 systemd[1]: cri-containerd-6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96.scope: Deactivated successfully. Sep 9 21:58:51.655131 systemd[1]: cri-containerd-6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96.scope: Consumed 15.814s CPU time, 138.9M memory peak, 212K read from disk, 13.3M written to disk. 
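When a scope unit is released, systemd prints a per-unit resource-accounting summary like the "Consumed ... CPU time, ... memory peak" entries above (for the SSH session scopes earlier and for the cri-containerd scope here). A hedged Python sketch, assuming only that printed format, collects those figures:

import re

# Assumed shape of systemd's per-unit accounting lines, based on the entries above.
CONSUMED = re.compile(
    r'systemd\[1\]: (?P<unit>\S+): Consumed (?P<cpu>\S+) CPU time, (?P<mem>\S+) memory peak'
)

def accounting(lines):
    """Yield (unit, CPU time, peak memory) for every 'Consumed ...' entry found."""
    for line in lines:
        for m in CONSUMED.finditer(line):
            yield m.group("unit"), m.group("cpu"), m.group("mem")

sample = "Sep 9 21:58:07.378264 systemd[1]: session-27.scope: Consumed 896ms CPU time, 64.8M memory peak."
print(list(accounting([sample])))
# [('session-27.scope', '896ms', '64.8M')]
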
Sep 9 21:58:51.661416 containerd[1570]: time="2025-09-09T21:58:51.661357950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" id:\"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" pid:3399 exited_at:{seconds:1757455131 nanos:660852065}" Sep 9 21:58:51.661790 containerd[1570]: time="2025-09-09T21:58:51.661730570Z" level=info msg="received exit event container_id:\"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" id:\"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" pid:3399 exited_at:{seconds:1757455131 nanos:660852065}" Sep 9 21:58:51.706224 containerd[1570]: time="2025-09-09T21:58:51.704975573Z" level=info msg="StopContainer for \"3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d\" returns successfully" Sep 9 21:58:51.709954 containerd[1570]: time="2025-09-09T21:58:51.709877920Z" level=info msg="StopPodSandbox for \"ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4\"" Sep 9 21:58:51.717261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96-rootfs.mount: Deactivated successfully. Sep 9 21:58:51.722079 containerd[1570]: time="2025-09-09T21:58:51.722005234Z" level=info msg="Container to stop \"3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 21:58:51.739872 systemd[1]: cri-containerd-ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4.scope: Deactivated successfully. Sep 9 21:58:51.747623 containerd[1570]: time="2025-09-09T21:58:51.747560853Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4\" id:\"ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4\" pid:2954 exit_status:137 exited_at:{seconds:1757455131 nanos:746965745}" Sep 9 21:58:51.768765 containerd[1570]: time="2025-09-09T21:58:51.768653222Z" level=info msg="StopContainer for \"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" returns successfully" Sep 9 21:58:51.769482 containerd[1570]: time="2025-09-09T21:58:51.769437685Z" level=info msg="StopPodSandbox for \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\"" Sep 9 21:58:51.769927 containerd[1570]: time="2025-09-09T21:58:51.769651969Z" level=info msg="Container to stop \"001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 21:58:51.769927 containerd[1570]: time="2025-09-09T21:58:51.769676826Z" level=info msg="Container to stop \"d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 21:58:51.769927 containerd[1570]: time="2025-09-09T21:58:51.769687858Z" level=info msg="Container to stop \"68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 21:58:51.769927 containerd[1570]: time="2025-09-09T21:58:51.769697376Z" level=info msg="Container to stop \"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 21:58:51.769927 containerd[1570]: time="2025-09-09T21:58:51.769708307Z" level=info msg="Container to stop 
\"59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 21:58:51.786228 systemd[1]: cri-containerd-5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5.scope: Deactivated successfully. Sep 9 21:58:51.813758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4-rootfs.mount: Deactivated successfully. Sep 9 21:58:51.823241 containerd[1570]: time="2025-09-09T21:58:51.823116806Z" level=info msg="shim disconnected" id=ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4 namespace=k8s.io Sep 9 21:58:51.823241 containerd[1570]: time="2025-09-09T21:58:51.823161401Z" level=warning msg="cleaning up after shim disconnected" id=ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4 namespace=k8s.io Sep 9 21:58:51.832676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5-rootfs.mount: Deactivated successfully. Sep 9 21:58:51.886238 containerd[1570]: time="2025-09-09T21:58:51.823172373Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 21:58:51.886238 containerd[1570]: time="2025-09-09T21:58:51.869405420Z" level=info msg="shim disconnected" id=5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5 namespace=k8s.io Sep 9 21:58:51.886238 containerd[1570]: time="2025-09-09T21:58:51.883486453Z" level=warning msg="cleaning up after shim disconnected" id=5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5 namespace=k8s.io Sep 9 21:58:51.886238 containerd[1570]: time="2025-09-09T21:58:51.883499709Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 21:58:51.952823 containerd[1570]: time="2025-09-09T21:58:51.949373174Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" id:\"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" pid:2999 exit_status:137 exited_at:{seconds:1757455131 nanos:786009583}" Sep 9 21:58:51.960580 containerd[1570]: time="2025-09-09T21:58:51.960415977Z" level=info msg="TearDown network for sandbox \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" successfully" Sep 9 21:58:51.960580 containerd[1570]: time="2025-09-09T21:58:51.960468828Z" level=info msg="StopPodSandbox for \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" returns successfully" Sep 9 21:58:51.961635 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5-shm.mount: Deactivated successfully. Sep 9 21:58:51.962852 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4-shm.mount: Deactivated successfully. 
Sep 9 21:58:51.968003 containerd[1570]: time="2025-09-09T21:58:51.967929601Z" level=info msg="TearDown network for sandbox \"ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4\" successfully" Sep 9 21:58:51.968003 containerd[1570]: time="2025-09-09T21:58:51.967988073Z" level=info msg="StopPodSandbox for \"ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4\" returns successfully" Sep 9 21:58:51.971806 containerd[1570]: time="2025-09-09T21:58:51.971739480Z" level=info msg="received exit event sandbox_id:\"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" exit_status:137 exited_at:{seconds:1757455131 nanos:786009583}" Sep 9 21:58:51.974165 containerd[1570]: time="2025-09-09T21:58:51.973630549Z" level=info msg="received exit event sandbox_id:\"ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4\" exit_status:137 exited_at:{seconds:1757455131 nanos:746965745}" Sep 9 21:58:52.062215 kubelet[2790]: I0909 21:58:52.062121 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a580a789-c1dc-4711-99dd-e16cd6835dae-clustermesh-secrets\") pod \"a580a789-c1dc-4711-99dd-e16cd6835dae\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " Sep 9 21:58:52.062215 kubelet[2790]: I0909 21:58:52.062195 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a580a789-c1dc-4711-99dd-e16cd6835dae-cilium-config-path\") pod \"a580a789-c1dc-4711-99dd-e16cd6835dae\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " Sep 9 21:58:52.062215 kubelet[2790]: I0909 21:58:52.062229 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-bpf-maps\") pod \"a580a789-c1dc-4711-99dd-e16cd6835dae\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " Sep 9 21:58:52.063590 kubelet[2790]: I0909 21:58:52.062250 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-host-proc-sys-net\") pod \"a580a789-c1dc-4711-99dd-e16cd6835dae\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " Sep 9 21:58:52.063590 kubelet[2790]: I0909 21:58:52.062270 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-host-proc-sys-kernel\") pod \"a580a789-c1dc-4711-99dd-e16cd6835dae\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " Sep 9 21:58:52.063590 kubelet[2790]: I0909 21:58:52.062288 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-hostproc\") pod \"a580a789-c1dc-4711-99dd-e16cd6835dae\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " Sep 9 21:58:52.063590 kubelet[2790]: I0909 21:58:52.062307 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-cilium-run\") pod \"a580a789-c1dc-4711-99dd-e16cd6835dae\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " Sep 9 21:58:52.063590 kubelet[2790]: I0909 21:58:52.062348 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-etc-cni-netd\") pod \"a580a789-c1dc-4711-99dd-e16cd6835dae\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " Sep 9 21:58:52.063590 kubelet[2790]: I0909 21:58:52.062372 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-cni-path\") pod \"a580a789-c1dc-4711-99dd-e16cd6835dae\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " Sep 9 21:58:52.063887 kubelet[2790]: I0909 21:58:52.062390 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-lib-modules\") pod \"a580a789-c1dc-4711-99dd-e16cd6835dae\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " Sep 9 21:58:52.063887 kubelet[2790]: I0909 21:58:52.062415 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bct6f\" (UniqueName: \"kubernetes.io/projected/a580a789-c1dc-4711-99dd-e16cd6835dae-kube-api-access-bct6f\") pod \"a580a789-c1dc-4711-99dd-e16cd6835dae\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " Sep 9 21:58:52.063887 kubelet[2790]: I0909 21:58:52.062439 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42159c3f-8651-4b9a-97a8-f6ad18d81eac-cilium-config-path\") pod \"42159c3f-8651-4b9a-97a8-f6ad18d81eac\" (UID: \"42159c3f-8651-4b9a-97a8-f6ad18d81eac\") " Sep 9 21:58:52.063887 kubelet[2790]: I0909 21:58:52.062466 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz9zv\" (UniqueName: \"kubernetes.io/projected/42159c3f-8651-4b9a-97a8-f6ad18d81eac-kube-api-access-qz9zv\") pod \"42159c3f-8651-4b9a-97a8-f6ad18d81eac\" (UID: \"42159c3f-8651-4b9a-97a8-f6ad18d81eac\") " Sep 9 21:58:52.063887 kubelet[2790]: I0909 21:58:52.062489 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-cilium-cgroup\") pod \"a580a789-c1dc-4711-99dd-e16cd6835dae\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " Sep 9 21:58:52.063887 kubelet[2790]: I0909 21:58:52.062524 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a580a789-c1dc-4711-99dd-e16cd6835dae-hubble-tls\") pod \"a580a789-c1dc-4711-99dd-e16cd6835dae\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " Sep 9 21:58:52.064125 kubelet[2790]: I0909 21:58:52.062548 2790 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-xtables-lock\") pod \"a580a789-c1dc-4711-99dd-e16cd6835dae\" (UID: \"a580a789-c1dc-4711-99dd-e16cd6835dae\") " Sep 9 21:58:52.064125 kubelet[2790]: I0909 21:58:52.062653 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a580a789-c1dc-4711-99dd-e16cd6835dae" (UID: "a580a789-c1dc-4711-99dd-e16cd6835dae"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:58:52.064125 kubelet[2790]: I0909 21:58:52.063098 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a580a789-c1dc-4711-99dd-e16cd6835dae" (UID: "a580a789-c1dc-4711-99dd-e16cd6835dae"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:58:52.068634 kubelet[2790]: I0909 21:58:52.068382 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a580a789-c1dc-4711-99dd-e16cd6835dae" (UID: "a580a789-c1dc-4711-99dd-e16cd6835dae"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:58:52.068634 kubelet[2790]: I0909 21:58:52.068451 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a580a789-c1dc-4711-99dd-e16cd6835dae" (UID: "a580a789-c1dc-4711-99dd-e16cd6835dae"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:58:52.068634 kubelet[2790]: I0909 21:58:52.068453 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-cni-path" (OuterVolumeSpecName: "cni-path") pod "a580a789-c1dc-4711-99dd-e16cd6835dae" (UID: "a580a789-c1dc-4711-99dd-e16cd6835dae"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:58:52.068634 kubelet[2790]: I0909 21:58:52.068473 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a580a789-c1dc-4711-99dd-e16cd6835dae" (UID: "a580a789-c1dc-4711-99dd-e16cd6835dae"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:58:52.068634 kubelet[2790]: I0909 21:58:52.068498 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-hostproc" (OuterVolumeSpecName: "hostproc") pod "a580a789-c1dc-4711-99dd-e16cd6835dae" (UID: "a580a789-c1dc-4711-99dd-e16cd6835dae"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:58:52.072639 kubelet[2790]: I0909 21:58:52.068511 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a580a789-c1dc-4711-99dd-e16cd6835dae" (UID: "a580a789-c1dc-4711-99dd-e16cd6835dae"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:58:52.072639 kubelet[2790]: I0909 21:58:52.068518 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a580a789-c1dc-4711-99dd-e16cd6835dae" (UID: "a580a789-c1dc-4711-99dd-e16cd6835dae"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:58:52.072639 kubelet[2790]: I0909 21:58:52.068543 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a580a789-c1dc-4711-99dd-e16cd6835dae" (UID: "a580a789-c1dc-4711-99dd-e16cd6835dae"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:58:52.091006 kubelet[2790]: I0909 21:58:52.090943 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a580a789-c1dc-4711-99dd-e16cd6835dae-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a580a789-c1dc-4711-99dd-e16cd6835dae" (UID: "a580a789-c1dc-4711-99dd-e16cd6835dae"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 21:58:52.105246 kubelet[2790]: I0909 21:58:52.103103 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42159c3f-8651-4b9a-97a8-f6ad18d81eac-kube-api-access-qz9zv" (OuterVolumeSpecName: "kube-api-access-qz9zv") pod "42159c3f-8651-4b9a-97a8-f6ad18d81eac" (UID: "42159c3f-8651-4b9a-97a8-f6ad18d81eac"). InnerVolumeSpecName "kube-api-access-qz9zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 21:58:52.108599 kubelet[2790]: I0909 21:58:52.108522 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a580a789-c1dc-4711-99dd-e16cd6835dae-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a580a789-c1dc-4711-99dd-e16cd6835dae" (UID: "a580a789-c1dc-4711-99dd-e16cd6835dae"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 21:58:52.110609 kubelet[2790]: I0909 21:58:52.109052 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42159c3f-8651-4b9a-97a8-f6ad18d81eac-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "42159c3f-8651-4b9a-97a8-f6ad18d81eac" (UID: "42159c3f-8651-4b9a-97a8-f6ad18d81eac"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 21:58:52.116184 kubelet[2790]: I0909 21:58:52.116119 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a580a789-c1dc-4711-99dd-e16cd6835dae-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a580a789-c1dc-4711-99dd-e16cd6835dae" (UID: "a580a789-c1dc-4711-99dd-e16cd6835dae"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 21:58:52.116432 kubelet[2790]: I0909 21:58:52.116266 2790 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a580a789-c1dc-4711-99dd-e16cd6835dae-kube-api-access-bct6f" (OuterVolumeSpecName: "kube-api-access-bct6f") pod "a580a789-c1dc-4711-99dd-e16cd6835dae" (UID: "a580a789-c1dc-4711-99dd-e16cd6835dae"). InnerVolumeSpecName "kube-api-access-bct6f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 21:58:52.165816 kubelet[2790]: I0909 21:58:52.163704 2790 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 21:58:52.165816 kubelet[2790]: I0909 21:58:52.164454 2790 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a580a789-c1dc-4711-99dd-e16cd6835dae-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 21:58:52.165816 kubelet[2790]: I0909 21:58:52.165163 2790 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 21:58:52.165816 kubelet[2790]: I0909 21:58:52.165182 2790 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a580a789-c1dc-4711-99dd-e16cd6835dae-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 21:58:52.165816 kubelet[2790]: I0909 21:58:52.165199 2790 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a580a789-c1dc-4711-99dd-e16cd6835dae-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 21:58:52.165816 kubelet[2790]: I0909 21:58:52.165211 2790 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 21:58:52.165816 kubelet[2790]: I0909 21:58:52.165222 2790 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 21:58:52.165816 kubelet[2790]: I0909 21:58:52.165232 2790 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 21:58:52.166255 kubelet[2790]: I0909 21:58:52.165244 2790 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 21:58:52.166255 kubelet[2790]: I0909 21:58:52.165255 2790 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 21:58:52.166255 kubelet[2790]: I0909 21:58:52.165272 2790 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 21:58:52.166255 kubelet[2790]: I0909 21:58:52.165283 2790 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 21:58:52.166255 kubelet[2790]: I0909 21:58:52.165296 2790 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a580a789-c1dc-4711-99dd-e16cd6835dae-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 21:58:52.166255 
kubelet[2790]: I0909 21:58:52.165308 2790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bct6f\" (UniqueName: \"kubernetes.io/projected/a580a789-c1dc-4711-99dd-e16cd6835dae-kube-api-access-bct6f\") on node \"localhost\" DevicePath \"\"" Sep 9 21:58:52.166255 kubelet[2790]: I0909 21:58:52.165322 2790 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42159c3f-8651-4b9a-97a8-f6ad18d81eac-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 21:58:52.166255 kubelet[2790]: I0909 21:58:52.165853 2790 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz9zv\" (UniqueName: \"kubernetes.io/projected/42159c3f-8651-4b9a-97a8-f6ad18d81eac-kube-api-access-qz9zv\") on node \"localhost\" DevicePath \"\"" Sep 9 21:58:52.618479 systemd[1]: var-lib-kubelet-pods-a580a789\x2dc1dc\x2d4711\x2d99dd\x2de16cd6835dae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbct6f.mount: Deactivated successfully. Sep 9 21:58:52.619285 systemd[1]: var-lib-kubelet-pods-42159c3f\x2d8651\x2d4b9a\x2d97a8\x2df6ad18d81eac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqz9zv.mount: Deactivated successfully. Sep 9 21:58:52.619419 systemd[1]: var-lib-kubelet-pods-a580a789\x2dc1dc\x2d4711\x2d99dd\x2de16cd6835dae-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 21:58:52.619525 systemd[1]: var-lib-kubelet-pods-a580a789\x2dc1dc\x2d4711\x2d99dd\x2de16cd6835dae-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 21:58:52.966014 kubelet[2790]: I0909 21:58:52.964355 2790 scope.go:117] "RemoveContainer" containerID="6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96" Sep 9 21:58:52.980676 systemd[1]: Removed slice kubepods-burstable-poda580a789_c1dc_4711_99dd_e16cd6835dae.slice - libcontainer container kubepods-burstable-poda580a789_c1dc_4711_99dd_e16cd6835dae.slice. Sep 9 21:58:52.983021 systemd[1]: kubepods-burstable-poda580a789_c1dc_4711_99dd_e16cd6835dae.slice: Consumed 16.035s CPU time, 139.2M memory peak, 216K read from disk, 16.6M written to disk. Sep 9 21:58:52.983637 containerd[1570]: time="2025-09-09T21:58:52.982144269Z" level=info msg="RemoveContainer for \"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\"" Sep 9 21:58:53.003248 systemd[1]: Removed slice kubepods-besteffort-pod42159c3f_8651_4b9a_97a8_f6ad18d81eac.slice - libcontainer container kubepods-besteffort-pod42159c3f_8651_4b9a_97a8_f6ad18d81eac.slice. Sep 9 21:58:53.003392 systemd[1]: kubepods-besteffort-pod42159c3f_8651_4b9a_97a8_f6ad18d81eac.slice: Consumed 1.381s CPU time, 25.3M memory peak, 584K read from disk, 4K written to disk. 
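Note on the "Deactivated successfully" mount units above: systemd encodes the per-pod volume paths in the unit names, with "/" turned into "-" and literal "-" and "~" hex-escaped as \x2d and \x7e. A minimal Go sketch of reversing that encoding to recover the path, assuming only the escaping rules visible in these unit names (not a full reimplementation of systemd-escape):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescapeUnit reverses the systemd mount-unit encoding seen in the log:
    // "\xNN" escapes become their byte values, remaining "-" become "/".
    func unescapeUnit(name string) string {
        name = strings.TrimSuffix(name, ".mount")
        var b strings.Builder
        for i := 0; i < len(name); i++ {
            switch {
            case strings.HasPrefix(name[i:], `\x`) && i+3 < len(name):
                if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
                    b.WriteByte(byte(v))
                    i += 3
                    continue
                }
                b.WriteByte(name[i])
            case name[i] == '-':
                b.WriteByte('/')
            default:
                b.WriteByte(name[i])
            }
        }
        return "/" + b.String()
    }

    func main() {
        unit := `var-lib-kubelet-pods-a580a789\x2dc1dc\x2d4711\x2d99dd\x2de16cd6835dae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbct6f.mount`
        fmt.Println(unescapeUnit(unit))
        // /var/lib/kubelet/pods/a580a789-c1dc-4711-99dd-e16cd6835dae/volumes/kubernetes.io~projected/kube-api-access-bct6f
    }

That recovered path is exactly the tmpfs the kubelet tore down in the UnmountVolume.TearDown entries above.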
Sep 9 21:58:53.264485 containerd[1570]: time="2025-09-09T21:58:53.264269968Z" level=info msg="RemoveContainer for \"6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96\" returns successfully" Sep 9 21:58:53.274635 kubelet[2790]: I0909 21:58:53.272761 2790 scope.go:117] "RemoveContainer" containerID="59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65" Sep 9 21:58:53.289436 containerd[1570]: time="2025-09-09T21:58:53.289166973Z" level=info msg="RemoveContainer for \"59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65\"" Sep 9 21:58:53.332117 sshd[4840]: Connection closed by 10.0.0.1 port 53966 Sep 9 21:58:53.335607 sshd-session[4837]: pam_unix(sshd:session): session closed for user core Sep 9 21:58:53.357705 systemd[1]: sshd@37-10.0.0.15:22-10.0.0.1:53966.service: Deactivated successfully. Sep 9 21:58:53.362705 systemd[1]: session-37.scope: Deactivated successfully. Sep 9 21:58:53.365829 systemd-logind[1552]: Session 37 logged out. Waiting for processes to exit. Sep 9 21:58:53.374113 systemd[1]: Started sshd@38-10.0.0.15:22-10.0.0.1:42100.service - OpenSSH per-connection server daemon (10.0.0.1:42100). Sep 9 21:58:53.380219 systemd-logind[1552]: Removed session 37. Sep 9 21:58:53.434371 containerd[1570]: time="2025-09-09T21:58:53.434240852Z" level=info msg="RemoveContainer for \"59df51a9eeebd696d73004a3033eafb81161a7c02d87924b116cb2263cd1ec65\" returns successfully" Sep 9 21:58:53.435800 kubelet[2790]: I0909 21:58:53.435734 2790 scope.go:117] "RemoveContainer" containerID="68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f" Sep 9 21:58:53.454597 containerd[1570]: time="2025-09-09T21:58:53.453759151Z" level=info msg="RemoveContainer for \"68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f\"" Sep 9 21:58:53.466099 kubelet[2790]: I0909 21:58:53.465802 2790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a580a789-c1dc-4711-99dd-e16cd6835dae" path="/var/lib/kubelet/pods/a580a789-c1dc-4711-99dd-e16cd6835dae/volumes" Sep 9 21:58:53.487854 containerd[1570]: time="2025-09-09T21:58:53.485323665Z" level=info msg="RemoveContainer for \"68822c296ca6847b436a9d71b332dc98a7510185dacef3fe54adcafad9343d5f\" returns successfully" Sep 9 21:58:53.488013 kubelet[2790]: I0909 21:58:53.485734 2790 scope.go:117] "RemoveContainer" containerID="d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8" Sep 9 21:58:53.489835 containerd[1570]: time="2025-09-09T21:58:53.488088141Z" level=info msg="RemoveContainer for \"d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8\"" Sep 9 21:58:53.513717 containerd[1570]: time="2025-09-09T21:58:53.513598569Z" level=info msg="RemoveContainer for \"d208d88a513c73cd42a63d2d9ecc6031c6bd20b5558554764598ed72e8c186f8\" returns successfully" Sep 9 21:58:53.547098 kubelet[2790]: I0909 21:58:53.546481 2790 scope.go:117] "RemoveContainer" containerID="001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234" Sep 9 21:58:53.563567 containerd[1570]: time="2025-09-09T21:58:53.552747226Z" level=info msg="RemoveContainer for \"001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234\"" Sep 9 21:58:53.594548 containerd[1570]: time="2025-09-09T21:58:53.594472688Z" level=info msg="RemoveContainer for \"001ccb021a6e77aa9745018492adf11cc6fd6eeab5ce840c9f728947c74b7234\" returns successfully" Sep 9 21:58:53.600700 kubelet[2790]: I0909 21:58:53.600559 2790 scope.go:117] "RemoveContainer" containerID="3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d" Sep 9 
21:58:53.608827 containerd[1570]: time="2025-09-09T21:58:53.608710734Z" level=info msg="RemoveContainer for \"3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d\"" Sep 9 21:58:53.640175 containerd[1570]: time="2025-09-09T21:58:53.638695402Z" level=info msg="RemoveContainer for \"3fef84cec25b7048ed08ce1c554dd09b8424d79874f6c6c96072a4e1450d762d\" returns successfully" Sep 9 21:58:53.681827 sshd[4993]: Accepted publickey for core from 10.0.0.1 port 42100 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:58:53.685036 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:58:53.712944 systemd-logind[1552]: New session 38 of user core. Sep 9 21:58:53.744252 systemd[1]: Started session-38.scope - Session 38 of User core. Sep 9 21:58:55.467492 sshd[4996]: Connection closed by 10.0.0.1 port 42100 Sep 9 21:58:55.469249 sshd-session[4993]: pam_unix(sshd:session): session closed for user core Sep 9 21:58:55.504730 kubelet[2790]: I0909 21:58:55.502271 2790 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42159c3f-8651-4b9a-97a8-f6ad18d81eac" path="/var/lib/kubelet/pods/42159c3f-8651-4b9a-97a8-f6ad18d81eac/volumes" Sep 9 21:58:55.538401 systemd[1]: sshd@38-10.0.0.15:22-10.0.0.1:42100.service: Deactivated successfully. Sep 9 21:58:55.558957 systemd[1]: session-38.scope: Deactivated successfully. Sep 9 21:58:55.572483 systemd-logind[1552]: Session 38 logged out. Waiting for processes to exit. Sep 9 21:58:55.578893 systemd[1]: Started sshd@39-10.0.0.15:22-10.0.0.1:42114.service - OpenSSH per-connection server daemon (10.0.0.1:42114). Sep 9 21:58:55.592554 systemd-logind[1552]: Removed session 38. Sep 9 21:58:55.798632 sshd[5008]: Accepted publickey for core from 10.0.0.1 port 42114 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:58:55.800970 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:58:55.820419 systemd-logind[1552]: New session 39 of user core. Sep 9 21:58:55.845358 systemd[1]: Started session-39.scope - Session 39 of User core. Sep 9 21:58:55.927993 sshd[5011]: Connection closed by 10.0.0.1 port 42114 Sep 9 21:58:55.928858 sshd-session[5008]: pam_unix(sshd:session): session closed for user core Sep 9 21:58:55.948590 systemd[1]: sshd@39-10.0.0.15:22-10.0.0.1:42114.service: Deactivated successfully. Sep 9 21:58:55.965497 systemd[1]: session-39.scope: Deactivated successfully. Sep 9 21:58:55.969242 systemd-logind[1552]: Session 39 logged out. Waiting for processes to exit. Sep 9 21:58:55.972218 systemd[1]: Started sshd@40-10.0.0.15:22-10.0.0.1:42122.service - OpenSSH per-connection server daemon (10.0.0.1:42122). Sep 9 21:58:55.979375 systemd-logind[1552]: Removed session 39. Sep 9 21:58:56.092030 sshd[5018]: Accepted publickey for core from 10.0.0.1 port 42122 ssh2: RSA SHA256:ktqLwzbN69cSIHILAOlmtKU0r/jJENVejEBEkVUVIT8 Sep 9 21:58:56.094673 sshd-session[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:58:56.127262 systemd-logind[1552]: New session 40 of user core. Sep 9 21:58:56.137602 systemd[1]: Started session-40.scope - Session 40 of User core. 
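The RemoveContainer exchanges above are the kubelet asking the container runtime to delete the exited cilium containers by ID once their pods are gone. A rough sketch of the same kind of deletion using containerd's native Go client, assuming the v1.x client import paths, the default socket, and the "k8s.io" namespace used for Kubernetes containers; the kubelet itself goes through the CRI RuntimeService rather than this client:

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the containerd socket on the node.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed containers live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Container ID taken from the RemoveContainer lines above.
        id := "6048b0aeeaecfb26b6ebcba720c49de90645e14d5475e46094410cbc5b08ac96"

        c, err := client.LoadContainer(ctx, id)
        if err != nil {
            log.Fatal(err)
        }
        // Delete the container record and clean up its snapshot, roughly what
        // the CRI plugin does after the task has already exited.
        if err := c.Delete(ctx, containerd.WithSnapshotCleanup); err != nil {
            log.Fatal(err)
        }
    }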
Sep 9 21:58:56.363591 kubelet[2790]: E0909 21:58:56.339059 2790 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42159c3f-8651-4b9a-97a8-f6ad18d81eac" containerName="cilium-operator" Sep 9 21:58:56.363591 kubelet[2790]: E0909 21:58:56.339109 2790 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a580a789-c1dc-4711-99dd-e16cd6835dae" containerName="mount-bpf-fs" Sep 9 21:58:56.363591 kubelet[2790]: E0909 21:58:56.339118 2790 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a580a789-c1dc-4711-99dd-e16cd6835dae" containerName="clean-cilium-state" Sep 9 21:58:56.363591 kubelet[2790]: E0909 21:58:56.339125 2790 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a580a789-c1dc-4711-99dd-e16cd6835dae" containerName="mount-cgroup" Sep 9 21:58:56.363591 kubelet[2790]: E0909 21:58:56.339133 2790 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a580a789-c1dc-4711-99dd-e16cd6835dae" containerName="apply-sysctl-overwrites" Sep 9 21:58:56.363591 kubelet[2790]: E0909 21:58:56.339140 2790 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a580a789-c1dc-4711-99dd-e16cd6835dae" containerName="cilium-agent" Sep 9 21:58:56.363591 kubelet[2790]: I0909 21:58:56.339183 2790 memory_manager.go:354] "RemoveStaleState removing state" podUID="42159c3f-8651-4b9a-97a8-f6ad18d81eac" containerName="cilium-operator" Sep 9 21:58:56.363591 kubelet[2790]: I0909 21:58:56.339192 2790 memory_manager.go:354] "RemoveStaleState removing state" podUID="a580a789-c1dc-4711-99dd-e16cd6835dae" containerName="cilium-agent" Sep 9 21:58:56.363591 kubelet[2790]: W0909 21:58:56.354182 2790 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 9 21:58:56.363994 kubelet[2790]: E0909 21:58:56.354237 2790 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 9 21:58:56.363994 kubelet[2790]: W0909 21:58:56.354319 2790 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 9 21:58:56.363994 kubelet[2790]: E0909 21:58:56.354347 2790 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 9 21:58:56.363994 kubelet[2790]: W0909 21:58:56.354380 2790 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no 
relationship found between node 'localhost' and this object Sep 9 21:58:56.363994 kubelet[2790]: E0909 21:58:56.354392 2790 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 9 21:58:56.381780 systemd[1]: Created slice kubepods-burstable-podf1ae08f1_8706_4b0f_8ffc_6bad836a2da0.slice - libcontainer container kubepods-burstable-podf1ae08f1_8706_4b0f_8ffc_6bad836a2da0.slice. Sep 9 21:58:56.475914 kubelet[2790]: I0909 21:58:56.475496 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-host-proc-sys-net\") pod \"cilium-cssl5\" (UID: \"f1ae08f1-8706-4b0f-8ffc-6bad836a2da0\") " pod="kube-system/cilium-cssl5" Sep 9 21:58:56.475914 kubelet[2790]: I0909 21:58:56.475569 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-hubble-tls\") pod \"cilium-cssl5\" (UID: \"f1ae08f1-8706-4b0f-8ffc-6bad836a2da0\") " pod="kube-system/cilium-cssl5" Sep 9 21:58:56.475914 kubelet[2790]: I0909 21:58:56.475601 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-cni-path\") pod \"cilium-cssl5\" (UID: \"f1ae08f1-8706-4b0f-8ffc-6bad836a2da0\") " pod="kube-system/cilium-cssl5" Sep 9 21:58:56.475914 kubelet[2790]: I0909 21:58:56.475621 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-etc-cni-netd\") pod \"cilium-cssl5\" (UID: \"f1ae08f1-8706-4b0f-8ffc-6bad836a2da0\") " pod="kube-system/cilium-cssl5" Sep 9 21:58:56.475914 kubelet[2790]: I0909 21:58:56.475647 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-cilium-config-path\") pod \"cilium-cssl5\" (UID: \"f1ae08f1-8706-4b0f-8ffc-6bad836a2da0\") " pod="kube-system/cilium-cssl5" Sep 9 21:58:56.475914 kubelet[2790]: I0909 21:58:56.475670 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-lib-modules\") pod \"cilium-cssl5\" (UID: \"f1ae08f1-8706-4b0f-8ffc-6bad836a2da0\") " pod="kube-system/cilium-cssl5" Sep 9 21:58:56.476379 kubelet[2790]: I0909 21:58:56.475693 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-xtables-lock\") pod \"cilium-cssl5\" (UID: \"f1ae08f1-8706-4b0f-8ffc-6bad836a2da0\") " pod="kube-system/cilium-cssl5" Sep 9 21:58:56.476379 kubelet[2790]: I0909 21:58:56.475717 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-clustermesh-secrets\") pod \"cilium-cssl5\" (UID: \"f1ae08f1-8706-4b0f-8ffc-6bad836a2da0\") " pod="kube-system/cilium-cssl5" Sep 9 21:58:56.476379 kubelet[2790]: I0909 21:58:56.475747 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-host-proc-sys-kernel\") pod \"cilium-cssl5\" (UID: \"f1ae08f1-8706-4b0f-8ffc-6bad836a2da0\") " pod="kube-system/cilium-cssl5" Sep 9 21:58:56.476379 kubelet[2790]: I0909 21:58:56.475771 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-cilium-run\") pod \"cilium-cssl5\" (UID: \"f1ae08f1-8706-4b0f-8ffc-6bad836a2da0\") " pod="kube-system/cilium-cssl5" Sep 9 21:58:56.476379 kubelet[2790]: I0909 21:58:56.475793 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-bpf-maps\") pod \"cilium-cssl5\" (UID: \"f1ae08f1-8706-4b0f-8ffc-6bad836a2da0\") " pod="kube-system/cilium-cssl5" Sep 9 21:58:56.476379 kubelet[2790]: I0909 21:58:56.475819 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-cilium-ipsec-secrets\") pod \"cilium-cssl5\" (UID: \"f1ae08f1-8706-4b0f-8ffc-6bad836a2da0\") " pod="kube-system/cilium-cssl5" Sep 9 21:58:56.480603 kubelet[2790]: I0909 21:58:56.475841 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5969r\" (UniqueName: \"kubernetes.io/projected/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-kube-api-access-5969r\") pod \"cilium-cssl5\" (UID: \"f1ae08f1-8706-4b0f-8ffc-6bad836a2da0\") " pod="kube-system/cilium-cssl5" Sep 9 21:58:56.480603 kubelet[2790]: I0909 21:58:56.480015 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-hostproc\") pod \"cilium-cssl5\" (UID: \"f1ae08f1-8706-4b0f-8ffc-6bad836a2da0\") " pod="kube-system/cilium-cssl5" Sep 9 21:58:56.480603 kubelet[2790]: I0909 21:58:56.480047 2790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-cilium-cgroup\") pod \"cilium-cssl5\" (UID: \"f1ae08f1-8706-4b0f-8ffc-6bad836a2da0\") " pod="kube-system/cilium-cssl5" Sep 9 21:58:56.525714 kubelet[2790]: E0909 21:58:56.525580 2790 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 21:58:57.592058 kubelet[2790]: E0909 21:58:57.590931 2790 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Sep 9 21:58:57.592058 kubelet[2790]: E0909 21:58:57.591084 2790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-cilium-ipsec-secrets podName:f1ae08f1-8706-4b0f-8ffc-6bad836a2da0 nodeName:}" failed. 
No retries permitted until 2025-09-09 21:58:58.091047853 +0000 UTC m=+282.974256559 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/f1ae08f1-8706-4b0f-8ffc-6bad836a2da0-cilium-ipsec-secrets") pod "cilium-cssl5" (UID: "f1ae08f1-8706-4b0f-8ffc-6bad836a2da0") : failed to sync secret cache: timed out waiting for the condition Sep 9 21:58:58.187121 containerd[1570]: time="2025-09-09T21:58:58.187012566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cssl5,Uid:f1ae08f1-8706-4b0f-8ffc-6bad836a2da0,Namespace:kube-system,Attempt:0,}" Sep 9 21:58:58.783514 containerd[1570]: time="2025-09-09T21:58:58.781862925Z" level=info msg="connecting to shim 10c4bbf288a504dbbe10e7476cca9996275f5c14766c26596ff070e2c3dad08b" address="unix:///run/containerd/s/20ec6a37a66852f59163724f0026527d711271c4dc54c2a0278011c47af33693" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:58:58.932779 systemd[1]: Started cri-containerd-10c4bbf288a504dbbe10e7476cca9996275f5c14766c26596ff070e2c3dad08b.scope - libcontainer container 10c4bbf288a504dbbe10e7476cca9996275f5c14766c26596ff070e2c3dad08b. Sep 9 21:58:59.115166 containerd[1570]: time="2025-09-09T21:58:59.113483762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cssl5,Uid:f1ae08f1-8706-4b0f-8ffc-6bad836a2da0,Namespace:kube-system,Attempt:0,} returns sandbox id \"10c4bbf288a504dbbe10e7476cca9996275f5c14766c26596ff070e2c3dad08b\"" Sep 9 21:58:59.142032 containerd[1570]: time="2025-09-09T21:58:59.141599960Z" level=info msg="CreateContainer within sandbox \"10c4bbf288a504dbbe10e7476cca9996275f5c14766c26596ff070e2c3dad08b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 21:58:59.190772 containerd[1570]: time="2025-09-09T21:58:59.190686661Z" level=info msg="Container 89cefddd29658ade824607bffbbea946816a5c136d7d109efe15b84bca031bae: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:58:59.223503 containerd[1570]: time="2025-09-09T21:58:59.219852426Z" level=info msg="CreateContainer within sandbox \"10c4bbf288a504dbbe10e7476cca9996275f5c14766c26596ff070e2c3dad08b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"89cefddd29658ade824607bffbbea946816a5c136d7d109efe15b84bca031bae\"" Sep 9 21:58:59.223503 containerd[1570]: time="2025-09-09T21:58:59.221049636Z" level=info msg="StartContainer for \"89cefddd29658ade824607bffbbea946816a5c136d7d109efe15b84bca031bae\"" Sep 9 21:58:59.224310 containerd[1570]: time="2025-09-09T21:58:59.224238000Z" level=info msg="connecting to shim 89cefddd29658ade824607bffbbea946816a5c136d7d109efe15b84bca031bae" address="unix:///run/containerd/s/20ec6a37a66852f59163724f0026527d711271c4dc54c2a0278011c47af33693" protocol=ttrpc version=3 Sep 9 21:58:59.314422 systemd[1]: Started cri-containerd-89cefddd29658ade824607bffbbea946816a5c136d7d109efe15b84bca031bae.scope - libcontainer container 89cefddd29658ade824607bffbbea946816a5c136d7d109efe15b84bca031bae. Sep 9 21:58:59.411751 containerd[1570]: time="2025-09-09T21:58:59.411600277Z" level=info msg="StartContainer for \"89cefddd29658ade824607bffbbea946816a5c136d7d109efe15b84bca031bae\" returns successfully" Sep 9 21:58:59.448635 systemd[1]: cri-containerd-89cefddd29658ade824607bffbbea946816a5c136d7d109efe15b84bca031bae.scope: Deactivated successfully. 
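Two things are happening in the entries above. The reflector warnings are the node authorizer at work: the kubelet may only read secrets and configmaps once a pod bound to this node references them, and that relationship for the new cilium-cssl5 pod had not yet propagated, hence "no relationship found between node 'localhost' and this object". As a result the cilium-ipsec-secrets mount times out once and is retried after the durationBeforeRetry of 500ms. A small Go sketch of that retry pattern using k8s.io/apimachinery's wait package; the mountIPSecSecret function is a stand-in for the real MountVolume.SetUp call, not kubelet code:

    package main

    import (
        "errors"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // errNotSynced stands in for "failed to sync secret cache: timed out
    // waiting for the condition" from the log.
    var errNotSynced = errors.New("secret cache not synced yet")

    // mountIPSecSecret is a placeholder that succeeds once the secret is
    // finally visible to the node.
    func mountIPSecSecret(attempt int) error {
        if attempt < 3 {
            return errNotSynced
        }
        return nil
    }

    func main() {
        backoff := wait.Backoff{
            Duration: 500 * time.Millisecond, // matches durationBeforeRetry in the log
            Factor:   2.0,
            Steps:    5,
        }
        attempt := 0
        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            attempt++
            if err := mountIPSecSecret(attempt); err != nil {
                fmt.Printf("attempt %d: %v, retrying\n", attempt, err)
                return false, nil // not done yet, back off and retry
            }
            return true, nil // mounted, stop retrying
        })
        if err != nil {
            fmt.Println("giving up:", err)
            return
        }
        fmt.Println("cilium-ipsec-secrets mounted after", attempt, "attempts")
    }

Once the secret cache syncs, the sandbox for cilium-cssl5 is created and the first init container (mount-cgroup) runs, as the subsequent entries show.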
Sep 9 21:58:59.450310 containerd[1570]: time="2025-09-09T21:58:59.450268195Z" level=info msg="received exit event container_id:\"89cefddd29658ade824607bffbbea946816a5c136d7d109efe15b84bca031bae\" id:\"89cefddd29658ade824607bffbbea946816a5c136d7d109efe15b84bca031bae\" pid:5094 exited_at:{seconds:1757455139 nanos:449829162}" Sep 9 21:58:59.452255 containerd[1570]: time="2025-09-09T21:58:59.450984110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89cefddd29658ade824607bffbbea946816a5c136d7d109efe15b84bca031bae\" id:\"89cefddd29658ade824607bffbbea946816a5c136d7d109efe15b84bca031bae\" pid:5094 exited_at:{seconds:1757455139 nanos:449829162}" Sep 9 21:58:59.516772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89cefddd29658ade824607bffbbea946816a5c136d7d109efe15b84bca031bae-rootfs.mount: Deactivated successfully. Sep 9 21:59:00.103105 containerd[1570]: time="2025-09-09T21:59:00.099075453Z" level=info msg="CreateContainer within sandbox \"10c4bbf288a504dbbe10e7476cca9996275f5c14766c26596ff070e2c3dad08b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 21:59:00.315026 containerd[1570]: time="2025-09-09T21:59:00.305783014Z" level=info msg="Container 58f507f359195b9335df574b3c90f3e339a29cbecb70eba5896aa3dbfffc799b: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:59:00.551369 containerd[1570]: time="2025-09-09T21:59:00.550547563Z" level=info msg="CreateContainer within sandbox \"10c4bbf288a504dbbe10e7476cca9996275f5c14766c26596ff070e2c3dad08b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"58f507f359195b9335df574b3c90f3e339a29cbecb70eba5896aa3dbfffc799b\"" Sep 9 21:59:00.558612 containerd[1570]: time="2025-09-09T21:59:00.558550096Z" level=info msg="StartContainer for \"58f507f359195b9335df574b3c90f3e339a29cbecb70eba5896aa3dbfffc799b\"" Sep 9 21:59:00.580305 containerd[1570]: time="2025-09-09T21:59:00.568252926Z" level=info msg="connecting to shim 58f507f359195b9335df574b3c90f3e339a29cbecb70eba5896aa3dbfffc799b" address="unix:///run/containerd/s/20ec6a37a66852f59163724f0026527d711271c4dc54c2a0278011c47af33693" protocol=ttrpc version=3 Sep 9 21:59:00.681120 systemd[1]: Started cri-containerd-58f507f359195b9335df574b3c90f3e339a29cbecb70eba5896aa3dbfffc799b.scope - libcontainer container 58f507f359195b9335df574b3c90f3e339a29cbecb70eba5896aa3dbfffc799b. Sep 9 21:59:00.770155 containerd[1570]: time="2025-09-09T21:59:00.769621336Z" level=info msg="StartContainer for \"58f507f359195b9335df574b3c90f3e339a29cbecb70eba5896aa3dbfffc799b\" returns successfully" Sep 9 21:59:00.774628 systemd[1]: cri-containerd-58f507f359195b9335df574b3c90f3e339a29cbecb70eba5896aa3dbfffc799b.scope: Deactivated successfully. 
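The exit events above carry exited_at as a protobuf-style timestamp (seconds and nanoseconds since the Unix epoch). A short Go check, using the values from the mount-cgroup exit event, shows it matches the surrounding journal timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values copied from the exit event for container 89cefddd... above.
        const seconds, nanos = 1757455139, 449829162

        // time.Unix reconstructs the wall-clock instant from the two fields.
        exitedAt := time.Unix(seconds, nanos).UTC()
        fmt.Println(exitedAt.Format(time.RFC3339Nano))
        // 2025-09-09T21:58:59.449829162Z, i.e. the same moment the journal
        // logged the TaskExit for the mount-cgroup init container.
    }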
Sep 9 21:59:00.782952 containerd[1570]: time="2025-09-09T21:59:00.782287013Z" level=info msg="received exit event container_id:\"58f507f359195b9335df574b3c90f3e339a29cbecb70eba5896aa3dbfffc799b\" id:\"58f507f359195b9335df574b3c90f3e339a29cbecb70eba5896aa3dbfffc799b\" pid:5140 exited_at:{seconds:1757455140 nanos:781913517}" Sep 9 21:59:00.782952 containerd[1570]: time="2025-09-09T21:59:00.782326269Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58f507f359195b9335df574b3c90f3e339a29cbecb70eba5896aa3dbfffc799b\" id:\"58f507f359195b9335df574b3c90f3e339a29cbecb70eba5896aa3dbfffc799b\" pid:5140 exited_at:{seconds:1757455140 nanos:781913517}" Sep 9 21:59:00.830768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58f507f359195b9335df574b3c90f3e339a29cbecb70eba5896aa3dbfffc799b-rootfs.mount: Deactivated successfully. Sep 9 21:59:01.115902 containerd[1570]: time="2025-09-09T21:59:01.111609007Z" level=info msg="CreateContainer within sandbox \"10c4bbf288a504dbbe10e7476cca9996275f5c14766c26596ff070e2c3dad08b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 21:59:01.161479 containerd[1570]: time="2025-09-09T21:59:01.160846695Z" level=info msg="Container 87c526858a6c9a4d744718b5c382b9e5639d853942d77f0625cfe47cda64daf6: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:59:01.303129 containerd[1570]: time="2025-09-09T21:59:01.298841711Z" level=info msg="CreateContainer within sandbox \"10c4bbf288a504dbbe10e7476cca9996275f5c14766c26596ff070e2c3dad08b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"87c526858a6c9a4d744718b5c382b9e5639d853942d77f0625cfe47cda64daf6\"" Sep 9 21:59:01.305837 containerd[1570]: time="2025-09-09T21:59:01.304844431Z" level=info msg="StartContainer for \"87c526858a6c9a4d744718b5c382b9e5639d853942d77f0625cfe47cda64daf6\"" Sep 9 21:59:01.313732 containerd[1570]: time="2025-09-09T21:59:01.312522756Z" level=info msg="connecting to shim 87c526858a6c9a4d744718b5c382b9e5639d853942d77f0625cfe47cda64daf6" address="unix:///run/containerd/s/20ec6a37a66852f59163724f0026527d711271c4dc54c2a0278011c47af33693" protocol=ttrpc version=3 Sep 9 21:59:01.397568 systemd[1]: Started cri-containerd-87c526858a6c9a4d744718b5c382b9e5639d853942d77f0625cfe47cda64daf6.scope - libcontainer container 87c526858a6c9a4d744718b5c382b9e5639d853942d77f0625cfe47cda64daf6. Sep 9 21:59:01.530732 kubelet[2790]: E0909 21:59:01.530622 2790 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 21:59:01.533279 containerd[1570]: time="2025-09-09T21:59:01.533199821Z" level=info msg="StartContainer for \"87c526858a6c9a4d744718b5c382b9e5639d853942d77f0625cfe47cda64daf6\" returns successfully" Sep 9 21:59:01.538026 systemd[1]: cri-containerd-87c526858a6c9a4d744718b5c382b9e5639d853942d77f0625cfe47cda64daf6.scope: Deactivated successfully. 
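The mount-bpf-fs init container whose creation is logged next exists to ensure a BPF filesystem is mounted at /sys/fs/bpf before the agent starts, so pinned maps and programs survive agent restarts. In the Cilium image this is done by a small script; the sketch below only illustrates the underlying mount call via golang.org/x/sys/unix and assumes CAP_SYS_ADMIN, as the init container has:

    package main

    import (
        "log"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        const target = "/sys/fs/bpf"

        // Make sure the mount point exists (no-op if it already does).
        if err := os.MkdirAll(target, 0o755); err != nil {
            log.Fatal(err)
        }

        // The real init container first checks /proc/mounts to see whether a
        // bpf filesystem is already mounted here; this sketch just mounts one.
        if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
            log.Fatal(err)
        }
        log.Println("mounted bpffs at", target)
    }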
Sep 9 21:59:01.548024 containerd[1570]: time="2025-09-09T21:59:01.545378111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87c526858a6c9a4d744718b5c382b9e5639d853942d77f0625cfe47cda64daf6\" id:\"87c526858a6c9a4d744718b5c382b9e5639d853942d77f0625cfe47cda64daf6\" pid:5183 exited_at:{seconds:1757455141 nanos:544967493}" Sep 9 21:59:01.548024 containerd[1570]: time="2025-09-09T21:59:01.545581191Z" level=info msg="received exit event container_id:\"87c526858a6c9a4d744718b5c382b9e5639d853942d77f0625cfe47cda64daf6\" id:\"87c526858a6c9a4d744718b5c382b9e5639d853942d77f0625cfe47cda64daf6\" pid:5183 exited_at:{seconds:1757455141 nanos:544967493}" Sep 9 21:59:01.678483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87c526858a6c9a4d744718b5c382b9e5639d853942d77f0625cfe47cda64daf6-rootfs.mount: Deactivated successfully. Sep 9 21:59:01.991881 kubelet[2790]: I0909 21:59:01.987688 2790 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T21:59:01Z","lastTransitionTime":"2025-09-09T21:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 21:59:02.042760 containerd[1570]: time="2025-09-09T21:59:02.042496848Z" level=warning msg="container event discarded" container=801629b2a52d6bbb13418b6b96876775891d3358740e953ad95ae372a35cc4f9 type=CONTAINER_CREATED_EVENT Sep 9 21:59:02.042760 containerd[1570]: time="2025-09-09T21:59:02.042601559Z" level=warning msg="container event discarded" container=801629b2a52d6bbb13418b6b96876775891d3358740e953ad95ae372a35cc4f9 type=CONTAINER_STARTED_EVENT Sep 9 21:59:02.136601 containerd[1570]: time="2025-09-09T21:59:02.134722196Z" level=info msg="CreateContainer within sandbox \"10c4bbf288a504dbbe10e7476cca9996275f5c14766c26596ff070e2c3dad08b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 21:59:02.151285 containerd[1570]: time="2025-09-09T21:59:02.151056137Z" level=warning msg="container event discarded" container=cef3e62ccad55c64f66ce6b9e312938ea19138e3e6b5c6e759e4bccc5c5b9142 type=CONTAINER_CREATED_EVENT Sep 9 21:59:02.151285 containerd[1570]: time="2025-09-09T21:59:02.151128044Z" level=warning msg="container event discarded" container=cef3e62ccad55c64f66ce6b9e312938ea19138e3e6b5c6e759e4bccc5c5b9142 type=CONTAINER_STARTED_EVENT Sep 9 21:59:02.151285 containerd[1570]: time="2025-09-09T21:59:02.151140327Z" level=warning msg="container event discarded" container=ba48ed8748b0fd8707f92f7b76280847a88d5ea7edfd4864507fcf8cb517c874 type=CONTAINER_CREATED_EVENT Sep 9 21:59:02.151285 containerd[1570]: time="2025-09-09T21:59:02.151149876Z" level=warning msg="container event discarded" container=ba48ed8748b0fd8707f92f7b76280847a88d5ea7edfd4864507fcf8cb517c874 type=CONTAINER_STARTED_EVENT Sep 9 21:59:02.178054 containerd[1570]: time="2025-09-09T21:59:02.170409619Z" level=warning msg="container event discarded" container=e20ed1c03525ea9244a70b2cf5f6b83c89b3b76b063079667d37bb16fe604780 type=CONTAINER_CREATED_EVENT Sep 9 21:59:02.192411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579086982.mount: Deactivated successfully. 
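The setters.go entry above is the kubelet flipping the node's Ready condition to False with reason KubeletNotReady: the old cilium-agent is gone and the new one is not yet serving CNI requests, so the container runtime network is reported not ready. The condition is logged as JSON and decodes directly into the Kubernetes API type; a small offline Go sketch using the payload copied from that line:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Condition payload copied from the setters.go line above.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T21:59:01Z","lastTransitionTime":"2025-09-09T21:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`

        var cond corev1.NodeCondition
        if err := json.Unmarshal([]byte(raw), &cond); err != nil {
            log.Fatal(err)
        }
        // The node stays this way until the new cilium-agent initializes the
        // CNI plugin again.
        fmt.Printf("%s=%s since %s (%s)\n",
            cond.Type, cond.Status, cond.LastTransitionTime.Time.Format("15:04:05"), cond.Reason)
        // Ready=False since 21:59:01 (KubeletNotReady)
    }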
Sep 9 21:59:02.197617 containerd[1570]: time="2025-09-09T21:59:02.196366878Z" level=info msg="Container ab28acbe6ccf3c5d0e462363b2eb85d6e649194c4ea400a2baedf8f4bdc870c2: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:59:02.238128 containerd[1570]: time="2025-09-09T21:59:02.237966850Z" level=info msg="CreateContainer within sandbox \"10c4bbf288a504dbbe10e7476cca9996275f5c14766c26596ff070e2c3dad08b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ab28acbe6ccf3c5d0e462363b2eb85d6e649194c4ea400a2baedf8f4bdc870c2\"" Sep 9 21:59:02.243110 containerd[1570]: time="2025-09-09T21:59:02.240177020Z" level=info msg="StartContainer for \"ab28acbe6ccf3c5d0e462363b2eb85d6e649194c4ea400a2baedf8f4bdc870c2\"" Sep 9 21:59:02.243110 containerd[1570]: time="2025-09-09T21:59:02.241550625Z" level=info msg="connecting to shim ab28acbe6ccf3c5d0e462363b2eb85d6e649194c4ea400a2baedf8f4bdc870c2" address="unix:///run/containerd/s/20ec6a37a66852f59163724f0026527d711271c4dc54c2a0278011c47af33693" protocol=ttrpc version=3 Sep 9 21:59:02.299490 containerd[1570]: time="2025-09-09T21:59:02.299420815Z" level=warning msg="container event discarded" container=c07675e20343c33c431e804a2e96544a7fd721d40988511e1b47efdb6e9f44df type=CONTAINER_CREATED_EVENT Sep 9 21:59:02.307854 systemd[1]: Started cri-containerd-ab28acbe6ccf3c5d0e462363b2eb85d6e649194c4ea400a2baedf8f4bdc870c2.scope - libcontainer container ab28acbe6ccf3c5d0e462363b2eb85d6e649194c4ea400a2baedf8f4bdc870c2. Sep 9 21:59:02.349718 containerd[1570]: time="2025-09-09T21:59:02.349618971Z" level=warning msg="container event discarded" container=6056cdcfec63d4755cfc2fda994609ff32494116a7bfb020f1fe60cfd095d294 type=CONTAINER_CREATED_EVENT Sep 9 21:59:02.424163 systemd[1]: cri-containerd-ab28acbe6ccf3c5d0e462363b2eb85d6e649194c4ea400a2baedf8f4bdc870c2.scope: Deactivated successfully. 
Sep 9 21:59:02.431936 containerd[1570]: time="2025-09-09T21:59:02.428851638Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab28acbe6ccf3c5d0e462363b2eb85d6e649194c4ea400a2baedf8f4bdc870c2\" id:\"ab28acbe6ccf3c5d0e462363b2eb85d6e649194c4ea400a2baedf8f4bdc870c2\" pid:5221 exited_at:{seconds:1757455142 nanos:428452062}" Sep 9 21:59:02.445364 containerd[1570]: time="2025-09-09T21:59:02.444527816Z" level=warning msg="container event discarded" container=e20ed1c03525ea9244a70b2cf5f6b83c89b3b76b063079667d37bb16fe604780 type=CONTAINER_STARTED_EVENT Sep 9 21:59:02.561921 containerd[1570]: time="2025-09-09T21:59:02.561799626Z" level=info msg="received exit event container_id:\"ab28acbe6ccf3c5d0e462363b2eb85d6e649194c4ea400a2baedf8f4bdc870c2\" id:\"ab28acbe6ccf3c5d0e462363b2eb85d6e649194c4ea400a2baedf8f4bdc870c2\" pid:5221 exited_at:{seconds:1757455142 nanos:428452062}" Sep 9 21:59:02.605281 containerd[1570]: time="2025-09-09T21:59:02.605214560Z" level=info msg="StartContainer for \"ab28acbe6ccf3c5d0e462363b2eb85d6e649194c4ea400a2baedf8f4bdc870c2\" returns successfully" Sep 9 21:59:02.608894 containerd[1570]: time="2025-09-09T21:59:02.605508454Z" level=warning msg="container event discarded" container=c07675e20343c33c431e804a2e96544a7fd721d40988511e1b47efdb6e9f44df type=CONTAINER_STARTED_EVENT Sep 9 21:59:02.608894 containerd[1570]: time="2025-09-09T21:59:02.605539023Z" level=warning msg="container event discarded" container=6056cdcfec63d4755cfc2fda994609ff32494116a7bfb020f1fe60cfd095d294 type=CONTAINER_STARTED_EVENT Sep 9 21:59:02.660449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab28acbe6ccf3c5d0e462363b2eb85d6e649194c4ea400a2baedf8f4bdc870c2-rootfs.mount: Deactivated successfully. Sep 9 21:59:03.167033 containerd[1570]: time="2025-09-09T21:59:03.166717744Z" level=info msg="CreateContainer within sandbox \"10c4bbf288a504dbbe10e7476cca9996275f5c14766c26596ff070e2c3dad08b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 21:59:03.343735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount169670561.mount: Deactivated successfully. Sep 9 21:59:03.353520 containerd[1570]: time="2025-09-09T21:59:03.353406203Z" level=info msg="Container ebec9850d40522eb82f7db9923665f4341e681893156c2f5525a9eb5bb34b276: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:59:03.398737 containerd[1570]: time="2025-09-09T21:59:03.395374214Z" level=info msg="CreateContainer within sandbox \"10c4bbf288a504dbbe10e7476cca9996275f5c14766c26596ff070e2c3dad08b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ebec9850d40522eb82f7db9923665f4341e681893156c2f5525a9eb5bb34b276\"" Sep 9 21:59:03.405106 containerd[1570]: time="2025-09-09T21:59:03.401899595Z" level=info msg="StartContainer for \"ebec9850d40522eb82f7db9923665f4341e681893156c2f5525a9eb5bb34b276\"" Sep 9 21:59:03.410105 containerd[1570]: time="2025-09-09T21:59:03.410018139Z" level=info msg="connecting to shim ebec9850d40522eb82f7db9923665f4341e681893156c2f5525a9eb5bb34b276" address="unix:///run/containerd/s/20ec6a37a66852f59163724f0026527d711271c4dc54c2a0278011c47af33693" protocol=ttrpc version=3 Sep 9 21:59:03.474952 systemd[1]: Started cri-containerd-ebec9850d40522eb82f7db9923665f4341e681893156c2f5525a9eb5bb34b276.scope - libcontainer container ebec9850d40522eb82f7db9923665f4341e681893156c2f5525a9eb5bb34b276. 
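After the cilium-agent container starts below, the repeated TaskExit events with fresh exec ids under the same long-lived container id (ebec9850...) are most plausibly exec processes launched inside the agent, i.e. its exec-based health probes; the exit_status:1 results before 21:59:14 line up with the pod only being observed Running at 21:59:14. A hedged client-go sketch of issuing that kind of exec against the cilium-cssl5 pod; the probe command shown is an assumption, and a real probe is driven by the kubelet through CRI rather than the API server:

    package main

    import (
        "context"
        "log"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/remotecommand"
    )

    func main() {
        // Assumes a reachable kubeconfig at the default location.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }

        // Build an exec request against the cilium-agent container of the
        // kube-system/cilium-cssl5 pod seen in the log.
        req := clientset.CoreV1().RESTClient().Post().
            Resource("pods").
            Namespace("kube-system").
            Name("cilium-cssl5").
            SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Container: "cilium-agent",
                Command:   []string{"cilium", "status", "--brief"}, // assumed probe-style command
                Stdout:    true,
                Stderr:    true,
            }, scheme.ParameterCodec)

        exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
        if err != nil {
            log.Fatal(err)
        }
        // Each exec of this kind surfaces in containerd as a TaskExit event
        // with a new exec id under the cilium-agent container.
        if err := exec.StreamWithContext(context.Background(), remotecommand.StreamOptions{
            Stdout: os.Stdout,
            Stderr: os.Stderr,
        }); err != nil {
            log.Fatal(err)
        }
    }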
Sep 9 21:59:03.669133 containerd[1570]: time="2025-09-09T21:59:03.669051788Z" level=info msg="StartContainer for \"ebec9850d40522eb82f7db9923665f4341e681893156c2f5525a9eb5bb34b276\" returns successfully" Sep 9 21:59:03.872050 containerd[1570]: time="2025-09-09T21:59:03.871278187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebec9850d40522eb82f7db9923665f4341e681893156c2f5525a9eb5bb34b276\" id:\"cccc984d29ea1640c27f44f9f7bc4ac394bdc68933c98f639e24eccb49bde0da\" pid:5287 exited_at:{seconds:1757455143 nanos:869927507}" Sep 9 21:59:05.218298 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 9 21:59:06.435562 containerd[1570]: time="2025-09-09T21:59:06.435486602Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebec9850d40522eb82f7db9923665f4341e681893156c2f5525a9eb5bb34b276\" id:\"aed20bab6c23b3c6422a6a1c5b656c36de743b90d81d37d6ad13c19f9053aceb\" pid:5363 exit_status:1 exited_at:{seconds:1757455146 nanos:434582301}" Sep 9 21:59:08.948157 containerd[1570]: time="2025-09-09T21:59:08.948070039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebec9850d40522eb82f7db9923665f4341e681893156c2f5525a9eb5bb34b276\" id:\"e57da8beace9957e779637595cae4f38b2100c489a32de5bc02b344ffce2cb35\" pid:5473 exit_status:1 exited_at:{seconds:1757455148 nanos:933455995}" Sep 9 21:59:11.456943 containerd[1570]: time="2025-09-09T21:59:11.456646647Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebec9850d40522eb82f7db9923665f4341e681893156c2f5525a9eb5bb34b276\" id:\"b03fff27a61fdab7cfbf07e7dd481c3767e7b50d7e3fd3469a8a10e8bc943482\" pid:5706 exit_status:1 exited_at:{seconds:1757455151 nanos:455303872}" Sep 9 21:59:12.875395 systemd-networkd[1465]: lxc_health: Link UP Sep 9 21:59:12.883473 systemd-networkd[1465]: lxc_health: Gained carrier Sep 9 21:59:14.165502 systemd-networkd[1465]: lxc_health: Gained IPv6LL Sep 9 21:59:14.466142 kubelet[2790]: I0909 21:59:14.462354 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cssl5" podStartSLOduration=19.462312239 podStartE2EDuration="19.462312239s" podCreationTimestamp="2025-09-09 21:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:59:04.500028844 +0000 UTC m=+289.383237580" watchObservedRunningTime="2025-09-09 21:59:14.462312239 +0000 UTC m=+299.345520945" Sep 9 21:59:14.582393 containerd[1570]: time="2025-09-09T21:59:14.582271154Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebec9850d40522eb82f7db9923665f4341e681893156c2f5525a9eb5bb34b276\" id:\"86ab9edf11f5ac886330a8edddfd8566d5cee4a6fa568a715cdb1d8365c8d7c4\" pid:5864 exited_at:{seconds:1757455154 nanos:581656792}" Sep 9 21:59:15.499005 containerd[1570]: time="2025-09-09T21:59:15.498938642Z" level=info msg="StopPodSandbox for \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\"" Sep 9 21:59:15.499421 containerd[1570]: time="2025-09-09T21:59:15.499389071Z" level=info msg="TearDown network for sandbox \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" successfully" Sep 9 21:59:15.499535 containerd[1570]: time="2025-09-09T21:59:15.499502157Z" level=info msg="StopPodSandbox for \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" returns successfully" Sep 9 21:59:15.500515 containerd[1570]: time="2025-09-09T21:59:15.500421280Z" level=info msg="RemovePodSandbox for 
\"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\"" Sep 9 21:59:15.516196 containerd[1570]: time="2025-09-09T21:59:15.514466590Z" level=info msg="Forcibly stopping sandbox \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\"" Sep 9 21:59:15.516196 containerd[1570]: time="2025-09-09T21:59:15.514777813Z" level=info msg="TearDown network for sandbox \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" successfully" Sep 9 21:59:15.522181 containerd[1570]: time="2025-09-09T21:59:15.521482822Z" level=info msg="Ensure that sandbox 5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5 in task-service has been cleanup successfully" Sep 9 21:59:15.594495 containerd[1570]: time="2025-09-09T21:59:15.594210441Z" level=info msg="RemovePodSandbox \"5ee394e96310175e796a73c27ed14cfd458bfab289a6e4020af21b82cc496ca5\" returns successfully" Sep 9 21:59:15.600452 containerd[1570]: time="2025-09-09T21:59:15.597201546Z" level=info msg="StopPodSandbox for \"ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4\"" Sep 9 21:59:15.600452 containerd[1570]: time="2025-09-09T21:59:15.597471440Z" level=info msg="TearDown network for sandbox \"ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4\" successfully" Sep 9 21:59:15.600452 containerd[1570]: time="2025-09-09T21:59:15.597488664Z" level=info msg="StopPodSandbox for \"ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4\" returns successfully" Sep 9 21:59:15.600452 containerd[1570]: time="2025-09-09T21:59:15.597787684Z" level=info msg="RemovePodSandbox for \"ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4\"" Sep 9 21:59:15.600452 containerd[1570]: time="2025-09-09T21:59:15.597812812Z" level=info msg="Forcibly stopping sandbox \"ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4\"" Sep 9 21:59:15.600452 containerd[1570]: time="2025-09-09T21:59:15.597904096Z" level=info msg="TearDown network for sandbox \"ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4\" successfully" Sep 9 21:59:15.610593 containerd[1570]: time="2025-09-09T21:59:15.607722944Z" level=info msg="Ensure that sandbox ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4 in task-service has been cleanup successfully" Sep 9 21:59:15.620303 containerd[1570]: time="2025-09-09T21:59:15.620027063Z" level=info msg="RemovePodSandbox \"ae579493583d7f6ea577c0576f1f9d1c4ac7a152e0e00d96d06e37d8838a66f4\" returns successfully" Sep 9 21:59:17.059612 containerd[1570]: time="2025-09-09T21:59:17.059103909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebec9850d40522eb82f7db9923665f4341e681893156c2f5525a9eb5bb34b276\" id:\"a45481101e3377f2daced7ad29201f140890a9815e99cf41a1f13df7067b40a5\" pid:5897 exited_at:{seconds:1757455157 nanos:52553134}" Sep 9 21:59:19.615704 containerd[1570]: time="2025-09-09T21:59:19.614609532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebec9850d40522eb82f7db9923665f4341e681893156c2f5525a9eb5bb34b276\" id:\"9d7388188379f03c6e961e76cc42bee6b51e7c6dd21e12818ed9c7d7d14229ac\" pid:5921 exited_at:{seconds:1757455159 nanos:613622252}" Sep 9 21:59:19.652098 kubelet[2790]: E0909 21:59:19.651388 2790 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34536->127.0.0.1:40737: write tcp 127.0.0.1:34536->127.0.0.1:40737: write: broken pipe Sep 9 21:59:19.808793 sshd[5021]: Connection closed by 10.0.0.1 port 42122 Sep 9 21:59:19.809728 sshd-session[5018]: 
pam_unix(sshd:session): session closed for user core Sep 9 21:59:19.836467 systemd[1]: sshd@40-10.0.0.15:22-10.0.0.1:42122.service: Deactivated successfully. Sep 9 21:59:19.841242 systemd[1]: session-40.scope: Deactivated successfully. Sep 9 21:59:19.852370 systemd-logind[1552]: Session 40 logged out. Waiting for processes to exit. Sep 9 21:59:19.854236 systemd-logind[1552]: Removed session 40. Sep 9 21:59:20.985648 containerd[1570]: time="2025-09-09T21:59:20.985479440Z" level=warning msg="container event discarded" container=e7c47a9cb52ee9e45be49cf08add54ae16d7e1d174075838db5b950a5b4cef5e type=CONTAINER_CREATED_EVENT Sep 9 21:59:20.985648 containerd[1570]: time="2025-09-09T21:59:20.985597494Z" level=warning msg="container event discarded" container=e7c47a9cb52ee9e45be49cf08add54ae16d7e1d174075838db5b950a5b4cef5e type=CONTAINER_STARTED_EVENT Sep 9 21:59:21.110804 containerd[1570]: time="2025-09-09T21:59:21.110316980Z" level=warning msg="container event discarded" container=6d72a67d501cbfa18bb4067dd558c9444c1e802b4d9abb89d957c02a898179b8 type=CONTAINER_CREATED_EVENT Sep 9 21:59:21.587984 containerd[1570]: time="2025-09-09T21:59:21.587320621Z" level=warning msg="container event discarded" container=6d72a67d501cbfa18bb4067dd558c9444c1e802b4d9abb89d957c02a898179b8 type=CONTAINER_STARTED_EVENT
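The pod_startup_latency_tracker line earlier reports podStartSLOduration=19.462312239 for cilium-cssl5. Both pulling timestamps are the zero time, consistent with the images already being present on the node, so the reported duration is simply observedRunningTime minus podCreationTimestamp. A short Go check of that arithmetic, with timestamps copied from that line:

    package main

    import (
        "fmt"
        "log"
        "time"
    )

    // layout matches the Go time.String() format used in the kubelet log line.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(v string) time.Time {
        t, err := time.Parse(layout, v)
        if err != nil {
            log.Fatal(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-09-09 21:58:55 +0000 UTC")            // podCreationTimestamp
        running := mustParse("2025-09-09 21:59:14.462312239 +0000 UTC")  // observedRunningTime

        // With no image pull window to subtract, the SLO duration is just
        // the difference between the two.
        fmt.Println(running.Sub(created)) // 19.462312239s
    }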