Sep 12 18:15:10.477540 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 15:35:29 -00 2025
Sep 12 18:15:10.477555 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ea81bd4228a6b9fed11f4ec3af9a6e9673be062592f47971c283403bcba44656
Sep 12 18:15:10.477561 kernel: BIOS-provided physical RAM map:
Sep 12 18:15:10.477567 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Sep 12 18:15:10.477570 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Sep 12 18:15:10.477574 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Sep 12 18:15:10.477579 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Sep 12 18:15:10.477584 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Sep 12 18:15:10.477588 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081a70fff] usable
Sep 12 18:15:10.477592 kernel: BIOS-e820: [mem 0x0000000081a71000-0x0000000081a71fff] ACPI NVS
Sep 12 18:15:10.477596 kernel: BIOS-e820: [mem 0x0000000081a72000-0x0000000081a72fff] reserved
Sep 12 18:15:10.477600 kernel: BIOS-e820: [mem 0x0000000081a73000-0x000000008afcdfff] usable
Sep 12 18:15:10.477605 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved
Sep 12 18:15:10.477610 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable
Sep 12 18:15:10.477615 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS
Sep 12 18:15:10.477622 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved
Sep 12 18:15:10.477645 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Sep 12 18:15:10.477650 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Sep 12 18:15:10.477654 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 18:15:10.477659 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Sep 12 18:15:10.477664 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Sep 12 18:15:10.477684 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Sep 12 18:15:10.477688 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Sep 12 18:15:10.477693 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Sep 12 18:15:10.477698 kernel: NX (Execute Disable) protection: active
Sep 12 18:15:10.477702 kernel: APIC: Static calls initialized
Sep 12 18:15:10.477707 kernel: SMBIOS 3.2.1 present.
Sep 12 18:15:10.477711 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 2.6 12/05/2024
Sep 12 18:15:10.477717 kernel: tsc: Detected 3400.000 MHz processor
Sep 12 18:15:10.477722 kernel: tsc: Detected 3399.906 MHz TSC
Sep 12 18:15:10.477726 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 18:15:10.477732 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 18:15:10.477736 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Sep 12 18:15:10.477741 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Sep 12 18:15:10.477746 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 18:15:10.477751 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Sep 12 18:15:10.477755 kernel: Using GB pages for direct mapping
Sep 12 18:15:10.477760 kernel: ACPI: Early table checksum verification disabled
Sep 12 18:15:10.477766 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Sep 12 18:15:10.477772 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Sep 12 18:15:10.477778 kernel: ACPI: FACP 0x000000008C58B5F0 000114 (v06 01072009 AMI 00010013)
Sep 12 18:15:10.477783 kernel: ACPI: DSDT 0x000000008C54F268 03C386 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Sep 12 18:15:10.477788 kernel: ACPI: FACS 0x000000008C66DF80 000040
Sep 12 18:15:10.477793 kernel: ACPI: APIC 0x000000008C58B708 00012C (v04 01072009 AMI 00010013)
Sep 12 18:15:10.477799 kernel: ACPI: FPDT 0x000000008C58B838 000044 (v01 01072009 AMI 00010013)
Sep 12 18:15:10.477804 kernel: ACPI: FIDT 0x000000008C58B880 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Sep 12 18:15:10.477810 kernel: ACPI: MCFG 0x000000008C58B920 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Sep 12 18:15:10.477815 kernel: ACPI: SPMI 0x000000008C58B960 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Sep 12 18:15:10.477820 kernel: ACPI: SSDT 0x000000008C58B9A8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Sep 12 18:15:10.477825 kernel: ACPI: SSDT 0x000000008C58D4C8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Sep 12 18:15:10.477830 kernel: ACPI: SSDT 0x000000008C590690 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Sep 12 18:15:10.477836 kernel: ACPI: HPET 0x000000008C5929C0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 12 18:15:10.477841 kernel: ACPI: SSDT 0x000000008C5929F8 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Sep 12 18:15:10.477846 kernel: ACPI: SSDT 0x000000008C5939A8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Sep 12 18:15:10.477851 kernel: ACPI: UEFI 0x000000008C5942A0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 12 18:15:10.477856 kernel: ACPI: LPIT 0x000000008C5942E8 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 12 18:15:10.477861 kernel: ACPI: SSDT 0x000000008C594380 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Sep 12 18:15:10.477866 kernel: ACPI: SSDT 0x000000008C596B60 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Sep 12 18:15:10.477871 kernel: ACPI: DBGP 0x000000008C598048 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 12 18:15:10.477876 kernel: ACPI: DBG2 0x000000008C598080 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Sep 12 18:15:10.477882 kernel: ACPI: SSDT 0x000000008C5980D8 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Sep 12 18:15:10.477887 kernel: ACPI: DMAR 0x000000008C599C40 000070 (v01 INTEL EDK2 00000002 01000013)
Sep 12 18:15:10.477892 kernel: ACPI: SSDT 0x000000008C599CB0 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Sep 12 18:15:10.477897 kernel: ACPI: TPM2 0x000000008C599DF8 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Sep 12 18:15:10.477902 kernel: ACPI: SSDT 0x000000008C599E30 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Sep 12 18:15:10.477907 kernel: ACPI: WSMT 0x000000008C59ABC0 000028 (v01 SUPERM 01072009 AMI 00010013)
Sep 12 18:15:10.477912 kernel: ACPI: EINJ 0x000000008C59ABE8 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Sep 12 18:15:10.477917 kernel: ACPI: ERST 0x000000008C59AD18 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Sep 12 18:15:10.477923 kernel: ACPI: BERT 0x000000008C59AF48 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Sep 12 18:15:10.477928 kernel: ACPI: HEST 0x000000008C59AF78 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Sep 12 18:15:10.477933 kernel: ACPI: SSDT 0x000000008C59B1F8 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Sep 12 18:15:10.477939 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b5f0-0x8c58b703]
Sep 12 18:15:10.477944 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b5ed]
Sep 12 18:15:10.477949 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf]
Sep 12 18:15:10.477954 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b708-0x8c58b833]
Sep 12 18:15:10.477959 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b838-0x8c58b87b]
Sep 12 18:15:10.477964 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b880-0x8c58b91b]
Sep 12 18:15:10.477970 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b920-0x8c58b95b]
Sep 12 18:15:10.477975 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b960-0x8c58b9a0]
Sep 12 18:15:10.477980 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58b9a8-0x8c58d4c3]
Sep 12 18:15:10.477985 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d4c8-0x8c59068d]
Sep 12 18:15:10.477990 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590690-0x8c5929ba]
Sep 12 18:15:10.477995 kernel: ACPI: Reserving HPET table memory at [mem 0x8c5929c0-0x8c5929f7]
Sep 12 18:15:10.478000 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5929f8-0x8c5939a5]
Sep 12 18:15:10.478005 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5939a8-0x8c59429e]
Sep 12 18:15:10.478010 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c5942a0-0x8c5942e1]
Sep 12 18:15:10.478016 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c5942e8-0x8c59437b]
Sep 12 18:15:10.478021 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594380-0x8c596b5d]
Sep 12 18:15:10.478026 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596b60-0x8c598041]
Sep 12 18:15:10.478031 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c598048-0x8c59807b]
Sep 12 18:15:10.478036 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598080-0x8c5980d3]
Sep 12 18:15:10.478041 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5980d8-0x8c599c3e]
Sep 12 18:15:10.478046 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599c40-0x8c599caf]
Sep 12 18:15:10.478051 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599cb0-0x8c599df3]
Sep 12 18:15:10.478056 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599df8-0x8c599e2b]
Sep 12 18:15:10.478061 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599e30-0x8c59abbe]
Sep 12 18:15:10.478067 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59abc0-0x8c59abe7]
Sep 12 18:15:10.478072 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59abe8-0x8c59ad17]
Sep 12 18:15:10.478077 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad18-0x8c59af47]
Sep 12 18:15:10.478082 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59af48-0x8c59af77]
Sep 12 18:15:10.478086 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59af78-0x8c59b1f3]
Sep 12 18:15:10.478091 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b1f8-0x8c59b359]
Sep 12 18:15:10.478097 kernel: No NUMA configuration found
Sep 12 18:15:10.478102 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Sep 12 18:15:10.478107 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Sep 12 18:15:10.478113 kernel: Zone ranges:
Sep 12 18:15:10.478118 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 18:15:10.478123 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 12 18:15:10.478128 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Sep 12 18:15:10.478133 kernel: Movable zone start for each node
Sep 12 18:15:10.478138 kernel: Early memory node ranges
Sep 12 18:15:10.478143 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Sep 12 18:15:10.478148 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Sep 12 18:15:10.478153 kernel: node 0: [mem 0x0000000040400000-0x0000000081a70fff]
Sep 12 18:15:10.478159 kernel: node 0: [mem 0x0000000081a73000-0x000000008afcdfff]
Sep 12 18:15:10.478164 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff]
Sep 12 18:15:10.478169 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Sep 12 18:15:10.478174 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Sep 12 18:15:10.478183 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Sep 12 18:15:10.478188 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 18:15:10.478194 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Sep 12 18:15:10.478199 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Sep 12 18:15:10.478206 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Sep 12 18:15:10.478211 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Sep 12 18:15:10.478216 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges
Sep 12 18:15:10.478222 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Sep 12 18:15:10.478227 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Sep 12 18:15:10.478232 kernel: ACPI: PM-Timer IO Port: 0x1808
Sep 12 18:15:10.478238 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Sep 12 18:15:10.478243 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Sep 12 18:15:10.478248 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Sep 12 18:15:10.478255 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Sep 12 18:15:10.478260 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Sep 12 18:15:10.478265 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Sep 12 18:15:10.478271 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Sep 12 18:15:10.478276 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Sep 12 18:15:10.478281 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Sep 12 18:15:10.478286 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Sep 12 18:15:10.478292 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Sep 12 18:15:10.478297 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Sep 12 18:15:10.478302 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Sep 12 18:15:10.478309 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Sep 12 18:15:10.478314 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Sep 12 18:15:10.478319 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Sep 12 18:15:10.478324 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Sep 12 18:15:10.478330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 18:15:10.478335 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 18:15:10.478341 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 18:15:10.478346 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 18:15:10.478351 kernel: TSC deadline timer available
Sep 12 18:15:10.478358 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Sep 12 18:15:10.478363 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Sep 12 18:15:10.478368 kernel: Booting paravirtualized kernel on bare hardware
Sep 12 18:15:10.478374 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 18:15:10.478379 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Sep 12 18:15:10.478385 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144
Sep 12 18:15:10.478390 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152
Sep 12 18:15:10.478395 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 12 18:15:10.478401 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ea81bd4228a6b9fed11f4ec3af9a6e9673be062592f47971c283403bcba44656
Sep 12 18:15:10.478408 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 18:15:10.478413 kernel: random: crng init done
Sep 12 18:15:10.478418 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Sep 12 18:15:10.478424 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Sep 12 18:15:10.478429 kernel: Fallback order for Node 0: 0
Sep 12 18:15:10.478434 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416
Sep 12 18:15:10.478440 kernel: Policy zone: Normal
Sep 12 18:15:10.478445 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 18:15:10.478451 kernel: software IO TLB: area num 16.
Sep 12 18:15:10.478457 kernel: Memory: 32718256K/33452984K available (14336K kernel code, 2293K rwdata, 22872K rodata, 43520K init, 1556K bss, 734468K reserved, 0K cma-reserved)
Sep 12 18:15:10.478462 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 12 18:15:10.478468 kernel: ftrace: allocating 37948 entries in 149 pages
Sep 12 18:15:10.478474 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 18:15:10.478479 kernel: Dynamic Preempt: voluntary
Sep 12 18:15:10.478484 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 18:15:10.478490 kernel: rcu: RCU event tracing is enabled.
Sep 12 18:15:10.478496 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 12 18:15:10.478502 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 18:15:10.478508 kernel: Rude variant of Tasks RCU enabled.
Sep 12 18:15:10.478513 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 18:15:10.478518 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 18:15:10.478524 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 12 18:15:10.478529 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Sep 12 18:15:10.478534 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 18:15:10.478540 kernel: Console: colour VGA+ 80x25
Sep 12 18:15:10.478545 kernel: printk: console [tty0] enabled
Sep 12 18:15:10.478551 kernel: printk: console [ttyS1] enabled
Sep 12 18:15:10.478556 kernel: ACPI: Core revision 20230628
Sep 12 18:15:10.478562 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Sep 12 18:15:10.478567 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 18:15:10.478573 kernel: DMAR: Host address width 39
Sep 12 18:15:10.478578 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Sep 12 18:15:10.478583 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Sep 12 18:15:10.478589 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
Sep 12 18:15:10.478594 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Sep 12 18:15:10.478600 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Sep 12 18:15:10.478606 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Sep 12 18:15:10.478611 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Sep 12 18:15:10.478655 kernel: x2apic enabled
Sep 12 18:15:10.478661 kernel: APIC: Switched APIC routing to: cluster x2apic
Sep 12 18:15:10.478682 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 18:15:10.478688 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Sep 12 18:15:10.478693 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Sep 12 18:15:10.478698 kernel: CPU0: Thermal monitoring enabled (TM1)
Sep 12 18:15:10.478705 kernel: process: using mwait in idle threads
Sep 12 18:15:10.478711 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 12 18:15:10.478716 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 12 18:15:10.478721 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 18:15:10.478727 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 12 18:15:10.478732 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 12 18:15:10.478737 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 12 18:15:10.478743 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 12 18:15:10.478748 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 12 18:15:10.478754 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 18:15:10.478760 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 18:15:10.478765 kernel: TAA: Mitigation: TSX disabled
Sep 12 18:15:10.478771 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Sep 12 18:15:10.478776 kernel: SRBDS: Mitigation: Microcode
Sep 12 18:15:10.478781 kernel: GDS: Mitigation: Microcode
Sep 12 18:15:10.478787 kernel: active return thunk: its_return_thunk
Sep 12 18:15:10.478792 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 18:15:10.478797 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace
Sep 12 18:15:10.478804 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 18:15:10.478809 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 18:15:10.478814 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 18:15:10.478820 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 12 18:15:10.478825 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 12 18:15:10.478831 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 18:15:10.478836 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 12 18:15:10.478841 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 12 18:15:10.478847 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Sep 12 18:15:10.478853 kernel: Freeing SMP alternatives memory: 32K
Sep 12 18:15:10.478859 kernel: pid_max: default: 32768 minimum: 301
Sep 12 18:15:10.478864 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 18:15:10.478869 kernel: landlock: Up and running.
Sep 12 18:15:10.478875 kernel: SELinux: Initializing.
Sep 12 18:15:10.478880 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 18:15:10.478886 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 18:15:10.478891 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 12 18:15:10.478896 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 12 18:15:10.478903 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 12 18:15:10.478908 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 12 18:15:10.478914 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Sep 12 18:15:10.478919 kernel: ... version: 4
Sep 12 18:15:10.478925 kernel: ... bit width: 48
Sep 12 18:15:10.478930 kernel: ... generic registers: 4
Sep 12 18:15:10.478935 kernel: ... value mask: 0000ffffffffffff
Sep 12 18:15:10.478941 kernel: ... max period: 00007fffffffffff
Sep 12 18:15:10.478946 kernel: ... fixed-purpose events: 3
Sep 12 18:15:10.478952 kernel: ... event mask: 000000070000000f
Sep 12 18:15:10.478958 kernel: signal: max sigframe size: 2032
Sep 12 18:15:10.478963 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Sep 12 18:15:10.478969 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 18:15:10.478974 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 18:15:10.478980 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Sep 12 18:15:10.478985 kernel: smp: Bringing up secondary CPUs ...
Sep 12 18:15:10.478990 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 18:15:10.478996 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Sep 12 18:15:10.479003 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 12 18:15:10.479008 kernel: smp: Brought up 1 node, 16 CPUs
Sep 12 18:15:10.479014 kernel: smpboot: Max logical packages: 1
Sep 12 18:15:10.479019 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Sep 12 18:15:10.479024 kernel: devtmpfs: initialized
Sep 12 18:15:10.479030 kernel: x86/mm: Memory block size: 128MB
Sep 12 18:15:10.479035 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81a71000-0x81a71fff] (4096 bytes)
Sep 12 18:15:10.479040 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes)
Sep 12 18:15:10.479047 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 18:15:10.479053 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 12 18:15:10.479058 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 18:15:10.479063 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 18:15:10.479069 kernel: audit: initializing netlink subsys (disabled)
Sep 12 18:15:10.479074 kernel: audit: type=2000 audit(1757700905.132:1): state=initialized audit_enabled=0 res=1
Sep 12 18:15:10.479079 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 18:15:10.479085 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 18:15:10.479090 kernel: cpuidle: using governor menu
Sep 12 18:15:10.479096 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 18:15:10.479102 kernel: dca service started, version 1.12.1
Sep 12 18:15:10.479107 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Sep 12 18:15:10.479112 kernel: PCI: Using configuration type 1 for base access
Sep 12 18:15:10.479118 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Sep 12 18:15:10.479123 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 18:15:10.479128 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 18:15:10.479134 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 18:15:10.479139 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 18:15:10.479145 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 18:15:10.479151 kernel: ACPI: Added _OSI(Module Device)
Sep 12 18:15:10.479156 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 18:15:10.479162 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 18:15:10.479167 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Sep 12 18:15:10.479172 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 18:15:10.479178 kernel: ACPI: SSDT 0xFFFF8A5541B33C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Sep 12 18:15:10.479183 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 18:15:10.479189 kernel: ACPI: SSDT 0xFFFF8A5541B2A000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Sep 12 18:15:10.479195 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 18:15:10.479201 kernel: ACPI: SSDT 0xFFFF8A5540247600 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Sep 12 18:15:10.479206 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 18:15:10.479211 kernel: ACPI: SSDT 0xFFFF8A5541B2E000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Sep 12 18:15:10.479217 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 18:15:10.479222 kernel: ACPI: SSDT 0xFFFF8A554012D000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Sep 12 18:15:10.479227 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 18:15:10.479233 kernel: ACPI: SSDT 0xFFFF8A5541B36400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Sep 12 18:15:10.479238 kernel: ACPI: _OSC evaluated successfully for all CPUs
Sep 12 18:15:10.479243 kernel: ACPI: Interpreter enabled
Sep 12 18:15:10.479250 kernel: ACPI: PM: (supports S0 S5)
Sep 12 18:15:10.479255 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 18:15:10.479260 kernel: HEST: Enabling Firmware First mode for corrected errors.
Sep 12 18:15:10.479266 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Sep 12 18:15:10.479271 kernel: HEST: Table parsing has been initialized.
Sep 12 18:15:10.479277 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Sep 12 18:15:10.479282 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 18:15:10.479287 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 12 18:15:10.479293 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Sep 12 18:15:10.479299 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Sep 12 18:15:10.479305 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Sep 12 18:15:10.479310 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Sep 12 18:15:10.479315 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Sep 12 18:15:10.479321 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Sep 12 18:15:10.479326 kernel: ACPI: \_TZ_.FN00: New power resource
Sep 12 18:15:10.479332 kernel: ACPI: \_TZ_.FN01: New power resource
Sep 12 18:15:10.479337 kernel: ACPI: \_TZ_.FN02: New power resource
Sep 12 18:15:10.479342 kernel: ACPI: \_TZ_.FN03: New power resource
Sep 12 18:15:10.479349 kernel: ACPI: \_TZ_.FN04: New power resource
Sep 12 18:15:10.479354 kernel: ACPI: \PIN_: New power resource
Sep 12 18:15:10.479359 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Sep 12 18:15:10.479439 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 18:15:10.479491 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Sep 12 18:15:10.479540 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Sep 12 18:15:10.479548 kernel: PCI host bridge to bus 0000:00
Sep 12 18:15:10.479602 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 18:15:10.479672 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 18:15:10.479731 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 18:15:10.479774 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Sep 12 18:15:10.479816 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Sep 12 18:15:10.479859 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Sep 12 18:15:10.479918 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Sep 12 18:15:10.479983 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Sep 12 18:15:10.480036 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Sep 12 18:15:10.480091 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Sep 12 18:15:10.480143 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Sep 12 18:15:10.480197 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Sep 12 18:15:10.480247 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Sep 12 18:15:10.480303 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Sep 12 18:15:10.480354 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Sep 12 18:15:10.480407 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Sep 12 18:15:10.480457 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Sep 12 18:15:10.480507 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Sep 12 18:15:10.480559 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Sep 12 18:15:10.480612 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Sep 12 18:15:10.480698 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Sep 12 18:15:10.480754 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Sep 12 18:15:10.480802 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 12 18:15:10.480856 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Sep 12 18:15:10.480906 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 12 18:15:10.480968 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Sep 12 18:15:10.481018 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Sep 12 18:15:10.481066 kernel: pci 0000:00:16.0: PME# supported from D3hot
Sep 12 18:15:10.481121 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Sep 12 18:15:10.481170 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Sep 12 18:15:10.481220 kernel: pci 0000:00:16.1: PME# supported from D3hot
Sep 12 18:15:10.481273 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Sep 12 18:15:10.481325 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Sep 12 18:15:10.481375 kernel: pci 0000:00:16.4: PME# supported from D3hot
Sep 12 18:15:10.481427 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Sep 12 18:15:10.481477 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Sep 12 18:15:10.481525 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Sep 12 18:15:10.481575 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Sep 12 18:15:10.481629 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Sep 12 18:15:10.481715 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Sep 12 18:15:10.481765 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Sep 12 18:15:10.481813 kernel: pci 0000:00:17.0: PME# supported from D3hot
Sep 12 18:15:10.481871 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Sep 12 18:15:10.481923 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Sep 12 18:15:10.481978 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Sep 12 18:15:10.482028 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Sep 12 18:15:10.482084 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Sep 12 18:15:10.482135 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Sep 12 18:15:10.482190 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Sep 12 18:15:10.482243 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Sep 12 18:15:10.482297 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Sep 12 18:15:10.482347 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Sep 12 18:15:10.482400 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Sep 12 18:15:10.482452 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 12 18:15:10.482505 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Sep 12 18:15:10.482564 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Sep 12 18:15:10.482615 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Sep 12 18:15:10.482702 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Sep 12 18:15:10.482756 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Sep 12 18:15:10.482805 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Sep 12 18:15:10.482857 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 12 18:15:10.482914 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Sep 12 18:15:10.482967 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Sep 12 18:15:10.483018 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Sep 12 18:15:10.483068 kernel: pci 0000:02:00.0: PME# supported from D3cold
Sep 12 18:15:10.483119 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 12 18:15:10.483170 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 12 18:15:10.483229 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Sep 12 18:15:10.483283 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Sep 12 18:15:10.483333 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Sep 12 18:15:10.483383 kernel: pci 0000:02:00.1: PME# supported from D3cold
Sep 12 18:15:10.483435 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 12 18:15:10.483487 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 12 18:15:10.483538 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Sep 12 18:15:10.483587 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff]
Sep 12 18:15:10.483658 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Sep 12 18:15:10.483725 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Sep 12 18:15:10.483780 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Sep 12 18:15:10.483832 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Sep 12 18:15:10.483883 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Sep 12 18:15:10.483934 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Sep 12 18:15:10.483984 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Sep 12 18:15:10.484034 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Sep 12 18:15:10.484087 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Sep 12 18:15:10.484136 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Sep 12 18:15:10.484184 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Sep 12 18:15:10.484240 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect
Sep 12 18:15:10.484293 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
Sep 12 18:15:10.484343 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Sep 12 18:15:10.484394 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f]
Sep 12 18:15:10.484446 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Sep 12 18:15:10.484497 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
Sep 12 18:15:10.484549 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Sep 12 18:15:10.484598 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Sep 12 18:15:10.484668 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Sep 12 18:15:10.484750 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Sep 12 18:15:10.484825 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400
Sep 12 18:15:10.484877 kernel: pci 0000:07:00.0: enabling Extended Tags
Sep 12 18:15:10.484930 kernel: pci 0000:07:00.0: supports D1 D2
Sep 12 18:15:10.484982 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 18:15:10.485032 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Sep 12 18:15:10.485082 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Sep 12 18:15:10.485131 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff]
Sep 12 18:15:10.485186 kernel: pci_bus 0000:08: extended config space not accessible
Sep 12 18:15:10.485248 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000
Sep 12 18:15:10.485303 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Sep 12 18:15:10.485357 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Sep 12 18:15:10.485409 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f]
Sep 12 18:15:10.485462 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 18:15:10.485515 kernel: pci 0000:08:00.0: supports D1 D2
Sep 12 18:15:10.485567 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 18:15:10.485687 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Sep 12 18:15:10.485742 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Sep 12 18:15:10.485796 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff]
Sep 12 18:15:10.485807 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Sep 12 18:15:10.485813 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Sep 12 18:15:10.485819 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Sep 12 18:15:10.485825 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Sep 12 18:15:10.485830 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Sep 12 18:15:10.485836 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Sep 12 18:15:10.485842 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Sep 12 18:15:10.485849 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Sep 12 18:15:10.485854 kernel: iommu: Default domain type: Translated
Sep 12 18:15:10.485860 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 18:15:10.485866 kernel: PCI: Using ACPI for IRQ routing
Sep 12 18:15:10.485871 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 18:15:10.485877 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Sep 12 18:15:10.485883 kernel: e820: reserve RAM buffer [mem 0x81a71000-0x83ffffff]
Sep 12 18:15:10.485888 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff]
Sep 12 18:15:10.485893 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff]
Sep 12 18:15:10.485900 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Sep 12 18:15:10.485906 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Sep 12 18:15:10.485958 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device
Sep 12 18:15:10.486011 kernel: pci 0000:08:00.0: vgaarb: bridge control possible
Sep 12 18:15:10.486066 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 18:15:10.486074 kernel: vgaarb: loaded
Sep 12 18:15:10.486080 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 12 18:15:10.486086 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter
Sep 12 18:15:10.486092 kernel: clocksource: Switched to clocksource tsc-early
Sep 12 18:15:10.486099 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 18:15:10.486105 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 18:15:10.486111 kernel: pnp: PnP ACPI init
Sep 12 18:15:10.486164 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Sep 12 18:15:10.486214 kernel: pnp 00:02: [dma 0 disabled]
Sep 12 18:15:10.486267 kernel: pnp 00:03: [dma 0 disabled]
Sep 12 18:15:10.486315 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Sep 12 18:15:10.486364 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Sep 12 18:15:10.486412 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved
Sep 12 18:15:10.486459 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved
Sep 12 18:15:10.486504 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved
Sep 12 18:15:10.486550 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved
Sep 12 18:15:10.486594 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved
Sep 12 18:15:10.486676 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved
Sep 12 18:15:10.486722 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved
Sep 12 18:15:10.486767 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved
Sep 12 18:15:10.486816 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved
Sep 12 18:15:10.486861 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved
Sep 12 18:15:10.486906 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Sep 12 18:15:10.486950 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved
Sep 12 18:15:10.486998 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved
Sep 12 18:15:10.487044 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved
Sep 12 18:15:10.487089 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved
Sep 12 18:15:10.487138 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved
Sep 12 18:15:10.487147 kernel: pnp: PnP ACPI: found 9 devices
Sep 12 18:15:10.487153 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 18:15:10.487159 kernel: NET: Registered PF_INET protocol family
Sep 12 18:15:10.487165 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 18:15:10.487172 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Sep 12 18:15:10.487178 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 18:15:10.487184 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 18:15:10.487189 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 12 18:15:10.487195 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Sep 12 18:15:10.487201 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 18:15:10.487206 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 18:15:10.487212 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 18:15:10.487219 kernel: NET: Registered PF_XDP protocol family
Sep 12 18:15:10.487268 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Sep 12 18:15:10.487319 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Sep 12 18:15:10.487369 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Sep 12 18:15:10.487419 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 12 18:15:10.487472 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Sep 12 18:15:10.487523 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Sep 12 18:15:10.487576 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Sep 12 18:15:10.487630 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Sep 12 18:15:10.487721 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Sep 12 18:15:10.487770 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff]
Sep 12 18:15:10.487821 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Sep 12 18:15:10.487871 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Sep 12 18:15:10.487924 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Sep 12 18:15:10.487973 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Sep 12 18:15:10.488022 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Sep 12 18:15:10.488072 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Sep 12 18:15:10.488121 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Sep 12 18:15:10.488174 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Sep 12 18:15:10.488224 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Sep 12 18:15:10.488276 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Sep 12 18:15:10.488327 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Sep 12 18:15:10.488380 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff]
Sep 12 18:15:10.488430 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Sep 12 18:15:10.488479 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Sep 12 18:15:10.488528 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff]
Sep 12 18:15:10.488573 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Sep 12 18:15:10.488620 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 18:15:10.488699 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 18:15:10.488742 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 18:15:10.488788 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Sep 12 18:15:10.488831 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Sep 12 18:15:10.488881 kernel: pci_bus 0000:02: resource 1 [mem 0x95100000-0x952fffff]
Sep 12 18:15:10.488926 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Sep 12 18:15:10.488978 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff]
Sep 12 18:15:10.489024 kernel: pci_bus 0000:04: resource 1 [mem 0x95400000-0x954fffff]
Sep 12 18:15:10.489075 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Sep 12 18:15:10.489122 kernel: pci_bus 0000:05: resource 1 [mem 0x95300000-0x953fffff]
Sep 12 18:15:10.489172 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Sep 12 18:15:10.489218 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Sep 12 18:15:10.489265 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff]
Sep 12 18:15:10.489313 kernel: pci_bus 0000:08: resource 1 [mem 0x94000000-0x950fffff]
Sep 12 18:15:10.489321 kernel: PCI: CLS 64 bytes, default 64
Sep 12 18:15:10.489327 kernel: DMAR: No ATSR found
Sep 12 18:15:10.489334 kernel: DMAR: No SATC found
Sep 12 18:15:10.489340 kernel: DMAR: dmar0: Using Queued invalidation
Sep 12 18:15:10.489390 kernel: pci 0000:00:00.0: Adding to iommu group 0
Sep 12 18:15:10.489442 kernel: pci 0000:00:01.0: Adding to iommu group 1
Sep 12 18:15:10.489492 kernel: pci 0000:00:01.1: Adding to iommu group 1
Sep 12 18:15:10.489542 kernel: pci 0000:00:08.0: Adding to iommu group 2
Sep 12 18:15:10.489592 kernel: pci 0000:00:12.0: Adding to iommu group 3
Sep 12 18:15:10.489666 kernel: pci 0000:00:14.0: Adding to iommu group 4
Sep 12 18:15:10.489733 kernel: pci 0000:00:14.2: Adding to iommu group 4
Sep 12 18:15:10.489782 kernel: pci 0000:00:15.0: Adding to iommu group 5
Sep 12 18:15:10.489831 kernel: pci 0000:00:15.1: Adding to iommu group 5
Sep 12 18:15:10.489881 kernel: pci 0000:00:16.0: Adding to iommu group 6
Sep 12 18:15:10.489931 kernel: pci 0000:00:16.1: Adding to iommu group 6
Sep 12 18:15:10.489981 kernel: pci 0000:00:16.4: Adding to iommu group 6
Sep 12 18:15:10.490030 kernel: pci 0000:00:17.0: Adding to iommu group 7
Sep 12 18:15:10.490081 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Sep 12 18:15:10.490133 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Sep 12 18:15:10.490183 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Sep 12 18:15:10.490232 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Sep 12 18:15:10.490283 kernel: pci 0000:00:1c.1: Adding to iommu group 12
Sep 12 18:15:10.490333 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Sep 12 18:15:10.490383 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Sep 12 18:15:10.490433 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Sep 12 18:15:10.490482 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Sep 12 18:15:10.490537 kernel: pci 0000:02:00.0: Adding to iommu group 1
Sep 12 18:15:10.490588 kernel: pci 0000:02:00.1: Adding to iommu group 1
Sep 12 18:15:10.490664 kernel: pci 0000:04:00.0: Adding to iommu group 15
Sep 12 18:15:10.490732 kernel: pci 0000:05:00.0: Adding to iommu group 16
Sep 12 18:15:10.490783 kernel: pci 0000:07:00.0: Adding to iommu group 17
Sep 12 18:15:10.490837 kernel: pci 0000:08:00.0: Adding to iommu group 17
Sep 12 18:15:10.490846 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Sep 12 18:15:10.490852 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 12 18:15:10.490857 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB)
Sep 12 18:15:10.490865 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Sep 12 18:15:10.490871 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Sep 12 18:15:10.490877 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Sep 12 18:15:10.490883 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Sep 12 18:15:10.490936 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Sep 12 18:15:10.490945 kernel: Initialise system trusted keyrings
Sep 12 18:15:10.490951 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Sep 12 18:15:10.490957 kernel: Key type asymmetric registered
Sep 12 18:15:10.490964 kernel: Asymmetric key parser 'x509' registered
Sep 12 18:15:10.490970 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 18:15:10.490976 kernel: io scheduler mq-deadline registered
Sep 12 18:15:10.490981 kernel: io scheduler kyber registered
Sep 12 18:15:10.490987 kernel: io scheduler bfq registered
Sep 12 18:15:10.491037 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Sep 12 18:15:10.491087 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 122
Sep 12 18:15:10.491139 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 123
Sep 12 18:15:10.491192 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 124
Sep 12 18:15:10.491242 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 125
Sep 12 18:15:10.491292 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 126
Sep 12 18:15:10.491342 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 127
Sep 12 18:15:10.491397 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Sep 12 18:15:10.491406 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Sep 12 18:15:10.491412 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Sep 12 18:15:10.491418 kernel: pstore: Using crash dump compression: deflate
Sep 12 18:15:10.491425 kernel: pstore: Registered erst as persistent store backend
Sep 12 18:15:10.491431 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 18:15:10.491437 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 18:15:10.491443 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 18:15:10.491448 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 12 18:15:10.491501 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Sep 12 18:15:10.491510 kernel: i8042: PNP: No PS/2 controller found.
Sep 12 18:15:10.491555 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Sep 12 18:15:10.491605 kernel: rtc_cmos rtc_cmos: registered as rtc0
Sep 12 18:15:10.491692 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-09-12T18:15:09 UTC (1757700909)
Sep 12 18:15:10.491739 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Sep 12 18:15:10.491747 kernel: intel_pstate: Intel P-state driver initializing
Sep 12 18:15:10.491753 kernel: intel_pstate: Disabling energy efficiency optimization
Sep 12 18:15:10.491759 kernel: intel_pstate: HWP enabled
Sep 12 18:15:10.491765 kernel: NET: Registered PF_INET6 protocol family
Sep 12 18:15:10.491771 kernel: Segment Routing with IPv6
Sep 12 18:15:10.491778 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 18:15:10.491784 kernel: NET: Registered PF_PACKET protocol family
Sep 12 18:15:10.491790 kernel: Key type dns_resolver registered
Sep 12 18:15:10.491795 kernel: microcode: Current revision: 0x00000102
Sep 12 18:15:10.491801 kernel: microcode: Microcode Update Driver: v2.2.
Sep 12 18:15:10.491807 kernel: IPI shorthand broadcast: enabled
Sep 12 18:15:10.491812 kernel: sched_clock: Marking stable (1644130782, 1435140939)->(4569838632, -1490566911)
Sep 12 18:15:10.491818 kernel: registered taskstats version 1
Sep 12 18:15:10.491824 kernel: Loading compiled-in X.509 certificates
Sep 12 18:15:10.491830 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: d1d9e065fdbec39026aa56a07626d6d91ab4fce4'
Sep 12 18:15:10.491836 kernel: Key type .fscrypt registered
Sep 12 18:15:10.491841 kernel: Key type fscrypt-provisioning registered
Sep 12 18:15:10.491847 kernel: ima: Allocated hash algorithm: sha1
Sep 12 18:15:10.491853 kernel: ima: No architecture policies found
Sep 12 18:15:10.491858 kernel: clk: Disabling unused clocks
Sep 12 18:15:10.491864 kernel: Freeing unused kernel image (initmem) memory: 43520K
Sep 12 18:15:10.491870 kernel: Write protecting the kernel read-only data: 38912k
Sep 12 18:15:10.491876 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Sep 12 18:15:10.491882 kernel: Run /init as init process
Sep 12 18:15:10.491888 kernel: with arguments:
Sep 12 18:15:10.491894 kernel: /init
Sep 12 18:15:10.491899 kernel: with environment:
Sep 12 18:15:10.491905 kernel: HOME=/
Sep 12 18:15:10.491910 kernel: TERM=linux
Sep 12 18:15:10.491916 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 18:15:10.491922 systemd[1]: Successfully made /usr/ read-only.
Sep 12 18:15:10.491931 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 18:15:10.491938 systemd[1]: Detected architecture x86-64.
Sep 12 18:15:10.491943 systemd[1]: Running in initrd.
Sep 12 18:15:10.491949 systemd[1]: No hostname configured, using default hostname.
Sep 12 18:15:10.491955 systemd[1]: Hostname set to .
Sep 12 18:15:10.491961 systemd[1]: Initializing machine ID from random generator.
Sep 12 18:15:10.491967 systemd[1]: Queued start job for default target initrd.target.
Sep 12 18:15:10.491973 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 18:15:10.491980 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 18:15:10.491986 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 18:15:10.491992 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 18:15:10.491998 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 18:15:10.492005 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 18:15:10.492011 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 18:15:10.492019 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 18:15:10.492025 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 18:15:10.492031 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 18:15:10.492037 systemd[1]: Reached target paths.target - Path Units.
Sep 12 18:15:10.492043 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 18:15:10.492049 systemd[1]: Reached target swap.target - Swaps.
Sep 12 18:15:10.492055 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 18:15:10.492061 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 18:15:10.492067 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 18:15:10.492074 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 18:15:10.492080 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 18:15:10.492086 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 18:15:10.492092 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 18:15:10.492098 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 18:15:10.492104 kernel: tsc: Refined TSC clocksource calibration: 3408.019 MHz
Sep 12 18:15:10.492110 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fe57a0f5, max_idle_ns: 440795229317 ns
Sep 12 18:15:10.492116 kernel: clocksource: Switched to clocksource tsc
Sep 12 18:15:10.492122 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 18:15:10.492129 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 18:15:10.492135 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 18:15:10.492141 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 18:15:10.492146 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 18:15:10.492152 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 18:15:10.492169 systemd-journald[268]: Collecting audit messages is disabled. Sep 12 18:15:10.492185 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 18:15:10.492191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 18:15:10.492198 systemd-journald[268]: Journal started Sep 12 18:15:10.492211 systemd-journald[268]: Runtime Journal (/run/log/journal/69bc7d365c2e4202a109de556dfffce3) is 8M, max 639.9M, 631.9M free. Sep 12 18:15:10.512426 systemd-modules-load[271]: Inserted module 'overlay' Sep 12 18:15:10.521624 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 18:15:10.521848 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 18:15:10.559226 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 18:15:10.559239 kernel: Bridge firewalling registered Sep 12 18:15:10.537856 systemd-modules-load[271]: Inserted module 'br_netfilter' Sep 12 18:15:10.559269 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 18:15:10.599452 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 18:15:10.618364 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 18:15:10.636278 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 18:15:10.670850 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 18:15:10.671276 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 18:15:10.671650 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 18:15:10.672049 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 18:15:10.675705 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 18:15:10.676381 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 18:15:10.676512 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 18:15:10.677169 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 18:15:10.678111 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 18:15:10.692064 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 18:15:10.699433 systemd-resolved[306]: Positive Trust Anchors: Sep 12 18:15:10.699441 systemd-resolved[306]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 18:15:10.699469 systemd-resolved[306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 18:15:10.701272 systemd-resolved[306]: Defaulting to hostname 'linux'. Sep 12 18:15:10.724954 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 18:15:10.742963 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 18:15:10.766147 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 18:15:10.846025 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 18:15:10.921222 dracut-cmdline[311]: dracut-dracut-053 Sep 12 18:15:10.928902 dracut-cmdline[311]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ea81bd4228a6b9fed11f4ec3af9a6e9673be062592f47971c283403bcba44656 Sep 12 18:15:11.116650 kernel: SCSI subsystem initialized Sep 12 18:15:11.129654 kernel: Loading iSCSI transport class v2.0-870. Sep 12 18:15:11.141670 kernel: iscsi: registered transport (tcp) Sep 12 18:15:11.163570 kernel: iscsi: registered transport (qla4xxx) Sep 12 18:15:11.163588 kernel: QLogic iSCSI HBA Driver Sep 12 18:15:11.185887 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 18:15:11.212898 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 18:15:11.248509 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 18:15:11.248529 kernel: device-mapper: uevent: version 1.0.3 Sep 12 18:15:11.257322 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 18:15:11.292673 kernel: raid6: avx2x4 gen() 47135 MB/s Sep 12 18:15:11.313678 kernel: raid6: avx2x2 gen() 53813 MB/s Sep 12 18:15:11.339743 kernel: raid6: avx2x1 gen() 45249 MB/s Sep 12 18:15:11.339763 kernel: raid6: using algorithm avx2x2 gen() 53813 MB/s Sep 12 18:15:11.366850 kernel: raid6: .... xor() 32674 MB/s, rmw enabled Sep 12 18:15:11.366867 kernel: raid6: using avx2x2 recovery algorithm Sep 12 18:15:11.387652 kernel: xor: automatically using best checksumming function avx Sep 12 18:15:11.487628 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 18:15:11.493021 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 18:15:11.512933 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 18:15:11.522890 systemd-udevd[496]: Using default interface naming scheme 'v255'. Sep 12 18:15:11.525857 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
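Annotation: dracut-cmdline above echoes the kernel command line parameters it is working from. A small Python sketch of splitting such a line into key/value pairs the way an initrd hook might inspect it (the string below is a shortened stand-in; on a live system the same text is read from /proc/cmdline, and a real parser also handles quoting and repeated keys):

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into {key: value} pairs;
    bare flags (e.g. 'flatcar.autologin') map to an empty string."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else ""
    return params

cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "root=LABEL=ROOT console=ttyS1,115200n8 flatcar.first_boot=detected "
           "flatcar.oem.id=packet flatcar.autologin")
params = parse_cmdline(cmdline)
print(params["flatcar.oem.id"])   # packet
print(params["root"])             # LABEL=ROOT
```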
Sep 12 18:15:11.549724 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 18:15:11.601922 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Sep 12 18:15:11.619466 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 18:15:11.645985 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 18:15:11.708191 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 18:15:11.744465 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 12 18:15:11.744481 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 12 18:15:11.744490 kernel: PTP clock support registered Sep 12 18:15:11.744497 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 18:15:11.744507 kernel: libata version 3.00 loaded. Sep 12 18:15:11.734841 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 18:15:11.751494 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 18:15:11.796727 kernel: ACPI: bus type USB registered Sep 12 18:15:11.796742 kernel: usbcore: registered new interface driver usbfs Sep 12 18:15:11.796750 kernel: usbcore: registered new interface driver hub Sep 12 18:15:11.796757 kernel: usbcore: registered new device driver usb Sep 12 18:15:11.796771 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 18:15:11.796784 kernel: AES CTR mode by8 optimization enabled Sep 12 18:15:11.796798 kernel: ahci 0000:00:17.0: version 3.0 Sep 12 18:15:11.751613 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 18:15:12.060826 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 12 18:15:12.060846 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Sep 12 18:15:12.060854 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Sep 12 18:15:12.060986 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Sep 12 18:15:12.061120 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 12 18:15:12.061254 kernel: igb 0000:04:00.0: added PHC on eth0 Sep 12 18:15:12.061351 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Sep 12 18:15:12.061425 kernel: scsi host0: ahci Sep 12 18:15:12.061494 kernel: scsi host1: ahci Sep 12 18:15:12.061556 kernel: scsi host2: ahci Sep 12 18:15:12.061623 kernel: scsi host3: ahci Sep 12 18:15:12.061685 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 12 18:15:12.061751 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:08:6a Sep 12 18:15:12.061817 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Sep 12 18:15:12.061881 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Sep 12 18:15:12.061947 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Sep 12 18:15:12.062012 kernel: scsi host4: ahci Sep 12 18:15:12.062074 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 12 18:15:12.062136 kernel: scsi host5: ahci Sep 12 18:15:12.062197 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Sep 12 18:15:12.062259 kernel: scsi host6: ahci Sep 12 18:15:12.062319 kernel: igb 0000:05:00.0: added PHC on eth1 Sep 12 18:15:12.062387 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 12 18:15:12.062452 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:08:6b Sep 12 18:15:12.062516 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Sep 12 18:15:12.062580 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 12 18:15:12.062657 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Sep 12 18:15:12.062721 kernel: scsi host7: ahci Sep 12 18:15:12.062785 kernel: hub 1-0:1.0: USB hub found Sep 12 18:15:12.062860 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Sep 12 18:15:12.062926 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 128 Sep 12 18:15:12.062935 kernel: hub 1-0:1.0: 16 ports detected Sep 12 18:15:12.063003 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 128 Sep 12 18:15:12.063011 kernel: hub 2-0:1.0: USB hub found Sep 12 18:15:12.063082 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 128 Sep 12 18:15:12.063090 kernel: hub 2-0:1.0: 10 ports detected Sep 12 18:15:12.063157 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 128 Sep 12 18:15:12.063167 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 128 Sep 12 18:15:12.063175 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 128 Sep 12 18:15:12.063182 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 128 Sep 12 18:15:12.063189 kernel: ata8: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516480 irq 128 Sep 12 18:15:12.060770 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 18:15:12.107739 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Sep 12 18:15:12.060796 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 18:15:12.060897 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 18:15:12.131103 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 18:15:12.165676 kernel: mlx5_core 0000:02:00.0: firmware version: 14.31.1014 Sep 12 18:15:12.165853 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 12 18:15:12.158145 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 18:15:12.181862 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Sep 12 18:15:12.181955 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 18:15:12.182248 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 18:15:12.217137 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 18:15:12.233602 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 12 18:15:12.253730 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 18:15:12.265791 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 18:15:12.309143 kernel: hub 1-14:1.0: USB hub found Sep 12 18:15:12.309242 kernel: hub 1-14:1.0: 4 ports detected Sep 12 18:15:12.292814 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 18:15:12.366574 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 18:15:12.366590 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 12 18:15:12.366601 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 12 18:15:12.366610 kernel: ata8: SATA link down (SStatus 0 SControl 300) Sep 12 18:15:12.366622 kernel: ata7: SATA link down (SStatus 0 SControl 300) Sep 12 18:15:12.330385 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 18:15:12.456666 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 12 18:15:12.456681 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 18:15:12.456689 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 18:15:12.456697 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 12 18:15:12.456704 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 12 18:15:12.456712 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 12 18:15:12.456794 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 12 18:15:12.456803 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Sep 12 18:15:12.456901 kernel: ata2.00: Features: NCQ-prio Sep 12 18:15:12.456912 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Sep 12 18:15:12.456990 kernel: ata1.00: Features: NCQ-prio Sep 12 18:15:12.372637 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 18:15:12.500715 kernel: ata2.00: configured for UDMA/133 Sep 12 18:15:12.500746 kernel: ata1.00: configured for UDMA/133 Sep 12 18:15:12.500767 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 12 18:15:12.500992 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 12 18:15:12.518311 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
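Annotation: the Micron drives identify themselves above with 937703088 sectors; the sd driver lines just below print this as "480 GB/447 GiB", which is the same quantity expressed in decimal and binary units. A quick check:

```python
sectors = 937_703_088          # reported by ata1.00 / ata2.00 above
logical_block = 512            # 512-byte logical blocks

size_bytes = sectors * logical_block
print(size_bytes)                    # 480103981056
print(round(size_bytes / 10**9))     # 480  -> "480 GB" (decimal gigabytes)
print(round(size_bytes / 2**30))     # 447  -> "447 GiB" (binary gibibytes)
```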
Sep 12 18:15:12.664457 kernel: ata2.00: Enabling discard_zeroes_data Sep 12 18:15:12.664473 kernel: ata1.00: Enabling discard_zeroes_data Sep 12 18:15:12.664487 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 12 18:15:12.664612 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 12 18:15:12.664728 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Sep 12 18:15:12.664834 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 12 18:15:12.664941 kernel: sd 1:0:0:0: [sdb] Write Protect is off Sep 12 18:15:12.665043 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 12 18:15:12.665144 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 12 18:15:12.665244 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 12 18:15:12.665311 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 12 18:15:12.665374 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 12 18:15:12.665436 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Sep 12 18:15:12.665500 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Sep 12 18:15:12.665561 kernel: ata2.00: Enabling discard_zeroes_data Sep 12 18:15:12.665569 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Sep 12 18:15:12.665585 kernel: ata1.00: Enabling discard_zeroes_data Sep 12 18:15:12.665592 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Sep 12 18:15:12.665659 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 18:15:12.665668 kernel: GPT:9289727 != 937703087 Sep 12 18:15:12.665675 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 18:15:12.665682 kernel: GPT:9289727 != 937703087 Sep 12 18:15:12.665691 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 18:15:12.665698 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 18:15:12.665705 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 12 18:15:12.665780 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 12 18:15:12.678678 kernel: mlx5_core 0000:02:00.1: firmware version: 14.31.1014 Sep 12 18:15:12.693821 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 12 18:15:12.710625 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 18:15:12.722430 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Sep 12 18:15:12.775725 kernel: BTRFS: device fsid 8328a8c6-e42c-42bb-93d2-f755d7523d53 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (558) Sep 12 18:15:12.775739 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (690) Sep 12 18:15:12.775747 kernel: usbcore: registered new interface driver usbhid Sep 12 18:15:12.775754 kernel: usbhid: USB HID core driver Sep 12 18:15:12.775761 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Sep 12 18:15:12.776498 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Sep 12 18:15:12.793470 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. 
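Annotation: the "GPT:9289727 != 937703087" warnings above mean the backup GPT header still sits where the original, smaller disk image ended rather than at the last LBA of the 480 GB disk; disk-uuid.service rewrites the headers shortly afterwards. A hedged Python sketch of detecting that mismatch by reading the primary GPT header directly (field offsets follow the UEFI GPT header layout as I understand it; the device node is a hypothetical example and 512-byte sectors are assumed):

```python
import struct

def gpt_backup_mismatch(device: str, total_sectors: int) -> bool:
    """Read the primary GPT header at LBA 1 and compare its 'alternate LBA'
    field (byte offset 32, little-endian u64) against the disk's last LBA."""
    with open(device, "rb") as disk:
        disk.seek(512)                    # LBA 1, assuming 512-byte sectors
        header = disk.read(92)
    if header[:8] != b"EFI PART":
        raise ValueError("no GPT signature at LBA 1")
    backup_lba = struct.unpack_from("<Q", header, 32)[0]
    last_lba = total_sectors - 1
    return backup_lba != last_lba         # True reproduces "9289727 != 937703087"

# Example (hypothetical device node, sector count taken from the log):
# print(gpt_backup_mismatch("/dev/sda", 937_703_088))
```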
Sep 12 18:15:12.859706 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Sep 12 18:15:12.859910 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Sep 12 18:15:12.859924 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Sep 12 18:15:12.799765 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 12 18:15:12.883461 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Sep 12 18:15:12.900723 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 18:15:12.934096 disk-uuid[716]: Primary Header is updated. Sep 12 18:15:12.934096 disk-uuid[716]: Secondary Entries is updated. Sep 12 18:15:12.934096 disk-uuid[716]: Secondary Header is updated. Sep 12 18:15:12.964652 kernel: ata1.00: Enabling discard_zeroes_data Sep 12 18:15:12.964670 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 18:15:13.001627 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Sep 12 18:15:13.013902 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Sep 12 18:15:13.273711 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 12 18:15:13.286622 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Sep 12 18:15:13.299688 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Sep 12 18:15:13.956407 kernel: ata1.00: Enabling discard_zeroes_data Sep 12 18:15:13.965057 disk-uuid[717]: The operation has completed successfully. Sep 12 18:15:13.973730 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 18:15:14.006797 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 18:15:14.006861 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 18:15:14.054906 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 18:15:14.081672 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 12 18:15:14.081866 sh[742]: Success Sep 12 18:15:14.123591 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 18:15:14.149149 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 18:15:14.158253 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 18:15:14.234062 kernel: BTRFS info (device dm-0): first mount of filesystem 8328a8c6-e42c-42bb-93d2-f755d7523d53 Sep 12 18:15:14.234084 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 18:15:14.243684 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 18:15:14.250695 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 18:15:14.256550 kernel: BTRFS info (device dm-0): using free space tree Sep 12 18:15:14.270647 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 18:15:14.272305 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 18:15:14.281930 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 18:15:14.291966 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Sep 12 18:15:14.322990 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 18:15:14.370573 kernel: BTRFS info (device sda6): first mount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d Sep 12 18:15:14.370598 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 18:15:14.376510 kernel: BTRFS info (device sda6): using free space tree Sep 12 18:15:14.386500 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 18:15:14.423861 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 18:15:14.423875 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 18:15:14.423884 kernel: BTRFS info (device sda6): last unmount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d Sep 12 18:15:14.413007 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 18:15:14.449894 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 18:15:14.460461 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 18:15:14.507841 systemd-networkd[922]: lo: Link UP Sep 12 18:15:14.507844 systemd-networkd[922]: lo: Gained carrier Sep 12 18:15:14.510712 systemd-networkd[922]: Enumeration completed Sep 12 18:15:14.520299 ignition[921]: Ignition 2.20.0 Sep 12 18:15:14.510778 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 18:15:14.520303 ignition[921]: Stage: fetch-offline Sep 12 18:15:14.511461 systemd-networkd[922]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 18:15:14.520325 ignition[921]: no configs at "/usr/lib/ignition/base.d" Sep 12 18:15:14.522748 unknown[921]: fetched base config from "system" Sep 12 18:15:14.520330 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 18:15:14.522752 unknown[921]: fetched user config from "system" Sep 12 18:15:14.520383 ignition[921]: parsed url from cmdline: "" Sep 12 18:15:14.523987 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 18:15:14.520386 ignition[921]: no config URL provided Sep 12 18:15:14.541955 systemd-networkd[922]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 18:15:14.520389 ignition[921]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 18:15:14.548773 systemd[1]: Reached target network.target - Network. Sep 12 18:15:14.520412 ignition[921]: parsing config with SHA512: 34ba13544e7c539730d3055242f530c6c5224bd1316425e5a55f9e9b45385217bf2f6f549f32cf0400c64ccab440a1d28d6a7c7d6db3a3c1349188b600e89468 Sep 12 18:15:14.562871 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 18:15:14.522996 ignition[921]: fetch-offline: fetch-offline passed Sep 12 18:15:14.573069 systemd-networkd[922]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 18:15:14.522998 ignition[921]: POST message to Packet Timeline Sep 12 18:15:14.578028 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
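Annotation: Ignition logs the SHA512 digest of the configuration it parsed (the long hex string above). A trivial Python sketch of producing that kind of digest for a config blob; the config body here is a stand-in, not this machine's actual user.ign:

```python
import hashlib

# Stand-in config body; the digest is computed over the raw bytes of the
# config file, analogous to the "parsing config with SHA512: ..." line above.
config_bytes = b'{"ignition": {"version": "3.4.0"}}'

digest = hashlib.sha512(config_bytes).hexdigest()
print(f"parsing config with SHA512: {digest}")
```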
Sep 12 18:15:14.523001 ignition[921]: POST Status error: resource requires networking Sep 12 18:15:14.523040 ignition[921]: Ignition finished successfully Sep 12 18:15:14.652024 ignition[935]: Ignition 2.20.0 Sep 12 18:15:14.652038 ignition[935]: Stage: kargs Sep 12 18:15:14.652364 ignition[935]: no configs at "/usr/lib/ignition/base.d" Sep 12 18:15:14.652381 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 18:15:14.653762 ignition[935]: kargs: kargs passed Sep 12 18:15:14.792721 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Sep 12 18:15:14.653770 ignition[935]: POST message to Packet Timeline Sep 12 18:15:14.653797 ignition[935]: GET https://metadata.packet.net/metadata: attempt #1 Sep 12 18:15:14.793238 systemd-networkd[922]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 18:15:14.654607 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58937->[::1]:53: read: connection refused Sep 12 18:15:14.855206 ignition[935]: GET https://metadata.packet.net/metadata: attempt #2 Sep 12 18:15:14.856565 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55565->[::1]:53: read: connection refused Sep 12 18:15:15.048633 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Sep 12 18:15:15.051722 systemd-networkd[922]: eno1: Link UP Sep 12 18:15:15.051865 systemd-networkd[922]: eno2: Link UP Sep 12 18:15:15.051999 systemd-networkd[922]: enp2s0f0np0: Link UP Sep 12 18:15:15.052157 systemd-networkd[922]: enp2s0f0np0: Gained carrier Sep 12 18:15:15.059911 systemd-networkd[922]: enp2s0f1np1: Link UP Sep 12 18:15:15.091810 systemd-networkd[922]: enp2s0f0np0: DHCPv4 address 139.178.90.133/31, gateway 139.178.90.132 acquired from 145.40.83.140 Sep 12 18:15:15.257748 ignition[935]: GET https://metadata.packet.net/metadata: attempt #3 Sep 12 18:15:15.258887 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42484->[::1]:53: read: connection refused Sep 12 18:15:15.823236 systemd-networkd[922]: enp2s0f1np1: Gained carrier Sep 12 18:15:16.059138 ignition[935]: GET https://metadata.packet.net/metadata: attempt #4 Sep 12 18:15:16.060398 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45029->[::1]:53: read: connection refused Sep 12 18:15:16.335104 systemd-networkd[922]: enp2s0f0np0: Gained IPv6LL Sep 12 18:15:17.103117 systemd-networkd[922]: enp2s0f1np1: Gained IPv6LL Sep 12 18:15:17.661896 ignition[935]: GET https://metadata.packet.net/metadata: attempt #5 Sep 12 18:15:17.663388 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60573->[::1]:53: read: connection refused Sep 12 18:15:20.866566 ignition[935]: GET https://metadata.packet.net/metadata: attempt #6 Sep 12 18:15:22.007955 ignition[935]: GET result: OK Sep 12 18:15:23.215362 ignition[935]: Ignition finished successfully Sep 12 18:15:23.220660 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 18:15:23.246895 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
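Annotation: the kargs stage above keeps retrying GET https://metadata.packet.net/metadata while name resolution still fails (the [::1]:53 "connection refused" errors) and only succeeds once the links and the DHCP lease are up. A hedged Python sketch of the same retry-until-reachable pattern using only the standard library; the delays, cap, and attempt count are illustrative, not Ignition's actual schedule:

```python
import time
import urllib.error
import urllib.request

METADATA_URL = "https://metadata.packet.net/metadata"

def fetch_with_retries(url: str, max_attempts: int = 6) -> bytes:
    """GET the URL, backing off after each failure, mirroring the
    'attempt #1' .. 'attempt #6' progression visible in the log."""
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            print(f"GET {url}: attempt #{attempt} failed: {err}")
            if attempt == max_attempts:
                raise
            time.sleep(delay)
            delay = min(delay * 2, 30.0)   # capped exponential backoff

# metadata = fetch_with_retries(METADATA_URL)
```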
Sep 12 18:15:23.252842 ignition[955]: Ignition 2.20.0 Sep 12 18:15:23.252847 ignition[955]: Stage: disks Sep 12 18:15:23.252945 ignition[955]: no configs at "/usr/lib/ignition/base.d" Sep 12 18:15:23.252951 ignition[955]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 18:15:23.253463 ignition[955]: disks: disks passed Sep 12 18:15:23.253466 ignition[955]: POST message to Packet Timeline Sep 12 18:15:23.253477 ignition[955]: GET https://metadata.packet.net/metadata: attempt #1 Sep 12 18:15:24.427565 ignition[955]: GET result: OK Sep 12 18:15:25.353981 ignition[955]: Ignition finished successfully Sep 12 18:15:25.357364 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 18:15:25.371910 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 18:15:25.389930 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 18:15:25.410908 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 18:15:25.432051 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 18:15:25.453051 systemd[1]: Reached target basic.target - Basic System. Sep 12 18:15:25.481888 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 18:15:25.523097 systemd-fsck[973]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 18:15:25.533064 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 18:15:25.560857 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 18:15:25.635620 kernel: EXT4-fs (sda9): mounted filesystem 5378802a-8117-4ea8-949a-cd38005ba44a r/w with ordered data mode. Quota mode: none. Sep 12 18:15:25.635960 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 18:15:25.644128 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 18:15:25.679826 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 18:15:25.711461 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (982) Sep 12 18:15:25.711474 kernel: BTRFS info (device sda6): first mount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d Sep 12 18:15:25.688890 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 18:15:25.754846 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 18:15:25.754860 kernel: BTRFS info (device sda6): using free space tree Sep 12 18:15:25.754869 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 18:15:25.754878 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 18:15:25.751702 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 18:15:25.767134 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Sep 12 18:15:25.790731 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 18:15:25.827850 coreos-metadata[999]: Sep 12 18:15:25.813 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 12 18:15:25.790751 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 18:15:25.810717 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 18:15:25.876838 coreos-metadata[1000]: Sep 12 18:15:25.815 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 12 18:15:25.836877 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 18:15:25.869968 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 18:15:25.916757 initrd-setup-root[1014]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 18:15:25.926731 initrd-setup-root[1021]: cut: /sysroot/etc/group: No such file or directory Sep 12 18:15:25.936743 initrd-setup-root[1028]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 18:15:25.947737 initrd-setup-root[1035]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 18:15:25.974838 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 18:15:25.993873 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 18:15:26.018763 kernel: BTRFS info (device sda6): last unmount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d Sep 12 18:15:25.995869 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 18:15:26.027412 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 18:15:26.048872 ignition[1102]: INFO : Ignition 2.20.0 Sep 12 18:15:26.048872 ignition[1102]: INFO : Stage: mount Sep 12 18:15:26.062745 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 18:15:26.062745 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 18:15:26.062745 ignition[1102]: INFO : mount: mount passed Sep 12 18:15:26.062745 ignition[1102]: INFO : POST message to Packet Timeline Sep 12 18:15:26.062745 ignition[1102]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 12 18:15:26.058890 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 18:15:27.044754 ignition[1102]: INFO : GET result: OK Sep 12 18:15:27.296413 coreos-metadata[1000]: Sep 12 18:15:27.296 INFO Fetch successful Sep 12 18:15:27.333039 systemd[1]: flatcar-static-network.service: Deactivated successfully. Sep 12 18:15:27.333093 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Sep 12 18:15:27.655542 coreos-metadata[999]: Sep 12 18:15:27.655 INFO Fetch successful Sep 12 18:15:27.690601 coreos-metadata[999]: Sep 12 18:15:27.690 INFO wrote hostname ci-4230.2.3-a-0654ef0f4d to /sysroot/etc/hostname Sep 12 18:15:27.692086 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 18:15:28.202655 ignition[1102]: INFO : Ignition finished successfully Sep 12 18:15:28.205825 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 18:15:28.239911 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 18:15:28.252673 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 18:15:28.313749 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (1127) Sep 12 18:15:28.313778 kernel: BTRFS info (device sda6): first mount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d Sep 12 18:15:28.321827 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 18:15:28.327712 kernel: BTRFS info (device sda6): using free space tree Sep 12 18:15:28.342527 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 18:15:28.342545 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 18:15:28.344662 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
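Annotation: flatcar-metadata-hostname.service above fetches the Packet metadata and writes the discovered hostname into /sysroot/etc/hostname. A hedged Python sketch of that step, assuming the metadata endpoint returns JSON with a "hostname" field; the field name and output path are inferred from the behaviour visible in the log, not taken from the agent's source:

```python
import json
import urllib.request

METADATA_URL = "https://metadata.packet.net/metadata"

def write_hostname(sysroot: str = "/sysroot") -> str:
    """Fetch instance metadata and persist its hostname, mirroring the
    'wrote hostname ... to /sysroot/etc/hostname' step in the log."""
    with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
        metadata = json.load(resp)
    hostname = metadata["hostname"]          # assumed JSON field name
    with open(f"{sysroot}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    return hostname

# print(write_hostname())
```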
Sep 12 18:15:28.370549 ignition[1144]: INFO : Ignition 2.20.0 Sep 12 18:15:28.370549 ignition[1144]: INFO : Stage: files Sep 12 18:15:28.386846 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 18:15:28.386846 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 18:15:28.386846 ignition[1144]: DEBUG : files: compiled without relabeling support, skipping Sep 12 18:15:28.386846 ignition[1144]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 18:15:28.386846 ignition[1144]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 18:15:28.386846 ignition[1144]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 18:15:28.386846 ignition[1144]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 18:15:28.386846 ignition[1144]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 18:15:28.386846 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 18:15:28.386846 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 12 18:15:28.374354 unknown[1144]: wrote ssh authorized keys file for user: core Sep 12 18:15:28.525847 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 18:15:28.608130 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 18:15:28.624936 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 18:15:28.624936 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 12 18:15:28.945801 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 18:15:29.055847 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 18:15:29.055847 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 18:15:29.085860 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 18:15:29.085860 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 18:15:29.085860 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 18:15:29.085860 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 18:15:29.085860 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 18:15:29.085860 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 18:15:29.085860 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 
18:15:29.085860 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 18:15:29.085860 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 18:15:29.085860 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 18:15:29.085860 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 18:15:29.085860 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 18:15:29.085860 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 12 18:15:29.508408 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 18:15:30.262122 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 18:15:30.262122 ignition[1144]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 18:15:30.291931 ignition[1144]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 18:15:30.291931 ignition[1144]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 18:15:30.291931 ignition[1144]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 18:15:30.291931 ignition[1144]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 12 18:15:30.291931 ignition[1144]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 18:15:30.291931 ignition[1144]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 18:15:30.291931 ignition[1144]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 18:15:30.291931 ignition[1144]: INFO : files: files passed Sep 12 18:15:30.291931 ignition[1144]: INFO : POST message to Packet Timeline Sep 12 18:15:30.291931 ignition[1144]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 12 18:15:31.231465 ignition[1144]: INFO : GET result: OK Sep 12 18:15:31.682175 ignition[1144]: INFO : Ignition finished successfully Sep 12 18:15:31.685110 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 18:15:31.724897 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 18:15:31.725353 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 18:15:31.744208 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 18:15:31.744281 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 18:15:31.787484 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
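Annotation: the files stage above is driven by a declarative Ignition config: files to write, links to create, and systemd units to enable or preset. A minimal Python sketch that assembles and prints a config fragment of that shape; the spec version and field names follow the Ignition v3-style JSON schema as I understand it, and the values echo entries from the log rather than this machine's actual config:

```python
import json

config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {
                "path": "/etc/flatcar/update.conf",
                "mode": 420,  # decimal for 0644
                "contents": {"source": "data:,GROUP=stable%0A"},
            }
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [{"name": "prepare-helm.service", "enabled": True}]
    },
}

print(json.dumps(config, indent=2))
```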
Sep 12 18:15:31.806126 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 18:15:31.837844 initrd-setup-root-after-ignition[1183]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 18:15:31.837844 initrd-setup-root-after-ignition[1183]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 18:15:31.852844 initrd-setup-root-after-ignition[1187]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 18:15:31.847952 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 18:15:31.933874 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 18:15:31.933938 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 18:15:31.934256 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 18:15:31.972835 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 18:15:31.994216 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 18:15:32.009018 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 18:15:32.082381 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 18:15:32.105923 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 18:15:32.122297 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 18:15:32.149984 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 18:15:32.162289 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 18:15:32.180400 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 18:15:32.180844 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 18:15:32.220035 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 18:15:32.230255 systemd[1]: Stopped target basic.target - Basic System. Sep 12 18:15:32.249360 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 18:15:32.268236 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 18:15:32.290361 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 18:15:32.311225 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 18:15:32.331241 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 18:15:32.352274 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 18:15:32.373274 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 18:15:32.393247 systemd[1]: Stopped target swap.target - Swaps. Sep 12 18:15:32.412247 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 18:15:32.412705 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 18:15:32.438347 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 18:15:32.459231 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 18:15:32.481240 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 18:15:32.481561 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 18:15:32.504235 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Sep 12 18:15:32.504667 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 18:15:32.537266 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 18:15:32.537755 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 18:15:32.558341 systemd[1]: Stopped target paths.target - Path Units. Sep 12 18:15:32.577128 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 18:15:32.580884 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 18:15:32.598254 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 18:15:32.618258 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 18:15:32.637225 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 18:15:32.637536 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 18:15:32.658292 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 18:15:32.658587 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 18:15:32.681375 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 18:15:32.681828 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 18:15:32.700328 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 18:15:32.806787 ignition[1207]: INFO : Ignition 2.20.0 Sep 12 18:15:32.806787 ignition[1207]: INFO : Stage: umount Sep 12 18:15:32.806787 ignition[1207]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 18:15:32.806787 ignition[1207]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 18:15:32.806787 ignition[1207]: INFO : umount: umount passed Sep 12 18:15:32.806787 ignition[1207]: INFO : POST message to Packet Timeline Sep 12 18:15:32.806787 ignition[1207]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 12 18:15:32.700738 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 18:15:32.718353 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 18:15:32.718806 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 18:15:32.747740 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 18:15:32.770387 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 18:15:32.787791 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 18:15:32.788019 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 18:15:32.818308 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 18:15:32.818703 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 18:15:32.853099 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 18:15:32.853918 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 18:15:32.853986 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 18:15:32.891466 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 18:15:32.891850 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 18:15:33.849917 ignition[1207]: INFO : GET result: OK Sep 12 18:15:34.429961 ignition[1207]: INFO : Ignition finished successfully Sep 12 18:15:34.433288 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 18:15:34.433583 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Sep 12 18:15:34.450060 systemd[1]: Stopped target network.target - Network. Sep 12 18:15:34.465961 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 18:15:34.466146 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 18:15:34.485067 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 18:15:34.485219 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 18:15:34.504120 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 18:15:34.504298 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 18:15:34.523010 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 18:15:34.523182 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 18:15:34.542038 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 18:15:34.542217 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 18:15:34.561507 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 18:15:34.581148 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 18:15:34.600747 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 18:15:34.601022 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 18:15:34.623978 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 18:15:34.624096 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 18:15:34.624148 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 18:15:34.639520 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 18:15:34.639975 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 18:15:34.640008 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 18:15:34.663830 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 18:15:34.683777 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 18:15:34.683883 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 18:15:34.706071 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 18:15:34.706247 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 18:15:34.728377 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 18:15:34.728546 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 18:15:34.747139 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 18:15:34.747316 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 18:15:34.768461 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 18:15:34.793298 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 18:15:34.793508 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 18:15:34.794630 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 18:15:34.795011 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 18:15:34.825271 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 18:15:34.825297 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Sep 12 18:15:34.857874 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 18:15:34.857912 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 18:15:34.878022 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 18:15:34.878153 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 18:15:34.907215 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 18:15:34.907384 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 18:15:34.944087 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 18:15:34.944248 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 18:15:35.000093 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 18:15:35.008014 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 18:15:35.008175 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 18:15:35.040212 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 18:15:35.278825 systemd-journald[268]: Received SIGTERM from PID 1 (systemd). Sep 12 18:15:35.040359 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 18:15:35.060018 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 18:15:35.060165 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 18:15:35.082000 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 18:15:35.082147 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 18:15:35.105449 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 18:15:35.105724 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 18:15:35.106997 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 18:15:35.107237 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 18:15:35.124551 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 18:15:35.124827 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 18:15:35.145961 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 18:15:35.177966 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 18:15:35.224755 systemd[1]: Switching root. Sep 12 18:15:35.421711 systemd-journald[268]: Journal stopped Sep 12 18:15:37.203254 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 18:15:37.203269 kernel: SELinux: policy capability open_perms=1 Sep 12 18:15:37.203277 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 18:15:37.203282 kernel: SELinux: policy capability always_check_network=0 Sep 12 18:15:37.203289 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 18:15:37.203295 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 18:15:37.203301 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 18:15:37.203307 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 18:15:37.203312 kernel: audit: type=1403 audit(1757700935.519:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 18:15:37.203319 systemd[1]: Successfully loaded SELinux policy in 74.285ms. 
Sep 12 18:15:37.203327 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.399ms. Sep 12 18:15:37.203334 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 18:15:37.203341 systemd[1]: Detected architecture x86-64. Sep 12 18:15:37.203347 systemd[1]: Detected first boot. Sep 12 18:15:37.203354 systemd[1]: Hostname set to . Sep 12 18:15:37.203361 systemd[1]: Initializing machine ID from random generator. Sep 12 18:15:37.203368 zram_generator::config[1261]: No configuration found. Sep 12 18:15:37.203375 systemd[1]: Populated /etc with preset unit settings. Sep 12 18:15:37.203382 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 18:15:37.203388 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 18:15:37.203394 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 18:15:37.203401 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 18:15:37.203408 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 18:15:37.203415 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 18:15:37.203422 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 18:15:37.203429 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 18:15:37.203435 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 18:15:37.203442 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 18:15:37.203449 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 18:15:37.203457 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 18:15:37.203463 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 18:15:37.203470 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 18:15:37.203476 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 18:15:37.203483 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 18:15:37.203490 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 18:15:37.203496 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 18:15:37.203504 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Sep 12 18:15:37.203512 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 18:15:37.203519 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 18:15:37.203525 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 18:15:37.203534 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 18:15:37.203540 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 18:15:37.203547 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 12 18:15:37.203554 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 18:15:37.203561 systemd[1]: Reached target slices.target - Slice Units. Sep 12 18:15:37.203568 systemd[1]: Reached target swap.target - Swaps. Sep 12 18:15:37.203575 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 18:15:37.203582 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 18:15:37.203589 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 18:15:37.203596 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 18:15:37.203604 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 18:15:37.203611 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 18:15:37.203620 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 18:15:37.203627 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 18:15:37.203634 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 18:15:37.203641 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 18:15:37.203648 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 18:15:37.203655 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 18:15:37.203663 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 18:15:37.203670 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 18:15:37.203677 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 18:15:37.203684 systemd[1]: Reached target machines.target - Containers. Sep 12 18:15:37.203691 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 18:15:37.203698 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 18:15:37.203705 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 18:15:37.203712 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 18:15:37.203720 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 18:15:37.203727 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 18:15:37.203734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 18:15:37.203741 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 18:15:37.203748 kernel: ACPI: bus type drm_connector registered Sep 12 18:15:37.203754 kernel: fuse: init (API version 7.39) Sep 12 18:15:37.203760 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 18:15:37.203766 kernel: loop: module loaded Sep 12 18:15:37.203773 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 18:15:37.203781 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 18:15:37.203788 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 18:15:37.203795 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Sep 12 18:15:37.203803 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 18:15:37.203810 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 18:15:37.203817 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 18:15:37.203832 systemd-journald[1364]: Collecting audit messages is disabled. Sep 12 18:15:37.203849 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 18:15:37.203857 systemd-journald[1364]: Journal started Sep 12 18:15:37.203871 systemd-journald[1364]: Runtime Journal (/run/log/journal/cf99563515254793bc97c3ca83e69ae6) is 8M, max 639.9M, 631.9M free. Sep 12 18:15:36.021680 systemd[1]: Queued start job for default target multi-user.target. Sep 12 18:15:36.039656 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 12 18:15:36.040595 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 18:15:37.231661 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 18:15:37.242665 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 18:15:37.274674 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 18:15:37.295702 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 18:15:37.316848 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 18:15:37.316887 systemd[1]: Stopped verity-setup.service. Sep 12 18:15:37.342659 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 18:15:37.351667 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 18:15:37.360078 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 18:15:37.369757 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 18:15:37.379916 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 18:15:37.389901 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 18:15:37.399919 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 18:15:37.409883 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 18:15:37.419971 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 18:15:37.430988 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 18:15:37.441931 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 18:15:37.442083 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 18:15:37.454588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 18:15:37.455104 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 18:15:37.466588 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 18:15:37.467095 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 18:15:37.477588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 18:15:37.478106 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 18:15:37.489606 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Sep 12 18:15:37.490127 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 18:15:37.500569 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 18:15:37.501337 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 18:15:37.511710 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 18:15:37.523662 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 18:15:37.536659 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 18:15:37.549822 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 18:15:37.562818 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 18:15:37.582335 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 18:15:37.604835 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 18:15:37.616590 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 18:15:37.627787 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 18:15:37.627825 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 18:15:37.639380 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 18:15:37.668056 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 18:15:37.681158 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 18:15:37.692022 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 18:15:37.694371 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 18:15:37.705378 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 18:15:37.717768 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 18:15:37.718444 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 18:15:37.720890 systemd-journald[1364]: Time spent on flushing to /var/log/journal/cf99563515254793bc97c3ca83e69ae6 is 13.129ms for 1385 entries. Sep 12 18:15:37.720890 systemd-journald[1364]: System Journal (/var/log/journal/cf99563515254793bc97c3ca83e69ae6) is 8M, max 195.6M, 187.6M free. Sep 12 18:15:37.745322 systemd-journald[1364]: Received client request to flush runtime journal. Sep 12 18:15:37.736763 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 18:15:37.737579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 18:15:37.748411 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 18:15:37.761321 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 18:15:37.771681 kernel: loop0: detected capacity change from 0 to 8 Sep 12 18:15:37.782646 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 18:15:37.800846 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Sep 12 18:15:37.812120 systemd-tmpfiles[1404]: ACLs are not supported, ignoring. Sep 12 18:15:37.812130 systemd-tmpfiles[1404]: ACLs are not supported, ignoring. Sep 12 18:15:37.812961 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 18:15:37.824826 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 18:15:37.836689 kernel: loop1: detected capacity change from 0 to 221472 Sep 12 18:15:37.841870 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 18:15:37.852879 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 18:15:37.863855 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 18:15:37.874864 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 18:15:37.884885 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 18:15:37.898856 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 18:15:37.906682 kernel: loop2: detected capacity change from 0 to 138176 Sep 12 18:15:37.930894 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 18:15:37.942433 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 18:15:37.952319 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 18:15:37.952943 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 18:15:37.964873 udevadm[1407]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 12 18:15:37.974624 kernel: loop3: detected capacity change from 0 to 147912 Sep 12 18:15:37.974995 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 18:15:37.996863 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 18:15:38.004302 systemd-tmpfiles[1423]: ACLs are not supported, ignoring. Sep 12 18:15:38.004312 systemd-tmpfiles[1423]: ACLs are not supported, ignoring. Sep 12 18:15:38.008461 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 18:15:38.038649 kernel: loop4: detected capacity change from 0 to 8 Sep 12 18:15:38.045679 kernel: loop5: detected capacity change from 0 to 221472 Sep 12 18:15:38.069678 kernel: loop6: detected capacity change from 0 to 138176 Sep 12 18:15:38.073210 ldconfig[1395]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 18:15:38.074431 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 18:15:38.090674 kernel: loop7: detected capacity change from 0 to 147912 Sep 12 18:15:38.103646 (sd-merge)[1427]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Sep 12 18:15:38.103924 (sd-merge)[1427]: Merged extensions into '/usr'. Sep 12 18:15:38.135644 systemd[1]: Reload requested from client PID 1401 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 18:15:38.135654 systemd[1]: Reloading... Sep 12 18:15:38.164718 zram_generator::config[1452]: No configuration found. Sep 12 18:15:38.238759 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 12 18:15:38.292052 systemd[1]: Reloading finished in 156 ms. Sep 12 18:15:38.308698 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 18:15:38.320007 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 18:15:38.350806 systemd[1]: Starting ensure-sysext.service... Sep 12 18:15:38.358812 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 18:15:38.370638 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 18:15:38.381057 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 18:15:38.381212 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 18:15:38.381742 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 18:15:38.381930 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Sep 12 18:15:38.381984 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Sep 12 18:15:38.386549 systemd[1]: Reload requested from client PID 1511 ('systemctl') (unit ensure-sysext.service)... Sep 12 18:15:38.386573 systemd[1]: Reloading... Sep 12 18:15:38.386934 systemd-tmpfiles[1512]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 18:15:38.386938 systemd-tmpfiles[1512]: Skipping /boot Sep 12 18:15:38.392541 systemd-tmpfiles[1512]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 18:15:38.392545 systemd-tmpfiles[1512]: Skipping /boot Sep 12 18:15:38.397943 systemd-udevd[1513]: Using default interface naming scheme 'v255'. Sep 12 18:15:38.418628 zram_generator::config[1542]: No configuration found. Sep 12 18:15:38.449643 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1556) Sep 12 18:15:38.460682 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Sep 12 18:15:38.460733 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 18:15:38.460776 kernel: ACPI: button: Sleep Button [SLPB] Sep 12 18:15:38.478627 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 12 18:15:38.484625 kernel: IPMI message handler: version 39.2 Sep 12 18:15:38.484668 kernel: ACPI: button: Power Button [PWRF] Sep 12 18:15:38.496628 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Sep 12 18:15:38.505629 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Sep 12 18:15:38.513627 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Sep 12 18:15:38.519104 kernel: ipmi device interface Sep 12 18:15:38.543069 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
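The reload warnings above come from docker.socket declaring its listen path under the legacy /var/run directory, which systemd rewrites to /run/docker.sock at load time. The following is a minimal sketch of a socket unit that would produce this message and of the updated form; only the unit name, its description, and the socket path are taken from the log, everything else (ownership, mode, install target) is an assumption about a typical unit of this kind.

    # /usr/lib/systemd/system/docker.socket  (sketch; contents beyond the path are assumptions)
    [Unit]
    Description=Docker Socket for the API

    [Socket]
    # Legacy form that triggers the warning on line 6 of the unit:
    # ListenStream=/var/run/docker.sock
    # Form systemd substitutes automatically, and the recommended fix in the unit file:
    ListenStream=/run/docker.sock
    SocketMode=0660
    SocketUser=root
    SocketGroup=docker

    [Install]
    WantedBy=sockets.target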
Sep 12 18:15:38.562623 kernel: iTCO_vendor_support: vendor-support=0 Sep 12 18:15:38.562664 kernel: ipmi_si: IPMI System Interface driver Sep 12 18:15:38.562676 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Sep 12 18:15:38.562782 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Sep 12 18:15:38.592431 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Sep 12 18:15:38.592660 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Sep 12 18:15:38.605644 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Sep 12 18:15:38.611988 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Sep 12 18:15:38.620334 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Sep 12 18:15:38.629641 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Sep 12 18:15:38.632535 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Sep 12 18:15:38.635675 kernel: ipmi_si: Adding ACPI-specified kcs state machine Sep 12 18:15:38.645940 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Sep 12 18:15:38.656808 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Sep 12 18:15:38.657040 systemd[1]: Reloading finished in 270 ms. Sep 12 18:15:38.670625 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Sep 12 18:15:38.693395 kernel: intel_rapl_common: Found RAPL domain package Sep 12 18:15:38.693433 kernel: intel_rapl_common: Found RAPL domain core Sep 12 18:15:38.693467 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 18:15:38.694684 kernel: intel_rapl_common: Found RAPL domain dram Sep 12 18:15:38.719257 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 18:15:38.727656 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Sep 12 18:15:38.748693 systemd[1]: Finished ensure-sysext.service. Sep 12 18:15:38.761686 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Sep 12 18:15:38.786880 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Sep 12 18:15:38.796704 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 18:15:38.812761 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 18:15:38.822013 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 18:15:38.831626 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Sep 12 18:15:38.831771 augenrules[1715]: No rules Sep 12 18:15:38.838620 kernel: ipmi_ssif: IPMI SSIF Interface driver Sep 12 18:15:38.843774 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 18:15:38.853154 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 18:15:38.864251 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 18:15:38.874269 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 18:15:38.885230 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 12 18:15:38.895792 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 18:15:38.896370 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 18:15:38.907713 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 18:15:38.908333 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 18:15:38.920583 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 18:15:38.921546 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 18:15:38.922465 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 18:15:38.959749 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 18:15:38.972284 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 18:15:38.982709 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 18:15:38.983332 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 18:15:38.995791 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 18:15:38.995897 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 18:15:38.996157 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 18:15:38.996294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 18:15:38.996380 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 18:15:38.996520 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 18:15:38.996604 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 18:15:38.996751 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 18:15:38.996836 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 18:15:38.996976 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 18:15:38.997058 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 18:15:38.997277 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 18:15:38.997434 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 18:15:39.012744 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 18:15:39.012779 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 18:15:39.012812 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 18:15:39.013471 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 18:15:39.014382 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Sep 12 18:15:39.014407 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 18:15:39.014668 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 18:15:39.019255 lvm[1743]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 18:15:39.021387 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 18:15:39.038971 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 18:15:39.059753 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 18:15:39.080723 systemd-resolved[1728]: Positive Trust Anchors: Sep 12 18:15:39.080729 systemd-resolved[1728]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 18:15:39.080755 systemd-resolved[1728]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 18:15:39.083315 systemd-resolved[1728]: Using system hostname 'ci-4230.2.3-a-0654ef0f4d'. Sep 12 18:15:39.085632 systemd-networkd[1727]: lo: Link UP Sep 12 18:15:39.085635 systemd-networkd[1727]: lo: Gained carrier Sep 12 18:15:39.088515 systemd-networkd[1727]: bond0: netdev ready Sep 12 18:15:39.089525 systemd-networkd[1727]: Enumeration completed Sep 12 18:15:39.096193 systemd-networkd[1727]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d7:69:1a.network. Sep 12 18:15:39.126871 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 18:15:39.137945 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 18:15:39.147734 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 18:15:39.157851 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 18:15:39.169604 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 18:15:39.178714 systemd[1]: Reached target network.target - Network. Sep 12 18:15:39.186662 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 18:15:39.197708 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 18:15:39.207746 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 18:15:39.218718 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 18:15:39.229709 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 18:15:39.240691 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 18:15:39.240711 systemd[1]: Reached target paths.target - Path Units. Sep 12 18:15:39.248711 systemd[1]: Reached target time-set.target - System Time Set. 
Sep 12 18:15:39.258781 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 18:15:39.268751 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 18:15:39.279693 systemd[1]: Reached target timers.target - Timer Units. Sep 12 18:15:39.288480 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 18:15:39.298411 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 18:15:39.307918 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 18:15:39.326965 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 18:15:39.336839 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 18:15:39.354760 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 18:15:39.356926 lvm[1768]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 18:15:39.366352 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 18:15:39.378273 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 18:15:39.389037 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 18:15:39.398881 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 18:15:39.410122 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 18:15:39.419747 systemd[1]: Reached target basic.target - Basic System. Sep 12 18:15:39.427778 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 18:15:39.427795 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 18:15:39.434735 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 18:15:39.445478 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 18:15:39.461994 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 18:15:39.470211 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 18:15:39.475069 coreos-metadata[1773]: Sep 12 18:15:39.475 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 12 18:15:39.475956 coreos-metadata[1773]: Sep 12 18:15:39.475 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Sep 12 18:15:39.480236 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 18:15:39.481897 jq[1777]: false Sep 12 18:15:39.481900 dbus-daemon[1774]: [system] SELinux support is enabled Sep 12 18:15:39.489727 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 18:15:39.490345 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Sep 12 18:15:39.498184 extend-filesystems[1779]: Found loop4 Sep 12 18:15:39.498184 extend-filesystems[1779]: Found loop5 Sep 12 18:15:39.523857 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Sep 12 18:15:39.523882 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1556) Sep 12 18:15:39.523899 extend-filesystems[1779]: Found loop6 Sep 12 18:15:39.523899 extend-filesystems[1779]: Found loop7 Sep 12 18:15:39.523899 extend-filesystems[1779]: Found sda Sep 12 18:15:39.523899 extend-filesystems[1779]: Found sda1 Sep 12 18:15:39.523899 extend-filesystems[1779]: Found sda2 Sep 12 18:15:39.523899 extend-filesystems[1779]: Found sda3 Sep 12 18:15:39.523899 extend-filesystems[1779]: Found usr Sep 12 18:15:39.523899 extend-filesystems[1779]: Found sda4 Sep 12 18:15:39.523899 extend-filesystems[1779]: Found sda6 Sep 12 18:15:39.523899 extend-filesystems[1779]: Found sda7 Sep 12 18:15:39.523899 extend-filesystems[1779]: Found sda9 Sep 12 18:15:39.523899 extend-filesystems[1779]: Checking size of /dev/sda9 Sep 12 18:15:39.523899 extend-filesystems[1779]: Resized partition /dev/sda9 Sep 12 18:15:39.500307 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 18:15:39.694755 extend-filesystems[1790]: resize2fs 1.47.1 (20-May-2024) Sep 12 18:15:39.524546 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 18:15:39.700773 dbus-daemon[1774]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 18:15:39.538398 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 18:15:39.568236 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 18:15:39.583777 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Sep 12 18:15:39.590023 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 18:15:39.713181 update_engine[1804]: I20250912 18:15:39.625591 1804 main.cc:92] Flatcar Update Engine starting Sep 12 18:15:39.713181 update_engine[1804]: I20250912 18:15:39.626215 1804 update_check_scheduler.cc:74] Next update check in 4m45s Sep 12 18:15:39.590350 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 18:15:39.713358 jq[1805]: true Sep 12 18:15:39.604300 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 18:15:39.615740 systemd-logind[1799]: Watching system buttons on /dev/input/event3 (Power Button) Sep 12 18:15:39.615751 systemd-logind[1799]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 12 18:15:39.615761 systemd-logind[1799]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Sep 12 18:15:39.615924 systemd-logind[1799]: New seat seat0. Sep 12 18:15:39.632179 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 18:15:39.656324 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 18:15:39.678799 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 18:15:39.678948 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 18:15:39.679120 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 18:15:39.679236 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 18:15:39.680205 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 12 18:15:39.680318 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 18:15:39.703338 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Sep 12 18:15:39.703467 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Sep 12 18:15:39.719529 systemd[1]: Started update-engine.service - Update Engine. Sep 12 18:15:39.719609 tar[1807]: linux-amd64/helm Sep 12 18:15:39.721086 (ntainerd)[1809]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 18:15:39.723587 jq[1808]: true Sep 12 18:15:39.732348 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 18:15:39.732453 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 18:15:39.733166 sshd_keygen[1802]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 18:15:39.743759 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 18:15:39.743847 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 18:15:39.755583 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 18:15:39.767230 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 18:15:39.779029 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 18:15:39.791772 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 18:15:39.791898 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 18:15:39.798941 locksmithd[1845]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 18:15:39.803694 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 18:15:39.819516 bash[1844]: Updated "/home/core/.ssh/authorized_keys" Sep 12 18:15:39.820302 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 18:15:39.832012 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 18:15:39.843938 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 18:15:39.852509 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Sep 12 18:15:39.861800 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 18:15:39.870637 systemd[1]: Starting sshkeys.service... Sep 12 18:15:39.896333 containerd[1809]: time="2025-09-12T18:15:39.896295738Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 12 18:15:39.903923 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 18:15:39.909418 containerd[1809]: time="2025-09-12T18:15:39.909371519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 18:15:39.910311 containerd[1809]: time="2025-09-12T18:15:39.910266602Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 18:15:39.910311 containerd[1809]: time="2025-09-12T18:15:39.910283353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 18:15:39.910311 containerd[1809]: time="2025-09-12T18:15:39.910293859Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 18:15:39.910388 containerd[1809]: time="2025-09-12T18:15:39.910380623Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 18:15:39.910406 containerd[1809]: time="2025-09-12T18:15:39.910391159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 18:15:39.910433 containerd[1809]: time="2025-09-12T18:15:39.910425192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 18:15:39.910448 containerd[1809]: time="2025-09-12T18:15:39.910433456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 18:15:39.910558 containerd[1809]: time="2025-09-12T18:15:39.910549457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 18:15:39.910580 containerd[1809]: time="2025-09-12T18:15:39.910558510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 18:15:39.910580 containerd[1809]: time="2025-09-12T18:15:39.910566011Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 18:15:39.910580 containerd[1809]: time="2025-09-12T18:15:39.910571409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 18:15:39.910629 containerd[1809]: time="2025-09-12T18:15:39.910612880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 18:15:39.910739 containerd[1809]: time="2025-09-12T18:15:39.910731529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 18:15:39.910808 containerd[1809]: time="2025-09-12T18:15:39.910800450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 18:15:39.910824 containerd[1809]: time="2025-09-12T18:15:39.910808815Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 18:15:39.910910 containerd[1809]: time="2025-09-12T18:15:39.910903642Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 12 18:15:39.910938 containerd[1809]: time="2025-09-12T18:15:39.910932274Z" level=info msg="metadata content store policy set" policy=shared Sep 12 18:15:39.921218 containerd[1809]: time="2025-09-12T18:15:39.921205347Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 18:15:39.921259 containerd[1809]: time="2025-09-12T18:15:39.921230342Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 18:15:39.921259 containerd[1809]: time="2025-09-12T18:15:39.921242049Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 18:15:39.921259 containerd[1809]: time="2025-09-12T18:15:39.921251939Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 18:15:39.921302 containerd[1809]: time="2025-09-12T18:15:39.921260792Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 18:15:39.921378 containerd[1809]: time="2025-09-12T18:15:39.921330857Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 18:15:39.921464 containerd[1809]: time="2025-09-12T18:15:39.921457597Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 18:15:39.921522 containerd[1809]: time="2025-09-12T18:15:39.921514742Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 18:15:39.921539 containerd[1809]: time="2025-09-12T18:15:39.921524884Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 18:15:39.921539 containerd[1809]: time="2025-09-12T18:15:39.921533137Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 18:15:39.921565 containerd[1809]: time="2025-09-12T18:15:39.921540876Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 18:15:39.921565 containerd[1809]: time="2025-09-12T18:15:39.921547824Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 18:15:39.921565 containerd[1809]: time="2025-09-12T18:15:39.921554438Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 18:15:39.921565 containerd[1809]: time="2025-09-12T18:15:39.921561884Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 18:15:39.921626 containerd[1809]: time="2025-09-12T18:15:39.921569838Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 18:15:39.921626 containerd[1809]: time="2025-09-12T18:15:39.921576733Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 18:15:39.921626 containerd[1809]: time="2025-09-12T18:15:39.921585908Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 18:15:39.921626 containerd[1809]: time="2025-09-12T18:15:39.921592455Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 12 18:15:39.921626 containerd[1809]: time="2025-09-12T18:15:39.921603843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.921626 containerd[1809]: time="2025-09-12T18:15:39.921612438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.921707 containerd[1809]: time="2025-09-12T18:15:39.921626110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.921707 containerd[1809]: time="2025-09-12T18:15:39.921634805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.921707 containerd[1809]: time="2025-09-12T18:15:39.921641438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.921707 containerd[1809]: time="2025-09-12T18:15:39.921648328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.921707 containerd[1809]: time="2025-09-12T18:15:39.921654727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.921707 containerd[1809]: time="2025-09-12T18:15:39.921661769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.921707 containerd[1809]: time="2025-09-12T18:15:39.921668791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.921707 containerd[1809]: time="2025-09-12T18:15:39.921677112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.921707 containerd[1809]: time="2025-09-12T18:15:39.921683498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.921707 containerd[1809]: time="2025-09-12T18:15:39.921690221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.921707 containerd[1809]: time="2025-09-12T18:15:39.921697042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.921863 containerd[1809]: time="2025-09-12T18:15:39.921754228Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 18:15:39.921863 containerd[1809]: time="2025-09-12T18:15:39.921789422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.921863 containerd[1809]: time="2025-09-12T18:15:39.921802564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.921863 containerd[1809]: time="2025-09-12T18:15:39.921858919Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 18:15:39.921951 containerd[1809]: time="2025-09-12T18:15:39.921891202Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 18:15:39.921951 containerd[1809]: time="2025-09-12T18:15:39.921916557Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 18:15:39.921951 containerd[1809]: time="2025-09-12T18:15:39.921928871Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 18:15:39.921951 containerd[1809]: time="2025-09-12T18:15:39.921940148Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 18:15:39.921951 containerd[1809]: time="2025-09-12T18:15:39.921949397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.922056 containerd[1809]: time="2025-09-12T18:15:39.921960465Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 18:15:39.922056 containerd[1809]: time="2025-09-12T18:15:39.921969462Z" level=info msg="NRI interface is disabled by configuration." Sep 12 18:15:39.922056 containerd[1809]: time="2025-09-12T18:15:39.922011466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 18:15:39.922322 containerd[1809]: time="2025-09-12T18:15:39.922293388Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 18:15:39.922397 containerd[1809]: time="2025-09-12T18:15:39.922328001Z" level=info msg="Connect containerd service" Sep 12 18:15:39.922397 containerd[1809]: time="2025-09-12T18:15:39.922347443Z" level=info msg="using legacy CRI server" Sep 12 18:15:39.922397 containerd[1809]: time="2025-09-12T18:15:39.922352734Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 18:15:39.922438 containerd[1809]: time="2025-09-12T18:15:39.922411185Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 18:15:39.922622 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Sep 12 18:15:39.922765 containerd[1809]: time="2025-09-12T18:15:39.922728854Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 18:15:39.922842 containerd[1809]: time="2025-09-12T18:15:39.922822513Z" level=info msg="Start subscribing containerd event" Sep 12 18:15:39.922874 containerd[1809]: time="2025-09-12T18:15:39.922853745Z" level=info msg="Start recovering state" Sep 12 18:15:39.922900 containerd[1809]: time="2025-09-12T18:15:39.922877809Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 18:15:39.922926 containerd[1809]: time="2025-09-12T18:15:39.922902855Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 18:15:39.922926 containerd[1809]: time="2025-09-12T18:15:39.922909977Z" level=info msg="Start event monitor" Sep 12 18:15:39.922926 containerd[1809]: time="2025-09-12T18:15:39.922920788Z" level=info msg="Start snapshots syncer" Sep 12 18:15:39.923001 containerd[1809]: time="2025-09-12T18:15:39.922928995Z" level=info msg="Start cni network conf syncer for default" Sep 12 18:15:39.923001 containerd[1809]: time="2025-09-12T18:15:39.922936451Z" level=info msg="Start streaming server" Sep 12 18:15:39.923001 containerd[1809]: time="2025-09-12T18:15:39.922977148Z" level=info msg="containerd successfully booted in 0.027117s" Sep 12 18:15:39.924972 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 18:15:39.935626 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Sep 12 18:15:39.936020 coreos-metadata[1879]: Sep 12 18:15:39.936 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 12 18:15:39.936245 systemd-networkd[1727]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d7:69:1b.network. Sep 12 18:15:39.936722 coreos-metadata[1879]: Sep 12 18:15:39.936 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Sep 12 18:15:39.943999 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 18:15:39.993137 tar[1807]: linux-amd64/LICENSE Sep 12 18:15:39.993137 tar[1807]: linux-amd64/README.md Sep 12 18:15:40.005719 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Sep 12 18:15:40.041623 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Sep 12 18:15:40.070674 extend-filesystems[1790]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 12 18:15:40.070674 extend-filesystems[1790]: old_desc_blocks = 1, new_desc_blocks = 56 Sep 12 18:15:40.070674 extend-filesystems[1790]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Sep 12 18:15:40.117794 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Sep 12 18:15:40.118112 extend-filesystems[1779]: Resized filesystem in /dev/sda9 Sep 12 18:15:40.118112 extend-filesystems[1779]: Found sdb Sep 12 18:15:40.144988 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Sep 12 18:15:40.145006 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Sep 12 18:15:40.071083 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 18:15:40.071194 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 18:15:40.128343 systemd-networkd[1727]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Sep 12 18:15:40.128674 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 18:15:40.129540 systemd-networkd[1727]: enp2s0f0np0: Link UP Sep 12 18:15:40.129773 systemd-networkd[1727]: enp2s0f0np0: Gained carrier Sep 12 18:15:40.156378 systemd-networkd[1727]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d7:69:1a.network. Sep 12 18:15:40.156625 systemd-networkd[1727]: enp2s0f1np1: Link UP Sep 12 18:15:40.156824 systemd-networkd[1727]: enp2s0f1np1: Gained carrier Sep 12 18:15:40.179012 systemd-networkd[1727]: bond0: Link UP Sep 12 18:15:40.179388 systemd-networkd[1727]: bond0: Gained carrier Sep 12 18:15:40.179742 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. Sep 12 18:15:40.180614 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. Sep 12 18:15:40.181091 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. Sep 12 18:15:40.181352 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. Sep 12 18:15:40.244547 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Sep 12 18:15:40.244567 kernel: bond0: active interface up! Sep 12 18:15:40.360620 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Sep 12 18:15:40.476036 coreos-metadata[1773]: Sep 12 18:15:40.476 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Sep 12 18:15:40.936849 coreos-metadata[1879]: Sep 12 18:15:40.936 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Sep 12 18:15:41.934932 systemd-networkd[1727]: bond0: Gained IPv6LL Sep 12 18:15:41.935319 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. Sep 12 18:15:41.998821 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. Sep 12 18:15:41.998955 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. Sep 12 18:15:41.999822 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 18:15:42.011315 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 18:15:42.036883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
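The bond bring-up recorded above (bond0 assembled from enp2s0f0np0 and enp2s0f1np1, slaves matched by MAC-named .network files, followed by an 802.3ad negotiation warning) follows the usual systemd-networkd pattern: a .netdev file defines the bond, one .network file per slave enslaves it, and a final .network file configures bond0 itself. A sketch under stated assumptions is below; the file names 05-bond0.network and 10-04:3f:72:d7:69:1a.network and the MAC address come from the log, while the .netdev path, the bond mode (inferred from the 802.3ad warning), the monitoring interval, and the DHCP addressing are assumptions.

    # /etc/systemd/network/25-bond0.netdev  (path and options are assumptions)
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad            # inferred from the "No 802.3ad response" warning
    MIIMonitorSec=100ms

    # /etc/systemd/network/10-04:3f:72:d7:69:1a.network  (slave file named in the log)
    [Match]
    MACAddress=04:3f:72:d7:69:1a

    [Network]
    Bond=bond0

    # /etc/systemd/network/05-bond0.network  (file named in the log; addressing is an assumption)
    [Match]
    Name=bond0

    [Network]
    DHCP=yes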
Sep 12 18:15:42.047405 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 18:15:42.066400 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 18:15:42.799387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:15:42.811162 (kubelet)[1911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 18:15:43.283820 kubelet[1911]: E0912 18:15:43.283675 1911 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 18:15:43.284961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 18:15:43.285041 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 18:15:43.285220 systemd[1]: kubelet.service: Consumed 600ms CPU time, 271.9M memory peak. Sep 12 18:15:43.786504 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 18:15:43.809711 kernel: mlx5_core 0000:02:00.0: lag map: port 1:1 port 2:2 Sep 12 18:15:43.809851 kernel: mlx5_core 0000:02:00.0: shared_fdb:0 mode:queue_affinity Sep 12 18:15:43.820991 systemd[1]: Started sshd@0-139.178.90.133:22-139.178.68.195:43890.service - OpenSSH per-connection server daemon (139.178.68.195:43890). Sep 12 18:15:43.887436 sshd[1928]: Accepted publickey for core from 139.178.68.195 port 43890 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:15:43.888201 sshd-session[1928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:15:43.895577 systemd-logind[1799]: New session 1 of user core. Sep 12 18:15:43.896621 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 18:15:43.917675 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 18:15:43.930861 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 18:15:43.943194 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 18:15:43.954392 (systemd)[1934]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 18:15:43.956283 systemd-logind[1799]: New session c1 of user core. Sep 12 18:15:44.059473 systemd[1934]: Queued start job for default target default.target. Sep 12 18:15:44.067190 systemd[1934]: Created slice app.slice - User Application Slice. Sep 12 18:15:44.067223 systemd[1934]: Reached target paths.target - Paths. Sep 12 18:15:44.067245 systemd[1934]: Reached target timers.target - Timers. Sep 12 18:15:44.067917 systemd[1934]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 18:15:44.073486 systemd[1934]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 18:15:44.073516 systemd[1934]: Reached target sockets.target - Sockets. Sep 12 18:15:44.073540 systemd[1934]: Reached target basic.target - Basic System. Sep 12 18:15:44.073563 systemd[1934]: Reached target default.target - Main User Target. Sep 12 18:15:44.073578 systemd[1934]: Startup finished in 113ms. Sep 12 18:15:44.073600 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 18:15:44.083624 systemd[1]: Started session-1.scope - Session 1 of User core. 
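The kubelet exit above (and its later restarts) comes from the missing /var/lib/kubelet/config.yaml, which is typically only written once the node is bootstrapped. A small sketch that pulls these failures out of a saved journal excerpt like this one; the boot.log filename is a stand-in, standard library only:

    import re

    # Matches the kubelet config-load failure lines seen above.
    PATTERN = re.compile(
        r'kubelet\[(\d+)\].*?failed to load Kubelet config file (\S+?),'
    )

    def kubelet_config_failures(journal_text: str):
        """Yield (pid, config_path) for each kubelet config-load failure."""
        for pid, path in PATTERN.findall(journal_text):
            yield int(pid), path

    if __name__ == "__main__":
        with open("boot.log") as fh:  # hypothetical saved copy of this journal
            for pid, path in kubelet_config_failures(fh.read()):
                print(f"kubelet pid {pid} could not read {path}")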
Sep 12 18:15:44.152401 systemd[1]: Started sshd@1-139.178.90.133:22-139.178.68.195:43898.service - OpenSSH per-connection server daemon (139.178.68.195:43898). Sep 12 18:15:44.190556 sshd[1945]: Accepted publickey for core from 139.178.68.195 port 43898 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:15:44.191139 sshd-session[1945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:15:44.193814 systemd-logind[1799]: New session 2 of user core. Sep 12 18:15:44.204771 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 18:15:44.260132 sshd[1947]: Connection closed by 139.178.68.195 port 43898 Sep 12 18:15:44.260242 sshd-session[1945]: pam_unix(sshd:session): session closed for user core Sep 12 18:15:44.276799 systemd[1]: sshd@1-139.178.90.133:22-139.178.68.195:43898.service: Deactivated successfully. Sep 12 18:15:44.277675 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 18:15:44.278379 systemd-logind[1799]: Session 2 logged out. Waiting for processes to exit. Sep 12 18:15:44.279072 systemd[1]: Started sshd@2-139.178.90.133:22-139.178.68.195:43902.service - OpenSSH per-connection server daemon (139.178.68.195:43902). Sep 12 18:15:44.290470 systemd-logind[1799]: Removed session 2. Sep 12 18:15:44.316715 sshd[1952]: Accepted publickey for core from 139.178.68.195 port 43902 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:15:44.317334 sshd-session[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:15:44.320079 systemd-logind[1799]: New session 3 of user core. Sep 12 18:15:44.335807 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 18:15:44.394373 sshd[1955]: Connection closed by 139.178.68.195 port 43902 Sep 12 18:15:44.394487 sshd-session[1952]: pam_unix(sshd:session): session closed for user core Sep 12 18:15:44.395800 systemd[1]: sshd@2-139.178.90.133:22-139.178.68.195:43902.service: Deactivated successfully. Sep 12 18:15:44.396657 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 18:15:44.397392 systemd-logind[1799]: Session 3 logged out. Waiting for processes to exit. Sep 12 18:15:44.398026 systemd-logind[1799]: Removed session 3. Sep 12 18:15:44.560932 coreos-metadata[1773]: Sep 12 18:15:44.560 INFO Fetch successful Sep 12 18:15:44.592705 coreos-metadata[1879]: Sep 12 18:15:44.592 INFO Fetch successful Sep 12 18:15:44.607816 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 18:15:44.625900 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Sep 12 18:15:44.630169 unknown[1879]: wrote ssh authorized keys file for user: core Sep 12 18:15:44.649631 update-ssh-keys[1966]: Updated "/home/core/.ssh/authorized_keys" Sep 12 18:15:44.650025 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 18:15:44.661354 systemd[1]: Finished sshkeys.service. Sep 12 18:15:44.950664 login[1866]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 18:15:44.951554 login[1867]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 18:15:44.954189 systemd-logind[1799]: New session 4 of user core. Sep 12 18:15:44.970881 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 18:15:44.973723 systemd-logind[1799]: New session 5 of user core. Sep 12 18:15:44.974897 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 18:15:45.179341 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Sep 12 18:15:45.181859 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 18:15:45.182468 systemd[1]: Startup finished in 1.831s (kernel) + 25.677s (initrd) + 9.736s (userspace) = 37.245s. Sep 12 18:15:46.967345 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. Sep 12 18:15:53.420280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 18:15:53.440081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:15:53.703469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:15:53.705527 (kubelet)[2007]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 18:15:53.727937 kubelet[2007]: E0912 18:15:53.727882 2007 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 18:15:53.730054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 18:15:53.730147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 18:15:53.730338 systemd[1]: kubelet.service: Consumed 158ms CPU time, 116.6M memory peak. Sep 12 18:15:54.427143 systemd[1]: Started sshd@3-139.178.90.133:22-139.178.68.195:49044.service - OpenSSH per-connection server daemon (139.178.68.195:49044). Sep 12 18:15:54.455708 sshd[2027]: Accepted publickey for core from 139.178.68.195 port 49044 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:15:54.456301 sshd-session[2027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:15:54.458979 systemd-logind[1799]: New session 6 of user core. Sep 12 18:15:54.474116 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 18:15:54.535389 sshd[2029]: Connection closed by 139.178.68.195 port 49044 Sep 12 18:15:54.535614 sshd-session[2027]: pam_unix(sshd:session): session closed for user core Sep 12 18:15:54.544886 systemd[1]: sshd@3-139.178.90.133:22-139.178.68.195:49044.service: Deactivated successfully. Sep 12 18:15:54.545673 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 18:15:54.546228 systemd-logind[1799]: Session 6 logged out. Waiting for processes to exit. Sep 12 18:15:54.547336 systemd[1]: Started sshd@4-139.178.90.133:22-139.178.68.195:49048.service - OpenSSH per-connection server daemon (139.178.68.195:49048). Sep 12 18:15:54.547834 systemd-logind[1799]: Removed session 6. Sep 12 18:15:54.593900 sshd[2034]: Accepted publickey for core from 139.178.68.195 port 49048 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:15:54.594755 sshd-session[2034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:15:54.598322 systemd-logind[1799]: New session 7 of user core. Sep 12 18:15:54.613286 systemd[1]: Started session-7.scope - Session 7 of User core. 
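The startup summary above breaks the 37.245s boot into kernel, initrd and userspace phases; a short sketch parsing that line (the components are rounded to milliseconds, so their sum can differ from the printed total by a millisecond):

    import re

    LINE = ("Startup finished in 1.831s (kernel) + 25.677s (initrd) "
            "+ 9.736s (userspace) = 37.245s")

    parts = {name: float(sec)
             for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", LINE)}
    total = float(re.search(r"= ([\d.]+)s", LINE).group(1))

    print(parts)                                 # kernel / initrd / userspace
    print(round(sum(parts.values()), 3), total)  # 37.244 vs 37.245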
Sep 12 18:15:54.673776 sshd[2038]: Connection closed by 139.178.68.195 port 49048 Sep 12 18:15:54.674526 sshd-session[2034]: pam_unix(sshd:session): session closed for user core Sep 12 18:15:54.695777 systemd[1]: sshd@4-139.178.90.133:22-139.178.68.195:49048.service: Deactivated successfully. Sep 12 18:15:54.699725 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 18:15:54.701937 systemd-logind[1799]: Session 7 logged out. Waiting for processes to exit. Sep 12 18:15:54.717398 systemd[1]: Started sshd@5-139.178.90.133:22-139.178.68.195:49064.service - OpenSSH per-connection server daemon (139.178.68.195:49064). Sep 12 18:15:54.720514 systemd-logind[1799]: Removed session 7. Sep 12 18:15:54.776427 sshd[2043]: Accepted publickey for core from 139.178.68.195 port 49064 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:15:54.777073 sshd-session[2043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:15:54.779927 systemd-logind[1799]: New session 8 of user core. Sep 12 18:15:54.797184 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 18:15:54.867747 sshd[2047]: Connection closed by 139.178.68.195 port 49064 Sep 12 18:15:54.868751 sshd-session[2043]: pam_unix(sshd:session): session closed for user core Sep 12 18:15:54.885538 systemd[1]: sshd@5-139.178.90.133:22-139.178.68.195:49064.service: Deactivated successfully. Sep 12 18:15:54.889532 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 18:15:54.890853 systemd-logind[1799]: Session 8 logged out. Waiting for processes to exit. Sep 12 18:15:54.901080 systemd[1]: Started sshd@6-139.178.90.133:22-139.178.68.195:49072.service - OpenSSH per-connection server daemon (139.178.68.195:49072). Sep 12 18:15:54.902186 systemd-logind[1799]: Removed session 8. Sep 12 18:15:54.932934 sshd[2052]: Accepted publickey for core from 139.178.68.195 port 49072 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:15:54.933520 sshd-session[2052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:15:54.936394 systemd-logind[1799]: New session 9 of user core. Sep 12 18:15:54.945102 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 18:15:55.017831 sudo[2056]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 18:15:55.017983 sudo[2056]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 18:15:55.032500 sudo[2056]: pam_unix(sudo:session): session closed for user root Sep 12 18:15:55.033373 sshd[2055]: Connection closed by 139.178.68.195 port 49072 Sep 12 18:15:55.033558 sshd-session[2052]: pam_unix(sshd:session): session closed for user core Sep 12 18:15:55.052454 systemd[1]: sshd@6-139.178.90.133:22-139.178.68.195:49072.service: Deactivated successfully. Sep 12 18:15:55.053869 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 18:15:55.054638 systemd-logind[1799]: Session 9 logged out. Waiting for processes to exit. Sep 12 18:15:55.056685 systemd[1]: Started sshd@7-139.178.90.133:22-139.178.68.195:49080.service - OpenSSH per-connection server daemon (139.178.68.195:49080). Sep 12 18:15:55.057659 systemd-logind[1799]: Removed session 9. 
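Several short SSH sessions are opened and closed above in quick succession; pairing the sshd "Accepted publickey" and "Connection closed" lines by client port gives their lifetimes. A rough sketch (the year 2025 is taken from the timestamps above, since journal lines carry no year):

    import re
    from datetime import datetime

    TS = r"(\w{3} +\d+ [\d:.]+)"
    ACCEPT = re.compile(TS + r" sshd\[\d+\]: Accepted publickey .*? port (\d+)")
    CLOSE = re.compile(TS + r" sshd\[\d+\]: Connection closed .*? port (\d+)")

    def parse_ts(stamp: str) -> datetime:
        return datetime.strptime(f"2025 {stamp}", "%Y %b %d %H:%M:%S.%f")

    def session_lengths(journal_text: str):
        """Yield (client_port, seconds) for each open/close pair."""
        opened = {port: parse_ts(ts) for ts, port in ACCEPT.findall(journal_text)}
        for ts, port in CLOSE.findall(journal_text):
            if port in opened:
                yield port, (parse_ts(ts) - opened[port]).total_seconds()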
Sep 12 18:15:55.131289 sshd[2061]: Accepted publickey for core from 139.178.68.195 port 49080 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:15:55.132713 sshd-session[2061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:15:55.137590 systemd-logind[1799]: New session 10 of user core. Sep 12 18:15:55.154986 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 18:15:55.210300 sudo[2066]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 18:15:55.210446 sudo[2066]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 18:15:55.212511 sudo[2066]: pam_unix(sudo:session): session closed for user root Sep 12 18:15:55.215179 sudo[2065]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 18:15:55.215325 sudo[2065]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 18:15:55.233977 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 18:15:55.256588 augenrules[2088]: No rules Sep 12 18:15:55.256925 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 18:15:55.257045 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 18:15:55.257522 sudo[2065]: pam_unix(sudo:session): session closed for user root Sep 12 18:15:55.258286 sshd[2064]: Connection closed by 139.178.68.195 port 49080 Sep 12 18:15:55.258438 sshd-session[2061]: pam_unix(sshd:session): session closed for user core Sep 12 18:15:55.284677 systemd[1]: sshd@7-139.178.90.133:22-139.178.68.195:49080.service: Deactivated successfully. Sep 12 18:15:55.285798 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 18:15:55.286511 systemd-logind[1799]: Session 10 logged out. Waiting for processes to exit. Sep 12 18:15:55.288185 systemd[1]: Started sshd@8-139.178.90.133:22-139.178.68.195:49096.service - OpenSSH per-connection server daemon (139.178.68.195:49096). Sep 12 18:15:55.289041 systemd-logind[1799]: Removed session 10. Sep 12 18:15:55.333078 sshd[2096]: Accepted publickey for core from 139.178.68.195 port 49096 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:15:55.333601 sshd-session[2096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:15:55.336462 systemd-logind[1799]: New session 11 of user core. Sep 12 18:15:55.357937 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 18:15:55.416149 sudo[2100]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 18:15:55.417071 sudo[2100]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 18:15:55.779937 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 18:15:55.779992 (dockerd)[2128]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 18:15:56.039444 dockerd[2128]: time="2025-09-12T18:15:56.039354609Z" level=info msg="Starting up" Sep 12 18:15:56.108228 dockerd[2128]: time="2025-09-12T18:15:56.108184607Z" level=info msg="Loading containers: start." Sep 12 18:15:56.241628 kernel: Initializing XFRM netlink socket Sep 12 18:15:56.257993 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. 
Sep 12 18:15:56.301302 systemd-networkd[1727]: docker0: Link UP Sep 12 18:15:56.333837 dockerd[2128]: time="2025-09-12T18:15:56.333789235Z" level=info msg="Loading containers: done." Sep 12 18:15:56.345199 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3362901539-merged.mount: Deactivated successfully. Sep 12 18:15:56.346113 dockerd[2128]: time="2025-09-12T18:15:56.346069411Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 18:15:56.346156 dockerd[2128]: time="2025-09-12T18:15:56.346118687Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 12 18:15:56.346181 dockerd[2128]: time="2025-09-12T18:15:56.346173440Z" level=info msg="Daemon has completed initialization" Sep 12 18:15:56.360288 dockerd[2128]: time="2025-09-12T18:15:56.360212881Z" level=info msg="API listen on /run/docker.sock" Sep 12 18:15:56.360318 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 18:15:56.680077 systemd-timesyncd[1729]: Contacted time server [2606:a300:1004:7::2]:123 (2.flatcar.pool.ntp.org). Sep 12 18:15:56.680121 systemd-timesyncd[1729]: Initial clock synchronization to Fri 2025-09-12 18:15:56.844377 UTC. Sep 12 18:15:57.207751 containerd[1809]: time="2025-09-12T18:15:57.207666459Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 18:15:57.895429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1200009820.mount: Deactivated successfully. Sep 12 18:15:58.646356 containerd[1809]: time="2025-09-12T18:15:58.646300846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:15:58.646578 containerd[1809]: time="2025-09-12T18:15:58.646485704Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 18:15:58.646997 containerd[1809]: time="2025-09-12T18:15:58.646957732Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:15:58.648607 containerd[1809]: time="2025-09-12T18:15:58.648565287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:15:58.649209 containerd[1809]: time="2025-09-12T18:15:58.649167865Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.441427166s" Sep 12 18:15:58.649209 containerd[1809]: time="2025-09-12T18:15:58.649184274Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 18:15:58.649502 containerd[1809]: time="2025-09-12T18:15:58.649491619Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 18:15:59.773056 containerd[1809]: 
time="2025-09-12T18:15:59.772998530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:15:59.773272 containerd[1809]: time="2025-09-12T18:15:59.773183085Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 12 18:15:59.773692 containerd[1809]: time="2025-09-12T18:15:59.773646383Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:15:59.775251 containerd[1809]: time="2025-09-12T18:15:59.775207481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:15:59.775918 containerd[1809]: time="2025-09-12T18:15:59.775873533Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.126365467s" Sep 12 18:15:59.775918 containerd[1809]: time="2025-09-12T18:15:59.775890218Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 18:15:59.776148 containerd[1809]: time="2025-09-12T18:15:59.776135239Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 18:16:00.763455 containerd[1809]: time="2025-09-12T18:16:00.763401310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:00.763538 containerd[1809]: time="2025-09-12T18:16:00.763522686Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 18:16:00.764083 containerd[1809]: time="2025-09-12T18:16:00.764046155Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:00.765736 containerd[1809]: time="2025-09-12T18:16:00.765696064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:00.766361 containerd[1809]: time="2025-09-12T18:16:00.766318609Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 990.168037ms" Sep 12 18:16:00.766361 containerd[1809]: time="2025-09-12T18:16:00.766334051Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 18:16:00.766719 containerd[1809]: 
time="2025-09-12T18:16:00.766660831Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 18:16:01.642331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4275503741.mount: Deactivated successfully. Sep 12 18:16:01.833958 containerd[1809]: time="2025-09-12T18:16:01.833933269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:01.834208 containerd[1809]: time="2025-09-12T18:16:01.834155761Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 12 18:16:01.834570 containerd[1809]: time="2025-09-12T18:16:01.834557738Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:01.835511 containerd[1809]: time="2025-09-12T18:16:01.835498081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:01.835931 containerd[1809]: time="2025-09-12T18:16:01.835916571Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.069222469s" Sep 12 18:16:01.835973 containerd[1809]: time="2025-09-12T18:16:01.835934092Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 18:16:01.836211 containerd[1809]: time="2025-09-12T18:16:01.836199153Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 18:16:02.437540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount963534166.mount: Deactivated successfully. 
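The pull messages above report both an image size and a wall-clock duration, so an effective pull rate falls out directly. Sizes are as reported by containerd and the durations include unpacking, so these are roughly lower bounds on network throughput:

    # Numbers copied from the "Pulled image ... in ..." lines above.
    pulls = {
        "kube-apiserver:v1.31.13":          (28_113_723, 1.441427166),
        "kube-controller-manager:v1.31.13": (26_351_311, 1.126365467),
        "kube-scheduler:v1.31.13":          (20_422_395, 0.990168037),
        "kube-proxy:v1.31.13":              (30_409_271, 1.069222469),
    }

    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / 10**6:.1f} MB/s")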
Sep 12 18:16:02.971828 containerd[1809]: time="2025-09-12T18:16:02.971774829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:02.972074 containerd[1809]: time="2025-09-12T18:16:02.971955173Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 18:16:02.972510 containerd[1809]: time="2025-09-12T18:16:02.972469868Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:02.974139 containerd[1809]: time="2025-09-12T18:16:02.974099326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:02.974801 containerd[1809]: time="2025-09-12T18:16:02.974758841Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.138542666s" Sep 12 18:16:02.974801 containerd[1809]: time="2025-09-12T18:16:02.974775659Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 18:16:02.975101 containerd[1809]: time="2025-09-12T18:16:02.975044179Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 18:16:03.531854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3671269930.mount: Deactivated successfully. 
Sep 12 18:16:03.532860 containerd[1809]: time="2025-09-12T18:16:03.532843025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:03.533089 containerd[1809]: time="2025-09-12T18:16:03.533068224Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 18:16:03.533489 containerd[1809]: time="2025-09-12T18:16:03.533476411Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:03.534743 containerd[1809]: time="2025-09-12T18:16:03.534731099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:03.535178 containerd[1809]: time="2025-09-12T18:16:03.535152769Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 560.076768ms" Sep 12 18:16:03.535222 containerd[1809]: time="2025-09-12T18:16:03.535179373Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 18:16:03.535530 containerd[1809]: time="2025-09-12T18:16:03.535518926Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 18:16:03.918720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 18:16:03.927896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:16:04.171703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:16:04.174314 (kubelet)[2481]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 18:16:04.202090 kubelet[2481]: E0912 18:16:04.202039 2481 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 18:16:04.203261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 18:16:04.203344 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 18:16:04.203538 systemd[1]: kubelet.service: Consumed 103ms CPU time, 122M memory peak. Sep 12 18:16:04.221549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052921163.mount: Deactivated successfully. 
Sep 12 18:16:05.277333 containerd[1809]: time="2025-09-12T18:16:05.277278962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:05.277548 containerd[1809]: time="2025-09-12T18:16:05.277457459Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 12 18:16:05.277962 containerd[1809]: time="2025-09-12T18:16:05.277922211Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:05.279677 containerd[1809]: time="2025-09-12T18:16:05.279650887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:05.280929 containerd[1809]: time="2025-09-12T18:16:05.280887087Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.745353973s" Sep 12 18:16:05.280929 containerd[1809]: time="2025-09-12T18:16:05.280902523Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 18:16:07.203404 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:16:07.203517 systemd[1]: kubelet.service: Consumed 103ms CPU time, 122M memory peak. Sep 12 18:16:07.218063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:16:07.232098 systemd[1]: Reload requested from client PID 2604 ('systemctl') (unit session-11.scope)... Sep 12 18:16:07.232106 systemd[1]: Reloading... Sep 12 18:16:07.288635 zram_generator::config[2650]: No configuration found. Sep 12 18:16:07.359791 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 18:16:07.443726 systemd[1]: Reloading finished in 211 ms. Sep 12 18:16:07.483449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:16:07.485071 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:16:07.485582 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 18:16:07.485701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:16:07.485722 systemd[1]: kubelet.service: Consumed 57ms CPU time, 98.2M memory peak. Sep 12 18:16:07.486542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:16:07.723590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:16:07.725883 (kubelet)[2720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 18:16:07.749397 kubelet[2720]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 18:16:07.749397 kubelet[2720]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 18:16:07.749397 kubelet[2720]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 18:16:07.749643 kubelet[2720]: I0912 18:16:07.749399 2720 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 18:16:07.958248 kubelet[2720]: I0912 18:16:07.958207 2720 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 18:16:07.958248 kubelet[2720]: I0912 18:16:07.958218 2720 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 18:16:07.958389 kubelet[2720]: I0912 18:16:07.958351 2720 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 18:16:07.977879 kubelet[2720]: E0912 18:16:07.977838 2720 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.90.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.90.133:6443: connect: connection refused" logger="UnhandledError" Sep 12 18:16:07.978431 kubelet[2720]: I0912 18:16:07.978395 2720 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 18:16:07.983366 kubelet[2720]: E0912 18:16:07.983306 2720 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 18:16:07.983366 kubelet[2720]: I0912 18:16:07.983338 2720 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 18:16:07.993070 kubelet[2720]: I0912 18:16:07.993034 2720 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 18:16:07.993098 kubelet[2720]: I0912 18:16:07.993080 2720 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 18:16:07.993180 kubelet[2720]: I0912 18:16:07.993136 2720 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 18:16:07.993274 kubelet[2720]: I0912 18:16:07.993151 2720 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.3-a-0654ef0f4d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 18:16:07.993274 kubelet[2720]: I0912 18:16:07.993253 2720 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 18:16:07.993274 kubelet[2720]: I0912 18:16:07.993258 2720 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 18:16:07.993376 kubelet[2720]: I0912 18:16:07.993315 2720 state_mem.go:36] "Initialized new in-memory state store" Sep 12 18:16:07.995564 kubelet[2720]: I0912 18:16:07.995527 2720 kubelet.go:408] "Attempting to sync node with API server" Sep 12 18:16:07.995564 kubelet[2720]: I0912 18:16:07.995539 2720 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 18:16:07.995564 kubelet[2720]: I0912 18:16:07.995556 2720 kubelet.go:314] "Adding apiserver pod source" Sep 12 18:16:07.995564 kubelet[2720]: I0912 18:16:07.995566 2720 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 18:16:07.997693 kubelet[2720]: I0912 18:16:07.997631 2720 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 18:16:07.998001 kubelet[2720]: I0912 18:16:07.997961 2720 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 18:16:07.998028 kubelet[2720]: W0912 18:16:07.998010 2720 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
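The container-manager nodeConfig dump above is printed as JSON, so the hard eviction thresholds buried inside it can be pulled out and shown in a readable form. A sketch using a trimmed copy of that blob (two of the five thresholds shown in the dump):

    import json

    # Trimmed copy of the HardEvictionThresholds section from the dump above.
    node_config = json.loads('''{"HardEvictionThresholds":[
      {"Signal":"memory.available","Operator":"LessThan",
       "Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
      {"Signal":"nodefs.available","Operator":"LessThan",
       "Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}]}''')

    for t in node_config["HardEvictionThresholds"]:
        value = t["Value"]
        limit = value["Quantity"] or f'{value["Percentage"]:.0%}'
        print(f'{t["Signal"]} {t["Operator"]} {limit}')
    # memory.available LessThan 100Mi
    # nodefs.available LessThan 10%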
Sep 12 18:16:07.998354 kubelet[2720]: W0912 18:16:07.998305 2720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.90.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.3-a-0654ef0f4d&limit=500&resourceVersion=0": dial tcp 139.178.90.133:6443: connect: connection refused Sep 12 18:16:07.998385 kubelet[2720]: E0912 18:16:07.998357 2720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.90.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.3-a-0654ef0f4d&limit=500&resourceVersion=0\": dial tcp 139.178.90.133:6443: connect: connection refused" logger="UnhandledError" Sep 12 18:16:07.998385 kubelet[2720]: W0912 18:16:07.998362 2720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.90.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.90.133:6443: connect: connection refused Sep 12 18:16:07.998479 kubelet[2720]: E0912 18:16:07.998388 2720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.90.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.90.133:6443: connect: connection refused" logger="UnhandledError" Sep 12 18:16:07.999518 kubelet[2720]: I0912 18:16:07.999439 2720 server.go:1274] "Started kubelet" Sep 12 18:16:07.999569 kubelet[2720]: I0912 18:16:07.999552 2720 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 18:16:07.999588 kubelet[2720]: I0912 18:16:07.999559 2720 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 18:16:08.001420 kubelet[2720]: I0912 18:16:08.001391 2720 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 18:16:08.002440 kubelet[2720]: I0912 18:16:08.002430 2720 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 18:16:08.002440 kubelet[2720]: I0912 18:16:08.002434 2720 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 18:16:08.002517 kubelet[2720]: I0912 18:16:08.002476 2720 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 18:16:08.002517 kubelet[2720]: I0912 18:16:08.002490 2720 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 18:16:08.002586 kubelet[2720]: I0912 18:16:08.002535 2720 reconciler.go:26] "Reconciler: start to sync state" Sep 12 18:16:08.002624 kubelet[2720]: E0912 18:16:08.002603 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:08.002828 kubelet[2720]: I0912 18:16:08.002819 2720 factory.go:221] Registration of the systemd container factory successfully Sep 12 18:16:08.003604 kubelet[2720]: E0912 18:16:08.002602 2720 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.90.133:6443/api/v1/namespaces/default/events\": dial tcp 139.178.90.133:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.3-a-0654ef0f4d.18649bb7baf75420 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.3-a-0654ef0f4d,UID:ci-4230.2.3-a-0654ef0f4d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.3-a-0654ef0f4d,},FirstTimestamp:2025-09-12 18:16:07.999427616 +0000 UTC m=+0.271645157,LastTimestamp:2025-09-12 18:16:07.999427616 +0000 UTC m=+0.271645157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.3-a-0654ef0f4d,}" Sep 12 18:16:08.004247 kubelet[2720]: E0912 18:16:08.004234 2720 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 18:16:08.004304 kubelet[2720]: I0912 18:16:08.004277 2720 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 18:16:08.004304 kubelet[2720]: E0912 18:16:08.004280 2720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.90.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.3-a-0654ef0f4d?timeout=10s\": dial tcp 139.178.90.133:6443: connect: connection refused" interval="200ms" Sep 12 18:16:08.004304 kubelet[2720]: I0912 18:16:08.004284 2720 server.go:449] "Adding debug handlers to kubelet server" Sep 12 18:16:08.004391 kubelet[2720]: W0912 18:16:08.004340 2720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.90.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.133:6443: connect: connection refused Sep 12 18:16:08.004391 kubelet[2720]: E0912 18:16:08.004372 2720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.90.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.90.133:6443: connect: connection refused" logger="UnhandledError" Sep 12 18:16:08.004755 kubelet[2720]: I0912 18:16:08.004744 2720 factory.go:221] Registration of the containerd container factory successfully Sep 12 18:16:08.010562 kubelet[2720]: I0912 18:16:08.010552 2720 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 18:16:08.010562 kubelet[2720]: I0912 18:16:08.010560 2720 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 18:16:08.010632 kubelet[2720]: I0912 18:16:08.010569 2720 state_mem.go:36] "Initialized new in-memory state store" Sep 12 18:16:08.011453 kubelet[2720]: I0912 18:16:08.011445 2720 policy_none.go:49] "None policy: Start" Sep 12 18:16:08.011673 kubelet[2720]: I0912 18:16:08.011666 2720 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 18:16:08.011698 kubelet[2720]: I0912 18:16:08.011679 2720 state_mem.go:35] "Initializing new in-memory state store" Sep 12 18:16:08.012423 kubelet[2720]: I0912 18:16:08.012408 2720 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 18:16:08.013005 kubelet[2720]: I0912 18:16:08.012984 2720 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 18:16:08.013005 kubelet[2720]: I0912 18:16:08.012995 2720 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 18:16:08.013005 kubelet[2720]: I0912 18:16:08.013008 2720 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 18:16:08.013091 kubelet[2720]: E0912 18:16:08.013034 2720 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 18:16:08.014235 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 18:16:08.014797 kubelet[2720]: W0912 18:16:08.014761 2720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.90.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.90.133:6443: connect: connection refused Sep 12 18:16:08.015000 kubelet[2720]: E0912 18:16:08.014866 2720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.90.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.90.133:6443: connect: connection refused" logger="UnhandledError" Sep 12 18:16:08.031360 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 18:16:08.048576 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 18:16:08.049387 kubelet[2720]: I0912 18:16:08.049346 2720 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 18:16:08.049488 kubelet[2720]: I0912 18:16:08.049479 2720 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 18:16:08.049518 kubelet[2720]: I0912 18:16:08.049489 2720 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 18:16:08.049634 kubelet[2720]: I0912 18:16:08.049617 2720 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 18:16:08.050131 kubelet[2720]: E0912 18:16:08.050118 2720 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:08.135521 systemd[1]: Created slice kubepods-burstable-pod4f449b80278e268061f05f4f15ab86c6.slice - libcontainer container kubepods-burstable-pod4f449b80278e268061f05f4f15ab86c6.slice. Sep 12 18:16:08.154101 kubelet[2720]: I0912 18:16:08.154016 2720 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:08.154844 kubelet[2720]: E0912 18:16:08.154732 2720 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.90.133:6443/api/v1/nodes\": dial tcp 139.178.90.133:6443: connect: connection refused" node="ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:08.174658 systemd[1]: Created slice kubepods-burstable-podc262181ae3499fd44256f30f0eec02b9.slice - libcontainer container kubepods-burstable-podc262181ae3499fd44256f30f0eec02b9.slice. 
Sep 12 18:16:08.203833 kubelet[2720]: I0912 18:16:08.203739 2720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c262181ae3499fd44256f30f0eec02b9-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.3-a-0654ef0f4d\" (UID: \"c262181ae3499fd44256f30f0eec02b9\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:08.203833 kubelet[2720]: I0912 18:16:08.203812 2720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c262181ae3499fd44256f30f0eec02b9-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.3-a-0654ef0f4d\" (UID: \"c262181ae3499fd44256f30f0eec02b9\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:08.204108 kubelet[2720]: I0912 18:16:08.203870 2720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c262181ae3499fd44256f30f0eec02b9-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.3-a-0654ef0f4d\" (UID: \"c262181ae3499fd44256f30f0eec02b9\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:08.204108 kubelet[2720]: I0912 18:16:08.203916 2720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f449b80278e268061f05f4f15ab86c6-ca-certs\") pod \"kube-apiserver-ci-4230.2.3-a-0654ef0f4d\" (UID: \"4f449b80278e268061f05f4f15ab86c6\") " pod="kube-system/kube-apiserver-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:08.204108 kubelet[2720]: I0912 18:16:08.203963 2720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f449b80278e268061f05f4f15ab86c6-k8s-certs\") pod \"kube-apiserver-ci-4230.2.3-a-0654ef0f4d\" (UID: \"4f449b80278e268061f05f4f15ab86c6\") " pod="kube-system/kube-apiserver-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:08.204108 kubelet[2720]: I0912 18:16:08.204012 2720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f449b80278e268061f05f4f15ab86c6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.3-a-0654ef0f4d\" (UID: \"4f449b80278e268061f05f4f15ab86c6\") " pod="kube-system/kube-apiserver-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:08.204108 kubelet[2720]: I0912 18:16:08.204056 2720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c262181ae3499fd44256f30f0eec02b9-ca-certs\") pod \"kube-controller-manager-ci-4230.2.3-a-0654ef0f4d\" (UID: \"c262181ae3499fd44256f30f0eec02b9\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:08.204505 kubelet[2720]: I0912 18:16:08.204104 2720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c262181ae3499fd44256f30f0eec02b9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.3-a-0654ef0f4d\" (UID: \"c262181ae3499fd44256f30f0eec02b9\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:08.204505 kubelet[2720]: I0912 18:16:08.204149 2720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7326735d0e20a3a7096326a8dd2fd6e3-kubeconfig\") pod \"kube-scheduler-ci-4230.2.3-a-0654ef0f4d\" (UID: \"7326735d0e20a3a7096326a8dd2fd6e3\") " pod="kube-system/kube-scheduler-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:08.204598 systemd[1]: Created slice kubepods-burstable-pod7326735d0e20a3a7096326a8dd2fd6e3.slice - libcontainer container kubepods-burstable-pod7326735d0e20a3a7096326a8dd2fd6e3.slice. Sep 12 18:16:08.205268 kubelet[2720]: E0912 18:16:08.205107 2720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.90.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.3-a-0654ef0f4d?timeout=10s\": dial tcp 139.178.90.133:6443: connect: connection refused" interval="400ms" Sep 12 18:16:08.359113 kubelet[2720]: I0912 18:16:08.358909 2720 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:08.359824 kubelet[2720]: E0912 18:16:08.359705 2720 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.90.133:6443/api/v1/nodes\": dial tcp 139.178.90.133:6443: connect: connection refused" node="ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:08.468930 containerd[1809]: time="2025-09-12T18:16:08.468839821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.3-a-0654ef0f4d,Uid:4f449b80278e268061f05f4f15ab86c6,Namespace:kube-system,Attempt:0,}" Sep 12 18:16:08.497769 containerd[1809]: time="2025-09-12T18:16:08.497709642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.3-a-0654ef0f4d,Uid:c262181ae3499fd44256f30f0eec02b9,Namespace:kube-system,Attempt:0,}" Sep 12 18:16:08.509501 containerd[1809]: time="2025-09-12T18:16:08.509484477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.3-a-0654ef0f4d,Uid:7326735d0e20a3a7096326a8dd2fd6e3,Namespace:kube-system,Attempt:0,}" Sep 12 18:16:08.606481 kubelet[2720]: E0912 18:16:08.606350 2720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.90.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.3-a-0654ef0f4d?timeout=10s\": dial tcp 139.178.90.133:6443: connect: connection refused" interval="800ms" Sep 12 18:16:08.764585 kubelet[2720]: I0912 18:16:08.764525 2720 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:08.768194 kubelet[2720]: E0912 18:16:08.765236 2720 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.90.133:6443/api/v1/nodes\": dial tcp 139.178.90.133:6443: connect: connection refused" node="ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:08.988403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1556205702.mount: Deactivated successfully. 
Sep 12 18:16:08.989434 containerd[1809]: time="2025-09-12T18:16:08.989413861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 18:16:08.990117 containerd[1809]: time="2025-09-12T18:16:08.990095759Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 18:16:08.990960 containerd[1809]: time="2025-09-12T18:16:08.990944273Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 18:16:08.991532 containerd[1809]: time="2025-09-12T18:16:08.991512665Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 18:16:08.991623 containerd[1809]: time="2025-09-12T18:16:08.991610447Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 18:16:08.992114 containerd[1809]: time="2025-09-12T18:16:08.992101930Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 18:16:08.992324 containerd[1809]: time="2025-09-12T18:16:08.992306467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 18:16:08.994437 containerd[1809]: time="2025-09-12T18:16:08.994405607Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 484.880033ms" Sep 12 18:16:08.995261 containerd[1809]: time="2025-09-12T18:16:08.995230106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 18:16:08.995834 containerd[1809]: time="2025-09-12T18:16:08.995818661Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.799238ms" Sep 12 18:16:08.997257 containerd[1809]: time="2025-09-12T18:16:08.997245517Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 499.464632ms" Sep 12 18:16:09.064418 kubelet[2720]: W0912 18:16:09.064314 2720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.90.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.3-a-0654ef0f4d&limit=500&resourceVersion=0": dial tcp 139.178.90.133:6443: connect: connection refused 
Sep 12 18:16:09.064418 kubelet[2720]: E0912 18:16:09.064355 2720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.90.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.3-a-0654ef0f4d&limit=500&resourceVersion=0\": dial tcp 139.178.90.133:6443: connect: connection refused" logger="UnhandledError" Sep 12 18:16:09.078565 containerd[1809]: time="2025-09-12T18:16:09.078510662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 18:16:09.078565 containerd[1809]: time="2025-09-12T18:16:09.078549587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 18:16:09.078565 containerd[1809]: time="2025-09-12T18:16:09.078557835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:16:09.078721 containerd[1809]: time="2025-09-12T18:16:09.078604014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:16:09.078721 containerd[1809]: time="2025-09-12T18:16:09.078597454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 18:16:09.078721 containerd[1809]: time="2025-09-12T18:16:09.078625665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 18:16:09.078721 containerd[1809]: time="2025-09-12T18:16:09.078633947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:16:09.078721 containerd[1809]: time="2025-09-12T18:16:09.078670377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:16:09.079204 containerd[1809]: time="2025-09-12T18:16:09.078467942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 18:16:09.079247 containerd[1809]: time="2025-09-12T18:16:09.079199486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 18:16:09.079247 containerd[1809]: time="2025-09-12T18:16:09.079209496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:16:09.079300 containerd[1809]: time="2025-09-12T18:16:09.079245196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:16:09.098942 systemd[1]: Started cri-containerd-40c91e8316dd467e17303afb291220d65baffffba90c4ceccca520062da58ea7.scope - libcontainer container 40c91e8316dd467e17303afb291220d65baffffba90c4ceccca520062da58ea7. Sep 12 18:16:09.099821 systemd[1]: Started cri-containerd-9643a2ef5b7f8739f1878ad9afe9f52b0b73cdb8e1cba16696920adcb1ab6aa9.scope - libcontainer container 9643a2ef5b7f8739f1878ad9afe9f52b0b73cdb8e1cba16696920adcb1ab6aa9. 
Sep 12 18:16:09.100849 systemd[1]: Started cri-containerd-afae4b59ccc8a192fd88c3a0427ac7118ae1a51ff1d54f9968fca61e10bbb72a.scope - libcontainer container afae4b59ccc8a192fd88c3a0427ac7118ae1a51ff1d54f9968fca61e10bbb72a. Sep 12 18:16:09.123122 kubelet[2720]: W0912 18:16:09.123087 2720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.90.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.90.133:6443: connect: connection refused Sep 12 18:16:09.123217 kubelet[2720]: E0912 18:16:09.123130 2720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.90.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.90.133:6443: connect: connection refused" logger="UnhandledError" Sep 12 18:16:09.125154 containerd[1809]: time="2025-09-12T18:16:09.125123882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.3-a-0654ef0f4d,Uid:c262181ae3499fd44256f30f0eec02b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"40c91e8316dd467e17303afb291220d65baffffba90c4ceccca520062da58ea7\"" Sep 12 18:16:09.125455 containerd[1809]: time="2025-09-12T18:16:09.125439672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.3-a-0654ef0f4d,Uid:7326735d0e20a3a7096326a8dd2fd6e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"afae4b59ccc8a192fd88c3a0427ac7118ae1a51ff1d54f9968fca61e10bbb72a\"" Sep 12 18:16:09.125760 containerd[1809]: time="2025-09-12T18:16:09.125742152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.3-a-0654ef0f4d,Uid:4f449b80278e268061f05f4f15ab86c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9643a2ef5b7f8739f1878ad9afe9f52b0b73cdb8e1cba16696920adcb1ab6aa9\"" Sep 12 18:16:09.127521 containerd[1809]: time="2025-09-12T18:16:09.127505849Z" level=info msg="CreateContainer within sandbox \"9643a2ef5b7f8739f1878ad9afe9f52b0b73cdb8e1cba16696920adcb1ab6aa9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 18:16:09.127577 containerd[1809]: time="2025-09-12T18:16:09.127556660Z" level=info msg="CreateContainer within sandbox \"40c91e8316dd467e17303afb291220d65baffffba90c4ceccca520062da58ea7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 18:16:09.127645 containerd[1809]: time="2025-09-12T18:16:09.127627933Z" level=info msg="CreateContainer within sandbox \"afae4b59ccc8a192fd88c3a0427ac7118ae1a51ff1d54f9968fca61e10bbb72a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 18:16:09.134782 containerd[1809]: time="2025-09-12T18:16:09.134740825Z" level=info msg="CreateContainer within sandbox \"afae4b59ccc8a192fd88c3a0427ac7118ae1a51ff1d54f9968fca61e10bbb72a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"deed6fb6c3faefa34f1ce5c85cee781a552fec961cb5ccce5eb710c8e9692d49\"" Sep 12 18:16:09.134956 containerd[1809]: time="2025-09-12T18:16:09.134894874Z" level=info msg="CreateContainer within sandbox \"40c91e8316dd467e17303afb291220d65baffffba90c4ceccca520062da58ea7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"058df387abbb4ce53554d5abf8ca0ef4540a7a37a5f5d1f174d291ba9bd579b2\"" Sep 12 18:16:09.135029 containerd[1809]: time="2025-09-12T18:16:09.135017768Z" level=info 
msg="StartContainer for \"deed6fb6c3faefa34f1ce5c85cee781a552fec961cb5ccce5eb710c8e9692d49\"" Sep 12 18:16:09.135060 containerd[1809]: time="2025-09-12T18:16:09.135041721Z" level=info msg="StartContainer for \"058df387abbb4ce53554d5abf8ca0ef4540a7a37a5f5d1f174d291ba9bd579b2\"" Sep 12 18:16:09.135859 containerd[1809]: time="2025-09-12T18:16:09.135839107Z" level=info msg="CreateContainer within sandbox \"9643a2ef5b7f8739f1878ad9afe9f52b0b73cdb8e1cba16696920adcb1ab6aa9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b52211a3d664be099ee543e70b69e4e8369186107eb354f78bcc5aaef7ae12bd\"" Sep 12 18:16:09.136043 containerd[1809]: time="2025-09-12T18:16:09.136030211Z" level=info msg="StartContainer for \"b52211a3d664be099ee543e70b69e4e8369186107eb354f78bcc5aaef7ae12bd\"" Sep 12 18:16:09.160982 systemd[1]: Started cri-containerd-058df387abbb4ce53554d5abf8ca0ef4540a7a37a5f5d1f174d291ba9bd579b2.scope - libcontainer container 058df387abbb4ce53554d5abf8ca0ef4540a7a37a5f5d1f174d291ba9bd579b2. Sep 12 18:16:09.161741 systemd[1]: Started cri-containerd-b52211a3d664be099ee543e70b69e4e8369186107eb354f78bcc5aaef7ae12bd.scope - libcontainer container b52211a3d664be099ee543e70b69e4e8369186107eb354f78bcc5aaef7ae12bd. Sep 12 18:16:09.162440 systemd[1]: Started cri-containerd-deed6fb6c3faefa34f1ce5c85cee781a552fec961cb5ccce5eb710c8e9692d49.scope - libcontainer container deed6fb6c3faefa34f1ce5c85cee781a552fec961cb5ccce5eb710c8e9692d49. Sep 12 18:16:09.185583 containerd[1809]: time="2025-09-12T18:16:09.185555924Z" level=info msg="StartContainer for \"deed6fb6c3faefa34f1ce5c85cee781a552fec961cb5ccce5eb710c8e9692d49\" returns successfully" Sep 12 18:16:09.185583 containerd[1809]: time="2025-09-12T18:16:09.185578903Z" level=info msg="StartContainer for \"b52211a3d664be099ee543e70b69e4e8369186107eb354f78bcc5aaef7ae12bd\" returns successfully" Sep 12 18:16:09.186540 containerd[1809]: time="2025-09-12T18:16:09.186523842Z" level=info msg="StartContainer for \"058df387abbb4ce53554d5abf8ca0ef4540a7a37a5f5d1f174d291ba9bd579b2\" returns successfully" Sep 12 18:16:09.567504 kubelet[2720]: I0912 18:16:09.567488 2720 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:09.768439 kubelet[2720]: E0912 18:16:09.768416 2720 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.3-a-0654ef0f4d\" not found" node="ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:09.889588 kubelet[2720]: I0912 18:16:09.889518 2720 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:09.889588 kubelet[2720]: E0912 18:16:09.889546 2720 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230.2.3-a-0654ef0f4d\": node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:09.894445 kubelet[2720]: E0912 18:16:09.894432 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:09.994904 kubelet[2720]: E0912 18:16:09.994825 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:10.095872 kubelet[2720]: E0912 18:16:10.095793 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:10.196323 kubelet[2720]: E0912 18:16:10.196201 2720 kubelet_node_status.go:453] "Error getting the current node from lister" 
err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:10.297323 kubelet[2720]: E0912 18:16:10.297207 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:10.397987 kubelet[2720]: E0912 18:16:10.397877 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:10.498201 kubelet[2720]: E0912 18:16:10.498001 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:10.598315 kubelet[2720]: E0912 18:16:10.598188 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:10.699375 kubelet[2720]: E0912 18:16:10.699262 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:10.800413 kubelet[2720]: E0912 18:16:10.800219 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:10.900487 kubelet[2720]: E0912 18:16:10.900380 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:11.001028 kubelet[2720]: E0912 18:16:11.000930 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:11.101693 kubelet[2720]: E0912 18:16:11.101444 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:11.202293 kubelet[2720]: E0912 18:16:11.202188 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:11.303204 kubelet[2720]: E0912 18:16:11.303154 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:11.404261 kubelet[2720]: E0912 18:16:11.404099 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:11.505144 kubelet[2720]: E0912 18:16:11.505061 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:11.606350 kubelet[2720]: E0912 18:16:11.606243 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:11.707056 kubelet[2720]: E0912 18:16:11.706931 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:11.807962 kubelet[2720]: E0912 18:16:11.807892 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:11.909186 kubelet[2720]: E0912 18:16:11.909073 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:12.009774 kubelet[2720]: E0912 18:16:12.009502 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:12.179274 systemd[1]: Reload requested from client PID 3037 ('systemctl') (unit session-11.scope)... 
Sep 12 18:16:12.179282 systemd[1]: Reloading... Sep 12 18:16:12.225701 zram_generator::config[3083]: No configuration found. Sep 12 18:16:12.302763 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 18:16:12.332959 kubelet[2720]: W0912 18:16:12.332947 2720 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 18:16:12.395804 systemd[1]: Reloading finished in 216 ms. Sep 12 18:16:12.414974 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:16:12.433379 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 18:16:12.433514 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:16:12.433542 systemd[1]: kubelet.service: Consumed 666ms CPU time, 143.9M memory peak. Sep 12 18:16:12.446046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:16:12.779526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:16:12.782114 (kubelet)[3147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 18:16:12.801487 kubelet[3147]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 18:16:12.801487 kubelet[3147]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 18:16:12.801487 kubelet[3147]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 18:16:12.801810 kubelet[3147]: I0912 18:16:12.801544 3147 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 18:16:12.805523 kubelet[3147]: I0912 18:16:12.805508 3147 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 18:16:12.805523 kubelet[3147]: I0912 18:16:12.805520 3147 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 18:16:12.805662 kubelet[3147]: I0912 18:16:12.805656 3147 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 18:16:12.806465 kubelet[3147]: I0912 18:16:12.806458 3147 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 18:16:12.807900 kubelet[3147]: I0912 18:16:12.807893 3147 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 18:16:12.809477 kubelet[3147]: E0912 18:16:12.809461 3147 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 18:16:12.809527 kubelet[3147]: I0912 18:16:12.809479 3147 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Sep 12 18:16:12.818035 kubelet[3147]: I0912 18:16:12.818015 3147 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 18:16:12.818120 kubelet[3147]: I0912 18:16:12.818095 3147 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 18:16:12.818186 kubelet[3147]: I0912 18:16:12.818166 3147 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 18:16:12.818348 kubelet[3147]: I0912 18:16:12.818190 3147 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.3-a-0654ef0f4d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 18:16:12.818403 kubelet[3147]: I0912 18:16:12.818361 3147 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 18:16:12.818403 kubelet[3147]: I0912 18:16:12.818373 3147 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 18:16:12.818403 kubelet[3147]: I0912 18:16:12.818398 3147 state_mem.go:36] "Initialized new in-memory state store" Sep 12 18:16:12.818484 kubelet[3147]: I0912 18:16:12.818478 3147 kubelet.go:408] "Attempting to sync node with API server" Sep 12 18:16:12.818507 kubelet[3147]: I0912 18:16:12.818488 3147 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 18:16:12.818526 kubelet[3147]: I0912 18:16:12.818508 3147 kubelet.go:314] "Adding apiserver pod source" Sep 12 18:16:12.818526 kubelet[3147]: I0912 18:16:12.818515 3147 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 18:16:12.818919 kubelet[3147]: I0912 18:16:12.818908 3147 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 18:16:12.819176 kubelet[3147]: I0912 18:16:12.819169 3147 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 18:16:12.819417 kubelet[3147]: I0912 
18:16:12.819411 3147 server.go:1274] "Started kubelet" Sep 12 18:16:12.819496 kubelet[3147]: I0912 18:16:12.819458 3147 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 18:16:12.819534 kubelet[3147]: I0912 18:16:12.819453 3147 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 18:16:12.819828 kubelet[3147]: I0912 18:16:12.819801 3147 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 18:16:12.820280 kubelet[3147]: I0912 18:16:12.820268 3147 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 18:16:12.820327 kubelet[3147]: I0912 18:16:12.820280 3147 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 18:16:12.820371 kubelet[3147]: I0912 18:16:12.820331 3147 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 18:16:12.820371 kubelet[3147]: E0912 18:16:12.820331 3147 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.3-a-0654ef0f4d\" not found" Sep 12 18:16:12.820449 kubelet[3147]: I0912 18:16:12.820387 3147 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 18:16:12.820547 kubelet[3147]: I0912 18:16:12.820536 3147 reconciler.go:26] "Reconciler: start to sync state" Sep 12 18:16:12.821533 kubelet[3147]: E0912 18:16:12.821501 3147 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 18:16:12.821602 kubelet[3147]: I0912 18:16:12.821591 3147 factory.go:221] Registration of the systemd container factory successfully Sep 12 18:16:12.821707 kubelet[3147]: I0912 18:16:12.821680 3147 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 18:16:12.822052 kubelet[3147]: I0912 18:16:12.822040 3147 server.go:449] "Adding debug handlers to kubelet server" Sep 12 18:16:12.822962 kubelet[3147]: I0912 18:16:12.822947 3147 factory.go:221] Registration of the containerd container factory successfully Sep 12 18:16:12.827383 kubelet[3147]: I0912 18:16:12.827349 3147 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 18:16:12.828407 kubelet[3147]: I0912 18:16:12.828389 3147 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 18:16:12.828407 kubelet[3147]: I0912 18:16:12.828410 3147 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 18:16:12.828501 kubelet[3147]: I0912 18:16:12.828425 3147 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 18:16:12.828501 kubelet[3147]: E0912 18:16:12.828460 3147 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 18:16:12.839081 kubelet[3147]: I0912 18:16:12.839033 3147 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 18:16:12.839081 kubelet[3147]: I0912 18:16:12.839044 3147 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 18:16:12.839081 kubelet[3147]: I0912 18:16:12.839056 3147 state_mem.go:36] "Initialized new in-memory state store" Sep 12 18:16:12.839203 kubelet[3147]: I0912 18:16:12.839149 3147 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 18:16:12.839203 kubelet[3147]: I0912 18:16:12.839158 3147 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 18:16:12.839203 kubelet[3147]: I0912 18:16:12.839171 3147 policy_none.go:49] "None policy: Start" Sep 12 18:16:12.839502 kubelet[3147]: I0912 18:16:12.839466 3147 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 18:16:12.839502 kubelet[3147]: I0912 18:16:12.839477 3147 state_mem.go:35] "Initializing new in-memory state store" Sep 12 18:16:12.839570 kubelet[3147]: I0912 18:16:12.839561 3147 state_mem.go:75] "Updated machine memory state" Sep 12 18:16:12.841872 kubelet[3147]: I0912 18:16:12.841861 3147 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 18:16:12.841982 kubelet[3147]: I0912 18:16:12.841973 3147 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 18:16:12.842015 kubelet[3147]: I0912 18:16:12.841981 3147 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 18:16:12.842102 kubelet[3147]: I0912 18:16:12.842094 3147 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 18:16:12.937183 kubelet[3147]: W0912 18:16:12.937085 3147 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 18:16:12.937183 kubelet[3147]: W0912 18:16:12.937177 3147 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 18:16:12.937558 kubelet[3147]: W0912 18:16:12.937359 3147 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 18:16:12.937558 kubelet[3147]: E0912 18:16:12.937497 3147 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.2.3-a-0654ef0f4d\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:12.951797 kubelet[3147]: I0912 18:16:12.951698 3147 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:12.966142 kubelet[3147]: I0912 18:16:12.966083 3147 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:12.966390 kubelet[3147]: I0912 18:16:12.966265 3147 kubelet_node_status.go:75] "Successfully registered 
node" node="ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:13.122532 kubelet[3147]: I0912 18:16:13.122312 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c262181ae3499fd44256f30f0eec02b9-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.3-a-0654ef0f4d\" (UID: \"c262181ae3499fd44256f30f0eec02b9\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:13.122532 kubelet[3147]: I0912 18:16:13.122415 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c262181ae3499fd44256f30f0eec02b9-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.3-a-0654ef0f4d\" (UID: \"c262181ae3499fd44256f30f0eec02b9\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:13.122956 kubelet[3147]: I0912 18:16:13.122529 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c262181ae3499fd44256f30f0eec02b9-ca-certs\") pod \"kube-controller-manager-ci-4230.2.3-a-0654ef0f4d\" (UID: \"c262181ae3499fd44256f30f0eec02b9\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:13.122956 kubelet[3147]: I0912 18:16:13.122611 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c262181ae3499fd44256f30f0eec02b9-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.3-a-0654ef0f4d\" (UID: \"c262181ae3499fd44256f30f0eec02b9\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:13.122956 kubelet[3147]: I0912 18:16:13.122697 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c262181ae3499fd44256f30f0eec02b9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.3-a-0654ef0f4d\" (UID: \"c262181ae3499fd44256f30f0eec02b9\") " pod="kube-system/kube-controller-manager-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:13.122956 kubelet[3147]: I0912 18:16:13.122764 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7326735d0e20a3a7096326a8dd2fd6e3-kubeconfig\") pod \"kube-scheduler-ci-4230.2.3-a-0654ef0f4d\" (UID: \"7326735d0e20a3a7096326a8dd2fd6e3\") " pod="kube-system/kube-scheduler-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:13.122956 kubelet[3147]: I0912 18:16:13.122815 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f449b80278e268061f05f4f15ab86c6-ca-certs\") pod \"kube-apiserver-ci-4230.2.3-a-0654ef0f4d\" (UID: \"4f449b80278e268061f05f4f15ab86c6\") " pod="kube-system/kube-apiserver-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:13.123425 kubelet[3147]: I0912 18:16:13.122860 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f449b80278e268061f05f4f15ab86c6-k8s-certs\") pod \"kube-apiserver-ci-4230.2.3-a-0654ef0f4d\" (UID: \"4f449b80278e268061f05f4f15ab86c6\") " pod="kube-system/kube-apiserver-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:13.123425 kubelet[3147]: I0912 18:16:13.122907 3147 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f449b80278e268061f05f4f15ab86c6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.3-a-0654ef0f4d\" (UID: \"4f449b80278e268061f05f4f15ab86c6\") " pod="kube-system/kube-apiserver-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:13.209122 sudo[3192]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 18:16:13.210043 sudo[3192]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 18:16:13.588162 sudo[3192]: pam_unix(sudo:session): session closed for user root Sep 12 18:16:13.819124 kubelet[3147]: I0912 18:16:13.819042 3147 apiserver.go:52] "Watching apiserver" Sep 12 18:16:13.824458 kubelet[3147]: I0912 18:16:13.824437 3147 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 18:16:13.852660 kubelet[3147]: W0912 18:16:13.852581 3147 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 18:16:13.852660 kubelet[3147]: E0912 18:16:13.852631 3147 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.2.3-a-0654ef0f4d\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.3-a-0654ef0f4d" Sep 12 18:16:13.852818 kubelet[3147]: I0912 18:16:13.852759 3147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.3-a-0654ef0f4d" podStartSLOduration=1.852750184 podStartE2EDuration="1.852750184s" podCreationTimestamp="2025-09-12 18:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:16:13.852718848 +0000 UTC m=+1.068611908" watchObservedRunningTime="2025-09-12 18:16:13.852750184 +0000 UTC m=+1.068643241" Sep 12 18:16:13.857229 kubelet[3147]: I0912 18:16:13.857211 3147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.3-a-0654ef0f4d" podStartSLOduration=1.857188819 podStartE2EDuration="1.857188819s" podCreationTimestamp="2025-09-12 18:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:16:13.857131791 +0000 UTC m=+1.073024851" watchObservedRunningTime="2025-09-12 18:16:13.857188819 +0000 UTC m=+1.073081877" Sep 12 18:16:13.861464 kubelet[3147]: I0912 18:16:13.861441 3147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.3-a-0654ef0f4d" podStartSLOduration=1.8614330639999999 podStartE2EDuration="1.861433064s" podCreationTimestamp="2025-09-12 18:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:16:13.860748413 +0000 UTC m=+1.076641481" watchObservedRunningTime="2025-09-12 18:16:13.861433064 +0000 UTC m=+1.077326122" Sep 12 18:16:14.847252 sudo[2100]: pam_unix(sudo:session): session closed for user root Sep 12 18:16:14.847969 sshd[2099]: Connection closed by 139.178.68.195 port 49096 Sep 12 18:16:14.848146 sshd-session[2096]: pam_unix(sshd:session): session closed for user core Sep 12 18:16:14.849737 systemd[1]: sshd@8-139.178.90.133:22-139.178.68.195:49096.service: Deactivated successfully. 
Sep 12 18:16:14.850794 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 18:16:14.850906 systemd[1]: session-11.scope: Consumed 3.317s CPU time, 274.1M memory peak. Sep 12 18:16:14.852090 systemd-logind[1799]: Session 11 logged out. Waiting for processes to exit. Sep 12 18:16:14.852719 systemd-logind[1799]: Removed session 11. Sep 12 18:16:17.462919 kubelet[3147]: I0912 18:16:17.462813 3147 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 18:16:17.463808 containerd[1809]: time="2025-09-12T18:16:17.463520363Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 18:16:17.464444 kubelet[3147]: I0912 18:16:17.463999 3147 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 18:16:18.455262 systemd[1]: Created slice kubepods-besteffort-pod6a539586_d743_4761_b6ba_a47dab850ca9.slice - libcontainer container kubepods-besteffort-pod6a539586_d743_4761_b6ba_a47dab850ca9.slice. Sep 12 18:16:18.478165 systemd[1]: Created slice kubepods-burstable-pod6ef487c0_6222_42ea_94f7_36f4cb4ac902.slice - libcontainer container kubepods-burstable-pod6ef487c0_6222_42ea_94f7_36f4cb4ac902.slice. Sep 12 18:16:18.557470 kubelet[3147]: I0912 18:16:18.557349 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-host-proc-sys-kernel\") pod \"cilium-mq8gq\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " pod="kube-system/cilium-mq8gq" Sep 12 18:16:18.557470 kubelet[3147]: I0912 18:16:18.557458 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-lib-modules\") pod \"cilium-mq8gq\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " pod="kube-system/cilium-mq8gq" Sep 12 18:16:18.558462 kubelet[3147]: I0912 18:16:18.557518 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-xtables-lock\") pod \"cilium-mq8gq\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " pod="kube-system/cilium-mq8gq" Sep 12 18:16:18.558462 kubelet[3147]: I0912 18:16:18.557570 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ef487c0-6222-42ea-94f7-36f4cb4ac902-hubble-tls\") pod \"cilium-mq8gq\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " pod="kube-system/cilium-mq8gq" Sep 12 18:16:18.558462 kubelet[3147]: I0912 18:16:18.557642 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a539586-d743-4761-b6ba-a47dab850ca9-xtables-lock\") pod \"kube-proxy-9kqw6\" (UID: \"6a539586-d743-4761-b6ba-a47dab850ca9\") " pod="kube-system/kube-proxy-9kqw6" Sep 12 18:16:18.558462 kubelet[3147]: I0912 18:16:18.557694 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-bpf-maps\") pod \"cilium-mq8gq\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " pod="kube-system/cilium-mq8gq" Sep 12 18:16:18.558462 kubelet[3147]: I0912 
18:16:18.557791 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-cilium-cgroup\") pod \"cilium-mq8gq\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " pod="kube-system/cilium-mq8gq" Sep 12 18:16:18.558462 kubelet[3147]: I0912 18:16:18.557884 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ef487c0-6222-42ea-94f7-36f4cb4ac902-cilium-config-path\") pod \"cilium-mq8gq\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " pod="kube-system/cilium-mq8gq" Sep 12 18:16:18.559202 kubelet[3147]: I0912 18:16:18.557954 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ef487c0-6222-42ea-94f7-36f4cb4ac902-clustermesh-secrets\") pod \"cilium-mq8gq\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " pod="kube-system/cilium-mq8gq" Sep 12 18:16:18.559202 kubelet[3147]: I0912 18:16:18.558012 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a539586-d743-4761-b6ba-a47dab850ca9-kube-proxy\") pod \"kube-proxy-9kqw6\" (UID: \"6a539586-d743-4761-b6ba-a47dab850ca9\") " pod="kube-system/kube-proxy-9kqw6" Sep 12 18:16:18.559202 kubelet[3147]: I0912 18:16:18.558105 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8smdl\" (UniqueName: \"kubernetes.io/projected/6a539586-d743-4761-b6ba-a47dab850ca9-kube-api-access-8smdl\") pod \"kube-proxy-9kqw6\" (UID: \"6a539586-d743-4761-b6ba-a47dab850ca9\") " pod="kube-system/kube-proxy-9kqw6" Sep 12 18:16:18.559202 kubelet[3147]: I0912 18:16:18.558200 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-etc-cni-netd\") pod \"cilium-mq8gq\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " pod="kube-system/cilium-mq8gq" Sep 12 18:16:18.559202 kubelet[3147]: I0912 18:16:18.558255 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-cilium-run\") pod \"cilium-mq8gq\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " pod="kube-system/cilium-mq8gq" Sep 12 18:16:18.559669 kubelet[3147]: I0912 18:16:18.558300 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-host-proc-sys-net\") pod \"cilium-mq8gq\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " pod="kube-system/cilium-mq8gq" Sep 12 18:16:18.559669 kubelet[3147]: I0912 18:16:18.558364 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl6vm\" (UniqueName: \"kubernetes.io/projected/6ef487c0-6222-42ea-94f7-36f4cb4ac902-kube-api-access-cl6vm\") pod \"cilium-mq8gq\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " pod="kube-system/cilium-mq8gq" Sep 12 18:16:18.559669 kubelet[3147]: I0912 18:16:18.558437 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/6a539586-d743-4761-b6ba-a47dab850ca9-lib-modules\") pod \"kube-proxy-9kqw6\" (UID: \"6a539586-d743-4761-b6ba-a47dab850ca9\") " pod="kube-system/kube-proxy-9kqw6" Sep 12 18:16:18.559669 kubelet[3147]: I0912 18:16:18.558482 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-cni-path\") pod \"cilium-mq8gq\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " pod="kube-system/cilium-mq8gq" Sep 12 18:16:18.559669 kubelet[3147]: I0912 18:16:18.558529 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-hostproc\") pod \"cilium-mq8gq\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " pod="kube-system/cilium-mq8gq" Sep 12 18:16:18.609920 systemd[1]: Created slice kubepods-besteffort-pod37ed2745_4fb8_4684_a12c_badc5421b9b8.slice - libcontainer container kubepods-besteffort-pod37ed2745_4fb8_4684_a12c_badc5421b9b8.slice. Sep 12 18:16:18.659117 kubelet[3147]: I0912 18:16:18.659070 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37ed2745-4fb8-4684-a12c-badc5421b9b8-cilium-config-path\") pod \"cilium-operator-5d85765b45-m7jwm\" (UID: \"37ed2745-4fb8-4684-a12c-badc5421b9b8\") " pod="kube-system/cilium-operator-5d85765b45-m7jwm" Sep 12 18:16:18.659504 kubelet[3147]: I0912 18:16:18.659430 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzmj6\" (UniqueName: \"kubernetes.io/projected/37ed2745-4fb8-4684-a12c-badc5421b9b8-kube-api-access-lzmj6\") pod \"cilium-operator-5d85765b45-m7jwm\" (UID: \"37ed2745-4fb8-4684-a12c-badc5421b9b8\") " pod="kube-system/cilium-operator-5d85765b45-m7jwm" Sep 12 18:16:18.779373 containerd[1809]: time="2025-09-12T18:16:18.779166614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9kqw6,Uid:6a539586-d743-4761-b6ba-a47dab850ca9,Namespace:kube-system,Attempt:0,}" Sep 12 18:16:18.781527 containerd[1809]: time="2025-09-12T18:16:18.781413692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mq8gq,Uid:6ef487c0-6222-42ea-94f7-36f4cb4ac902,Namespace:kube-system,Attempt:0,}" Sep 12 18:16:18.917667 containerd[1809]: time="2025-09-12T18:16:18.917584402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-m7jwm,Uid:37ed2745-4fb8-4684-a12c-badc5421b9b8,Namespace:kube-system,Attempt:0,}" Sep 12 18:16:19.347603 containerd[1809]: time="2025-09-12T18:16:19.347513254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 18:16:19.347603 containerd[1809]: time="2025-09-12T18:16:19.347554602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 18:16:19.347603 containerd[1809]: time="2025-09-12T18:16:19.347564347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:16:19.347759 containerd[1809]: time="2025-09-12T18:16:19.347611479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:16:19.363891 systemd[1]: Started cri-containerd-17c61a45566f9e41f17b3985aa02064e480e6cd9281efcd50a8f569814bb1475.scope - libcontainer container 17c61a45566f9e41f17b3985aa02064e480e6cd9281efcd50a8f569814bb1475. Sep 12 18:16:19.377524 containerd[1809]: time="2025-09-12T18:16:19.377503126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9kqw6,Uid:6a539586-d743-4761-b6ba-a47dab850ca9,Namespace:kube-system,Attempt:0,} returns sandbox id \"17c61a45566f9e41f17b3985aa02064e480e6cd9281efcd50a8f569814bb1475\"" Sep 12 18:16:19.378780 containerd[1809]: time="2025-09-12T18:16:19.378763533Z" level=info msg="CreateContainer within sandbox \"17c61a45566f9e41f17b3985aa02064e480e6cd9281efcd50a8f569814bb1475\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 18:16:19.383701 containerd[1809]: time="2025-09-12T18:16:19.383658812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 18:16:19.383701 containerd[1809]: time="2025-09-12T18:16:19.383689347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 18:16:19.383701 containerd[1809]: time="2025-09-12T18:16:19.383696870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:16:19.383881 containerd[1809]: time="2025-09-12T18:16:19.383735997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:16:19.385388 containerd[1809]: time="2025-09-12T18:16:19.385176126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 18:16:19.385388 containerd[1809]: time="2025-09-12T18:16:19.385383582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 18:16:19.385463 containerd[1809]: time="2025-09-12T18:16:19.385392078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:16:19.385463 containerd[1809]: time="2025-09-12T18:16:19.385435311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:16:19.385680 containerd[1809]: time="2025-09-12T18:16:19.385659002Z" level=info msg="CreateContainer within sandbox \"17c61a45566f9e41f17b3985aa02064e480e6cd9281efcd50a8f569814bb1475\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e36afc4603354af73fd6ea0cde1a53b830905112e5bfd0e3dab4ae6bd7a42bb5\"" Sep 12 18:16:19.386073 containerd[1809]: time="2025-09-12T18:16:19.386059086Z" level=info msg="StartContainer for \"e36afc4603354af73fd6ea0cde1a53b830905112e5bfd0e3dab4ae6bd7a42bb5\"" Sep 12 18:16:19.402812 systemd[1]: Started cri-containerd-2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf.scope - libcontainer container 2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf. Sep 12 18:16:19.405228 systemd[1]: Started cri-containerd-b5978c5d1f3df937338706629c13f819ef89dcc85a7990dee2765fa526f8e8f7.scope - libcontainer container b5978c5d1f3df937338706629c13f819ef89dcc85a7990dee2765fa526f8e8f7. 
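Editor's note: the sandbox and container entries above and below are the CRI RunPodSandbox → CreateContainer → StartContainer sequence playing out against containerd for kube-proxy, cilium and the cilium operator. The sketch below queries the same CRI endpoint to list the resulting sandboxes (roughly what `crictl pods` does); the socket path is assumed, and a real client would set timeouts and handle the connection more carefully.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint (path assumed; the kubelet uses the same socket).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// List the pod sandboxes created by the RunPodSandbox calls in the log.
	resp, err := rt.ListPodSandbox(context.Background(), &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		fmt.Printf("%s  %s/%s  %s\n", sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
	}
}
```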
Sep 12 18:16:19.405811 systemd[1]: Started cri-containerd-e36afc4603354af73fd6ea0cde1a53b830905112e5bfd0e3dab4ae6bd7a42bb5.scope - libcontainer container e36afc4603354af73fd6ea0cde1a53b830905112e5bfd0e3dab4ae6bd7a42bb5. Sep 12 18:16:19.413483 containerd[1809]: time="2025-09-12T18:16:19.413459299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mq8gq,Uid:6ef487c0-6222-42ea-94f7-36f4cb4ac902,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf\"" Sep 12 18:16:19.414169 containerd[1809]: time="2025-09-12T18:16:19.414154790Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 18:16:19.420430 containerd[1809]: time="2025-09-12T18:16:19.420409885Z" level=info msg="StartContainer for \"e36afc4603354af73fd6ea0cde1a53b830905112e5bfd0e3dab4ae6bd7a42bb5\" returns successfully" Sep 12 18:16:19.428988 containerd[1809]: time="2025-09-12T18:16:19.428941106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-m7jwm,Uid:37ed2745-4fb8-4684-a12c-badc5421b9b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5978c5d1f3df937338706629c13f819ef89dcc85a7990dee2765fa526f8e8f7\"" Sep 12 18:16:19.866602 kubelet[3147]: I0912 18:16:19.866482 3147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9kqw6" podStartSLOduration=1.866440512 podStartE2EDuration="1.866440512s" podCreationTimestamp="2025-09-12 18:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:16:19.86622036 +0000 UTC m=+7.082113484" watchObservedRunningTime="2025-09-12 18:16:19.866440512 +0000 UTC m=+7.082333673" Sep 12 18:16:21.937010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1529149346.mount: Deactivated successfully. 
Sep 12 18:16:22.740672 containerd[1809]: time="2025-09-12T18:16:22.740593003Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:22.740892 containerd[1809]: time="2025-09-12T18:16:22.740778051Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 18:16:22.741867 containerd[1809]: time="2025-09-12T18:16:22.741407620Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:22.742778 containerd[1809]: time="2025-09-12T18:16:22.742763024Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 3.328588871s" Sep 12 18:16:22.742817 containerd[1809]: time="2025-09-12T18:16:22.742780673Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 18:16:22.743412 containerd[1809]: time="2025-09-12T18:16:22.743401074Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 18:16:22.743942 containerd[1809]: time="2025-09-12T18:16:22.743927990Z" level=info msg="CreateContainer within sandbox \"2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 18:16:22.748706 containerd[1809]: time="2025-09-12T18:16:22.748689811Z" level=info msg="CreateContainer within sandbox \"2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6\"" Sep 12 18:16:22.749039 containerd[1809]: time="2025-09-12T18:16:22.748994233Z" level=info msg="StartContainer for \"1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6\"" Sep 12 18:16:22.749906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount218945217.mount: Deactivated successfully. Sep 12 18:16:22.777843 systemd[1]: Started cri-containerd-1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6.scope - libcontainer container 1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6. Sep 12 18:16:22.798012 systemd[1]: cri-containerd-1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6.scope: Deactivated successfully. Sep 12 18:16:22.820027 containerd[1809]: time="2025-09-12T18:16:22.819974149Z" level=info msg="StartContainer for \"1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6\" returns successfully" Sep 12 18:16:23.752836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6-rootfs.mount: Deactivated successfully. 
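Editor's note: the cilium pull above reports 166730503 bytes read over 3.328588871s, i.e. roughly 50 MB/s from quay.io. The small stdlib program below only does the bandwidth arithmetic behind those two numbers, purely as a reading aid for the log values.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Numbers taken from the "stop pulling image" / "Pulled image" entries above.
	const bytesRead = 166730503
	elapsed := 3328588871 * time.Nanosecond // 3.328588871s

	mibPerSec := (float64(bytesRead) / (1 << 20)) / elapsed.Seconds()
	fmt.Printf("≈ %.1f MiB/s for quay.io/cilium/cilium:v1.12.5\n", mibPerSec)
}
```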
Sep 12 18:16:24.003276 containerd[1809]: time="2025-09-12T18:16:24.003152458Z" level=info msg="shim disconnected" id=1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6 namespace=k8s.io Sep 12 18:16:24.003276 containerd[1809]: time="2025-09-12T18:16:24.003181623Z" level=warning msg="cleaning up after shim disconnected" id=1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6 namespace=k8s.io Sep 12 18:16:24.003276 containerd[1809]: time="2025-09-12T18:16:24.003186433Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 18:16:24.659657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4058016214.mount: Deactivated successfully. Sep 12 18:16:24.855485 containerd[1809]: time="2025-09-12T18:16:24.855463399Z" level=info msg="CreateContainer within sandbox \"2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 18:16:24.859589 containerd[1809]: time="2025-09-12T18:16:24.859569597Z" level=info msg="CreateContainer within sandbox \"2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154\"" Sep 12 18:16:24.859827 containerd[1809]: time="2025-09-12T18:16:24.859812806Z" level=info msg="StartContainer for \"5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154\"" Sep 12 18:16:24.860908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183666786.mount: Deactivated successfully. Sep 12 18:16:24.873947 containerd[1809]: time="2025-09-12T18:16:24.873894665Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:24.874150 containerd[1809]: time="2025-09-12T18:16:24.874100403Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 18:16:24.874419 containerd[1809]: time="2025-09-12T18:16:24.874384509Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:16:24.875212 containerd[1809]: time="2025-09-12T18:16:24.875171862Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.13175505s" Sep 12 18:16:24.875212 containerd[1809]: time="2025-09-12T18:16:24.875185779Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 18:16:24.876162 containerd[1809]: time="2025-09-12T18:16:24.876123547Z" level=info msg="CreateContainer within sandbox \"b5978c5d1f3df937338706629c13f819ef89dcc85a7990dee2765fa526f8e8f7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 18:16:24.880264 containerd[1809]: 
time="2025-09-12T18:16:24.880222615Z" level=info msg="CreateContainer within sandbox \"b5978c5d1f3df937338706629c13f819ef89dcc85a7990dee2765fa526f8e8f7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646\"" Sep 12 18:16:24.880448 containerd[1809]: time="2025-09-12T18:16:24.880435611Z" level=info msg="StartContainer for \"7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646\"" Sep 12 18:16:24.881776 systemd[1]: Started cri-containerd-5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154.scope - libcontainer container 5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154. Sep 12 18:16:24.890372 systemd[1]: Started cri-containerd-7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646.scope - libcontainer container 7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646. Sep 12 18:16:24.893067 containerd[1809]: time="2025-09-12T18:16:24.893042821Z" level=info msg="StartContainer for \"5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154\" returns successfully" Sep 12 18:16:24.900533 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 18:16:24.900725 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 18:16:24.900838 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 18:16:24.901763 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 18:16:24.902278 containerd[1809]: time="2025-09-12T18:16:24.902257329Z" level=info msg="StartContainer for \"7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646\" returns successfully" Sep 12 18:16:24.902853 systemd[1]: cri-containerd-5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154.scope: Deactivated successfully. Sep 12 18:16:24.910741 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 18:16:24.937648 update_engine[1804]: I20250912 18:16:24.937599 1804 update_attempter.cc:509] Updating boot flags... 
Sep 12 18:16:24.995671 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3789) Sep 12 18:16:25.073343 containerd[1809]: time="2025-09-12T18:16:25.073311945Z" level=info msg="shim disconnected" id=5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154 namespace=k8s.io Sep 12 18:16:25.073343 containerd[1809]: time="2025-09-12T18:16:25.073339952Z" level=warning msg="cleaning up after shim disconnected" id=5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154 namespace=k8s.io Sep 12 18:16:25.073572 containerd[1809]: time="2025-09-12T18:16:25.073348373Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 18:16:25.078906 containerd[1809]: time="2025-09-12T18:16:25.078878515Z" level=warning msg="cleanup warnings time=\"2025-09-12T18:16:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 18:16:25.102659 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3792) Sep 12 18:16:25.121630 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3792) Sep 12 18:16:25.863919 containerd[1809]: time="2025-09-12T18:16:25.863891854Z" level=info msg="CreateContainer within sandbox \"2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 18:16:25.864827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154-rootfs.mount: Deactivated successfully. Sep 12 18:16:25.870082 containerd[1809]: time="2025-09-12T18:16:25.870045186Z" level=info msg="CreateContainer within sandbox \"2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601\"" Sep 12 18:16:25.870519 containerd[1809]: time="2025-09-12T18:16:25.870502151Z" level=info msg="StartContainer for \"8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601\"" Sep 12 18:16:25.871394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1643216442.mount: Deactivated successfully. Sep 12 18:16:25.877826 kubelet[3147]: I0912 18:16:25.877787 3147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-m7jwm" podStartSLOduration=2.431667146 podStartE2EDuration="7.877772288s" podCreationTimestamp="2025-09-12 18:16:18 +0000 UTC" firstStartedPulling="2025-09-12 18:16:19.429455251 +0000 UTC m=+6.645348312" lastFinishedPulling="2025-09-12 18:16:24.875560393 +0000 UTC m=+12.091453454" observedRunningTime="2025-09-12 18:16:25.877583306 +0000 UTC m=+13.093476372" watchObservedRunningTime="2025-09-12 18:16:25.877772288 +0000 UTC m=+13.093665346" Sep 12 18:16:25.899942 systemd[1]: Started cri-containerd-8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601.scope - libcontainer container 8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601. Sep 12 18:16:25.914199 containerd[1809]: time="2025-09-12T18:16:25.914144351Z" level=info msg="StartContainer for \"8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601\" returns successfully" Sep 12 18:16:25.915015 systemd[1]: cri-containerd-8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601.scope: Deactivated successfully. 
Sep 12 18:16:25.926421 containerd[1809]: time="2025-09-12T18:16:25.926387950Z" level=info msg="shim disconnected" id=8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601 namespace=k8s.io Sep 12 18:16:25.926421 containerd[1809]: time="2025-09-12T18:16:25.926419834Z" level=warning msg="cleaning up after shim disconnected" id=8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601 namespace=k8s.io Sep 12 18:16:25.926421 containerd[1809]: time="2025-09-12T18:16:25.926424891Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 18:16:26.859572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601-rootfs.mount: Deactivated successfully. Sep 12 18:16:26.867019 containerd[1809]: time="2025-09-12T18:16:26.866995516Z" level=info msg="CreateContainer within sandbox \"2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 18:16:26.872717 containerd[1809]: time="2025-09-12T18:16:26.872663662Z" level=info msg="CreateContainer within sandbox \"2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220\"" Sep 12 18:16:26.873089 containerd[1809]: time="2025-09-12T18:16:26.873041750Z" level=info msg="StartContainer for \"7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220\"" Sep 12 18:16:26.902800 systemd[1]: Started cri-containerd-7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220.scope - libcontainer container 7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220. Sep 12 18:16:26.918085 systemd[1]: cri-containerd-7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220.scope: Deactivated successfully. Sep 12 18:16:26.918446 containerd[1809]: time="2025-09-12T18:16:26.918424053Z" level=info msg="StartContainer for \"7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220\" returns successfully" Sep 12 18:16:26.960062 containerd[1809]: time="2025-09-12T18:16:26.960017416Z" level=info msg="shim disconnected" id=7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220 namespace=k8s.io Sep 12 18:16:26.960062 containerd[1809]: time="2025-09-12T18:16:26.960061505Z" level=warning msg="cleaning up after shim disconnected" id=7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220 namespace=k8s.io Sep 12 18:16:26.960237 containerd[1809]: time="2025-09-12T18:16:26.960073152Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 18:16:27.859769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220-rootfs.mount: Deactivated successfully. 
Sep 12 18:16:27.869925 containerd[1809]: time="2025-09-12T18:16:27.869897744Z" level=info msg="CreateContainer within sandbox \"2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 18:16:27.876110 containerd[1809]: time="2025-09-12T18:16:27.876062006Z" level=info msg="CreateContainer within sandbox \"2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72\"" Sep 12 18:16:27.876349 containerd[1809]: time="2025-09-12T18:16:27.876307706Z" level=info msg="StartContainer for \"9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72\"" Sep 12 18:16:27.877517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3826445836.mount: Deactivated successfully. Sep 12 18:16:27.905914 systemd[1]: Started cri-containerd-9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72.scope - libcontainer container 9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72. Sep 12 18:16:27.920684 containerd[1809]: time="2025-09-12T18:16:27.920658679Z" level=info msg="StartContainer for \"9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72\" returns successfully" Sep 12 18:16:27.974520 kubelet[3147]: I0912 18:16:27.974502 3147 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 18:16:27.986872 systemd[1]: Created slice kubepods-burstable-podd0e345c2_675b_41b3_81eb_8c59722986e1.slice - libcontainer container kubepods-burstable-podd0e345c2_675b_41b3_81eb_8c59722986e1.slice. Sep 12 18:16:27.989652 systemd[1]: Created slice kubepods-burstable-podf7bce1ff_bc96_46f0_a0f5_0c9e9923b148.slice - libcontainer container kubepods-burstable-podf7bce1ff_bc96_46f0_a0f5_0c9e9923b148.slice. 
Sep 12 18:16:28.026809 kubelet[3147]: I0912 18:16:28.026787 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7bce1ff-bc96-46f0-a0f5-0c9e9923b148-config-volume\") pod \"coredns-7c65d6cfc9-5mr6b\" (UID: \"f7bce1ff-bc96-46f0-a0f5-0c9e9923b148\") " pod="kube-system/coredns-7c65d6cfc9-5mr6b" Sep 12 18:16:28.026809 kubelet[3147]: I0912 18:16:28.026813 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mng5l\" (UniqueName: \"kubernetes.io/projected/d0e345c2-675b-41b3-81eb-8c59722986e1-kube-api-access-mng5l\") pod \"coredns-7c65d6cfc9-9n992\" (UID: \"d0e345c2-675b-41b3-81eb-8c59722986e1\") " pod="kube-system/coredns-7c65d6cfc9-9n992" Sep 12 18:16:28.026925 kubelet[3147]: I0912 18:16:28.026833 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0e345c2-675b-41b3-81eb-8c59722986e1-config-volume\") pod \"coredns-7c65d6cfc9-9n992\" (UID: \"d0e345c2-675b-41b3-81eb-8c59722986e1\") " pod="kube-system/coredns-7c65d6cfc9-9n992" Sep 12 18:16:28.026925 kubelet[3147]: I0912 18:16:28.026848 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfkls\" (UniqueName: \"kubernetes.io/projected/f7bce1ff-bc96-46f0-a0f5-0c9e9923b148-kube-api-access-wfkls\") pod \"coredns-7c65d6cfc9-5mr6b\" (UID: \"f7bce1ff-bc96-46f0-a0f5-0c9e9923b148\") " pod="kube-system/coredns-7c65d6cfc9-5mr6b" Sep 12 18:16:28.293395 containerd[1809]: time="2025-09-12T18:16:28.293251129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5mr6b,Uid:f7bce1ff-bc96-46f0-a0f5-0c9e9923b148,Namespace:kube-system,Attempt:0,}" Sep 12 18:16:28.293395 containerd[1809]: time="2025-09-12T18:16:28.293312708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9n992,Uid:d0e345c2-675b-41b3-81eb-8c59722986e1,Namespace:kube-system,Attempt:0,}" Sep 12 18:16:28.880989 kubelet[3147]: I0912 18:16:28.880930 3147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mq8gq" podStartSLOduration=7.5515275840000005 podStartE2EDuration="10.880919904s" podCreationTimestamp="2025-09-12 18:16:18 +0000 UTC" firstStartedPulling="2025-09-12 18:16:19.413932877 +0000 UTC m=+6.629825937" lastFinishedPulling="2025-09-12 18:16:22.743325197 +0000 UTC m=+9.959218257" observedRunningTime="2025-09-12 18:16:28.880726524 +0000 UTC m=+16.096619586" watchObservedRunningTime="2025-09-12 18:16:28.880919904 +0000 UTC m=+16.096812964" Sep 12 18:16:29.706419 systemd-networkd[1727]: cilium_host: Link UP Sep 12 18:16:29.706515 systemd-networkd[1727]: cilium_net: Link UP Sep 12 18:16:29.706629 systemd-networkd[1727]: cilium_net: Gained carrier Sep 12 18:16:29.706730 systemd-networkd[1727]: cilium_host: Gained carrier Sep 12 18:16:29.757070 systemd-networkd[1727]: cilium_vxlan: Link UP Sep 12 18:16:29.757072 systemd-networkd[1727]: cilium_vxlan: Gained carrier Sep 12 18:16:29.893629 kernel: NET: Registered PF_ALG protocol family Sep 12 18:16:30.175694 systemd-networkd[1727]: cilium_host: Gained IPv6LL Sep 12 18:16:30.255711 systemd-networkd[1727]: cilium_net: Gained IPv6LL Sep 12 18:16:30.339225 systemd-networkd[1727]: lxc_health: Link UP Sep 12 18:16:30.339522 systemd-networkd[1727]: lxc_health: Gained carrier Sep 12 18:16:30.843635 kernel: eth0: renamed from tmp63b46 Sep 
12 18:16:30.857724 kernel: eth0: renamed from tmp6bea7 Sep 12 18:16:30.870531 systemd-networkd[1727]: lxcbf168cc1abdc: Link UP Sep 12 18:16:30.870665 systemd-networkd[1727]: lxcf62b32fa56e1: Link UP Sep 12 18:16:30.871097 systemd-networkd[1727]: lxcbf168cc1abdc: Gained carrier Sep 12 18:16:30.871203 systemd-networkd[1727]: lxcf62b32fa56e1: Gained carrier Sep 12 18:16:30.894933 systemd-networkd[1727]: cilium_vxlan: Gained IPv6LL Sep 12 18:16:31.662799 systemd-networkd[1727]: lxc_health: Gained IPv6LL Sep 12 18:16:32.110802 systemd-networkd[1727]: lxcbf168cc1abdc: Gained IPv6LL Sep 12 18:16:32.750779 systemd-networkd[1727]: lxcf62b32fa56e1: Gained IPv6LL Sep 12 18:16:33.193216 containerd[1809]: time="2025-09-12T18:16:33.193170085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 18:16:33.193216 containerd[1809]: time="2025-09-12T18:16:33.193203386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 18:16:33.193532 containerd[1809]: time="2025-09-12T18:16:33.193225817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:16:33.193532 containerd[1809]: time="2025-09-12T18:16:33.193475787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:16:33.193574 containerd[1809]: time="2025-09-12T18:16:33.193513914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 18:16:33.193574 containerd[1809]: time="2025-09-12T18:16:33.193541333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 18:16:33.193574 containerd[1809]: time="2025-09-12T18:16:33.193549128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:16:33.193653 containerd[1809]: time="2025-09-12T18:16:33.193586001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:16:33.219971 systemd[1]: Started cri-containerd-63b46305ce4ed160950462bd313da60e8e71c7c3a3b60e1808f8f13b144cbede.scope - libcontainer container 63b46305ce4ed160950462bd313da60e8e71c7c3a3b60e1808f8f13b144cbede. Sep 12 18:16:33.220645 systemd[1]: Started cri-containerd-6bea7668d0264793f45d43d3e99fbfdb8dbb0978495eed69341442f48298beb7.scope - libcontainer container 6bea7668d0264793f45d43d3e99fbfdb8dbb0978495eed69341442f48298beb7. 
Sep 12 18:16:33.248530 containerd[1809]: time="2025-09-12T18:16:33.248506938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5mr6b,Uid:f7bce1ff-bc96-46f0-a0f5-0c9e9923b148,Namespace:kube-system,Attempt:0,} returns sandbox id \"63b46305ce4ed160950462bd313da60e8e71c7c3a3b60e1808f8f13b144cbede\"" Sep 12 18:16:33.248940 containerd[1809]: time="2025-09-12T18:16:33.248927495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9n992,Uid:d0e345c2-675b-41b3-81eb-8c59722986e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bea7668d0264793f45d43d3e99fbfdb8dbb0978495eed69341442f48298beb7\"" Sep 12 18:16:33.249750 containerd[1809]: time="2025-09-12T18:16:33.249736908Z" level=info msg="CreateContainer within sandbox \"63b46305ce4ed160950462bd313da60e8e71c7c3a3b60e1808f8f13b144cbede\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 18:16:33.249791 containerd[1809]: time="2025-09-12T18:16:33.249758838Z" level=info msg="CreateContainer within sandbox \"6bea7668d0264793f45d43d3e99fbfdb8dbb0978495eed69341442f48298beb7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 18:16:33.254963 containerd[1809]: time="2025-09-12T18:16:33.254914211Z" level=info msg="CreateContainer within sandbox \"63b46305ce4ed160950462bd313da60e8e71c7c3a3b60e1808f8f13b144cbede\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9004da8a8fd43382d7b8d1ec33a5315b28d6529673d175759e85dceff2679b8f\"" Sep 12 18:16:33.255161 containerd[1809]: time="2025-09-12T18:16:33.255148075Z" level=info msg="StartContainer for \"9004da8a8fd43382d7b8d1ec33a5315b28d6529673d175759e85dceff2679b8f\"" Sep 12 18:16:33.256636 containerd[1809]: time="2025-09-12T18:16:33.256607709Z" level=info msg="CreateContainer within sandbox \"6bea7668d0264793f45d43d3e99fbfdb8dbb0978495eed69341442f48298beb7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"97c5e7b8138bbc824c340a04522822580dbfd3ec0861caf2ab04b4b29faffd8a\"" Sep 12 18:16:33.257106 containerd[1809]: time="2025-09-12T18:16:33.257091496Z" level=info msg="StartContainer for \"97c5e7b8138bbc824c340a04522822580dbfd3ec0861caf2ab04b4b29faffd8a\"" Sep 12 18:16:33.280808 systemd[1]: Started cri-containerd-9004da8a8fd43382d7b8d1ec33a5315b28d6529673d175759e85dceff2679b8f.scope - libcontainer container 9004da8a8fd43382d7b8d1ec33a5315b28d6529673d175759e85dceff2679b8f. Sep 12 18:16:33.282547 systemd[1]: Started cri-containerd-97c5e7b8138bbc824c340a04522822580dbfd3ec0861caf2ab04b4b29faffd8a.scope - libcontainer container 97c5e7b8138bbc824c340a04522822580dbfd3ec0861caf2ab04b4b29faffd8a. 
Sep 12 18:16:33.294103 containerd[1809]: time="2025-09-12T18:16:33.294076030Z" level=info msg="StartContainer for \"9004da8a8fd43382d7b8d1ec33a5315b28d6529673d175759e85dceff2679b8f\" returns successfully" Sep 12 18:16:33.295061 containerd[1809]: time="2025-09-12T18:16:33.295041590Z" level=info msg="StartContainer for \"97c5e7b8138bbc824c340a04522822580dbfd3ec0861caf2ab04b4b29faffd8a\" returns successfully" Sep 12 18:16:33.892140 kubelet[3147]: I0912 18:16:33.892035 3147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5mr6b" podStartSLOduration=15.892000963 podStartE2EDuration="15.892000963s" podCreationTimestamp="2025-09-12 18:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:16:33.891230555 +0000 UTC m=+21.107123670" watchObservedRunningTime="2025-09-12 18:16:33.892000963 +0000 UTC m=+21.107894054" Sep 12 18:16:33.902179 kubelet[3147]: I0912 18:16:33.902147 3147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9n992" podStartSLOduration=15.902135408 podStartE2EDuration="15.902135408s" podCreationTimestamp="2025-09-12 18:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:16:33.902024678 +0000 UTC m=+21.117917740" watchObservedRunningTime="2025-09-12 18:16:33.902135408 +0000 UTC m=+21.118028465" Sep 12 18:16:34.209987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1146406960.mount: Deactivated successfully. Sep 12 18:16:34.229993 systemd[1]: Started sshd@9-139.178.90.133:22-45.245.61.114:54964.service - OpenSSH per-connection server daemon (45.245.61.114:54964). Sep 12 18:16:36.593785 kubelet[3147]: I0912 18:16:36.593669 3147 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 18:16:36.956080 sshd[4721]: Received disconnect from 45.245.61.114 port 54964:11: Bye Bye [preauth] Sep 12 18:16:36.956080 sshd[4721]: Disconnected from authenticating user root 45.245.61.114 port 54964 [preauth] Sep 12 18:16:36.959546 systemd[1]: sshd@9-139.178.90.133:22-45.245.61.114:54964.service: Deactivated successfully. Sep 12 18:17:56.090102 systemd[1]: Started sshd@10-139.178.90.133:22-45.245.61.114:59884.service - OpenSSH per-connection server daemon (45.245.61.114:59884). Sep 12 18:17:57.198537 sshd[4739]: Invalid user ftp from 45.245.61.114 port 59884 Sep 12 18:17:57.395965 sshd[4739]: Received disconnect from 45.245.61.114 port 59884:11: Bye Bye [preauth] Sep 12 18:17:57.395965 sshd[4739]: Disconnected from invalid user ftp 45.245.61.114 port 59884 [preauth] Sep 12 18:17:57.399322 systemd[1]: sshd@10-139.178.90.133:22-45.245.61.114:59884.service: Deactivated successfully. Sep 12 18:18:59.340046 systemd[1]: Started sshd@11-139.178.90.133:22-185.156.73.233:56366.service - OpenSSH per-connection server daemon (185.156.73.233:56366). Sep 12 18:19:00.429812 sshd[4750]: Invalid user usuario from 185.156.73.233 port 56366 Sep 12 18:19:00.576228 sshd[4750]: Connection closed by invalid user usuario 185.156.73.233 port 56366 [preauth] Sep 12 18:19:00.579573 systemd[1]: sshd@11-139.178.90.133:22-185.156.73.233:56366.service: Deactivated successfully. Sep 12 18:19:22.014902 systemd[1]: Started sshd@12-139.178.90.133:22-45.245.61.114:36004.service - OpenSSH per-connection server daemon (45.245.61.114:36004). 
Sep 12 18:19:26.314460 sshd[4759]: Invalid user openhabian from 45.245.61.114 port 36004 Sep 12 18:19:26.855405 sshd[4759]: Received disconnect from 45.245.61.114 port 36004:11: Bye Bye [preauth] Sep 12 18:19:26.855405 sshd[4759]: Disconnected from invalid user openhabian 45.245.61.114 port 36004 [preauth] Sep 12 18:19:26.858814 systemd[1]: sshd@12-139.178.90.133:22-45.245.61.114:36004.service: Deactivated successfully. Sep 12 18:20:24.952970 update_engine[1804]: I20250912 18:20:24.952757 1804 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 12 18:20:24.952970 update_engine[1804]: I20250912 18:20:24.952874 1804 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 12 18:20:24.954553 update_engine[1804]: I20250912 18:20:24.953402 1804 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 12 18:20:24.955001 update_engine[1804]: I20250912 18:20:24.954937 1804 omaha_request_params.cc:62] Current group set to stable Sep 12 18:20:24.955335 update_engine[1804]: I20250912 18:20:24.955273 1804 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 12 18:20:24.955335 update_engine[1804]: I20250912 18:20:24.955313 1804 update_attempter.cc:643] Scheduling an action processor start. Sep 12 18:20:24.955705 update_engine[1804]: I20250912 18:20:24.955366 1804 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 12 18:20:24.955705 update_engine[1804]: I20250912 18:20:24.955478 1804 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 12 18:20:24.956033 update_engine[1804]: I20250912 18:20:24.955734 1804 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 12 18:20:24.956033 update_engine[1804]: I20250912 18:20:24.955782 1804 omaha_request_action.cc:272] Request: Sep 12 18:20:24.956033 update_engine[1804]: Sep 12 18:20:24.956033 update_engine[1804]: Sep 12 18:20:24.956033 update_engine[1804]: Sep 12 18:20:24.956033 update_engine[1804]: Sep 12 18:20:24.956033 update_engine[1804]: Sep 12 18:20:24.956033 update_engine[1804]: Sep 12 18:20:24.956033 update_engine[1804]: Sep 12 18:20:24.956033 update_engine[1804]: Sep 12 18:20:24.956033 update_engine[1804]: I20250912 18:20:24.955816 1804 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 18:20:24.957394 locksmithd[1845]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 12 18:20:24.957958 update_engine[1804]: I20250912 18:20:24.957947 1804 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 18:20:24.958140 update_engine[1804]: I20250912 18:20:24.958127 1804 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 18:20:24.958444 update_engine[1804]: E20250912 18:20:24.958429 1804 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 18:20:24.958500 update_engine[1804]: I20250912 18:20:24.958485 1804 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 12 18:20:34.947853 update_engine[1804]: I20250912 18:20:34.947712 1804 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 18:20:34.949241 update_engine[1804]: I20250912 18:20:34.948477 1804 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 18:20:34.949425 update_engine[1804]: I20250912 18:20:34.949276 1804 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 12 18:20:34.949787 update_engine[1804]: E20250912 18:20:34.949716 1804 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 18:20:34.950005 update_engine[1804]: I20250912 18:20:34.949908 1804 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 12 18:20:44.938073 update_engine[1804]: I20250912 18:20:44.937901 1804 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 18:20:44.939858 update_engine[1804]: I20250912 18:20:44.938467 1804 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 18:20:44.939858 update_engine[1804]: I20250912 18:20:44.939153 1804 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 18:20:44.939858 update_engine[1804]: E20250912 18:20:44.939548 1804 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 18:20:44.939858 update_engine[1804]: I20250912 18:20:44.939728 1804 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 12 18:20:51.438437 systemd[1]: Started sshd@13-139.178.90.133:22-45.245.61.114:37014.service - OpenSSH per-connection server daemon (45.245.61.114:37014). Sep 12 18:20:52.783881 sshd[4775]: Received disconnect from 45.245.61.114 port 37014:11: Bye Bye [preauth] Sep 12 18:20:52.783881 sshd[4775]: Disconnected from authenticating user root 45.245.61.114 port 37014 [preauth] Sep 12 18:20:52.787243 systemd[1]: sshd@13-139.178.90.133:22-45.245.61.114:37014.service: Deactivated successfully. Sep 12 18:20:54.947049 update_engine[1804]: I20250912 18:20:54.946877 1804 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 18:20:54.947949 update_engine[1804]: I20250912 18:20:54.947432 1804 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 18:20:54.948110 update_engine[1804]: I20250912 18:20:54.948045 1804 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 18:20:54.948770 update_engine[1804]: E20250912 18:20:54.948660 1804 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 18:20:54.948984 update_engine[1804]: I20250912 18:20:54.948779 1804 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 12 18:20:54.948984 update_engine[1804]: I20250912 18:20:54.948809 1804 omaha_request_action.cc:617] Omaha request response: Sep 12 18:20:54.949203 update_engine[1804]: E20250912 18:20:54.948975 1804 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 12 18:20:54.949203 update_engine[1804]: I20250912 18:20:54.949024 1804 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 12 18:20:54.949203 update_engine[1804]: I20250912 18:20:54.949043 1804 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 12 18:20:54.949203 update_engine[1804]: I20250912 18:20:54.949059 1804 update_attempter.cc:306] Processing Done. Sep 12 18:20:54.949203 update_engine[1804]: E20250912 18:20:54.949090 1804 update_attempter.cc:619] Update failed. 
Sep 12 18:20:54.949203 update_engine[1804]: I20250912 18:20:54.949107 1804 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 12 18:20:54.949203 update_engine[1804]: I20250912 18:20:54.949122 1804 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 12 18:20:54.949203 update_engine[1804]: I20250912 18:20:54.949139 1804 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 12 18:20:54.949899 update_engine[1804]: I20250912 18:20:54.949296 1804 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 12 18:20:54.949899 update_engine[1804]: I20250912 18:20:54.949357 1804 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 12 18:20:54.949899 update_engine[1804]: I20250912 18:20:54.949376 1804 omaha_request_action.cc:272] Request: Sep 12 18:20:54.949899 update_engine[1804]: Sep 12 18:20:54.949899 update_engine[1804]: Sep 12 18:20:54.949899 update_engine[1804]: Sep 12 18:20:54.949899 update_engine[1804]: Sep 12 18:20:54.949899 update_engine[1804]: Sep 12 18:20:54.949899 update_engine[1804]: Sep 12 18:20:54.949899 update_engine[1804]: I20250912 18:20:54.949393 1804 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 18:20:54.949899 update_engine[1804]: I20250912 18:20:54.949833 1804 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 18:20:54.950802 locksmithd[1845]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 12 18:20:54.951474 update_engine[1804]: I20250912 18:20:54.950318 1804 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 18:20:54.951474 update_engine[1804]: E20250912 18:20:54.950776 1804 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 18:20:54.951474 update_engine[1804]: I20250912 18:20:54.950881 1804 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 12 18:20:54.951474 update_engine[1804]: I20250912 18:20:54.950906 1804 omaha_request_action.cc:617] Omaha request response: Sep 12 18:20:54.951474 update_engine[1804]: I20250912 18:20:54.950924 1804 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 12 18:20:54.951474 update_engine[1804]: I20250912 18:20:54.950942 1804 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 12 18:20:54.951474 update_engine[1804]: I20250912 18:20:54.950958 1804 update_attempter.cc:306] Processing Done. Sep 12 18:20:54.951474 update_engine[1804]: I20250912 18:20:54.950974 1804 update_attempter.cc:310] Error event sent. Sep 12 18:20:54.951474 update_engine[1804]: I20250912 18:20:54.950997 1804 update_check_scheduler.cc:74] Next update check in 44m53s Sep 12 18:20:54.952256 locksmithd[1845]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 12 18:22:10.990936 systemd[1]: Started sshd@14-139.178.90.133:22-45.245.61.114:53404.service - OpenSSH per-connection server daemon (45.245.61.114:53404). 
Sep 12 18:22:12.105714 sshd[4787]: Invalid user sergey from 45.245.61.114 port 53404 Sep 12 18:22:12.335883 sshd[4787]: Received disconnect from 45.245.61.114 port 53404:11: Bye Bye [preauth] Sep 12 18:22:12.335883 sshd[4787]: Disconnected from invalid user sergey 45.245.61.114 port 53404 [preauth] Sep 12 18:22:12.339185 systemd[1]: sshd@14-139.178.90.133:22-45.245.61.114:53404.service: Deactivated successfully. Sep 12 18:22:47.675916 systemd[1]: Started sshd@15-139.178.90.133:22-139.178.68.195:42408.service - OpenSSH per-connection server daemon (139.178.68.195:42408). Sep 12 18:22:47.722503 sshd[4796]: Accepted publickey for core from 139.178.68.195 port 42408 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:22:47.723632 sshd-session[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:22:47.727624 systemd-logind[1799]: New session 12 of user core. Sep 12 18:22:47.749814 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 18:22:47.869252 sshd[4798]: Connection closed by 139.178.68.195 port 42408 Sep 12 18:22:47.869419 sshd-session[4796]: pam_unix(sshd:session): session closed for user core Sep 12 18:22:47.871309 systemd[1]: sshd@15-139.178.90.133:22-139.178.68.195:42408.service: Deactivated successfully. Sep 12 18:22:47.872167 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 18:22:47.872527 systemd-logind[1799]: Session 12 logged out. Waiting for processes to exit. Sep 12 18:22:47.873119 systemd-logind[1799]: Removed session 12. Sep 12 18:22:52.886751 systemd[1]: Started sshd@16-139.178.90.133:22-139.178.68.195:57040.service - OpenSSH per-connection server daemon (139.178.68.195:57040). Sep 12 18:22:52.917786 sshd[4827]: Accepted publickey for core from 139.178.68.195 port 57040 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:22:52.918437 sshd-session[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:22:52.921330 systemd-logind[1799]: New session 13 of user core. Sep 12 18:22:52.933071 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 18:22:53.056818 sshd[4829]: Connection closed by 139.178.68.195 port 57040 Sep 12 18:22:53.057068 sshd-session[4827]: pam_unix(sshd:session): session closed for user core Sep 12 18:22:53.059263 systemd[1]: sshd@16-139.178.90.133:22-139.178.68.195:57040.service: Deactivated successfully. Sep 12 18:22:53.060544 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 18:22:53.061585 systemd-logind[1799]: Session 13 logged out. Waiting for processes to exit. Sep 12 18:22:53.062484 systemd-logind[1799]: Removed session 13. Sep 12 18:22:58.069052 systemd[1]: Started sshd@17-139.178.90.133:22-139.178.68.195:57046.service - OpenSSH per-connection server daemon (139.178.68.195:57046). Sep 12 18:22:58.101126 sshd[4855]: Accepted publickey for core from 139.178.68.195 port 57046 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:22:58.101822 sshd-session[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:22:58.104312 systemd-logind[1799]: New session 14 of user core. Sep 12 18:22:58.113899 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 12 18:22:58.200761 sshd[4857]: Connection closed by 139.178.68.195 port 57046 Sep 12 18:22:58.200941 sshd-session[4855]: pam_unix(sshd:session): session closed for user core Sep 12 18:22:58.202442 systemd[1]: sshd@17-139.178.90.133:22-139.178.68.195:57046.service: Deactivated successfully. Sep 12 18:22:58.203410 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 18:22:58.204216 systemd-logind[1799]: Session 14 logged out. Waiting for processes to exit. Sep 12 18:22:58.204822 systemd-logind[1799]: Removed session 14. Sep 12 18:23:03.241166 systemd[1]: Started sshd@18-139.178.90.133:22-139.178.68.195:51102.service - OpenSSH per-connection server daemon (139.178.68.195:51102). Sep 12 18:23:03.278298 sshd[4884]: Accepted publickey for core from 139.178.68.195 port 51102 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:23:03.279053 sshd-session[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:23:03.282333 systemd-logind[1799]: New session 15 of user core. Sep 12 18:23:03.304839 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 18:23:03.395391 sshd[4886]: Connection closed by 139.178.68.195 port 51102 Sep 12 18:23:03.395564 sshd-session[4884]: pam_unix(sshd:session): session closed for user core Sep 12 18:23:03.420240 systemd[1]: sshd@18-139.178.90.133:22-139.178.68.195:51102.service: Deactivated successfully. Sep 12 18:23:03.421279 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 18:23:03.422171 systemd-logind[1799]: Session 15 logged out. Waiting for processes to exit. Sep 12 18:23:03.423012 systemd[1]: Started sshd@19-139.178.90.133:22-139.178.68.195:51108.service - OpenSSH per-connection server daemon (139.178.68.195:51108). Sep 12 18:23:03.423564 systemd-logind[1799]: Removed session 15. Sep 12 18:23:03.457348 sshd[4911]: Accepted publickey for core from 139.178.68.195 port 51108 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:23:03.458051 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:23:03.460571 systemd-logind[1799]: New session 16 of user core. Sep 12 18:23:03.478919 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 18:23:03.632373 sshd[4915]: Connection closed by 139.178.68.195 port 51108 Sep 12 18:23:03.632521 sshd-session[4911]: pam_unix(sshd:session): session closed for user core Sep 12 18:23:03.645706 systemd[1]: sshd@19-139.178.90.133:22-139.178.68.195:51108.service: Deactivated successfully. Sep 12 18:23:03.646600 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 18:23:03.647332 systemd-logind[1799]: Session 16 logged out. Waiting for processes to exit. Sep 12 18:23:03.647962 systemd[1]: Started sshd@20-139.178.90.133:22-139.178.68.195:51120.service - OpenSSH per-connection server daemon (139.178.68.195:51120). Sep 12 18:23:03.648373 systemd-logind[1799]: Removed session 16. Sep 12 18:23:03.678726 sshd[4937]: Accepted publickey for core from 139.178.68.195 port 51120 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:23:03.679318 sshd-session[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:23:03.681788 systemd-logind[1799]: New session 17 of user core. Sep 12 18:23:03.692950 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 12 18:23:03.821933 sshd[4941]: Connection closed by 139.178.68.195 port 51120 Sep 12 18:23:03.822126 sshd-session[4937]: pam_unix(sshd:session): session closed for user core Sep 12 18:23:03.823712 systemd[1]: sshd@20-139.178.90.133:22-139.178.68.195:51120.service: Deactivated successfully. Sep 12 18:23:03.824687 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 18:23:03.825422 systemd-logind[1799]: Session 17 logged out. Waiting for processes to exit. Sep 12 18:23:03.826020 systemd-logind[1799]: Removed session 17. Sep 12 18:23:08.853965 systemd[1]: Started sshd@21-139.178.90.133:22-139.178.68.195:51126.service - OpenSSH per-connection server daemon (139.178.68.195:51126). Sep 12 18:23:08.888448 sshd[4967]: Accepted publickey for core from 139.178.68.195 port 51126 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:23:08.889090 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:23:08.891919 systemd-logind[1799]: New session 18 of user core. Sep 12 18:23:08.904901 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 18:23:08.993110 sshd[4969]: Connection closed by 139.178.68.195 port 51126 Sep 12 18:23:08.993345 sshd-session[4967]: pam_unix(sshd:session): session closed for user core Sep 12 18:23:08.995128 systemd[1]: sshd@21-139.178.90.133:22-139.178.68.195:51126.service: Deactivated successfully. Sep 12 18:23:08.996151 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 18:23:08.996918 systemd-logind[1799]: Session 18 logged out. Waiting for processes to exit. Sep 12 18:23:08.997467 systemd-logind[1799]: Removed session 18. Sep 12 18:23:14.017856 systemd[1]: Started sshd@22-139.178.90.133:22-139.178.68.195:38544.service - OpenSSH per-connection server daemon (139.178.68.195:38544). Sep 12 18:23:14.049118 sshd[4995]: Accepted publickey for core from 139.178.68.195 port 38544 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:23:14.049743 sshd-session[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:23:14.052438 systemd-logind[1799]: New session 19 of user core. Sep 12 18:23:14.065773 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 18:23:14.151449 sshd[4997]: Connection closed by 139.178.68.195 port 38544 Sep 12 18:23:14.151655 sshd-session[4995]: pam_unix(sshd:session): session closed for user core Sep 12 18:23:14.183444 systemd[1]: sshd@22-139.178.90.133:22-139.178.68.195:38544.service: Deactivated successfully. Sep 12 18:23:14.187898 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 18:23:14.191532 systemd-logind[1799]: Session 19 logged out. Waiting for processes to exit. Sep 12 18:23:14.208958 systemd[1]: Started sshd@23-139.178.90.133:22-139.178.68.195:38552.service - OpenSSH per-connection server daemon (139.178.68.195:38552). Sep 12 18:23:14.209636 systemd-logind[1799]: Removed session 19. Sep 12 18:23:14.239901 sshd[5020]: Accepted publickey for core from 139.178.68.195 port 38552 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:23:14.240528 sshd-session[5020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:23:14.243370 systemd-logind[1799]: New session 20 of user core. Sep 12 18:23:14.262084 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 12 18:23:14.405044 sshd[5024]: Connection closed by 139.178.68.195 port 38552 Sep 12 18:23:14.405210 sshd-session[5020]: pam_unix(sshd:session): session closed for user core Sep 12 18:23:14.423033 systemd[1]: sshd@23-139.178.90.133:22-139.178.68.195:38552.service: Deactivated successfully. Sep 12 18:23:14.424477 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 18:23:14.425623 systemd-logind[1799]: Session 20 logged out. Waiting for processes to exit. Sep 12 18:23:14.426909 systemd[1]: Started sshd@24-139.178.90.133:22-139.178.68.195:38560.service - OpenSSH per-connection server daemon (139.178.68.195:38560). Sep 12 18:23:14.427831 systemd-logind[1799]: Removed session 20. Sep 12 18:23:14.501541 sshd[5045]: Accepted publickey for core from 139.178.68.195 port 38560 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:23:14.502783 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:23:14.507493 systemd-logind[1799]: New session 21 of user core. Sep 12 18:23:14.520821 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 18:23:15.495236 sshd[5048]: Connection closed by 139.178.68.195 port 38560 Sep 12 18:23:15.496093 sshd-session[5045]: pam_unix(sshd:session): session closed for user core Sep 12 18:23:15.517073 systemd[1]: sshd@24-139.178.90.133:22-139.178.68.195:38560.service: Deactivated successfully. Sep 12 18:23:15.519157 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 18:23:15.520899 systemd-logind[1799]: Session 21 logged out. Waiting for processes to exit. Sep 12 18:23:15.522556 systemd[1]: Started sshd@25-139.178.90.133:22-139.178.68.195:38570.service - OpenSSH per-connection server daemon (139.178.68.195:38570). Sep 12 18:23:15.523736 systemd-logind[1799]: Removed session 21. Sep 12 18:23:15.597253 sshd[5082]: Accepted publickey for core from 139.178.68.195 port 38570 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:23:15.599595 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:23:15.608814 systemd-logind[1799]: New session 22 of user core. Sep 12 18:23:15.634919 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 18:23:15.858127 sshd[5085]: Connection closed by 139.178.68.195 port 38570 Sep 12 18:23:15.858840 sshd-session[5082]: pam_unix(sshd:session): session closed for user core Sep 12 18:23:15.877450 systemd[1]: sshd@25-139.178.90.133:22-139.178.68.195:38570.service: Deactivated successfully. Sep 12 18:23:15.881411 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 18:23:15.884571 systemd-logind[1799]: Session 22 logged out. Waiting for processes to exit. Sep 12 18:23:15.909384 systemd[1]: Started sshd@26-139.178.90.133:22-139.178.68.195:38580.service - OpenSSH per-connection server daemon (139.178.68.195:38580). Sep 12 18:23:15.911973 systemd-logind[1799]: Removed session 22. Sep 12 18:23:15.969911 sshd[5108]: Accepted publickey for core from 139.178.68.195 port 38580 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:23:15.970615 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:23:15.973374 systemd-logind[1799]: New session 23 of user core. Sep 12 18:23:15.989843 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 12 18:23:16.114939 sshd[5113]: Connection closed by 139.178.68.195 port 38580 Sep 12 18:23:16.115111 sshd-session[5108]: pam_unix(sshd:session): session closed for user core Sep 12 18:23:16.116653 systemd[1]: sshd@26-139.178.90.133:22-139.178.68.195:38580.service: Deactivated successfully. Sep 12 18:23:16.117590 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 18:23:16.118277 systemd-logind[1799]: Session 23 logged out. Waiting for processes to exit. Sep 12 18:23:16.118799 systemd-logind[1799]: Removed session 23. Sep 12 18:23:21.135260 systemd[1]: Started sshd@27-139.178.90.133:22-139.178.68.195:33792.service - OpenSSH per-connection server daemon (139.178.68.195:33792). Sep 12 18:23:21.166680 sshd[5143]: Accepted publickey for core from 139.178.68.195 port 33792 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:23:21.167413 sshd-session[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:23:21.170225 systemd-logind[1799]: New session 24 of user core. Sep 12 18:23:21.182850 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 18:23:21.267414 sshd[5145]: Connection closed by 139.178.68.195 port 33792 Sep 12 18:23:21.267607 sshd-session[5143]: pam_unix(sshd:session): session closed for user core Sep 12 18:23:21.269283 systemd[1]: sshd@27-139.178.90.133:22-139.178.68.195:33792.service: Deactivated successfully. Sep 12 18:23:21.270237 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 18:23:21.270964 systemd-logind[1799]: Session 24 logged out. Waiting for processes to exit. Sep 12 18:23:21.271491 systemd-logind[1799]: Removed session 24. Sep 12 18:23:26.293893 systemd[1]: Started sshd@28-139.178.90.133:22-139.178.68.195:33796.service - OpenSSH per-connection server daemon (139.178.68.195:33796). Sep 12 18:23:26.349864 sshd[5170]: Accepted publickey for core from 139.178.68.195 port 33796 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:23:26.351494 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:23:26.357068 systemd-logind[1799]: New session 25 of user core. Sep 12 18:23:26.369836 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 18:23:26.458231 sshd[5172]: Connection closed by 139.178.68.195 port 33796 Sep 12 18:23:26.458429 sshd-session[5170]: pam_unix(sshd:session): session closed for user core Sep 12 18:23:26.460116 systemd[1]: sshd@28-139.178.90.133:22-139.178.68.195:33796.service: Deactivated successfully. Sep 12 18:23:26.461131 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 18:23:26.461989 systemd-logind[1799]: Session 25 logged out. Waiting for processes to exit. Sep 12 18:23:26.462606 systemd-logind[1799]: Removed session 25. Sep 12 18:23:31.060144 systemd[1]: Started sshd@29-139.178.90.133:22-45.245.61.114:48432.service - OpenSSH per-connection server daemon (45.245.61.114:48432). Sep 12 18:23:31.497898 systemd[1]: Started sshd@30-139.178.90.133:22-139.178.68.195:51990.service - OpenSSH per-connection server daemon (139.178.68.195:51990). Sep 12 18:23:31.533090 sshd[5198]: Accepted publickey for core from 139.178.68.195 port 51990 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:23:31.533913 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:23:31.536966 systemd-logind[1799]: New session 26 of user core. 
Sep 12 18:23:31.553878 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 18:23:31.640937 sshd[5200]: Connection closed by 139.178.68.195 port 51990 Sep 12 18:23:31.641124 sshd-session[5198]: pam_unix(sshd:session): session closed for user core Sep 12 18:23:31.668311 systemd[1]: sshd@30-139.178.90.133:22-139.178.68.195:51990.service: Deactivated successfully. Sep 12 18:23:31.671214 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 18:23:31.673681 systemd-logind[1799]: Session 26 logged out. Waiting for processes to exit. Sep 12 18:23:31.693334 systemd[1]: Started sshd@31-139.178.90.133:22-139.178.68.195:52006.service - OpenSSH per-connection server daemon (139.178.68.195:52006). Sep 12 18:23:31.695905 systemd-logind[1799]: Removed session 26. Sep 12 18:23:31.762706 sshd[5223]: Accepted publickey for core from 139.178.68.195 port 52006 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:23:31.763369 sshd-session[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:23:31.766374 systemd-logind[1799]: New session 27 of user core. Sep 12 18:23:31.784059 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 18:23:32.604961 sshd[5195]: Received disconnect from 45.245.61.114 port 48432:11: Bye Bye [preauth] Sep 12 18:23:32.604961 sshd[5195]: Disconnected from authenticating user root 45.245.61.114 port 48432 [preauth] Sep 12 18:23:32.608399 systemd[1]: sshd@29-139.178.90.133:22-45.245.61.114:48432.service: Deactivated successfully. Sep 12 18:23:33.095419 systemd[1]: Started sshd@32-139.178.90.133:22-66.181.171.136:51630.service - OpenSSH per-connection server daemon (66.181.171.136:51630). Sep 12 18:23:33.118911 containerd[1809]: time="2025-09-12T18:23:33.118862000Z" level=info msg="StopContainer for \"7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646\" with timeout 30 (s)" Sep 12 18:23:33.119504 containerd[1809]: time="2025-09-12T18:23:33.119234566Z" level=info msg="Stop container \"7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646\" with signal terminated" Sep 12 18:23:33.130698 systemd[1]: cri-containerd-7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646.scope: Deactivated successfully. Sep 12 18:23:33.153003 containerd[1809]: time="2025-09-12T18:23:33.152970504Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 18:23:33.156899 containerd[1809]: time="2025-09-12T18:23:33.156877742Z" level=info msg="StopContainer for \"9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72\" with timeout 2 (s)" Sep 12 18:23:33.157028 containerd[1809]: time="2025-09-12T18:23:33.157013600Z" level=info msg="Stop container \"9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72\" with signal terminated" Sep 12 18:23:33.157438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646-rootfs.mount: Deactivated successfully. 
Sep 12 18:23:33.161167 systemd-networkd[1727]: lxc_health: Link DOWN Sep 12 18:23:33.161171 systemd-networkd[1727]: lxc_health: Lost carrier Sep 12 18:23:33.176077 containerd[1809]: time="2025-09-12T18:23:33.176020200Z" level=info msg="shim disconnected" id=7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646 namespace=k8s.io Sep 12 18:23:33.176077 containerd[1809]: time="2025-09-12T18:23:33.176054247Z" level=warning msg="cleaning up after shim disconnected" id=7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646 namespace=k8s.io Sep 12 18:23:33.176077 containerd[1809]: time="2025-09-12T18:23:33.176059414Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 18:23:33.183071 containerd[1809]: time="2025-09-12T18:23:33.183021029Z" level=info msg="StopContainer for \"7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646\" returns successfully" Sep 12 18:23:33.183389 containerd[1809]: time="2025-09-12T18:23:33.183375191Z" level=info msg="StopPodSandbox for \"b5978c5d1f3df937338706629c13f819ef89dcc85a7990dee2765fa526f8e8f7\"" Sep 12 18:23:33.183427 containerd[1809]: time="2025-09-12T18:23:33.183397459Z" level=info msg="Container to stop \"7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 18:23:33.184663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b5978c5d1f3df937338706629c13f819ef89dcc85a7990dee2765fa526f8e8f7-shm.mount: Deactivated successfully. Sep 12 18:23:33.186562 systemd[1]: cri-containerd-b5978c5d1f3df937338706629c13f819ef89dcc85a7990dee2765fa526f8e8f7.scope: Deactivated successfully. Sep 12 18:23:33.195740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5978c5d1f3df937338706629c13f819ef89dcc85a7990dee2765fa526f8e8f7-rootfs.mount: Deactivated successfully. Sep 12 18:23:33.196226 containerd[1809]: time="2025-09-12T18:23:33.196191959Z" level=info msg="shim disconnected" id=b5978c5d1f3df937338706629c13f819ef89dcc85a7990dee2765fa526f8e8f7 namespace=k8s.io Sep 12 18:23:33.196278 containerd[1809]: time="2025-09-12T18:23:33.196227979Z" level=warning msg="cleaning up after shim disconnected" id=b5978c5d1f3df937338706629c13f819ef89dcc85a7990dee2765fa526f8e8f7 namespace=k8s.io Sep 12 18:23:33.196278 containerd[1809]: time="2025-09-12T18:23:33.196236375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 18:23:33.199505 systemd[1]: cri-containerd-9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72.scope: Deactivated successfully. Sep 12 18:23:33.199713 systemd[1]: cri-containerd-9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72.scope: Consumed 6.668s CPU time, 167.5M memory peak, 144K read from disk, 13.3M written to disk. Sep 12 18:23:33.203138 containerd[1809]: time="2025-09-12T18:23:33.203118831Z" level=info msg="TearDown network for sandbox \"b5978c5d1f3df937338706629c13f819ef89dcc85a7990dee2765fa526f8e8f7\" successfully" Sep 12 18:23:33.203138 containerd[1809]: time="2025-09-12T18:23:33.203136182Z" level=info msg="StopPodSandbox for \"b5978c5d1f3df937338706629c13f819ef89dcc85a7990dee2765fa526f8e8f7\" returns successfully" Sep 12 18:23:33.209324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72-rootfs.mount: Deactivated successfully. 
Sep 12 18:23:33.226046 containerd[1809]: time="2025-09-12T18:23:33.226015403Z" level=info msg="shim disconnected" id=9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72 namespace=k8s.io Sep 12 18:23:33.226046 containerd[1809]: time="2025-09-12T18:23:33.226045004Z" level=warning msg="cleaning up after shim disconnected" id=9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72 namespace=k8s.io Sep 12 18:23:33.226145 containerd[1809]: time="2025-09-12T18:23:33.226053200Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 18:23:33.232658 containerd[1809]: time="2025-09-12T18:23:33.232614275Z" level=warning msg="cleanup warnings time=\"2025-09-12T18:23:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 18:23:33.234440 containerd[1809]: time="2025-09-12T18:23:33.234393399Z" level=info msg="StopContainer for \"9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72\" returns successfully" Sep 12 18:23:33.234741 containerd[1809]: time="2025-09-12T18:23:33.234686969Z" level=info msg="StopPodSandbox for \"2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf\"" Sep 12 18:23:33.234741 containerd[1809]: time="2025-09-12T18:23:33.234707544Z" level=info msg="Container to stop \"7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 18:23:33.234741 containerd[1809]: time="2025-09-12T18:23:33.234730212Z" level=info msg="Container to stop \"9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 18:23:33.234741 containerd[1809]: time="2025-09-12T18:23:33.234736315Z" level=info msg="Container to stop \"1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 18:23:33.234741 containerd[1809]: time="2025-09-12T18:23:33.234743577Z" level=info msg="Container to stop \"8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 18:23:33.234909 containerd[1809]: time="2025-09-12T18:23:33.234749814Z" level=info msg="Container to stop \"5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 18:23:33.238357 systemd[1]: cri-containerd-2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf.scope: Deactivated successfully. 
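Before sandbox 2c26be99... is stopped, containerd logs every container that belonged to it together with its current state (all five Cilium containers are already CONTAINER_EXITED). A rough sketch of enumerating the containers of one sandbox over the CRI gRPC API, using the k8s.io/cri-api types; the endpoint path is an assumption, not taken from this log:

    package example

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // containersOfSandbox lists all containers belonging to one pod sandbox,
    // e.g. the five Cilium containers enumerated above.
    func containersOfSandbox(ctx context.Context, sandboxID string) ([]*runtimeapi.Container, error) {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed CRI endpoint
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            return nil, err
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
            Filter: &runtimeapi.ContainerFilter{PodSandboxId: sandboxID},
        })
        if err != nil {
            return nil, err
        }
        return resp.Containers, nil // each entry carries Metadata.Name and State
    }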
Sep 12 18:23:33.250053 kubelet[3147]: I0912 18:23:33.250035 3147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzmj6\" (UniqueName: \"kubernetes.io/projected/37ed2745-4fb8-4684-a12c-badc5421b9b8-kube-api-access-lzmj6\") pod \"37ed2745-4fb8-4684-a12c-badc5421b9b8\" (UID: \"37ed2745-4fb8-4684-a12c-badc5421b9b8\") " Sep 12 18:23:33.250277 kubelet[3147]: I0912 18:23:33.250060 3147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37ed2745-4fb8-4684-a12c-badc5421b9b8-cilium-config-path\") pod \"37ed2745-4fb8-4684-a12c-badc5421b9b8\" (UID: \"37ed2745-4fb8-4684-a12c-badc5421b9b8\") " Sep 12 18:23:33.251191 kubelet[3147]: I0912 18:23:33.251181 3147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ed2745-4fb8-4684-a12c-badc5421b9b8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "37ed2745-4fb8-4684-a12c-badc5421b9b8" (UID: "37ed2745-4fb8-4684-a12c-badc5421b9b8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 18:23:33.267165 containerd[1809]: time="2025-09-12T18:23:33.267129553Z" level=info msg="shim disconnected" id=2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf namespace=k8s.io Sep 12 18:23:33.267165 containerd[1809]: time="2025-09-12T18:23:33.267163858Z" level=warning msg="cleaning up after shim disconnected" id=2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf namespace=k8s.io Sep 12 18:23:33.267251 containerd[1809]: time="2025-09-12T18:23:33.267169766Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 18:23:33.274373 containerd[1809]: time="2025-09-12T18:23:33.274322808Z" level=info msg="TearDown network for sandbox \"2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf\" successfully" Sep 12 18:23:33.274373 containerd[1809]: time="2025-09-12T18:23:33.274341509Z" level=info msg="StopPodSandbox for \"2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf\" returns successfully" Sep 12 18:23:33.285114 kubelet[3147]: I0912 18:23:33.285066 3147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37ed2745-4fb8-4684-a12c-badc5421b9b8-kube-api-access-lzmj6" (OuterVolumeSpecName: "kube-api-access-lzmj6") pod "37ed2745-4fb8-4684-a12c-badc5421b9b8" (UID: "37ed2745-4fb8-4684-a12c-badc5421b9b8"). InnerVolumeSpecName "kube-api-access-lzmj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 18:23:33.350910 kubelet[3147]: I0912 18:23:33.350687 3147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-bpf-maps\") pod \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " Sep 12 18:23:33.350910 kubelet[3147]: I0912 18:23:33.350814 3147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ef487c0-6222-42ea-94f7-36f4cb4ac902-hubble-tls\") pod \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " Sep 12 18:23:33.350910 kubelet[3147]: I0912 18:23:33.350842 3147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6ef487c0-6222-42ea-94f7-36f4cb4ac902" (UID: "6ef487c0-6222-42ea-94f7-36f4cb4ac902"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 18:23:33.350910 kubelet[3147]: I0912 18:23:33.350874 3147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ef487c0-6222-42ea-94f7-36f4cb4ac902-cilium-config-path\") pod \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " Sep 12 18:23:33.351999 kubelet[3147]: I0912 18:23:33.350934 3147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-etc-cni-netd\") pod \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " Sep 12 18:23:33.351999 kubelet[3147]: I0912 18:23:33.350987 3147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl6vm\" (UniqueName: \"kubernetes.io/projected/6ef487c0-6222-42ea-94f7-36f4cb4ac902-kube-api-access-cl6vm\") pod \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " Sep 12 18:23:33.351999 kubelet[3147]: I0912 18:23:33.351036 3147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-xtables-lock\") pod \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " Sep 12 18:23:33.351999 kubelet[3147]: I0912 18:23:33.351078 3147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-cilium-cgroup\") pod \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " Sep 12 18:23:33.351999 kubelet[3147]: I0912 18:23:33.351108 3147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6ef487c0-6222-42ea-94f7-36f4cb4ac902" (UID: "6ef487c0-6222-42ea-94f7-36f4cb4ac902"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 18:23:33.351999 kubelet[3147]: I0912 18:23:33.351153 3147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-cilium-run\") pod \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " Sep 12 18:23:33.353005 kubelet[3147]: I0912 18:23:33.351236 3147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-hostproc\") pod \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " Sep 12 18:23:33.353005 kubelet[3147]: I0912 18:23:33.351228 3147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6ef487c0-6222-42ea-94f7-36f4cb4ac902" (UID: "6ef487c0-6222-42ea-94f7-36f4cb4ac902"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 18:23:33.353005 kubelet[3147]: I0912 18:23:33.351239 3147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6ef487c0-6222-42ea-94f7-36f4cb4ac902" (UID: "6ef487c0-6222-42ea-94f7-36f4cb4ac902"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 18:23:33.353005 kubelet[3147]: I0912 18:23:33.351324 3147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-host-proc-sys-kernel\") pod \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " Sep 12 18:23:33.353005 kubelet[3147]: I0912 18:23:33.351337 3147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6ef487c0-6222-42ea-94f7-36f4cb4ac902" (UID: "6ef487c0-6222-42ea-94f7-36f4cb4ac902"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 18:23:33.353565 kubelet[3147]: I0912 18:23:33.351383 3147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-hostproc" (OuterVolumeSpecName: "hostproc") pod "6ef487c0-6222-42ea-94f7-36f4cb4ac902" (UID: "6ef487c0-6222-42ea-94f7-36f4cb4ac902"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 18:23:33.353565 kubelet[3147]: I0912 18:23:33.351425 3147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ef487c0-6222-42ea-94f7-36f4cb4ac902-clustermesh-secrets\") pod \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " Sep 12 18:23:33.353565 kubelet[3147]: I0912 18:23:33.351505 3147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-cni-path\") pod \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " Sep 12 18:23:33.353565 kubelet[3147]: I0912 18:23:33.351432 3147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6ef487c0-6222-42ea-94f7-36f4cb4ac902" (UID: "6ef487c0-6222-42ea-94f7-36f4cb4ac902"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 18:23:33.353565 kubelet[3147]: I0912 18:23:33.351571 3147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-lib-modules\") pod \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " Sep 12 18:23:33.354178 kubelet[3147]: I0912 18:23:33.351662 3147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-host-proc-sys-net\") pod \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\" (UID: \"6ef487c0-6222-42ea-94f7-36f4cb4ac902\") " Sep 12 18:23:33.354178 kubelet[3147]: I0912 18:23:33.351671 3147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-cni-path" (OuterVolumeSpecName: "cni-path") pod "6ef487c0-6222-42ea-94f7-36f4cb4ac902" (UID: "6ef487c0-6222-42ea-94f7-36f4cb4ac902"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 18:23:33.354178 kubelet[3147]: I0912 18:23:33.351740 3147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6ef487c0-6222-42ea-94f7-36f4cb4ac902" (UID: "6ef487c0-6222-42ea-94f7-36f4cb4ac902"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 18:23:33.354178 kubelet[3147]: I0912 18:23:33.351790 3147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6ef487c0-6222-42ea-94f7-36f4cb4ac902" (UID: "6ef487c0-6222-42ea-94f7-36f4cb4ac902"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 18:23:33.354178 kubelet[3147]: I0912 18:23:33.351833 3147 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-etc-cni-netd\") on node \"ci-4230.2.3-a-0654ef0f4d\" DevicePath \"\"" Sep 12 18:23:33.354178 kubelet[3147]: I0912 18:23:33.351893 3147 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-xtables-lock\") on node \"ci-4230.2.3-a-0654ef0f4d\" DevicePath \"\"" Sep 12 18:23:33.354928 kubelet[3147]: I0912 18:23:33.351925 3147 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-cilium-cgroup\") on node \"ci-4230.2.3-a-0654ef0f4d\" DevicePath \"\"" Sep 12 18:23:33.354928 kubelet[3147]: I0912 18:23:33.351953 3147 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzmj6\" (UniqueName: \"kubernetes.io/projected/37ed2745-4fb8-4684-a12c-badc5421b9b8-kube-api-access-lzmj6\") on node \"ci-4230.2.3-a-0654ef0f4d\" DevicePath \"\"" Sep 12 18:23:33.354928 kubelet[3147]: I0912 18:23:33.351980 3147 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-cilium-run\") on node \"ci-4230.2.3-a-0654ef0f4d\" DevicePath \"\"" Sep 12 18:23:33.354928 kubelet[3147]: I0912 18:23:33.352007 3147 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-hostproc\") on node \"ci-4230.2.3-a-0654ef0f4d\" DevicePath \"\"" Sep 12 18:23:33.354928 kubelet[3147]: I0912 18:23:33.352036 3147 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-host-proc-sys-kernel\") on node \"ci-4230.2.3-a-0654ef0f4d\" DevicePath \"\"" Sep 12 18:23:33.354928 kubelet[3147]: I0912 18:23:33.352063 3147 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37ed2745-4fb8-4684-a12c-badc5421b9b8-cilium-config-path\") on node \"ci-4230.2.3-a-0654ef0f4d\" DevicePath \"\"" Sep 12 18:23:33.354928 kubelet[3147]: I0912 18:23:33.352089 3147 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-cni-path\") on node \"ci-4230.2.3-a-0654ef0f4d\" DevicePath \"\"" Sep 12 18:23:33.354928 kubelet[3147]: I0912 18:23:33.352119 3147 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-bpf-maps\") on node \"ci-4230.2.3-a-0654ef0f4d\" DevicePath \"\"" Sep 12 18:23:33.357644 kubelet[3147]: I0912 18:23:33.357503 3147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ef487c0-6222-42ea-94f7-36f4cb4ac902-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6ef487c0-6222-42ea-94f7-36f4cb4ac902" (UID: "6ef487c0-6222-42ea-94f7-36f4cb4ac902"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 18:23:33.357644 kubelet[3147]: I0912 18:23:33.357580 3147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ef487c0-6222-42ea-94f7-36f4cb4ac902-kube-api-access-cl6vm" (OuterVolumeSpecName: "kube-api-access-cl6vm") pod "6ef487c0-6222-42ea-94f7-36f4cb4ac902" (UID: "6ef487c0-6222-42ea-94f7-36f4cb4ac902"). InnerVolumeSpecName "kube-api-access-cl6vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 18:23:33.357960 kubelet[3147]: I0912 18:23:33.357584 3147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ef487c0-6222-42ea-94f7-36f4cb4ac902-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6ef487c0-6222-42ea-94f7-36f4cb4ac902" (UID: "6ef487c0-6222-42ea-94f7-36f4cb4ac902"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 18:23:33.358684 kubelet[3147]: I0912 18:23:33.358567 3147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ef487c0-6222-42ea-94f7-36f4cb4ac902-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6ef487c0-6222-42ea-94f7-36f4cb4ac902" (UID: "6ef487c0-6222-42ea-94f7-36f4cb4ac902"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 18:23:33.453062 kubelet[3147]: I0912 18:23:33.452970 3147 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ef487c0-6222-42ea-94f7-36f4cb4ac902-hubble-tls\") on node \"ci-4230.2.3-a-0654ef0f4d\" DevicePath \"\"" Sep 12 18:23:33.453062 kubelet[3147]: I0912 18:23:33.453063 3147 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ef487c0-6222-42ea-94f7-36f4cb4ac902-cilium-config-path\") on node \"ci-4230.2.3-a-0654ef0f4d\" DevicePath \"\"" Sep 12 18:23:33.453517 kubelet[3147]: I0912 18:23:33.453120 3147 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl6vm\" (UniqueName: \"kubernetes.io/projected/6ef487c0-6222-42ea-94f7-36f4cb4ac902-kube-api-access-cl6vm\") on node \"ci-4230.2.3-a-0654ef0f4d\" DevicePath \"\"" Sep 12 18:23:33.453517 kubelet[3147]: I0912 18:23:33.453172 3147 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ef487c0-6222-42ea-94f7-36f4cb4ac902-clustermesh-secrets\") on node \"ci-4230.2.3-a-0654ef0f4d\" DevicePath \"\"" Sep 12 18:23:33.453517 kubelet[3147]: I0912 18:23:33.453222 3147 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-lib-modules\") on node \"ci-4230.2.3-a-0654ef0f4d\" DevicePath \"\"" Sep 12 18:23:33.453517 kubelet[3147]: I0912 18:23:33.453270 3147 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ef487c0-6222-42ea-94f7-36f4cb4ac902-host-proc-sys-net\") on node \"ci-4230.2.3-a-0654ef0f4d\" DevicePath \"\"" Sep 12 18:23:34.024588 kubelet[3147]: I0912 18:23:34.024468 3147 scope.go:117] "RemoveContainer" containerID="7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646" Sep 12 18:23:34.027472 containerd[1809]: time="2025-09-12T18:23:34.027410697Z" level=info msg="RemoveContainer for \"7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646\"" Sep 12 18:23:34.029703 containerd[1809]: 
time="2025-09-12T18:23:34.029691134Z" level=info msg="RemoveContainer for \"7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646\" returns successfully" Sep 12 18:23:34.029790 kubelet[3147]: I0912 18:23:34.029781 3147 scope.go:117] "RemoveContainer" containerID="7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646" Sep 12 18:23:34.029889 containerd[1809]: time="2025-09-12T18:23:34.029873795Z" level=error msg="ContainerStatus for \"7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646\": not found" Sep 12 18:23:34.029948 kubelet[3147]: E0912 18:23:34.029938 3147 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646\": not found" containerID="7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646" Sep 12 18:23:34.029971 systemd[1]: Removed slice kubepods-besteffort-pod37ed2745_4fb8_4684_a12c_badc5421b9b8.slice - libcontainer container kubepods-besteffort-pod37ed2745_4fb8_4684_a12c_badc5421b9b8.slice. Sep 12 18:23:34.030150 kubelet[3147]: I0912 18:23:34.029956 3147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646"} err="failed to get container status \"7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a9ee6e6675d70c556af175a50735749bb5375e014533162d59d2dba6a158646\": not found" Sep 12 18:23:34.030150 kubelet[3147]: I0912 18:23:34.030000 3147 scope.go:117] "RemoveContainer" containerID="9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72" Sep 12 18:23:34.030496 containerd[1809]: time="2025-09-12T18:23:34.030485254Z" level=info msg="RemoveContainer for \"9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72\"" Sep 12 18:23:34.030969 systemd[1]: Removed slice kubepods-burstable-pod6ef487c0_6222_42ea_94f7_36f4cb4ac902.slice - libcontainer container kubepods-burstable-pod6ef487c0_6222_42ea_94f7_36f4cb4ac902.slice. Sep 12 18:23:34.031021 systemd[1]: kubepods-burstable-pod6ef487c0_6222_42ea_94f7_36f4cb4ac902.slice: Consumed 6.714s CPU time, 168M memory peak, 144K read from disk, 13.3M written to disk. 
Sep 12 18:23:34.031724 containerd[1809]: time="2025-09-12T18:23:34.031693145Z" level=info msg="RemoveContainer for \"9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72\" returns successfully" Sep 12 18:23:34.031778 kubelet[3147]: I0912 18:23:34.031764 3147 scope.go:117] "RemoveContainer" containerID="7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220" Sep 12 18:23:34.032181 containerd[1809]: time="2025-09-12T18:23:34.032170428Z" level=info msg="RemoveContainer for \"7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220\"" Sep 12 18:23:34.033199 containerd[1809]: time="2025-09-12T18:23:34.033188971Z" level=info msg="RemoveContainer for \"7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220\" returns successfully" Sep 12 18:23:34.033259 kubelet[3147]: I0912 18:23:34.033251 3147 scope.go:117] "RemoveContainer" containerID="8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601" Sep 12 18:23:34.033642 containerd[1809]: time="2025-09-12T18:23:34.033632258Z" level=info msg="RemoveContainer for \"8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601\"" Sep 12 18:23:34.034824 containerd[1809]: time="2025-09-12T18:23:34.034789964Z" level=info msg="RemoveContainer for \"8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601\" returns successfully" Sep 12 18:23:34.034888 kubelet[3147]: I0912 18:23:34.034861 3147 scope.go:117] "RemoveContainer" containerID="5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154" Sep 12 18:23:34.035281 containerd[1809]: time="2025-09-12T18:23:34.035269999Z" level=info msg="RemoveContainer for \"5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154\"" Sep 12 18:23:34.036581 containerd[1809]: time="2025-09-12T18:23:34.036544495Z" level=info msg="RemoveContainer for \"5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154\" returns successfully" Sep 12 18:23:34.036613 kubelet[3147]: I0912 18:23:34.036601 3147 scope.go:117] "RemoveContainer" containerID="1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6" Sep 12 18:23:34.037155 containerd[1809]: time="2025-09-12T18:23:34.037143191Z" level=info msg="RemoveContainer for \"1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6\"" Sep 12 18:23:34.038217 containerd[1809]: time="2025-09-12T18:23:34.038205882Z" level=info msg="RemoveContainer for \"1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6\" returns successfully" Sep 12 18:23:34.038273 kubelet[3147]: I0912 18:23:34.038264 3147 scope.go:117] "RemoveContainer" containerID="9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72" Sep 12 18:23:34.038356 containerd[1809]: time="2025-09-12T18:23:34.038340596Z" level=error msg="ContainerStatus for \"9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72\": not found" Sep 12 18:23:34.038403 kubelet[3147]: E0912 18:23:34.038394 3147 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72\": not found" containerID="9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72" Sep 12 18:23:34.038439 kubelet[3147]: I0912 18:23:34.038406 3147 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72"} err="failed to get container status \"9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d84fcb9fcee310bbad9405027b540f8c8a90e474a4fe3cd3cb9b8fc9e9b3c72\": not found" Sep 12 18:23:34.038439 kubelet[3147]: I0912 18:23:34.038415 3147 scope.go:117] "RemoveContainer" containerID="7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220" Sep 12 18:23:34.038499 containerd[1809]: time="2025-09-12T18:23:34.038473121Z" level=error msg="ContainerStatus for \"7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220\": not found" Sep 12 18:23:34.038542 kubelet[3147]: E0912 18:23:34.038532 3147 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220\": not found" containerID="7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220" Sep 12 18:23:34.038574 kubelet[3147]: I0912 18:23:34.038544 3147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220"} err="failed to get container status \"7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220\": rpc error: code = NotFound desc = an error occurred when try to find container \"7843ea716acf2f2d016c16c1508a7fbe0f037d4e3d925e413f7f32451800e220\": not found" Sep 12 18:23:34.038574 kubelet[3147]: I0912 18:23:34.038553 3147 scope.go:117] "RemoveContainer" containerID="8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601" Sep 12 18:23:34.038630 containerd[1809]: time="2025-09-12T18:23:34.038608986Z" level=error msg="ContainerStatus for \"8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601\": not found" Sep 12 18:23:34.038671 kubelet[3147]: E0912 18:23:34.038661 3147 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601\": not found" containerID="8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601" Sep 12 18:23:34.038703 kubelet[3147]: I0912 18:23:34.038670 3147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601"} err="failed to get container status \"8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a622b4858aa6179805dcb575e6a352f87056da9b0c98c2e0c9e520220983601\": not found" Sep 12 18:23:34.038703 kubelet[3147]: I0912 18:23:34.038677 3147 scope.go:117] "RemoveContainer" containerID="5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154" Sep 12 18:23:34.038749 containerd[1809]: time="2025-09-12T18:23:34.038736262Z" level=error msg="ContainerStatus for \"5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154\": not found" Sep 12 18:23:34.038791 kubelet[3147]: E0912 18:23:34.038781 3147 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154\": not found" containerID="5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154" Sep 12 18:23:34.038814 kubelet[3147]: I0912 18:23:34.038793 3147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154"} err="failed to get container status \"5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ae78621d1663fc3b4bb5853d56bd9fc54a93603c32ea142f27cef82a483d154\": not found" Sep 12 18:23:34.038814 kubelet[3147]: I0912 18:23:34.038801 3147 scope.go:117] "RemoveContainer" containerID="1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6" Sep 12 18:23:34.038863 containerd[1809]: time="2025-09-12T18:23:34.038853516Z" level=error msg="ContainerStatus for \"1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6\": not found" Sep 12 18:23:34.038899 kubelet[3147]: E0912 18:23:34.038892 3147 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6\": not found" containerID="1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6" Sep 12 18:23:34.038919 kubelet[3147]: I0912 18:23:34.038902 3147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6"} err="failed to get container status \"1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"1dbce32f6d78c9e47819057c42a15af2938563c3a1b17b26d57cb0d1604ff7e6\": not found" Sep 12 18:23:34.092855 sshd[5248]: Invalid user from 66.181.171.136 port 51630 Sep 12 18:23:34.129037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf-rootfs.mount: Deactivated successfully. Sep 12 18:23:34.129098 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c26be9e9500d54d0637e64b4b0c7d64d6e83037baf2dd181c45b8c470741cdf-shm.mount: Deactivated successfully. Sep 12 18:23:34.129143 systemd[1]: var-lib-kubelet-pods-37ed2745\x2d4fb8\x2d4684\x2da12c\x2dbadc5421b9b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlzmj6.mount: Deactivated successfully. Sep 12 18:23:34.129184 systemd[1]: var-lib-kubelet-pods-6ef487c0\x2d6222\x2d42ea\x2d94f7\x2d36f4cb4ac902-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcl6vm.mount: Deactivated successfully. Sep 12 18:23:34.129224 systemd[1]: var-lib-kubelet-pods-6ef487c0\x2d6222\x2d42ea\x2d94f7\x2d36f4cb4ac902-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
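The ContainerStatus failures above are the benign follow-up to RemoveContainer: the kubelet asks for the status of an ID the runtime has already forgotten, receives gRPC NotFound, and only logs it. A minimal check for that condition with standard grpc-go status handling (illustrative, not kubelet source):

    package example

    import (
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // isNotFound reports whether a CRI call failed with
    // "rpc error: code = NotFound ...", as in the ContainerStatus errors above.
    func isNotFound(err error) bool {
        return status.Code(err) == codes.NotFound
    }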
Sep 12 18:23:34.129265 systemd[1]: var-lib-kubelet-pods-6ef487c0\x2d6222\x2d42ea\x2d94f7\x2d36f4cb4ac902-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 18:23:34.832377 kubelet[3147]: I0912 18:23:34.832334 3147 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37ed2745-4fb8-4684-a12c-badc5421b9b8" path="/var/lib/kubelet/pods/37ed2745-4fb8-4684-a12c-badc5421b9b8/volumes" Sep 12 18:23:34.832644 kubelet[3147]: I0912 18:23:34.832573 3147 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ef487c0-6222-42ea-94f7-36f4cb4ac902" path="/var/lib/kubelet/pods/6ef487c0-6222-42ea-94f7-36f4cb4ac902/volumes" Sep 12 18:23:35.067329 sshd[5226]: Connection closed by 139.178.68.195 port 52006 Sep 12 18:23:35.068257 sshd-session[5223]: pam_unix(sshd:session): session closed for user core Sep 12 18:23:35.088555 systemd[1]: sshd@31-139.178.90.133:22-139.178.68.195:52006.service: Deactivated successfully. Sep 12 18:23:35.089488 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 18:23:35.089900 systemd-logind[1799]: Session 27 logged out. Waiting for processes to exit. Sep 12 18:23:35.090848 systemd[1]: Started sshd@33-139.178.90.133:22-139.178.68.195:52014.service - OpenSSH per-connection server daemon (139.178.68.195:52014). Sep 12 18:23:35.091306 systemd-logind[1799]: Removed session 27. Sep 12 18:23:35.121439 sshd[5405]: Accepted publickey for core from 139.178.68.195 port 52014 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:23:35.122153 sshd-session[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:23:35.124614 systemd-logind[1799]: New session 28 of user core. Sep 12 18:23:35.138931 systemd[1]: Started session-28.scope - Session 28 of User core. 
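The \x2d sequences in the .mount unit names above come from systemd's unit-name escaping: "/" is encoded as "-", and a literal "-" (or any other special byte) as a \xNN escape, which is how var-lib-kubelet-pods-37ed2745\x2d4fb8-... maps back onto /var/lib/kubelet/pods/37ed2745-4fb8-.../volumes/... A homegrown decoder sketch, equivalent in spirit to systemd-escape --unescape --path:

    package example

    import (
        "strconv"
        "strings"
    )

    // unitToPath turns a systemd mount unit name such as
    //   var-lib-kubelet-pods-37ed2745\x2d4fb8-....mount
    // back into the mounted path: "-" encodes "/" and "\xNN" encodes the raw
    // byte NN (so a literal "-" round-trips as "\x2d").
    func unitToPath(unit string) string {
        name := strings.TrimSuffix(unit, ".mount")
        var b strings.Builder
        b.WriteByte('/') // mount unit names drop the leading slash
        for i := 0; i < len(name); i++ {
            switch {
            case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
                if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
                    b.WriteByte(byte(v))
                    i += 3 // skip the consumed "xNN"
                    continue
                }
                b.WriteByte(name[i])
            case name[i] == '-':
                b.WriteByte('/')
            default:
                b.WriteByte(name[i])
            }
        }
        return b.String()
    }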
Sep 12 18:23:35.478572 sshd[5409]: Connection closed by 139.178.68.195 port 52014 Sep 12 18:23:35.478730 sshd-session[5405]: pam_unix(sshd:session): session closed for user core Sep 12 18:23:35.484454 kubelet[3147]: E0912 18:23:35.484434 3147 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ef487c0-6222-42ea-94f7-36f4cb4ac902" containerName="apply-sysctl-overwrites" Sep 12 18:23:35.484454 kubelet[3147]: E0912 18:23:35.484452 3147 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ef487c0-6222-42ea-94f7-36f4cb4ac902" containerName="clean-cilium-state" Sep 12 18:23:35.484454 kubelet[3147]: E0912 18:23:35.484458 3147 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ef487c0-6222-42ea-94f7-36f4cb4ac902" containerName="cilium-agent" Sep 12 18:23:35.484454 kubelet[3147]: E0912 18:23:35.484463 3147 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ef487c0-6222-42ea-94f7-36f4cb4ac902" containerName="mount-cgroup" Sep 12 18:23:35.484633 kubelet[3147]: E0912 18:23:35.484469 3147 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37ed2745-4fb8-4684-a12c-badc5421b9b8" containerName="cilium-operator" Sep 12 18:23:35.484633 kubelet[3147]: E0912 18:23:35.484474 3147 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ef487c0-6222-42ea-94f7-36f4cb4ac902" containerName="mount-bpf-fs" Sep 12 18:23:35.484633 kubelet[3147]: I0912 18:23:35.484492 3147 memory_manager.go:354] "RemoveStaleState removing state" podUID="37ed2745-4fb8-4684-a12c-badc5421b9b8" containerName="cilium-operator" Sep 12 18:23:35.484633 kubelet[3147]: I0912 18:23:35.484500 3147 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ef487c0-6222-42ea-94f7-36f4cb4ac902" containerName="cilium-agent" Sep 12 18:23:35.490011 systemd[1]: sshd@33-139.178.90.133:22-139.178.68.195:52014.service: Deactivated successfully. Sep 12 18:23:35.491030 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 18:23:35.491751 systemd-logind[1799]: Session 28 logged out. Waiting for processes to exit. Sep 12 18:23:35.492733 systemd[1]: Started sshd@34-139.178.90.133:22-139.178.68.195:52022.service - OpenSSH per-connection server daemon (139.178.68.195:52022). Sep 12 18:23:35.493750 systemd-logind[1799]: Removed session 28. Sep 12 18:23:35.495774 systemd[1]: Created slice kubepods-burstable-pod359fd970_1599_404a_aa01_b8a5027e6655.slice - libcontainer container kubepods-burstable-pod359fd970_1599_404a_aa01_b8a5027e6655.slice. Sep 12 18:23:35.525027 sshd[5431]: Accepted publickey for core from 139.178.68.195 port 52022 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:23:35.525672 sshd-session[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:23:35.528429 systemd-logind[1799]: New session 29 of user core. Sep 12 18:23:35.548854 systemd[1]: Started session-29.scope - Session 29 of User core. 
Sep 12 18:23:35.567556 kubelet[3147]: I0912 18:23:35.567391 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/359fd970-1599-404a-aa01-b8a5027e6655-etc-cni-netd\") pod \"cilium-mxp5h\" (UID: \"359fd970-1599-404a-aa01-b8a5027e6655\") " pod="kube-system/cilium-mxp5h" Sep 12 18:23:35.567635 kubelet[3147]: I0912 18:23:35.567596 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/359fd970-1599-404a-aa01-b8a5027e6655-hubble-tls\") pod \"cilium-mxp5h\" (UID: \"359fd970-1599-404a-aa01-b8a5027e6655\") " pod="kube-system/cilium-mxp5h" Sep 12 18:23:35.567713 kubelet[3147]: I0912 18:23:35.567678 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/359fd970-1599-404a-aa01-b8a5027e6655-hostproc\") pod \"cilium-mxp5h\" (UID: \"359fd970-1599-404a-aa01-b8a5027e6655\") " pod="kube-system/cilium-mxp5h" Sep 12 18:23:35.567713 kubelet[3147]: I0912 18:23:35.567697 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/359fd970-1599-404a-aa01-b8a5027e6655-xtables-lock\") pod \"cilium-mxp5h\" (UID: \"359fd970-1599-404a-aa01-b8a5027e6655\") " pod="kube-system/cilium-mxp5h" Sep 12 18:23:35.567787 kubelet[3147]: I0912 18:23:35.567714 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/359fd970-1599-404a-aa01-b8a5027e6655-cilium-config-path\") pod \"cilium-mxp5h\" (UID: \"359fd970-1599-404a-aa01-b8a5027e6655\") " pod="kube-system/cilium-mxp5h" Sep 12 18:23:35.567787 kubelet[3147]: I0912 18:23:35.567728 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/359fd970-1599-404a-aa01-b8a5027e6655-cilium-run\") pod \"cilium-mxp5h\" (UID: \"359fd970-1599-404a-aa01-b8a5027e6655\") " pod="kube-system/cilium-mxp5h" Sep 12 18:23:35.567907 kubelet[3147]: I0912 18:23:35.567888 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/359fd970-1599-404a-aa01-b8a5027e6655-bpf-maps\") pod \"cilium-mxp5h\" (UID: \"359fd970-1599-404a-aa01-b8a5027e6655\") " pod="kube-system/cilium-mxp5h" Sep 12 18:23:35.567951 kubelet[3147]: I0912 18:23:35.567923 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/359fd970-1599-404a-aa01-b8a5027e6655-lib-modules\") pod \"cilium-mxp5h\" (UID: \"359fd970-1599-404a-aa01-b8a5027e6655\") " pod="kube-system/cilium-mxp5h" Sep 12 18:23:35.567951 kubelet[3147]: I0912 18:23:35.567941 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/359fd970-1599-404a-aa01-b8a5027e6655-host-proc-sys-kernel\") pod \"cilium-mxp5h\" (UID: \"359fd970-1599-404a-aa01-b8a5027e6655\") " pod="kube-system/cilium-mxp5h" Sep 12 18:23:35.568016 kubelet[3147]: I0912 18:23:35.567960 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/359fd970-1599-404a-aa01-b8a5027e6655-host-proc-sys-net\") pod \"cilium-mxp5h\" (UID: \"359fd970-1599-404a-aa01-b8a5027e6655\") " pod="kube-system/cilium-mxp5h" Sep 12 18:23:35.568016 kubelet[3147]: I0912 18:23:35.567982 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29sbv\" (UniqueName: \"kubernetes.io/projected/359fd970-1599-404a-aa01-b8a5027e6655-kube-api-access-29sbv\") pod \"cilium-mxp5h\" (UID: \"359fd970-1599-404a-aa01-b8a5027e6655\") " pod="kube-system/cilium-mxp5h" Sep 12 18:23:35.568086 kubelet[3147]: I0912 18:23:35.568024 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/359fd970-1599-404a-aa01-b8a5027e6655-cilium-cgroup\") pod \"cilium-mxp5h\" (UID: \"359fd970-1599-404a-aa01-b8a5027e6655\") " pod="kube-system/cilium-mxp5h" Sep 12 18:23:35.568086 kubelet[3147]: I0912 18:23:35.568055 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/359fd970-1599-404a-aa01-b8a5027e6655-cni-path\") pod \"cilium-mxp5h\" (UID: \"359fd970-1599-404a-aa01-b8a5027e6655\") " pod="kube-system/cilium-mxp5h" Sep 12 18:23:35.568161 kubelet[3147]: I0912 18:23:35.568130 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/359fd970-1599-404a-aa01-b8a5027e6655-cilium-ipsec-secrets\") pod \"cilium-mxp5h\" (UID: \"359fd970-1599-404a-aa01-b8a5027e6655\") " pod="kube-system/cilium-mxp5h" Sep 12 18:23:35.568198 kubelet[3147]: I0912 18:23:35.568179 3147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/359fd970-1599-404a-aa01-b8a5027e6655-clustermesh-secrets\") pod \"cilium-mxp5h\" (UID: \"359fd970-1599-404a-aa01-b8a5027e6655\") " pod="kube-system/cilium-mxp5h" Sep 12 18:23:35.597775 sshd[5434]: Connection closed by 139.178.68.195 port 52022 Sep 12 18:23:35.598195 sshd-session[5431]: pam_unix(sshd:session): session closed for user core Sep 12 18:23:35.628880 systemd[1]: sshd@34-139.178.90.133:22-139.178.68.195:52022.service: Deactivated successfully. Sep 12 18:23:35.633060 systemd[1]: session-29.scope: Deactivated successfully. Sep 12 18:23:35.636606 systemd-logind[1799]: Session 29 logged out. Waiting for processes to exit. Sep 12 18:23:35.659519 systemd[1]: Started sshd@35-139.178.90.133:22-139.178.68.195:52038.service - OpenSSH per-connection server daemon (139.178.68.195:52038). Sep 12 18:23:35.662397 systemd-logind[1799]: Removed session 29. Sep 12 18:23:35.715936 sshd[5440]: Accepted publickey for core from 139.178.68.195 port 52038 ssh2: RSA SHA256:3ltq9d2EXxQmjGlrLLscrCzNJjJqrF1VkkCxKVFg/5U Sep 12 18:23:35.716634 sshd-session[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:23:35.719516 systemd-logind[1799]: New session 30 of user core. Sep 12 18:23:35.737872 systemd[1]: Started session-30.scope - Session 30 of User core. 
Sep 12 18:23:35.797861 containerd[1809]: time="2025-09-12T18:23:35.797761481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mxp5h,Uid:359fd970-1599-404a-aa01-b8a5027e6655,Namespace:kube-system,Attempt:0,}" Sep 12 18:23:35.807425 containerd[1809]: time="2025-09-12T18:23:35.807378987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 18:23:35.807604 containerd[1809]: time="2025-09-12T18:23:35.807587637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 18:23:35.807652 containerd[1809]: time="2025-09-12T18:23:35.807600325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:23:35.807678 containerd[1809]: time="2025-09-12T18:23:35.807655393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 18:23:35.830823 systemd[1]: Started cri-containerd-02a054733380c27cb241ea1e371ff47d11c2fdfdf67a4755b67436eb04f295e7.scope - libcontainer container 02a054733380c27cb241ea1e371ff47d11c2fdfdf67a4755b67436eb04f295e7. Sep 12 18:23:35.841061 containerd[1809]: time="2025-09-12T18:23:35.841011438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mxp5h,Uid:359fd970-1599-404a-aa01-b8a5027e6655,Namespace:kube-system,Attempt:0,} returns sandbox id \"02a054733380c27cb241ea1e371ff47d11c2fdfdf67a4755b67436eb04f295e7\"" Sep 12 18:23:35.842094 containerd[1809]: time="2025-09-12T18:23:35.842082035Z" level=info msg="CreateContainer within sandbox \"02a054733380c27cb241ea1e371ff47d11c2fdfdf67a4755b67436eb04f295e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 18:23:35.847252 containerd[1809]: time="2025-09-12T18:23:35.847209537Z" level=info msg="CreateContainer within sandbox \"02a054733380c27cb241ea1e371ff47d11c2fdfdf67a4755b67436eb04f295e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a0e85d86044fbb75ac7e02df9485b09a840971ac461d6c16a271b5bd4558243a\"" Sep 12 18:23:35.847424 containerd[1809]: time="2025-09-12T18:23:35.847412861Z" level=info msg="StartContainer for \"a0e85d86044fbb75ac7e02df9485b09a840971ac461d6c16a271b5bd4558243a\"" Sep 12 18:23:35.871796 systemd[1]: Started cri-containerd-a0e85d86044fbb75ac7e02df9485b09a840971ac461d6c16a271b5bd4558243a.scope - libcontainer container a0e85d86044fbb75ac7e02df9485b09a840971ac461d6c16a271b5bd4558243a. Sep 12 18:23:35.885020 containerd[1809]: time="2025-09-12T18:23:35.884964146Z" level=info msg="StartContainer for \"a0e85d86044fbb75ac7e02df9485b09a840971ac461d6c16a271b5bd4558243a\" returns successfully" Sep 12 18:23:35.890398 systemd[1]: cri-containerd-a0e85d86044fbb75ac7e02df9485b09a840971ac461d6c16a271b5bd4558243a.scope: Deactivated successfully. 
Sep 12 18:23:35.928076 containerd[1809]: time="2025-09-12T18:23:35.928038701Z" level=info msg="shim disconnected" id=a0e85d86044fbb75ac7e02df9485b09a840971ac461d6c16a271b5bd4558243a namespace=k8s.io Sep 12 18:23:35.928076 containerd[1809]: time="2025-09-12T18:23:35.928072792Z" level=warning msg="cleaning up after shim disconnected" id=a0e85d86044fbb75ac7e02df9485b09a840971ac461d6c16a271b5bd4558243a namespace=k8s.io Sep 12 18:23:35.928076 containerd[1809]: time="2025-09-12T18:23:35.928079758Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 18:23:36.046222 containerd[1809]: time="2025-09-12T18:23:36.046017153Z" level=info msg="CreateContainer within sandbox \"02a054733380c27cb241ea1e371ff47d11c2fdfdf67a4755b67436eb04f295e7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 18:23:36.053491 containerd[1809]: time="2025-09-12T18:23:36.053447057Z" level=info msg="CreateContainer within sandbox \"02a054733380c27cb241ea1e371ff47d11c2fdfdf67a4755b67436eb04f295e7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ecb1e3233ec53d51b273bc25c44c6cb4b8ac5c2426dfd580bea8093ee1212681\"" Sep 12 18:23:36.053816 containerd[1809]: time="2025-09-12T18:23:36.053749964Z" level=info msg="StartContainer for \"ecb1e3233ec53d51b273bc25c44c6cb4b8ac5c2426dfd580bea8093ee1212681\"" Sep 12 18:23:36.080795 systemd[1]: Started cri-containerd-ecb1e3233ec53d51b273bc25c44c6cb4b8ac5c2426dfd580bea8093ee1212681.scope - libcontainer container ecb1e3233ec53d51b273bc25c44c6cb4b8ac5c2426dfd580bea8093ee1212681. Sep 12 18:23:36.095812 containerd[1809]: time="2025-09-12T18:23:36.095758548Z" level=info msg="StartContainer for \"ecb1e3233ec53d51b273bc25c44c6cb4b8ac5c2426dfd580bea8093ee1212681\" returns successfully" Sep 12 18:23:36.099904 systemd[1]: cri-containerd-ecb1e3233ec53d51b273bc25c44c6cb4b8ac5c2426dfd580bea8093ee1212681.scope: Deactivated successfully. Sep 12 18:23:36.112105 containerd[1809]: time="2025-09-12T18:23:36.112008379Z" level=info msg="shim disconnected" id=ecb1e3233ec53d51b273bc25c44c6cb4b8ac5c2426dfd580bea8093ee1212681 namespace=k8s.io Sep 12 18:23:36.112105 containerd[1809]: time="2025-09-12T18:23:36.112052796Z" level=warning msg="cleaning up after shim disconnected" id=ecb1e3233ec53d51b273bc25c44c6cb4b8ac5c2426dfd580bea8093ee1212681 namespace=k8s.io Sep 12 18:23:36.112105 containerd[1809]: time="2025-09-12T18:23:36.112058112Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 18:23:37.052992 containerd[1809]: time="2025-09-12T18:23:37.052875028Z" level=info msg="CreateContainer within sandbox \"02a054733380c27cb241ea1e371ff47d11c2fdfdf67a4755b67436eb04f295e7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 18:23:37.066595 containerd[1809]: time="2025-09-12T18:23:37.066577434Z" level=info msg="CreateContainer within sandbox \"02a054733380c27cb241ea1e371ff47d11c2fdfdf67a4755b67436eb04f295e7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f89998f121f79e27babd9138b62608bec37c40116056d464a4189f9bead8613c\"" Sep 12 18:23:37.067129 containerd[1809]: time="2025-09-12T18:23:37.067071307Z" level=info msg="StartContainer for \"f89998f121f79e27babd9138b62608bec37c40116056d464a4189f9bead8613c\"" Sep 12 18:23:37.085994 systemd[1]: Started cri-containerd-f89998f121f79e27babd9138b62608bec37c40116056d464a4189f9bead8613c.scope - libcontainer container f89998f121f79e27babd9138b62608bec37c40116056d464a4189f9bead8613c. 
Sep 12 18:23:37.100112 containerd[1809]: time="2025-09-12T18:23:37.100058146Z" level=info msg="StartContainer for \"f89998f121f79e27babd9138b62608bec37c40116056d464a4189f9bead8613c\" returns successfully" Sep 12 18:23:37.101403 systemd[1]: cri-containerd-f89998f121f79e27babd9138b62608bec37c40116056d464a4189f9bead8613c.scope: Deactivated successfully. Sep 12 18:23:37.124436 containerd[1809]: time="2025-09-12T18:23:37.124369095Z" level=info msg="shim disconnected" id=f89998f121f79e27babd9138b62608bec37c40116056d464a4189f9bead8613c namespace=k8s.io Sep 12 18:23:37.124436 containerd[1809]: time="2025-09-12T18:23:37.124401745Z" level=warning msg="cleaning up after shim disconnected" id=f89998f121f79e27babd9138b62608bec37c40116056d464a4189f9bead8613c namespace=k8s.io Sep 12 18:23:37.124436 containerd[1809]: time="2025-09-12T18:23:37.124407508Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 18:23:37.679700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f89998f121f79e27babd9138b62608bec37c40116056d464a4189f9bead8613c-rootfs.mount: Deactivated successfully. Sep 12 18:23:37.983399 kubelet[3147]: E0912 18:23:37.983311 3147 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 18:23:38.054249 containerd[1809]: time="2025-09-12T18:23:38.054174454Z" level=info msg="CreateContainer within sandbox \"02a054733380c27cb241ea1e371ff47d11c2fdfdf67a4755b67436eb04f295e7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 18:23:38.061360 containerd[1809]: time="2025-09-12T18:23:38.061312424Z" level=info msg="CreateContainer within sandbox \"02a054733380c27cb241ea1e371ff47d11c2fdfdf67a4755b67436eb04f295e7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7815034fd593114e2c396e095181b073268e9522a33fe927170fd04786085d60\"" Sep 12 18:23:38.061613 containerd[1809]: time="2025-09-12T18:23:38.061604174Z" level=info msg="StartContainer for \"7815034fd593114e2c396e095181b073268e9522a33fe927170fd04786085d60\"" Sep 12 18:23:38.084901 systemd[1]: Started cri-containerd-7815034fd593114e2c396e095181b073268e9522a33fe927170fd04786085d60.scope - libcontainer container 7815034fd593114e2c396e095181b073268e9522a33fe927170fd04786085d60. Sep 12 18:23:38.097492 systemd[1]: cri-containerd-7815034fd593114e2c396e095181b073268e9522a33fe927170fd04786085d60.scope: Deactivated successfully. Sep 12 18:23:38.098050 containerd[1809]: time="2025-09-12T18:23:38.098028003Z" level=info msg="StartContainer for \"7815034fd593114e2c396e095181b073268e9522a33fe927170fd04786085d60\" returns successfully" Sep 12 18:23:38.111679 containerd[1809]: time="2025-09-12T18:23:38.111610730Z" level=info msg="shim disconnected" id=7815034fd593114e2c396e095181b073268e9522a33fe927170fd04786085d60 namespace=k8s.io Sep 12 18:23:38.111679 containerd[1809]: time="2025-09-12T18:23:38.111674185Z" level=warning msg="cleaning up after shim disconnected" id=7815034fd593114e2c396e095181b073268e9522a33fe927170fd04786085d60 namespace=k8s.io Sep 12 18:23:38.111679 containerd[1809]: time="2025-09-12T18:23:38.111680742Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 18:23:38.677665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7815034fd593114e2c396e095181b073268e9522a33fe927170fd04786085d60-rootfs.mount: Deactivated successfully. 
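The NetworkReady=false condition above follows from the event at 18:23:33, when removing /etc/cni/net.d/05-cilium.conf left containerd with no CNI configuration until the new agent writes one back. Containerd reacts to filesystem change events on that directory; a minimal sketch of the same watch pattern with fsnotify (illustrative only, not containerd's implementation):

    package example

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    // watchCNIConfig logs create/write/remove events under /etc/cni/net.d, the
    // kind of fs change event behind the "failed to reload cni configuration"
    // message above when 05-cilium.conf was removed.
    func watchCNIConfig() error {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            return err
        }
        defer w.Close()

        if err := w.Add("/etc/cni/net.d"); err != nil {
            return err
        }
        for {
            select {
            case ev, ok := <-w.Events:
                if !ok {
                    return nil
                }
                if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Remove) != 0 {
                    log.Printf("cni config change: %s %s", ev.Op, ev.Name)
                }
            case err, ok := <-w.Errors:
                if !ok {
                    return nil
                }
                log.Printf("watch error: %v", err)
            }
        }
    }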
Sep 12 18:23:39.064607 containerd[1809]: time="2025-09-12T18:23:39.064401767Z" level=info msg="CreateContainer within sandbox \"02a054733380c27cb241ea1e371ff47d11c2fdfdf67a4755b67436eb04f295e7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 18:23:39.075825 containerd[1809]: time="2025-09-12T18:23:39.075806589Z" level=info msg="CreateContainer within sandbox \"02a054733380c27cb241ea1e371ff47d11c2fdfdf67a4755b67436eb04f295e7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0c9ec07c19fdc0b6157c9ac0f032e53ea925bfe2930f42726306d4e29841bff9\"" Sep 12 18:23:39.076269 containerd[1809]: time="2025-09-12T18:23:39.076235804Z" level=info msg="StartContainer for \"0c9ec07c19fdc0b6157c9ac0f032e53ea925bfe2930f42726306d4e29841bff9\"" Sep 12 18:23:39.098760 systemd[1]: Started cri-containerd-0c9ec07c19fdc0b6157c9ac0f032e53ea925bfe2930f42726306d4e29841bff9.scope - libcontainer container 0c9ec07c19fdc0b6157c9ac0f032e53ea925bfe2930f42726306d4e29841bff9. Sep 12 18:23:39.113588 containerd[1809]: time="2025-09-12T18:23:39.113551595Z" level=info msg="StartContainer for \"0c9ec07c19fdc0b6157c9ac0f032e53ea925bfe2930f42726306d4e29841bff9\" returns successfully" Sep 12 18:23:39.292626 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 12 18:23:40.104991 kubelet[3147]: I0912 18:23:40.104876 3147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mxp5h" podStartSLOduration=5.10483625 podStartE2EDuration="5.10483625s" podCreationTimestamp="2025-09-12 18:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:23:40.104340685 +0000 UTC m=+447.320233814" watchObservedRunningTime="2025-09-12 18:23:40.10483625 +0000 UTC m=+447.320729362" Sep 12 18:23:41.057640 sshd[5248]: Connection closed by invalid user 66.181.171.136 port 51630 [preauth] Sep 12 18:23:41.058796 systemd[1]: sshd@32-139.178.90.133:22-66.181.171.136:51630.service: Deactivated successfully. Sep 12 18:23:42.563026 systemd-networkd[1727]: lxc_health: Link UP Sep 12 18:23:42.563198 systemd-networkd[1727]: lxc_health: Gained carrier Sep 12 18:23:44.366755 systemd-networkd[1727]: lxc_health: Gained IPv6LL Sep 12 18:23:48.275816 sshd[5448]: Connection closed by 139.178.68.195 port 52038 Sep 12 18:23:48.276023 sshd-session[5440]: pam_unix(sshd:session): session closed for user core Sep 12 18:23:48.277817 systemd[1]: sshd@35-139.178.90.133:22-139.178.68.195:52038.service: Deactivated successfully. Sep 12 18:23:48.278909 systemd[1]: session-30.scope: Deactivated successfully. Sep 12 18:23:48.279681 systemd-logind[1799]: Session 30 logged out. Waiting for processes to exit. Sep 12 18:23:48.280372 systemd-logind[1799]: Removed session 30.
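The podStartSLOduration of ~5.1 s in the last kubelet line is simply the gap between podCreationTimestamp (18:23:35) and the observed running time (18:23:40.104...); the m=+447.32 suffix is the kubelet's monotonic clock offset since process start. A small check of that arithmetic from the two timestamps in the log line:

    package example

    import (
        "fmt"
        "time"
    )

    // startupDuration reproduces the ~5.1s podStartSLOduration reported above
    // for cilium-mxp5h from the timestamps in the log line.
    func startupDuration() {
        created, _ := time.Parse(time.RFC3339, "2025-09-12T18:23:35Z")
        running, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
            "2025-09-12 18:23:40.10483625 +0000 UTC")
        fmt.Println(running.Sub(created)) // ≈ 5.10483625s
    }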