Sep 13 02:23:00.562891 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31
Sep 13 02:23:00.562905 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 02:23:00.562911 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 02:23:00.562915 kernel: BIOS-provided physical RAM map:
Sep 13 02:23:00.562919 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Sep 13 02:23:00.562923 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Sep 13 02:23:00.562927 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Sep 13 02:23:00.562932 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Sep 13 02:23:00.562936 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Sep 13 02:23:00.562940 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000006dfbdfff] usable
Sep 13 02:23:00.562944 kernel: BIOS-e820: [mem 0x000000006dfbe000-0x000000006dfbefff] ACPI NVS
Sep 13 02:23:00.562948 kernel: BIOS-e820: [mem 0x000000006dfbf000-0x000000006dfbffff] reserved
Sep 13 02:23:00.562951 kernel: BIOS-e820: [mem 0x000000006dfc0000-0x0000000077fc6fff] usable
Sep 13 02:23:00.562955 kernel: BIOS-e820: [mem 0x0000000077fc7000-0x00000000790a9fff] reserved
Sep 13 02:23:00.562961 kernel: BIOS-e820: [mem 0x00000000790aa000-0x0000000079232fff] usable
Sep 13 02:23:00.562966 kernel: BIOS-e820: [mem 0x0000000079233000-0x0000000079664fff] ACPI NVS
Sep 13 02:23:00.562970 kernel: BIOS-e820: [mem 0x0000000079665000-0x000000007befefff] reserved
Sep 13 02:23:00.562974 kernel: BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable
Sep 13 02:23:00.562978 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved
Sep 13 02:23:00.562982 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 13 02:23:00.562987 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Sep 13 02:23:00.562991 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Sep 13 02:23:00.562995 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Sep 13 02:23:00.563000 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Sep 13 02:23:00.563004 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable
Sep 13 02:23:00.563008 kernel: NX (Execute Disable) protection: active
Sep 13 02:23:00.563012 kernel: SMBIOS 3.2.1 present.
Sep 13 02:23:00.563017 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020
Sep 13 02:23:00.563021 kernel: tsc: Detected 3400.000 MHz processor
Sep 13 02:23:00.563025 kernel: tsc: Detected 3399.906 MHz TSC
Sep 13 02:23:00.563029 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 02:23:00.563034 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 02:23:00.563039 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000
Sep 13 02:23:00.563044 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 02:23:00.563048 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000
Sep 13 02:23:00.563053 kernel: Using GB pages for direct mapping
Sep 13 02:23:00.563057 kernel: ACPI: Early table checksum verification disabled
Sep 13 02:23:00.563061 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Sep 13 02:23:00.563066 kernel: ACPI: XSDT 0x00000000795460C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Sep 13 02:23:00.563070 kernel: ACPI: FACP 0x0000000079582620 000114 (v06 01072009 AMI 00010013)
Sep 13 02:23:00.563077 kernel: ACPI: DSDT 0x0000000079546268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Sep 13 02:23:00.563082 kernel: ACPI: FACS 0x0000000079664F80 000040
Sep 13 02:23:00.563087 kernel: ACPI: APIC 0x0000000079582738 00012C (v04 01072009 AMI 00010013)
Sep 13 02:23:00.563092 kernel: ACPI: FPDT 0x0000000079582868 000044 (v01 01072009 AMI 00010013)
Sep 13 02:23:00.563096 kernel: ACPI: FIDT 0x00000000795828B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Sep 13 02:23:00.563101 kernel: ACPI: MCFG 0x0000000079582950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Sep 13 02:23:00.563106 kernel: ACPI: SPMI 0x0000000079582990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Sep 13 02:23:00.563111 kernel: ACPI: SSDT 0x00000000795829D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Sep 13 02:23:00.563116 kernel: ACPI: SSDT 0x00000000795844F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Sep 13 02:23:00.563120 kernel: ACPI: SSDT 0x00000000795876C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Sep 13 02:23:00.563125 kernel: ACPI: HPET 0x00000000795899F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 13 02:23:00.563130 kernel: ACPI: SSDT 0x0000000079589A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Sep 13 02:23:00.563134 kernel: ACPI: SSDT 0x000000007958A9D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Sep 13 02:23:00.563139 kernel: ACPI: UEFI 0x000000007958B2D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 13 02:23:00.563144 kernel: ACPI: LPIT 0x000000007958B318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 13 02:23:00.563149 kernel: ACPI: SSDT 0x000000007958B3B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Sep 13 02:23:00.563154 kernel: ACPI: SSDT 0x000000007958DB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Sep 13 02:23:00.563159 kernel: ACPI: DBGP 0x000000007958F078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 13 02:23:00.563163 kernel: ACPI: DBG2 0x000000007958F0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Sep 13 02:23:00.563168 kernel: ACPI: SSDT 0x000000007958F108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Sep 13 02:23:00.563173 kernel: ACPI: DMAR 0x0000000079590C70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Sep 13 02:23:00.563177 kernel: ACPI: SSDT 0x0000000079590D18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Sep 13 02:23:00.563182 kernel: ACPI: TPM2 0x0000000079590E60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Sep 13 02:23:00.563187 kernel: ACPI: SSDT 0x0000000079590E98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Sep 13 02:23:00.563193 kernel: ACPI: WSMT 0x0000000079591C28 000028 (v01 \xf5m 01072009 AMI 00010013)
Sep 13 02:23:00.563197 kernel: ACPI: EINJ 0x0000000079591C50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Sep 13 02:23:00.563202 kernel: ACPI: ERST 0x0000000079591D80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Sep 13 02:23:00.563207 kernel: ACPI: BERT 0x0000000079591FB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Sep 13 02:23:00.563212 kernel: ACPI: HEST 0x0000000079591FE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Sep 13 02:23:00.563216 kernel: ACPI: SSDT 0x0000000079592260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Sep 13 02:23:00.563221 kernel: ACPI: Reserving FACP table memory at [mem 0x79582620-0x79582733]
Sep 13 02:23:00.563226 kernel: ACPI: Reserving DSDT table memory at [mem 0x79546268-0x7958261e]
Sep 13 02:23:00.563230 kernel: ACPI: Reserving FACS table memory at [mem 0x79664f80-0x79664fbf]
Sep 13 02:23:00.563236 kernel: ACPI: Reserving APIC table memory at [mem 0x79582738-0x79582863]
Sep 13 02:23:00.563240 kernel: ACPI: Reserving FPDT table memory at [mem 0x79582868-0x795828ab]
Sep 13 02:23:00.563245 kernel: ACPI: Reserving FIDT table memory at [mem 0x795828b0-0x7958294b]
Sep 13 02:23:00.563250 kernel: ACPI: Reserving MCFG table memory at [mem 0x79582950-0x7958298b]
Sep 13 02:23:00.563254 kernel: ACPI: Reserving SPMI table memory at [mem 0x79582990-0x795829d0]
Sep 13 02:23:00.563259 kernel: ACPI: Reserving SSDT table memory at [mem 0x795829d8-0x795844f3]
Sep 13 02:23:00.563264 kernel: ACPI: Reserving SSDT table memory at [mem 0x795844f8-0x795876bd]
Sep 13 02:23:00.563268 kernel: ACPI: Reserving SSDT table memory at [mem 0x795876c0-0x795899ea]
Sep 13 02:23:00.563273 kernel: ACPI: Reserving HPET table memory at [mem 0x795899f0-0x79589a27]
Sep 13 02:23:00.563278 kernel: ACPI: Reserving SSDT table memory at [mem 0x79589a28-0x7958a9d5]
Sep 13 02:23:00.563283 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958a9d8-0x7958b2ce]
Sep 13 02:23:00.563288 kernel: ACPI: Reserving UEFI table memory at [mem 0x7958b2d0-0x7958b311]
Sep 13 02:23:00.563292 kernel: ACPI: Reserving LPIT table memory at [mem 0x7958b318-0x7958b3ab]
Sep 13 02:23:00.563297 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958b3b0-0x7958db8d]
Sep 13 02:23:00.563302 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958db90-0x7958f071]
Sep 13 02:23:00.563306 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958f078-0x7958f0ab]
Sep 13 02:23:00.563311 kernel: ACPI: Reserving DBG2 table memory at [mem 0x7958f0b0-0x7958f103]
Sep 13 02:23:00.563316 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958f108-0x79590c6e]
Sep 13 02:23:00.563321 kernel: ACPI: Reserving DMAR table memory at [mem 0x79590c70-0x79590d17]
Sep 13 02:23:00.563326 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590d18-0x79590e5b]
Sep 13 02:23:00.563330 kernel: ACPI: Reserving TPM2 table memory at [mem 0x79590e60-0x79590e93]
Sep 13 02:23:00.563335 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590e98-0x79591c26]
Sep 13 02:23:00.563340 kernel: ACPI: Reserving WSMT table memory at [mem 0x79591c28-0x79591c4f]
Sep 13 02:23:00.563344 kernel: ACPI: Reserving EINJ table memory at [mem 0x79591c50-0x79591d7f]
Sep 13 02:23:00.563349 kernel: ACPI: Reserving ERST table memory at [mem 0x79591d80-0x79591faf]
Sep 13 02:23:00.563354 kernel: ACPI: Reserving BERT table memory at [mem 0x79591fb0-0x79591fdf]
Sep 13 02:23:00.563358 kernel: ACPI: Reserving HEST table memory at [mem 0x79591fe0-0x7959225b]
Sep 13 02:23:00.563364 kernel: ACPI: Reserving SSDT table memory at [mem 0x79592260-0x795923c1]
Sep 13 02:23:00.563369 kernel: No NUMA configuration found
Sep 13 02:23:00.563373 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff]
Sep 13 02:23:00.563378 kernel: NODE_DATA(0) allocated [mem 0x87f7fa000-0x87f7fffff]
Sep 13 02:23:00.563383 kernel: Zone ranges:
Sep 13 02:23:00.563387 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 02:23:00.563392 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 13 02:23:00.563397 kernel: Normal [mem 0x0000000100000000-0x000000087f7fffff]
Sep 13 02:23:00.563404 kernel: Movable zone start for each node
Sep 13 02:23:00.563410 kernel: Early memory node ranges
Sep 13 02:23:00.563414 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Sep 13 02:23:00.563419 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Sep 13 02:23:00.563424 kernel: node 0: [mem 0x0000000040400000-0x000000006dfbdfff]
Sep 13 02:23:00.563428 kernel: node 0: [mem 0x000000006dfc0000-0x0000000077fc6fff]
Sep 13 02:23:00.563433 kernel: node 0: [mem 0x00000000790aa000-0x0000000079232fff]
Sep 13 02:23:00.563438 kernel: node 0: [mem 0x000000007beff000-0x000000007befffff]
Sep 13 02:23:00.563456 kernel: node 0: [mem 0x0000000100000000-0x000000087f7fffff]
Sep 13 02:23:00.563461 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff]
Sep 13 02:23:00.563469 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 02:23:00.563474 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Sep 13 02:23:00.563479 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Sep 13 02:23:00.563485 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Sep 13 02:23:00.563490 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Sep 13 02:23:00.563495 kernel: On node 0, zone DMA32: 11468 pages in unavailable ranges
Sep 13 02:23:00.563500 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges
Sep 13 02:23:00.563505 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges
Sep 13 02:23:00.563511 kernel: ACPI: PM-Timer IO Port: 0x1808
Sep 13 02:23:00.563516 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Sep 13 02:23:00.563521 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Sep 13 02:23:00.563525 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Sep 13 02:23:00.563531 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Sep 13 02:23:00.563535 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Sep 13 02:23:00.563540 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Sep 13 02:23:00.563545 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Sep 13 02:23:00.563550 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Sep 13 02:23:00.563556 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Sep 13 02:23:00.563560 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Sep 13 02:23:00.563565 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Sep 13 02:23:00.563570 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Sep 13 02:23:00.563575 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Sep 13 02:23:00.563580 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Sep 13 02:23:00.563585 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Sep 13 02:23:00.563590 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Sep 13 02:23:00.563595 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Sep 13 02:23:00.563600 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 02:23:00.563605 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 02:23:00.563610 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 02:23:00.563615 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 02:23:00.563620 kernel: TSC deadline timer available
Sep 13 02:23:00.563625 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Sep 13 02:23:00.563630 kernel: [mem 0x7f800000-0xdfffffff] available for PCI devices
Sep 13 02:23:00.563635 kernel: Booting paravirtualized kernel on bare hardware
Sep 13 02:23:00.563640 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 02:23:00.563645 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Sep 13 02:23:00.563650 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Sep 13 02:23:00.563655 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Sep 13 02:23:00.563660 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 13 02:23:00.563665 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8222329
Sep 13 02:23:00.563670 kernel: Policy zone: Normal
Sep 13 02:23:00.563676 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 02:23:00.563681 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 02:23:00.563686 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Sep 13 02:23:00.563691 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Sep 13 02:23:00.563696 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 02:23:00.563701 kernel: Memory: 32681620K/33411996K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 730116K reserved, 0K cma-reserved)
Sep 13 02:23:00.563707 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 13 02:23:00.563712 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 02:23:00.563716 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 02:23:00.563721 kernel: rcu: Hierarchical RCU implementation.
Sep 13 02:23:00.563726 kernel: rcu: RCU event tracing is enabled.
Sep 13 02:23:00.563732 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 13 02:23:00.563737 kernel: Rude variant of Tasks RCU enabled.
Sep 13 02:23:00.563742 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 02:23:00.563747 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 02:23:00.563752 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 13 02:23:00.563757 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Sep 13 02:23:00.563762 kernel: random: crng init done
Sep 13 02:23:00.563767 kernel: Console: colour dummy device 80x25
Sep 13 02:23:00.563772 kernel: printk: console [tty0] enabled
Sep 13 02:23:00.563777 kernel: printk: console [ttyS1] enabled
Sep 13 02:23:00.563782 kernel: ACPI: Core revision 20210730
Sep 13 02:23:00.563787 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Sep 13 02:23:00.563792 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 02:23:00.563797 kernel: DMAR: Host address width 39
Sep 13 02:23:00.563802 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Sep 13 02:23:00.563807 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Sep 13 02:23:00.563812 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Sep 13 02:23:00.563817 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Sep 13 02:23:00.563822 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff
Sep 13 02:23:00.563827 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff
Sep 13 02:23:00.563832 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Sep 13 02:23:00.563837 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Sep 13 02:23:00.563842 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Sep 13 02:23:00.563847 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Sep 13 02:23:00.563852 kernel: x2apic enabled
Sep 13 02:23:00.563857 kernel: Switched APIC routing to cluster x2apic.
Sep 13 02:23:00.563862 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 02:23:00.563867 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Sep 13 02:23:00.563872 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Sep 13 02:23:00.563877 kernel: CPU0: Thermal monitoring enabled (TM1)
Sep 13 02:23:00.563882 kernel: process: using mwait in idle threads
Sep 13 02:23:00.563887 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 02:23:00.563892 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 02:23:00.563897 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 02:23:00.563902 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Sep 13 02:23:00.563907 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 13 02:23:00.563913 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 13 02:23:00.563917 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 13 02:23:00.563922 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 13 02:23:00.563927 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 13 02:23:00.563932 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 02:23:00.563937 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 13 02:23:00.563942 kernel: TAA: Mitigation: TSX disabled
Sep 13 02:23:00.563947 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Sep 13 02:23:00.563952 kernel: SRBDS: Mitigation: Microcode
Sep 13 02:23:00.563957 kernel: GDS: Vulnerable: No microcode
Sep 13 02:23:00.563962 kernel: active return thunk: its_return_thunk
Sep 13 02:23:00.563967 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 02:23:00.563972 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 02:23:00.563977 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 02:23:00.563982 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 02:23:00.563987 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 13 02:23:00.563992 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 13 02:23:00.563997 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 02:23:00.564002 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 13 02:23:00.564007 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 13 02:23:00.564012 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Sep 13 02:23:00.564017 kernel: Freeing SMP alternatives memory: 32K
Sep 13 02:23:00.564022 kernel: pid_max: default: 32768 minimum: 301
Sep 13 02:23:00.564027 kernel: LSM: Security Framework initializing
Sep 13 02:23:00.564031 kernel: SELinux: Initializing.
Sep 13 02:23:00.564036 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 02:23:00.564041 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 02:23:00.564047 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Sep 13 02:23:00.564052 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 13 02:23:00.564057 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Sep 13 02:23:00.564062 kernel: ... version: 4
Sep 13 02:23:00.564066 kernel: ... bit width: 48
Sep 13 02:23:00.564071 kernel: ... generic registers: 4
Sep 13 02:23:00.564076 kernel: ... value mask: 0000ffffffffffff
Sep 13 02:23:00.564081 kernel: ... max period: 00007fffffffffff
Sep 13 02:23:00.564086 kernel: ... fixed-purpose events: 3
Sep 13 02:23:00.564092 kernel: ... event mask: 000000070000000f
Sep 13 02:23:00.564096 kernel: signal: max sigframe size: 2032
Sep 13 02:23:00.564101 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 02:23:00.564106 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Sep 13 02:23:00.564111 kernel: smp: Bringing up secondary CPUs ...
Sep 13 02:23:00.564116 kernel: x86: Booting SMP configuration:
Sep 13 02:23:00.564121 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Sep 13 02:23:00.564126 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 02:23:00.564132 kernel: #9 #10 #11 #12 #13 #14 #15
Sep 13 02:23:00.564136 kernel: smp: Brought up 1 node, 16 CPUs
Sep 13 02:23:00.564141 kernel: smpboot: Max logical packages: 1
Sep 13 02:23:00.564146 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Sep 13 02:23:00.564151 kernel: devtmpfs: initialized
Sep 13 02:23:00.564156 kernel: x86/mm: Memory block size: 128MB
Sep 13 02:23:00.564161 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6dfbe000-0x6dfbefff] (4096 bytes)
Sep 13 02:23:00.564166 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79233000-0x79664fff] (4399104 bytes)
Sep 13 02:23:00.564171 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 02:23:00.564176 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 13 02:23:00.564181 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 02:23:00.564186 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 02:23:00.564191 kernel: audit: initializing netlink subsys (disabled)
Sep 13 02:23:00.564196 kernel: audit: type=2000 audit(1757730175.134:1): state=initialized audit_enabled=0 res=1
Sep 13 02:23:00.564201 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 02:23:00.564205 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 02:23:00.564210 kernel: cpuidle: using governor menu
Sep 13 02:23:00.564215 kernel: ACPI: bus type PCI registered
Sep 13 02:23:00.564221 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 02:23:00.564226 kernel: dca service started, version 1.12.1
Sep 13 02:23:00.564231 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Sep 13 02:23:00.564235 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Sep 13 02:23:00.564240 kernel: PCI: Using configuration type 1 for base access
Sep 13 02:23:00.564245 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Sep 13 02:23:00.564250 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 02:23:00.564255 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 02:23:00.564260 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 02:23:00.564265 kernel: ACPI: Added _OSI(Module Device)
Sep 13 02:23:00.564270 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 02:23:00.564275 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 02:23:00.564280 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 02:23:00.564285 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 02:23:00.564290 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 02:23:00.564295 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Sep 13 02:23:00.564300 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 02:23:00.564305 kernel: ACPI: SSDT 0xFFFF9CCE0021CC00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Sep 13 02:23:00.564310 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Sep 13 02:23:00.564315 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 02:23:00.564320 kernel: ACPI: SSDT 0xFFFF9CCE01C59800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Sep 13 02:23:00.564325 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 02:23:00.564330 kernel: ACPI: SSDT 0xFFFF9CCE01D4C800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Sep 13 02:23:00.564335 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 02:23:00.564339 kernel: ACPI: SSDT 0xFFFF9CCE00149000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Sep 13 02:23:00.564344 kernel: ACPI: Interpreter enabled
Sep 13 02:23:00.564349 kernel: ACPI: PM: (supports S0 S5)
Sep 13 02:23:00.564354 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 02:23:00.564360 kernel: HEST: Enabling Firmware First mode for corrected errors.
Sep 13 02:23:00.564365 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Sep 13 02:23:00.564369 kernel: HEST: Table parsing has been initialized.
Sep 13 02:23:00.564374 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Sep 13 02:23:00.564379 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 02:23:00.564384 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Sep 13 02:23:00.564389 kernel: ACPI: PM: Power Resource [USBC]
Sep 13 02:23:00.564394 kernel: ACPI: PM: Power Resource [V0PR]
Sep 13 02:23:00.564398 kernel: ACPI: PM: Power Resource [V1PR]
Sep 13 02:23:00.564406 kernel: ACPI: PM: Power Resource [V2PR]
Sep 13 02:23:00.564411 kernel: ACPI: PM: Power Resource [WRST]
Sep 13 02:23:00.564416 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Sep 13 02:23:00.564440 kernel: ACPI: PM: Power Resource [FN00]
Sep 13 02:23:00.564445 kernel: ACPI: PM: Power Resource [FN01]
Sep 13 02:23:00.564464 kernel: ACPI: PM: Power Resource [FN02]
Sep 13 02:23:00.564469 kernel: ACPI: PM: Power Resource [FN03]
Sep 13 02:23:00.564473 kernel: ACPI: PM: Power Resource [FN04]
Sep 13 02:23:00.564478 kernel: ACPI: PM: Power Resource [PIN]
Sep 13 02:23:00.564484 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Sep 13 02:23:00.564549 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 02:23:00.564596 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Sep 13 02:23:00.564638 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Sep 13 02:23:00.564646 kernel: PCI host bridge to bus 0000:00
Sep 13 02:23:00.564690 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 02:23:00.564729 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 02:23:00.564769 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 02:23:00.564807 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window]
Sep 13 02:23:00.564845 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Sep 13 02:23:00.564882 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Sep 13 02:23:00.564932 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Sep 13 02:23:00.564984 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Sep 13 02:23:00.565031 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Sep 13 02:23:00.565082 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Sep 13 02:23:00.565127 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Sep 13 02:23:00.565174 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Sep 13 02:23:00.565218 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x94000000-0x94ffffff 64bit]
Sep 13 02:23:00.565261 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Sep 13 02:23:00.565306 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Sep 13 02:23:00.565355 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Sep 13 02:23:00.565402 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9651f000-0x9651ffff 64bit]
Sep 13 02:23:00.565486 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Sep 13 02:23:00.565530 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9651e000-0x9651efff 64bit]
Sep 13 02:23:00.565577 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Sep 13 02:23:00.565621 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x96500000-0x9650ffff 64bit]
Sep 13 02:23:00.565666 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Sep 13 02:23:00.565714 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Sep 13 02:23:00.565758 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x96512000-0x96513fff 64bit]
Sep 13 02:23:00.565801 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9651d000-0x9651dfff 64bit]
Sep 13 02:23:00.565847 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Sep 13 02:23:00.565891 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 13 02:23:00.565939 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Sep 13 02:23:00.565982 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 13 02:23:00.566028 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Sep 13 02:23:00.566071 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9651a000-0x9651afff 64bit]
Sep 13 02:23:00.566114 kernel: pci 0000:00:16.0: PME# supported from D3hot
Sep 13 02:23:00.566160 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Sep 13 02:23:00.566210 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x96519000-0x96519fff 64bit]
Sep 13 02:23:00.566255 kernel: pci 0000:00:16.1: PME# supported from D3hot
Sep 13 02:23:00.566302 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Sep 13 02:23:00.566345 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x96518000-0x96518fff 64bit]
Sep 13 02:23:00.566389 kernel: pci 0000:00:16.4: PME# supported from D3hot
Sep 13 02:23:00.566473 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Sep 13 02:23:00.566518 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x96510000-0x96511fff]
Sep 13 02:23:00.566562 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x96517000-0x965170ff]
Sep 13 02:23:00.566605 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Sep 13 02:23:00.566649 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Sep 13 02:23:00.566691 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Sep 13 02:23:00.566734 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x96516000-0x965167ff]
Sep 13 02:23:00.566777 kernel: pci 0000:00:17.0: PME# supported from D3hot
Sep 13 02:23:00.566827 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Sep 13 02:23:00.566873 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Sep 13 02:23:00.566922 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Sep 13 02:23:00.566967 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Sep 13 02:23:00.567016 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Sep 13 02:23:00.567061 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Sep 13 02:23:00.567108 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Sep 13 02:23:00.567152 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Sep 13 02:23:00.567200 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Sep 13 02:23:00.567244 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Sep 13 02:23:00.567291 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Sep 13 02:23:00.567336 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 13 02:23:00.567386 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Sep 13 02:23:00.567461 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Sep 13 02:23:00.567525 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x96514000-0x965140ff 64bit]
Sep 13 02:23:00.567568 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Sep 13 02:23:00.567615 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Sep 13 02:23:00.567660 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Sep 13 02:23:00.567704 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 13 02:23:00.567754 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Sep 13 02:23:00.567799 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Sep 13 02:23:00.567844 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x96200000-0x962fffff pref]
Sep 13 02:23:00.567889 kernel: pci 0000:02:00.0: PME# supported from D3cold
Sep 13 02:23:00.567934 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 13 02:23:00.567980 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 13 02:23:00.568030 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Sep 13 02:23:00.568127 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Sep 13 02:23:00.568172 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x96100000-0x961fffff pref]
Sep 13 02:23:00.568216 kernel: pci 0000:02:00.1: PME# supported from D3cold
Sep 13 02:23:00.568261 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 13 02:23:00.568305 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 13 02:23:00.568350 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Sep 13 02:23:00.568394 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff]
Sep 13 02:23:00.568463 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Sep 13 02:23:00.568527 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Sep 13 02:23:00.568575 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Sep 13 02:23:00.568620 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Sep 13 02:23:00.568665 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x96400000-0x9647ffff]
Sep 13 02:23:00.568712 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Sep 13 02:23:00.568756 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x96480000-0x96483fff]
Sep 13 02:23:00.568801 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Sep 13 02:23:00.568845 kernel: pci 0000:00:1b.4: PCI
bridge to [bus 04] Sep 13 02:23:00.568888 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 13 02:23:00.568931 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Sep 13 02:23:00.568982 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Sep 13 02:23:00.569028 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Sep 13 02:23:00.569074 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x96300000-0x9637ffff] Sep 13 02:23:00.569120 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Sep 13 02:23:00.569163 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x96380000-0x96383fff] Sep 13 02:23:00.569207 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Sep 13 02:23:00.569251 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Sep 13 02:23:00.569295 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 13 02:23:00.569337 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Sep 13 02:23:00.569383 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Sep 13 02:23:00.569475 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Sep 13 02:23:00.569519 kernel: pci 0000:07:00.0: enabling Extended Tags Sep 13 02:23:00.569564 kernel: pci 0000:07:00.0: supports D1 D2 Sep 13 02:23:00.569608 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 13 02:23:00.569652 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Sep 13 02:23:00.569694 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Sep 13 02:23:00.569738 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Sep 13 02:23:00.569787 kernel: pci_bus 0000:08: extended config space not accessible Sep 13 02:23:00.569840 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Sep 13 02:23:00.569888 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x95000000-0x95ffffff] Sep 13 02:23:00.569936 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x96000000-0x9601ffff] Sep 13 02:23:00.569984 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] 
Sep 13 02:23:00.570033 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 02:23:00.570080 kernel: pci 0000:08:00.0: supports D1 D2 Sep 13 02:23:00.570129 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 13 02:23:00.570174 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Sep 13 02:23:00.570220 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Sep 13 02:23:00.570265 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Sep 13 02:23:00.570272 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Sep 13 02:23:00.570278 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Sep 13 02:23:00.570283 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Sep 13 02:23:00.570288 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Sep 13 02:23:00.570295 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Sep 13 02:23:00.570300 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Sep 13 02:23:00.570305 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Sep 13 02:23:00.570310 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Sep 13 02:23:00.570316 kernel: iommu: Default domain type: Translated Sep 13 02:23:00.570321 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 02:23:00.570367 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Sep 13 02:23:00.570438 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 02:23:00.570506 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Sep 13 02:23:00.570515 kernel: vgaarb: loaded Sep 13 02:23:00.570521 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 13 02:23:00.570526 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 13 02:23:00.570531 kernel: PTP clock support registered Sep 13 02:23:00.570537 kernel: PCI: Using ACPI for IRQ routing Sep 13 02:23:00.570542 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 02:23:00.570547 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Sep 13 02:23:00.570552 kernel: e820: reserve RAM buffer [mem 0x6dfbe000-0x6fffffff] Sep 13 02:23:00.570558 kernel: e820: reserve RAM buffer [mem 0x77fc7000-0x77ffffff] Sep 13 02:23:00.570564 kernel: e820: reserve RAM buffer [mem 0x79233000-0x7bffffff] Sep 13 02:23:00.570569 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff] Sep 13 02:23:00.570574 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff] Sep 13 02:23:00.570579 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 13 02:23:00.570584 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Sep 13 02:23:00.570589 kernel: clocksource: Switched to clocksource tsc-early Sep 13 02:23:00.570595 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 02:23:00.570600 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 02:23:00.570606 kernel: pnp: PnP ACPI init Sep 13 02:23:00.570651 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Sep 13 02:23:00.570697 kernel: pnp 00:02: [dma 0 disabled] Sep 13 02:23:00.570742 kernel: pnp 00:03: [dma 0 disabled] Sep 13 02:23:00.570785 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Sep 13 02:23:00.570825 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Sep 13 02:23:00.570867 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Sep 13 02:23:00.570912 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Sep 13 02:23:00.570952 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Sep 13 02:23:00.570992 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Sep 13 02:23:00.571030 kernel: system 00:06: [mem 0xe0000000-0xefffffff] 
has been reserved Sep 13 02:23:00.571069 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Sep 13 02:23:00.571109 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Sep 13 02:23:00.571147 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Sep 13 02:23:00.571188 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Sep 13 02:23:00.571231 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Sep 13 02:23:00.571271 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Sep 13 02:23:00.571310 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Sep 13 02:23:00.571348 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Sep 13 02:23:00.571389 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Sep 13 02:23:00.571454 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Sep 13 02:23:00.571495 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Sep 13 02:23:00.571538 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Sep 13 02:23:00.571546 kernel: pnp: PnP ACPI: found 10 devices Sep 13 02:23:00.571551 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 02:23:00.571557 kernel: NET: Registered PF_INET protocol family Sep 13 02:23:00.571562 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 02:23:00.571568 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 13 02:23:00.571574 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 02:23:00.571580 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 02:23:00.571585 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Sep 13 02:23:00.571591 kernel: TCP: Hash tables configured (established 262144 bind 65536) Sep 13 
02:23:00.571596 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 13 02:23:00.571601 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 13 02:23:00.571607 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 02:23:00.571612 kernel: NET: Registered PF_XDP protocol family Sep 13 02:23:00.571657 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7f800000-0x7f800fff 64bit] Sep 13 02:23:00.571704 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7f801000-0x7f801fff 64bit] Sep 13 02:23:00.571749 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7f802000-0x7f802fff 64bit] Sep 13 02:23:00.571793 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 13 02:23:00.571840 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 13 02:23:00.571888 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 13 02:23:00.571933 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 13 02:23:00.571980 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 13 02:23:00.572024 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Sep 13 02:23:00.572069 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Sep 13 02:23:00.572114 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 13 02:23:00.572159 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Sep 13 02:23:00.572203 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Sep 13 02:23:00.572250 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 13 02:23:00.572295 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Sep 13 02:23:00.572340 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Sep 13 02:23:00.572385 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 13 02:23:00.572432 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Sep 13 02:23:00.572477 kernel: pci 0000:00:1c.0: 
PCI bridge to [bus 06] Sep 13 02:23:00.572523 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Sep 13 02:23:00.572569 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Sep 13 02:23:00.572614 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Sep 13 02:23:00.572661 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Sep 13 02:23:00.572706 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Sep 13 02:23:00.572750 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Sep 13 02:23:00.572791 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Sep 13 02:23:00.572832 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 13 02:23:00.572872 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 13 02:23:00.572910 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 13 02:23:00.572948 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window] Sep 13 02:23:00.572986 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Sep 13 02:23:00.573034 kernel: pci_bus 0000:02: resource 1 [mem 0x96100000-0x962fffff] Sep 13 02:23:00.573077 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Sep 13 02:23:00.573121 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Sep 13 02:23:00.573162 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff] Sep 13 02:23:00.573207 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Sep 13 02:23:00.573248 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff] Sep 13 02:23:00.573295 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Sep 13 02:23:00.573336 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff] Sep 13 02:23:00.573380 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Sep 13 02:23:00.573425 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff] Sep 13 02:23:00.573433 kernel: PCI: CLS 64 bytes, default 64 Sep 13 
02:23:00.573439 kernel: DMAR: No ATSR found Sep 13 02:23:00.573444 kernel: DMAR: No SATC found Sep 13 02:23:00.573451 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Sep 13 02:23:00.573456 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Sep 13 02:23:00.573462 kernel: DMAR: IOMMU feature nwfs inconsistent Sep 13 02:23:00.573467 kernel: DMAR: IOMMU feature pasid inconsistent Sep 13 02:23:00.573472 kernel: DMAR: IOMMU feature eafs inconsistent Sep 13 02:23:00.573477 kernel: DMAR: IOMMU feature prs inconsistent Sep 13 02:23:00.573483 kernel: DMAR: IOMMU feature nest inconsistent Sep 13 02:23:00.573488 kernel: DMAR: IOMMU feature mts inconsistent Sep 13 02:23:00.573493 kernel: DMAR: IOMMU feature sc_support inconsistent Sep 13 02:23:00.573500 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Sep 13 02:23:00.573505 kernel: DMAR: dmar0: Using Queued invalidation Sep 13 02:23:00.573510 kernel: DMAR: dmar1: Using Queued invalidation Sep 13 02:23:00.573556 kernel: pci 0000:00:00.0: Adding to iommu group 0 Sep 13 02:23:00.573602 kernel: pci 0000:00:01.0: Adding to iommu group 1 Sep 13 02:23:00.573666 kernel: pci 0000:00:01.1: Adding to iommu group 1 Sep 13 02:23:00.573710 kernel: pci 0000:00:02.0: Adding to iommu group 2 Sep 13 02:23:00.573752 kernel: pci 0000:00:08.0: Adding to iommu group 3 Sep 13 02:23:00.573796 kernel: pci 0000:00:12.0: Adding to iommu group 4 Sep 13 02:23:00.573840 kernel: pci 0000:00:14.0: Adding to iommu group 5 Sep 13 02:23:00.573883 kernel: pci 0000:00:14.2: Adding to iommu group 5 Sep 13 02:23:00.573926 kernel: pci 0000:00:15.0: Adding to iommu group 6 Sep 13 02:23:00.573969 kernel: pci 0000:00:15.1: Adding to iommu group 6 Sep 13 02:23:00.574011 kernel: pci 0000:00:16.0: Adding to iommu group 7 Sep 13 02:23:00.574054 kernel: pci 0000:00:16.1: Adding to iommu group 7 Sep 13 02:23:00.574097 kernel: pci 0000:00:16.4: Adding to iommu group 7 Sep 13 02:23:00.574139 kernel: pci 0000:00:17.0: Adding to iommu group 8 Sep 13 
02:23:00.574185 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Sep 13 02:23:00.574228 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Sep 13 02:23:00.574272 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Sep 13 02:23:00.574315 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Sep 13 02:23:00.574358 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Sep 13 02:23:00.574402 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Sep 13 02:23:00.574490 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Sep 13 02:23:00.574534 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Sep 13 02:23:00.574578 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Sep 13 02:23:00.574624 kernel: pci 0000:02:00.0: Adding to iommu group 1 Sep 13 02:23:00.574668 kernel: pci 0000:02:00.1: Adding to iommu group 1 Sep 13 02:23:00.574715 kernel: pci 0000:04:00.0: Adding to iommu group 16 Sep 13 02:23:00.574759 kernel: pci 0000:05:00.0: Adding to iommu group 17 Sep 13 02:23:00.574805 kernel: pci 0000:07:00.0: Adding to iommu group 18 Sep 13 02:23:00.574852 kernel: pci 0000:08:00.0: Adding to iommu group 18 Sep 13 02:23:00.574859 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Sep 13 02:23:00.574866 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 13 02:23:00.574872 kernel: software IO TLB: mapped [mem 0x0000000073fc7000-0x0000000077fc7000] (64MB) Sep 13 02:23:00.574877 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Sep 13 02:23:00.574882 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Sep 13 02:23:00.574887 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Sep 13 02:23:00.574893 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Sep 13 02:23:00.574898 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Sep 13 02:23:00.574946 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Sep 13 02:23:00.574954 kernel: Initialise system trusted keyrings Sep 13 02:23:00.574961 
kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Sep 13 02:23:00.574966 kernel: Key type asymmetric registered Sep 13 02:23:00.574971 kernel: Asymmetric key parser 'x509' registered Sep 13 02:23:00.574977 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 13 02:23:00.574982 kernel: io scheduler mq-deadline registered Sep 13 02:23:00.574987 kernel: io scheduler kyber registered Sep 13 02:23:00.574993 kernel: io scheduler bfq registered Sep 13 02:23:00.575037 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Sep 13 02:23:00.575082 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Sep 13 02:23:00.575126 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Sep 13 02:23:00.575170 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Sep 13 02:23:00.575214 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Sep 13 02:23:00.575257 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Sep 13 02:23:00.575301 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Sep 13 02:23:00.575349 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Sep 13 02:23:00.575358 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Sep 13 02:23:00.575363 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Sep 13 02:23:00.575369 kernel: pstore: Registered erst as persistent store backend Sep 13 02:23:00.575374 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 02:23:00.575379 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 02:23:00.575385 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 02:23:00.575390 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 13 02:23:00.575462 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Sep 13 02:23:00.575490 kernel: i8042: PNP: No PS/2 controller found. 
Sep 13 02:23:00.575531 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Sep 13 02:23:00.575570 kernel: rtc_cmos rtc_cmos: registered as rtc0 Sep 13 02:23:00.575610 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-09-13T02:22:59 UTC (1757730179) Sep 13 02:23:00.575650 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Sep 13 02:23:00.575657 kernel: intel_pstate: Intel P-state driver initializing Sep 13 02:23:00.575663 kernel: intel_pstate: Disabling energy efficiency optimization Sep 13 02:23:00.575668 kernel: intel_pstate: HWP enabled Sep 13 02:23:00.575673 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Sep 13 02:23:00.575680 kernel: vesafb: scrolling: redraw Sep 13 02:23:00.575685 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Sep 13 02:23:00.575691 kernel: vesafb: framebuffer at 0x95000000, mapped to 0x0000000081fa0ee4, using 768k, total 768k Sep 13 02:23:00.575696 kernel: Console: switching to colour frame buffer device 128x48 Sep 13 02:23:00.575701 kernel: fb0: VESA VGA frame buffer device Sep 13 02:23:00.575706 kernel: NET: Registered PF_INET6 protocol family Sep 13 02:23:00.575712 kernel: Segment Routing with IPv6 Sep 13 02:23:00.575717 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 02:23:00.575722 kernel: NET: Registered PF_PACKET protocol family Sep 13 02:23:00.575728 kernel: Key type dns_resolver registered Sep 13 02:23:00.575734 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Sep 13 02:23:00.575739 kernel: microcode: Microcode Update Driver: v2.2. 
Sep 13 02:23:00.575744 kernel: IPI shorthand broadcast: enabled Sep 13 02:23:00.575749 kernel: sched_clock: Marking stable (1861126628, 1360208573)->(4642880055, -1421544854) Sep 13 02:23:00.575754 kernel: registered taskstats version 1 Sep 13 02:23:00.575760 kernel: Loading compiled-in X.509 certificates Sep 13 02:23:00.575765 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37' Sep 13 02:23:00.575770 kernel: Key type .fscrypt registered Sep 13 02:23:00.575776 kernel: Key type fscrypt-provisioning registered Sep 13 02:23:00.575781 kernel: pstore: Using crash dump compression: deflate Sep 13 02:23:00.575786 kernel: ima: Allocated hash algorithm: sha1 Sep 13 02:23:00.575792 kernel: ima: No architecture policies found Sep 13 02:23:00.575797 kernel: clk: Disabling unused clocks Sep 13 02:23:00.575802 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 13 02:23:00.575808 kernel: Write protecting the kernel read-only data: 28672k Sep 13 02:23:00.575813 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 13 02:23:00.575818 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 13 02:23:00.575824 kernel: Run /init as init process Sep 13 02:23:00.575829 kernel: with arguments: Sep 13 02:23:00.575835 kernel: /init Sep 13 02:23:00.575840 kernel: with environment: Sep 13 02:23:00.575845 kernel: HOME=/ Sep 13 02:23:00.575850 kernel: TERM=linux Sep 13 02:23:00.575855 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 02:23:00.575862 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 02:23:00.575869 systemd[1]: Detected architecture x86-64. 
Sep 13 02:23:00.575875 systemd[1]: Running in initrd. Sep 13 02:23:00.575880 systemd[1]: No hostname configured, using default hostname. Sep 13 02:23:00.575885 systemd[1]: Hostname set to . Sep 13 02:23:00.575891 systemd[1]: Initializing machine ID from random generator. Sep 13 02:23:00.575896 systemd[1]: Queued start job for default target initrd.target. Sep 13 02:23:00.575902 systemd[1]: Started systemd-ask-password-console.path. Sep 13 02:23:00.575908 systemd[1]: Reached target cryptsetup.target. Sep 13 02:23:00.575913 systemd[1]: Reached target paths.target. Sep 13 02:23:00.575918 systemd[1]: Reached target slices.target. Sep 13 02:23:00.575924 systemd[1]: Reached target swap.target. Sep 13 02:23:00.575929 systemd[1]: Reached target timers.target. Sep 13 02:23:00.575934 systemd[1]: Listening on iscsid.socket. Sep 13 02:23:00.575940 systemd[1]: Listening on iscsiuio.socket. Sep 13 02:23:00.575945 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 02:23:00.575952 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 02:23:00.575957 systemd[1]: Listening on systemd-journald.socket. Sep 13 02:23:00.575962 kernel: tsc: Refined TSC clocksource calibration: 3408.097 MHz Sep 13 02:23:00.575968 systemd[1]: Listening on systemd-networkd.socket. Sep 13 02:23:00.575973 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x31202fb6a6d, max_idle_ns: 440795332646 ns Sep 13 02:23:00.575979 kernel: clocksource: Switched to clocksource tsc Sep 13 02:23:00.575984 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 02:23:00.575989 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 02:23:00.575995 systemd[1]: Reached target sockets.target. Sep 13 02:23:00.576001 systemd[1]: Starting kmod-static-nodes.service... Sep 13 02:23:00.576006 systemd[1]: Finished network-cleanup.service. Sep 13 02:23:00.576012 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 02:23:00.576017 systemd[1]: Starting systemd-journald.service... 
Sep 13 02:23:00.576022 systemd[1]: Starting systemd-modules-load.service... Sep 13 02:23:00.576030 systemd-journald[269]: Journal started Sep 13 02:23:00.576056 systemd-journald[269]: Runtime Journal (/run/log/journal/48e39105522c4862a57666c4d7baf84a) is 8.0M, max 639.3M, 631.3M free. Sep 13 02:23:00.577758 systemd-modules-load[270]: Inserted module 'overlay' Sep 13 02:23:00.582000 audit: BPF prog-id=6 op=LOAD Sep 13 02:23:00.601405 kernel: audit: type=1334 audit(1757730180.582:2): prog-id=6 op=LOAD Sep 13 02:23:00.601418 systemd[1]: Starting systemd-resolved.service... Sep 13 02:23:00.652461 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 02:23:00.652477 systemd[1]: Starting systemd-vconsole-setup.service... Sep 13 02:23:00.685433 kernel: Bridge firewalling registered Sep 13 02:23:00.685449 systemd[1]: Started systemd-journald.service. Sep 13 02:23:00.699772 systemd-modules-load[270]: Inserted module 'br_netfilter' Sep 13 02:23:00.747554 kernel: audit: type=1130 audit(1757730180.706:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:00.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:00.702552 systemd-resolved[272]: Positive Trust Anchors: Sep 13 02:23:00.812448 kernel: SCSI subsystem initialized Sep 13 02:23:00.812536 kernel: audit: type=1130 audit(1757730180.758:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 02:23:00.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:00.702559 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 02:23:00.941564 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 02:23:00.941601 kernel: audit: type=1130 audit(1757730180.831:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:00.941707 kernel: device-mapper: uevent: version 1.0.3 Sep 13 02:23:00.941727 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 02:23:00.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:00.702580 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 02:23:01.013536 kernel: audit: type=1130 audit(1757730180.948:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 02:23:00.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:00.704184 systemd-resolved[272]: Defaulting to hostname 'linux'. Sep 13 02:23:01.066442 kernel: audit: type=1130 audit(1757730181.020:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:01.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:00.707652 systemd[1]: Started systemd-resolved.service. Sep 13 02:23:01.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:00.759586 systemd[1]: Finished kmod-static-nodes.service. Sep 13 02:23:01.136623 kernel: audit: type=1130 audit(1757730181.073:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:00.832557 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 02:23:00.932530 systemd-modules-load[270]: Inserted module 'dm_multipath' Sep 13 02:23:00.949747 systemd[1]: Finished systemd-modules-load.service. Sep 13 02:23:01.021640 systemd[1]: Finished systemd-vconsole-setup.service. Sep 13 02:23:01.074669 systemd[1]: Reached target nss-lookup.target. Sep 13 02:23:01.129994 systemd[1]: Starting dracut-cmdline-ask.service... Sep 13 02:23:01.137049 systemd[1]: Starting systemd-sysctl.service... Sep 13 02:23:01.150162 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Sep 13 02:23:01.150906 systemd[1]: Finished systemd-sysctl.service. Sep 13 02:23:01.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:01.153079 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 02:23:01.200474 kernel: audit: type=1130 audit(1757730181.149:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:01.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:01.215738 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 02:23:01.281514 kernel: audit: type=1130 audit(1757730181.214:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:01.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:01.273994 systemd[1]: Starting dracut-cmdline.service... 
Sep 13 02:23:01.295511 dracut-cmdline[298]: dracut-dracut-053 Sep 13 02:23:01.295511 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Sep 13 02:23:01.295511 dracut-cmdline[298]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 02:23:01.366475 kernel: Loading iSCSI transport class v2.0-870. Sep 13 02:23:01.366488 kernel: iscsi: registered transport (tcp) Sep 13 02:23:01.432552 kernel: iscsi: registered transport (qla4xxx) Sep 13 02:23:01.432569 kernel: QLogic iSCSI HBA Driver Sep 13 02:23:01.448113 systemd[1]: Finished dracut-cmdline.service. Sep 13 02:23:01.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:01.457099 systemd[1]: Starting dracut-pre-udev.service... 
Sep 13 02:23:01.512468 kernel: raid6: avx2x4 gen() 48878 MB/s Sep 13 02:23:01.547467 kernel: raid6: avx2x4 xor() 21560 MB/s Sep 13 02:23:01.582441 kernel: raid6: avx2x2 gen() 53686 MB/s Sep 13 02:23:01.617443 kernel: raid6: avx2x2 xor() 32159 MB/s Sep 13 02:23:01.653433 kernel: raid6: avx2x1 gen() 44333 MB/s Sep 13 02:23:01.687437 kernel: raid6: avx2x1 xor() 27219 MB/s Sep 13 02:23:01.722474 kernel: raid6: sse2x4 gen() 20811 MB/s Sep 13 02:23:01.756438 kernel: raid6: sse2x4 xor() 11992 MB/s Sep 13 02:23:01.790436 kernel: raid6: sse2x2 gen() 21670 MB/s Sep 13 02:23:01.824474 kernel: raid6: sse2x2 xor() 13411 MB/s Sep 13 02:23:01.858473 kernel: raid6: sse2x1 gen() 18292 MB/s Sep 13 02:23:01.910449 kernel: raid6: sse2x1 xor() 8923 MB/s Sep 13 02:23:01.910507 kernel: raid6: using algorithm avx2x2 gen() 53686 MB/s Sep 13 02:23:01.910515 kernel: raid6: .... xor() 32159 MB/s, rmw enabled Sep 13 02:23:01.928683 kernel: raid6: using avx2x2 recovery algorithm Sep 13 02:23:01.975455 kernel: xor: automatically using best checksumming function avx Sep 13 02:23:02.079409 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 02:23:02.084016 systemd[1]: Finished dracut-pre-udev.service. Sep 13 02:23:02.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:02.091000 audit: BPF prog-id=7 op=LOAD Sep 13 02:23:02.091000 audit: BPF prog-id=8 op=LOAD Sep 13 02:23:02.093481 systemd[1]: Starting systemd-udevd.service... Sep 13 02:23:02.102005 systemd-udevd[479]: Using default interface naming scheme 'v252'. Sep 13 02:23:02.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:02.106571 systemd[1]: Started systemd-udevd.service. 
Sep 13 02:23:02.146513 dracut-pre-trigger[491]: rd.md=0: removing MD RAID activation Sep 13 02:23:02.123353 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 02:23:02.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:02.152071 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 02:23:02.163238 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 02:23:02.233827 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 02:23:02.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:02.261410 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 02:23:02.289414 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 02:23:02.289476 kernel: libata version 3.00 loaded. Sep 13 02:23:02.308410 kernel: AES CTR mode by8 optimization enabled Sep 13 02:23:02.308443 kernel: ACPI: bus type USB registered Sep 13 02:23:02.308455 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 13 02:23:02.377981 kernel: usbcore: registered new interface driver usbfs Sep 13 02:23:02.378006 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Sep 13 02:23:02.378016 kernel: usbcore: registered new interface driver hub Sep 13 02:23:02.412050 kernel: usbcore: registered new device driver usb Sep 13 02:23:02.450637 kernel: igb 0000:04:00.0: added PHC on eth0 Sep 13 02:23:02.502360 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 13 02:23:02.502426 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:24:72 Sep 13 02:23:02.502482 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Sep 13 02:23:02.502549 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Sep 13 02:23:02.540837 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Sep 13 02:23:03.233955 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 13 02:23:03.234026 kernel: ahci 0000:00:17.0: version 3.0 Sep 13 02:23:03.234082 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 13 02:23:03.234132 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Sep 13 02:23:03.234181 kernel: igb 0000:05:00.0: added PHC on eth1 Sep 13 02:23:03.234232 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 13 02:23:03.234282 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:24:73 Sep 13 02:23:03.234331 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Sep 13 02:23:03.234380 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 13 02:23:03.234458 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Sep 13 02:23:03.234523 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Sep 13 02:23:03.234570 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Sep 13 02:23:03.234619 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Sep 13 02:23:03.234667 kernel: scsi host0: ahci Sep 13 02:23:03.234722 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 13 02:23:03.234770 kernel: scsi host1: ahci Sep 13 02:23:03.234825 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Sep 13 02:23:03.234874 kernel: scsi host2: ahci Sep 13 02:23:03.234930 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Sep 13 02:23:03.234978 kernel: scsi host3: ahci Sep 13 02:23:03.235031 kernel: hub 1-0:1.0: USB hub found Sep 13 02:23:03.235088 kernel: scsi host4: ahci Sep 13 02:23:03.235141 kernel: hub 1-0:1.0: 16 ports detected Sep 13 02:23:03.235195 kernel: scsi host5: ahci 
Sep 13 02:23:03.235250 kernel: hub 2-0:1.0: USB hub found Sep 13 02:23:03.235305 kernel: scsi host6: ahci Sep 13 02:23:03.235360 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Sep 13 02:23:03.235414 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 13 02:23:03.235508 kernel: hub 2-0:1.0: 10 ports detected Sep 13 02:23:03.235564 kernel: scsi host7: ahci Sep 13 02:23:03.235616 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 139 Sep 13 02:23:03.235623 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 139 Sep 13 02:23:03.235630 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Sep 13 02:23:03.235680 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 139 Sep 13 02:23:03.235687 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 139 Sep 13 02:23:03.235694 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Sep 13 02:23:03.235787 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 139 Sep 13 02:23:03.235797 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 139 Sep 13 02:23:03.235803 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 139 Sep 13 02:23:03.235809 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 139 Sep 13 02:23:03.235816 kernel: hub 1-14:1.0: USB hub found Sep 13 02:23:03.235878 kernel: hub 1-14:1.0: 4 ports detected Sep 13 02:23:03.235933 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Sep 13 02:23:03.235984 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Sep 13 02:23:03.877642 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 13 02:23:03.877711 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 13 02:23:03.877720 kernel: ata4: SATA link down 
(SStatus 0 SControl 300) Sep 13 02:23:03.877727 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 13 02:23:03.877733 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 02:23:03.877740 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 02:23:03.877747 kernel: ata8: SATA link down (SStatus 0 SControl 300) Sep 13 02:23:03.877753 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Sep 13 02:23:03.877859 kernel: ata7: SATA link down (SStatus 0 SControl 300) Sep 13 02:23:03.877866 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 13 02:23:03.877873 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 13 02:23:03.877930 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 13 02:23:03.877938 kernel: port_module: 9 callbacks suppressed Sep 13 02:23:03.877945 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Sep 13 02:23:03.877998 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 13 02:23:03.878005 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Sep 13 02:23:03.878058 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 13 02:23:03.878066 kernel: ata1.00: Features: NCQ-prio Sep 13 02:23:03.878072 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 13 02:23:03.878079 kernel: ata2.00: Features: NCQ-prio Sep 13 02:23:03.878086 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 02:23:03.878092 kernel: ata1.00: configured for UDMA/133 Sep 13 02:23:03.878099 kernel: ata2.00: configured for UDMA/133 Sep 13 02:23:03.878105 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 13 02:23:04.335393 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 13 02:23:04.335512 kernel: usbcore: registered new interface driver usbhid Sep 13 02:23:04.335525 kernel: 
usbhid: USB HID core driver Sep 13 02:23:04.335537 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Sep 13 02:23:04.335549 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Sep 13 02:23:04.335648 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 02:23:04.335661 kernel: ata1.00: Enabling discard_zeroes_data Sep 13 02:23:04.335675 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 13 02:23:04.335774 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 13 02:23:04.335877 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Sep 13 02:23:04.335980 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Sep 13 02:23:04.335994 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Sep 13 02:23:04.336108 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Sep 13 02:23:04.336195 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Sep 13 02:23:04.336271 kernel: sd 1:0:0:0: [sda] Write Protect is off Sep 13 02:23:04.336340 kernel: sd 0:0:0:0: [sdb] Write Protect is off Sep 13 02:23:04.336405 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 13 02:23:04.336474 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 13 02:23:04.336533 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 13 02:23:04.336595 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 02:23:04.336603 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 13 02:23:04.336664 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 02:23:04.336671 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Sep 13 02:23:04.336731 kernel: 
ata1.00: Enabling discard_zeroes_data Sep 13 02:23:04.336739 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 02:23:04.336746 kernel: GPT:9289727 != 937703087 Sep 13 02:23:04.336752 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 02:23:04.336758 kernel: GPT:9289727 != 937703087 Sep 13 02:23:04.336764 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 02:23:04.336771 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 02:23:04.336778 kernel: ata1.00: Enabling discard_zeroes_data Sep 13 02:23:04.336786 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Sep 13 02:23:04.355451 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Sep 13 02:23:04.381502 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Sep 13 02:23:04.403450 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sdb6 scanned by (udev-worker) (561) Sep 13 02:23:04.407269 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 02:23:04.418560 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 02:23:04.427064 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 02:23:04.462668 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 02:23:04.490641 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 02:23:04.505025 systemd[1]: Starting disk-uuid.service... Sep 13 02:23:04.525550 disk-uuid[697]: Primary Header is updated. Sep 13 02:23:04.525550 disk-uuid[697]: Secondary Entries is updated. Sep 13 02:23:04.525550 disk-uuid[697]: Secondary Header is updated. 
Sep 13 02:23:04.579490 kernel: ata1.00: Enabling discard_zeroes_data Sep 13 02:23:04.579503 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 02:23:04.579510 kernel: ata1.00: Enabling discard_zeroes_data Sep 13 02:23:04.579516 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 02:23:04.603365 kernel: ata1.00: Enabling discard_zeroes_data Sep 13 02:23:04.621440 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 02:23:05.603315 kernel: ata1.00: Enabling discard_zeroes_data Sep 13 02:23:05.622297 disk-uuid[698]: The operation has completed successfully. Sep 13 02:23:05.631605 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 02:23:05.660885 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 02:23:05.754837 kernel: audit: type=1130 audit(1757730185.666:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:05.754856 kernel: audit: type=1131 audit(1757730185.666:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:05.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:05.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:05.660929 systemd[1]: Finished disk-uuid.service. Sep 13 02:23:05.783496 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 02:23:05.670816 systemd[1]: Starting verity-setup.service... Sep 13 02:23:05.840097 systemd[1]: Found device dev-mapper-usr.device. Sep 13 02:23:05.852181 systemd[1]: Mounting sysusr-usr.mount... 
Sep 13 02:23:05.863135 systemd[1]: Finished verity-setup.service. Sep 13 02:23:05.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:05.930412 kernel: audit: type=1130 audit(1757730185.876:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:06.027487 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 02:23:06.027711 systemd[1]: Mounted sysusr-usr.mount. Sep 13 02:23:06.035742 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 02:23:06.036686 systemd[1]: Starting ignition-setup.service... Sep 13 02:23:06.135460 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 13 02:23:06.135477 kernel: BTRFS info (device sdb6): using free space tree Sep 13 02:23:06.135488 kernel: BTRFS info (device sdb6): has skinny extents Sep 13 02:23:06.135497 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 13 02:23:06.050432 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 02:23:06.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:06.129089 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 02:23:06.242831 kernel: audit: type=1130 audit(1757730186.142:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 02:23:06.242845 kernel: audit: type=1130 audit(1757730186.196:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:06.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:06.143959 systemd[1]: Finished ignition-setup.service. Sep 13 02:23:06.271374 kernel: audit: type=1334 audit(1757730186.249:24): prog-id=9 op=LOAD Sep 13 02:23:06.249000 audit: BPF prog-id=9 op=LOAD Sep 13 02:23:06.198108 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 02:23:06.251276 systemd[1]: Starting systemd-networkd.service... Sep 13 02:23:06.342515 kernel: audit: type=1130 audit(1757730186.294:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:06.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:06.286880 systemd-networkd[877]: lo: Link UP Sep 13 02:23:06.318523 ignition[875]: Ignition 2.14.0 Sep 13 02:23:06.286882 systemd-networkd[877]: lo: Gained carrier Sep 13 02:23:06.318528 ignition[875]: Stage: fetch-offline Sep 13 02:23:06.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 02:23:06.287225 systemd-networkd[877]: Enumeration completed Sep 13 02:23:06.506325 kernel: audit: type=1130 audit(1757730186.382:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:06.506421 kernel: audit: type=1130 audit(1757730186.437:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:06.506430 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Sep 13 02:23:06.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:06.318555 ignition[875]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 02:23:06.287295 systemd[1]: Started systemd-networkd.service. Sep 13 02:23:06.553488 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready Sep 13 02:23:06.318570 ignition[875]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 13 02:23:06.287938 systemd-networkd[877]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 02:23:06.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:06.326581 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 13 02:23:06.295554 systemd[1]: Reached target network.target. 
Sep 13 02:23:06.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:06.607748 iscsid[902]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 02:23:06.607748 iscsid[902]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 13 02:23:06.607748 iscsid[902]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 13 02:23:06.607748 iscsid[902]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 02:23:06.607748 iscsid[902]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 02:23:06.607748 iscsid[902]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 02:23:06.607748 iscsid[902]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 02:23:06.759571 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Sep 13 02:23:06.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:06.326656 ignition[875]: parsed url from cmdline: "" Sep 13 02:23:06.330879 unknown[875]: fetched base config from "system" Sep 13 02:23:06.326658 ignition[875]: no config URL provided Sep 13 02:23:06.330883 unknown[875]: fetched user config from "system" Sep 13 02:23:06.326661 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 02:23:06.351084 systemd[1]: Starting iscsiuio.service... 
Sep 13 02:23:06.326683 ignition[875]: parsing config with SHA512: b8e9c5e820dbb2c5bd1b8466163f05e6f014db284e458877404ae3a6b30c1069dfe674f5e4d6c386b3cbc5c8c3596c8bea9b0ca28dfbae8c7d3c61807b7886d7 Sep 13 02:23:06.364698 systemd[1]: Started iscsiuio.service. Sep 13 02:23:06.331179 ignition[875]: fetch-offline: fetch-offline passed Sep 13 02:23:06.383680 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 02:23:06.331182 ignition[875]: POST message to Packet Timeline Sep 13 02:23:06.438631 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 02:23:06.331186 ignition[875]: POST Status error: resource requires networking Sep 13 02:23:06.439077 systemd[1]: Starting ignition-kargs.service... Sep 13 02:23:06.331224 ignition[875]: Ignition finished successfully Sep 13 02:23:06.509220 systemd-networkd[877]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 02:23:06.511444 ignition[891]: Ignition 2.14.0 Sep 13 02:23:06.522023 systemd[1]: Starting iscsid.service... Sep 13 02:23:06.511453 ignition[891]: Stage: kargs Sep 13 02:23:06.546551 systemd[1]: Started iscsid.service. Sep 13 02:23:06.511594 ignition[891]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 02:23:06.561851 systemd[1]: Starting dracut-initqueue.service... Sep 13 02:23:06.511621 ignition[891]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 13 02:23:06.581572 systemd[1]: Finished dracut-initqueue.service. Sep 13 02:23:06.516070 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 13 02:23:06.600735 systemd[1]: Reached target remote-fs-pre.target. Sep 13 02:23:06.517156 ignition[891]: kargs: kargs passed Sep 13 02:23:06.616613 systemd[1]: Reached target remote-cryptsetup.target. 
Sep 13 02:23:06.517166 ignition[891]: POST message to Packet Timeline Sep 13 02:23:06.634626 systemd[1]: Reached target remote-fs.target. Sep 13 02:23:06.517187 ignition[891]: GET https://metadata.packet.net/metadata: attempt #1 Sep 13 02:23:06.666989 systemd[1]: Starting dracut-pre-mount.service... Sep 13 02:23:06.521646 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60390->[::1]:53: read: connection refused Sep 13 02:23:06.706837 systemd[1]: Finished dracut-pre-mount.service. Sep 13 02:23:06.722008 ignition[891]: GET https://metadata.packet.net/metadata: attempt #2 Sep 13 02:23:06.723527 systemd-networkd[877]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 02:23:06.722664 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44631->[::1]:53: read: connection refused Sep 13 02:23:06.751817 systemd-networkd[877]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 13 02:23:06.782211 systemd-networkd[877]: enp2s0f1np1: Link UP Sep 13 02:23:06.782642 systemd-networkd[877]: enp2s0f1np1: Gained carrier Sep 13 02:23:06.793945 systemd-networkd[877]: enp2s0f0np0: Link UP Sep 13 02:23:06.794355 systemd-networkd[877]: eno2: Link UP Sep 13 02:23:06.794770 systemd-networkd[877]: eno1: Link UP Sep 13 02:23:07.122997 ignition[891]: GET https://metadata.packet.net/metadata: attempt #3 Sep 13 02:23:07.124017 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45280->[::1]:53: read: connection refused Sep 13 02:23:07.574538 systemd-networkd[877]: enp2s0f0np0: Gained carrier Sep 13 02:23:07.582627 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready Sep 13 02:23:07.623609 systemd-networkd[877]: enp2s0f0np0: DHCPv4 address 147.75.203.133/31, gateway 147.75.203.132 acquired from 145.40.83.140 Sep 13 02:23:07.809613 systemd-networkd[877]: enp2s0f1np1: Gained IPv6LL Sep 13 02:23:07.924606 ignition[891]: GET https://metadata.packet.net/metadata: attempt #4 Sep 13 02:23:07.925566 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56859->[::1]:53: read: connection refused Sep 13 02:23:08.641654 systemd-networkd[877]: enp2s0f0np0: Gained IPv6LL Sep 13 02:23:09.526423 ignition[891]: GET https://metadata.packet.net/metadata: attempt #5 Sep 13 02:23:09.527427 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57204->[::1]:53: read: connection refused Sep 13 02:23:12.730032 ignition[891]: GET https://metadata.packet.net/metadata: attempt #6 Sep 13 02:23:13.843160 ignition[891]: GET result: OK Sep 13 02:23:14.320384 ignition[891]: Ignition finished successfully Sep 13 02:23:14.324344 systemd[1]: Finished ignition-kargs.service. 
Sep 13 02:23:14.406293 kernel: kauditd_printk_skb: 3 callbacks suppressed Sep 13 02:23:14.406309 kernel: audit: type=1130 audit(1757730194.335:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:14.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:14.345826 ignition[922]: Ignition 2.14.0 Sep 13 02:23:14.338620 systemd[1]: Starting ignition-disks.service... Sep 13 02:23:14.345830 ignition[922]: Stage: disks Sep 13 02:23:14.345929 ignition[922]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 02:23:14.345939 ignition[922]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 13 02:23:14.348330 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 13 02:23:14.348928 ignition[922]: disks: disks passed Sep 13 02:23:14.348931 ignition[922]: POST message to Packet Timeline Sep 13 02:23:14.348941 ignition[922]: GET https://metadata.packet.net/metadata: attempt #1 Sep 13 02:23:15.405262 ignition[922]: GET result: OK Sep 13 02:23:15.819898 ignition[922]: Ignition finished successfully Sep 13 02:23:15.822923 systemd[1]: Finished ignition-disks.service. Sep 13 02:23:15.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:15.835904 systemd[1]: Reached target initrd-root-device.target. 
Sep 13 02:23:15.897785 kernel: audit: type=1130 audit(1757730195.834:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:15.897624 systemd[1]: Reached target local-fs-pre.target.
Sep 13 02:23:15.911624 systemd[1]: Reached target local-fs.target.
Sep 13 02:23:15.925560 systemd[1]: Reached target sysinit.target.
Sep 13 02:23:15.925596 systemd[1]: Reached target basic.target.
Sep 13 02:23:15.938117 systemd[1]: Starting systemd-fsck-root.service...
Sep 13 02:23:15.962201 systemd-fsck[937]: ROOT: clean, 629/553520 files, 56028/553472 blocks
Sep 13 02:23:15.976017 systemd[1]: Finished systemd-fsck-root.service.
Sep 13 02:23:16.063994 kernel: audit: type=1130 audit(1757730195.983:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:16.064082 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 13 02:23:15.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:15.994707 systemd[1]: Mounting sysroot.mount...
Sep 13 02:23:16.071049 systemd[1]: Mounted sysroot.mount.
Sep 13 02:23:16.084656 systemd[1]: Reached target initrd-root-fs.target.
Sep 13 02:23:16.100130 systemd[1]: Mounting sysroot-usr.mount...
Sep 13 02:23:16.108239 systemd[1]: Starting flatcar-metadata-hostname.service...
Sep 13 02:23:16.128032 systemd[1]: Starting flatcar-static-network.service...
Sep 13 02:23:16.143547 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 02:23:16.143671 systemd[1]: Reached target ignition-diskful.target.
Sep 13 02:23:16.164573 systemd[1]: Mounted sysroot-usr.mount.
Sep 13 02:23:16.188457 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 02:23:16.382318 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by mount (955)
Sep 13 02:23:16.382335 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 02:23:16.382364 kernel: BTRFS info (device sdb6): using free space tree
Sep 13 02:23:16.382372 kernel: BTRFS info (device sdb6): has skinny extents
Sep 13 02:23:16.382379 kernel: audit: type=1130 audit(1757730196.303:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:16.382388 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Sep 13 02:23:16.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:16.200340 systemd[1]: Starting initrd-setup-root.service...
Sep 13 02:23:16.398728 coreos-metadata[945]: Sep 13 02:23:16.283 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Sep 13 02:23:16.418515 coreos-metadata[946]: Sep 13 02:23:16.283 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Sep 13 02:23:16.284675 systemd[1]: Finished initrd-setup-root.service.
Sep 13 02:23:16.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:16.475748 initrd-setup-root[960]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 02:23:16.512597 kernel: audit: type=1130 audit(1757730196.446:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:16.305866 systemd[1]: Starting ignition-mount.service...
Sep 13 02:23:16.519607 bash[1020]: umount: /sysroot/usr/share/oem: not mounted.
Sep 13 02:23:16.527637 initrd-setup-root[968]: cut: /sysroot/etc/group: No such file or directory
Sep 13 02:23:16.389975 systemd[1]: Starting sysroot-boot.service...
Sep 13 02:23:16.544588 ignition[1025]: INFO : Ignition 2.14.0
Sep 13 02:23:16.544588 ignition[1025]: INFO : Stage: mount
Sep 13 02:23:16.544588 ignition[1025]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 02:23:16.544588 ignition[1025]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Sep 13 02:23:16.544588 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Sep 13 02:23:16.544588 ignition[1025]: INFO : mount: mount passed
Sep 13 02:23:16.544588 ignition[1025]: INFO : POST message to Packet Timeline
Sep 13 02:23:16.544588 ignition[1025]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Sep 13 02:23:16.624647 initrd-setup-root[976]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 02:23:16.410838 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 02:23:16.642662 initrd-setup-root[984]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 02:23:16.429724 systemd[1]: Finished sysroot-boot.service.
Sep 13 02:23:17.391060 coreos-metadata[946]: Sep 13 02:23:17.390 INFO Fetch successful
Sep 13 02:23:17.426034 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Sep 13 02:23:17.521549 kernel: audit: type=1130 audit(1757730197.433:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:17.521562 kernel: audit: type=1131 audit(1757730197.433:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:17.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:17.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:17.521607 ignition[1025]: INFO : GET result: OK
Sep 13 02:23:17.426085 systemd[1]: Finished flatcar-static-network.service.
Sep 13 02:23:17.567449 coreos-metadata[945]: Sep 13 02:23:17.551 INFO Fetch successful
Sep 13 02:23:17.581664 coreos-metadata[945]: Sep 13 02:23:17.581 INFO wrote hostname ci-3510.3.8-n-78f707d8f3 to /sysroot/etc/hostname
Sep 13 02:23:17.582164 systemd[1]: Finished flatcar-metadata-hostname.service.
Sep 13 02:23:17.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:17.667579 kernel: audit: type=1130 audit(1757730197.608:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:17.997862 ignition[1025]: INFO : Ignition finished successfully
Sep 13 02:23:18.000612 systemd[1]: Finished ignition-mount.service.
Sep 13 02:23:18.073447 kernel: audit: type=1130 audit(1757730198.013:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:18.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:18.016466 systemd[1]: Starting ignition-files.service...
Sep 13 02:23:18.087596 ignition[1039]: INFO : Ignition 2.14.0
Sep 13 02:23:18.087596 ignition[1039]: INFO : Stage: files
Sep 13 02:23:18.087596 ignition[1039]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 02:23:18.087596 ignition[1039]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Sep 13 02:23:18.094187 unknown[1039]: wrote ssh authorized keys file for user: core
Sep 13 02:23:18.139523 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Sep 13 02:23:18.139523 ignition[1039]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 02:23:18.139523 ignition[1039]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 02:23:18.139523 ignition[1039]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 02:23:18.139523 ignition[1039]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 02:23:18.139523 ignition[1039]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 02:23:18.139523 ignition[1039]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 02:23:18.139523 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 13 02:23:18.139523 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 13 02:23:18.253651 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 02:23:18.457567 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 13 02:23:18.474625 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 02:23:18.474625 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 13 02:23:18.906816 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 02:23:19.182801 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 02:23:19.198513 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 02:23:19.198513 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 02:23:19.198513 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 02:23:19.198513 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 02:23:19.198513 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 02:23:19.198513 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 02:23:19.198513 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 02:23:19.198513 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 02:23:19.198513 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 02:23:19.198513 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 02:23:19.198513 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 02:23:19.198513 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 02:23:19.198513 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Sep 13 02:23:19.198513 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 02:23:19.198513 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1056288605"
Sep 13 02:23:19.195792 systemd[1]: mnt-oem1056288605.mount: Deactivated successfully.
Sep 13 02:23:19.450631 ignition[1039]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1056288605": device or resource busy
Sep 13 02:23:19.450631 ignition[1039]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1056288605", trying btrfs: device or resource busy
Sep 13 02:23:19.450631 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1056288605"
Sep 13 02:23:19.450631 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1056288605"
Sep 13 02:23:19.450631 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1056288605"
Sep 13 02:23:19.450631 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1056288605"
Sep 13 02:23:19.450631 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Sep 13 02:23:19.450631 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 02:23:19.450631 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 13 02:23:19.672920 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
Sep 13 02:23:20.834667 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 02:23:20.834667 ignition[1039]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Sep 13 02:23:20.834667 ignition[1039]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Sep 13 02:23:20.834667 ignition[1039]: INFO : files: op(11): [started] processing unit "packet-phone-home.service"
Sep 13 02:23:20.834667 ignition[1039]: INFO : files: op(11): [finished] processing unit "packet-phone-home.service"
Sep 13 02:23:20.834667 ignition[1039]: INFO : files: op(12): [started] processing unit "prepare-helm.service"
Sep 13 02:23:20.916607 ignition[1039]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 02:23:20.916607 ignition[1039]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 02:23:20.916607 ignition[1039]: INFO : files: op(12): [finished] processing unit "prepare-helm.service"
Sep 13 02:23:20.916607 ignition[1039]: INFO : files: op(14): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 13 02:23:20.916607 ignition[1039]: INFO : files: op(14): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 13 02:23:20.916607 ignition[1039]: INFO : files: op(15): [started] setting preset to enabled for "packet-phone-home.service"
Sep 13 02:23:20.916607 ignition[1039]: INFO : files: op(15): [finished] setting preset to enabled for "packet-phone-home.service"
Sep 13 02:23:20.916607 ignition[1039]: INFO : files: op(16): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 02:23:20.916607 ignition[1039]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 02:23:20.916607 ignition[1039]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 02:23:20.916607 ignition[1039]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 02:23:20.916607 ignition[1039]: INFO : files: files passed
Sep 13 02:23:20.916607 ignition[1039]: INFO : POST message to Packet Timeline
Sep 13 02:23:20.916607 ignition[1039]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Sep 13 02:23:22.001372 ignition[1039]: INFO : GET result: OK
Sep 13 02:23:22.439592 ignition[1039]: INFO : Ignition finished successfully
Sep 13 02:23:22.442233 systemd[1]: Finished ignition-files.service.
Sep 13 02:23:22.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:22.461559 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 13 02:23:22.534620 kernel: audit: type=1130 audit(1757730202.454:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:22.524615 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 13 02:23:22.558560 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 02:23:22.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:22.627438 kernel: audit: type=1130 audit(1757730202.567:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:22.525037 systemd[1]: Starting ignition-quench.service...
Sep 13 02:23:22.753542 kernel: audit: type=1130 audit(1757730202.634:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:22.753554 kernel: audit: type=1131 audit(1757730202.634:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:22.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:22.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:22.541819 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 13 02:23:22.590110 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 02:23:22.590177 systemd[1]: Finished ignition-quench.service.
Sep 13 02:23:22.912890 kernel: audit: type=1130 audit(1757730202.790:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:22.912902 kernel: audit: type=1131 audit(1757730202.790:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:22.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:22.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:22.635646 systemd[1]: Reached target ignition-complete.target.
Sep 13 02:23:22.761956 systemd[1]: Starting initrd-parse-etc.service...
Sep 13 02:23:22.774769 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 02:23:22.774813 systemd[1]: Finished initrd-parse-etc.service.
Sep 13 02:23:22.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:22.791701 systemd[1]: Reached target initrd-fs.target.
Sep 13 02:23:23.041615 kernel: audit: type=1130 audit(1757730202.966:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:22.921609 systemd[1]: Reached target initrd.target.
Sep 13 02:23:22.936634 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 13 02:23:22.936998 systemd[1]: Starting dracut-pre-pivot.service...
Sep 13 02:23:22.951746 systemd[1]: Finished dracut-pre-pivot.service.
Sep 13 02:23:23.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:22.968047 systemd[1]: Starting initrd-cleanup.service...
Sep 13 02:23:23.175600 kernel: audit: type=1131 audit(1757730203.097:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:23.037337 systemd[1]: Stopped target nss-lookup.target.
Sep 13 02:23:23.049649 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 13 02:23:23.064636 systemd[1]: Stopped target timers.target.
Sep 13 02:23:23.082618 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 02:23:23.082707 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 13 02:23:23.098732 systemd[1]: Stopped target initrd.target.
Sep 13 02:23:23.168643 systemd[1]: Stopped target basic.target.
Sep 13 02:23:23.182645 systemd[1]: Stopped target ignition-complete.target.
Sep 13 02:23:23.197633 systemd[1]: Stopped target ignition-diskful.target.
Sep 13 02:23:23.213605 systemd[1]: Stopped target initrd-root-device.target.
Sep 13 02:23:23.228624 systemd[1]: Stopped target remote-fs.target.
Sep 13 02:23:23.405417 kernel: audit: type=1131 audit(1757730203.335:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:23.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:23.244687 systemd[1]: Stopped target remote-fs-pre.target.
Sep 13 02:23:23.259885 systemd[1]: Stopped target sysinit.target.
Sep 13 02:23:23.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:23.274912 systemd[1]: Stopped target local-fs.target.
Sep 13 02:23:23.506601 kernel: audit: type=1131 audit(1757730203.428:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:23.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:23.289897 systemd[1]: Stopped target local-fs-pre.target.
Sep 13 02:23:23.305892 systemd[1]: Stopped target swap.target.
Sep 13 02:23:23.320791 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 02:23:23.321104 systemd[1]: Stopped dracut-pre-mount.service.
Sep 13 02:23:23.337068 systemd[1]: Stopped target cryptsetup.target.
Sep 13 02:23:23.413626 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 02:23:23.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:23.413682 systemd[1]: Stopped dracut-initqueue.service.
Sep 13 02:23:23.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:23.429685 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 02:23:23.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:23.429773 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 13 02:23:23.645546 ignition[1089]: INFO : Ignition 2.14.0
Sep 13 02:23:23.645546 ignition[1089]: INFO : Stage: umount
Sep 13 02:23:23.645546 ignition[1089]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 02:23:23.645546 ignition[1089]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Sep 13 02:23:23.645546 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Sep 13 02:23:23.645546 ignition[1089]: INFO : umount: umount passed
Sep 13 02:23:23.645546 ignition[1089]: INFO : POST message to Packet Timeline
Sep 13 02:23:23.645546 ignition[1089]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Sep 13 02:23:23.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:23.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:23.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:23.499705 systemd[1]: Stopped target paths.target.
Sep 13 02:23:23.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:23.513622 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 02:23:23.517576 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 13 02:23:23.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:23.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:23.528639 systemd[1]: Stopped target slices.target.
Sep 13 02:23:23.542606 systemd[1]: Stopped target sockets.target.
Sep 13 02:23:23.558610 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 02:23:23.558687 systemd[1]: Closed iscsid.socket.
Sep 13 02:23:23.572705 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 02:23:23.572856 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 13 02:23:23.589926 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 02:23:23.590196 systemd[1]: Stopped ignition-files.service.
Sep 13 02:23:23.605967 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 02:23:23.606277 systemd[1]: Stopped flatcar-metadata-hostname.service.
Sep 13 02:23:23.622733 systemd[1]: Stopping ignition-mount.service...
Sep 13 02:23:23.638184 systemd[1]: Stopping iscsiuio.service...
Sep 13 02:23:23.653061 systemd[1]: Stopping sysroot-boot.service...
Sep 13 02:23:23.672560 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 02:23:23.672748 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 13 02:23:23.709006 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 02:23:23.709283 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 13 02:23:23.726548 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 02:23:23.726857 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 13 02:23:23.726900 systemd[1]: Stopped iscsiuio.service.
Sep 13 02:23:23.748866 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 02:23:23.748918 systemd[1]: Stopped sysroot-boot.service.
Sep 13 02:23:23.765966 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 02:23:23.766046 systemd[1]: Closed iscsiuio.socket.
Sep 13 02:23:23.779908 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 02:23:23.780017 systemd[1]: Finished initrd-cleanup.service.
Sep 13 02:23:24.611897 ignition[1089]: INFO : GET result: OK
Sep 13 02:23:25.029746 ignition[1089]: INFO : Ignition finished successfully
Sep 13 02:23:25.032220 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 02:23:25.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.032451 systemd[1]: Stopped ignition-mount.service.
Sep 13 02:23:25.048849 systemd[1]: Stopped target network.target.
Sep 13 02:23:25.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.065601 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 02:23:25.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.065726 systemd[1]: Stopped ignition-disks.service.
Sep 13 02:23:25.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.080687 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 02:23:25.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.080803 systemd[1]: Stopped ignition-kargs.service.
Sep 13 02:23:25.095690 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 02:23:25.095813 systemd[1]: Stopped ignition-setup.service.
Sep 13 02:23:25.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.111676 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 02:23:25.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.188000 audit: BPF prog-id=6 op=UNLOAD
Sep 13 02:23:25.111791 systemd[1]: Stopped initrd-setup-root.service.
Sep 13 02:23:25.126937 systemd[1]: Stopping systemd-networkd.service...
Sep 13 02:23:25.133540 systemd-networkd[877]: enp2s0f0np0: DHCPv6 lease lost
Sep 13 02:23:25.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.140556 systemd-networkd[877]: enp2s0f1np1: DHCPv6 lease lost
Sep 13 02:23:25.244000 audit: BPF prog-id=9 op=UNLOAD
Sep 13 02:23:25.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.141807 systemd[1]: Stopping systemd-resolved.service...
Sep 13 02:23:25.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.157317 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 02:23:25.157590 systemd[1]: Stopped systemd-resolved.service.
Sep 13 02:23:25.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.174740 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 02:23:25.175000 systemd[1]: Stopped systemd-networkd.service.
Sep 13 02:23:25.189091 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 02:23:25.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.189177 systemd[1]: Closed systemd-networkd.socket.
Sep 13 02:23:25.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.208037 systemd[1]: Stopping network-cleanup.service...
Sep 13 02:23:25.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.221615 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 02:23:25.221752 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 13 02:23:25.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.237756 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 02:23:25.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.237883 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 02:23:25.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.253972 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 02:23:25.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.254092 systemd[1]: Stopped systemd-modules-load.service.
Sep 13 02:23:25.269904 systemd[1]: Stopping systemd-udevd.service...
Sep 13 02:23:25.287969 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 02:23:25.289218 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 02:23:25.289535 systemd[1]: Stopped systemd-udevd.service.
Sep 13 02:23:25.302156 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 02:23:25.302271 systemd[1]: Closed systemd-udevd-control.socket.
Sep 13 02:23:25.315731 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 02:23:25.315824 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 13 02:23:25.331657 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 02:23:25.331777 systemd[1]: Stopped dracut-pre-udev.service.
Sep 13 02:23:25.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:25.346747 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 02:23:25.346865 systemd[1]: Stopped dracut-cmdline.service.
Sep 13 02:23:25.362740 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 02:23:25.362858 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 13 02:23:25.379323 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 13 02:23:25.392478 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 02:23:25.392509 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Sep 13 02:23:25.409000 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 02:23:25.666281 iscsid[902]: iscsid shutting down.
Sep 13 02:23:25.409123 systemd[1]: Stopped kmod-static-nodes.service.
Sep 13 02:23:25.424718 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 02:23:25.666451 systemd-journald[269]: Received SIGTERM from PID 1 (n/a).
Sep 13 02:23:25.424836 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 13 02:23:25.443083 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 13 02:23:25.444317 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 02:23:25.444547 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 13 02:23:25.550611 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 02:23:25.550835 systemd[1]: Stopped network-cleanup.service.
Sep 13 02:23:25.559907 systemd[1]: Reached target initrd-switch-root.target.
Sep 13 02:23:25.576121 systemd[1]: Starting initrd-switch-root.service...
Sep 13 02:23:25.611860 systemd[1]: Switching root.
Sep 13 02:23:25.666838 systemd-journald[269]: Journal stopped
Sep 13 02:23:29.553027 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 13 02:23:29.553041 kernel: SELinux: Class anon_inode not defined in policy.
Sep 13 02:23:29.553049 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 13 02:23:29.553055 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 02:23:29.553060 kernel: SELinux: policy capability open_perms=1
Sep 13 02:23:29.553065 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 02:23:29.553072 kernel: SELinux: policy capability always_check_network=0
Sep 13 02:23:29.553078 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 02:23:29.553084 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 02:23:29.553089 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 02:23:29.553095 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 02:23:29.553101 systemd[1]: Successfully loaded SELinux policy in 332.080ms.
Sep 13 02:23:29.553108 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.256ms.
Sep 13 02:23:29.553115 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 02:23:29.553123 systemd[1]: Detected architecture x86-64.
Sep 13 02:23:29.553130 systemd[1]: Detected first boot.
Sep 13 02:23:29.553137 systemd[1]: Hostname set to .
Sep 13 02:23:29.553143 systemd[1]: Initializing machine ID from random generator.
Sep 13 02:23:29.553150 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 13 02:23:29.553156 systemd[1]: Populated /etc with preset unit settings.
Sep 13 02:23:29.553163 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 02:23:29.553170 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 02:23:29.553177 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 02:23:29.553183 kernel: kauditd_printk_skb: 49 callbacks suppressed
Sep 13 02:23:29.553189 kernel: audit: type=1334 audit(1757730208.060:92): prog-id=12 op=LOAD
Sep 13 02:23:29.553196 kernel: audit: type=1334 audit(1757730208.060:93): prog-id=3 op=UNLOAD
Sep 13 02:23:29.553202 kernel: audit: type=1334 audit(1757730208.105:94): prog-id=13 op=LOAD
Sep 13 02:23:29.553208 kernel: audit: type=1334 audit(1757730208.150:95): prog-id=14 op=LOAD
Sep 13 02:23:29.553213 kernel: audit: type=1334 audit(1757730208.150:96): prog-id=4 op=UNLOAD
Sep 13 02:23:29.553219 kernel: audit: type=1334 audit(1757730208.150:97): prog-id=5 op=UNLOAD
Sep 13 02:23:29.553225 kernel: audit: type=1334 audit(1757730208.194:98): prog-id=15 op=LOAD
Sep 13 02:23:29.553231 kernel: audit: type=1334 audit(1757730208.194:99): prog-id=12 op=UNLOAD
Sep 13 02:23:29.553236 kernel: audit: type=1334 audit(1757730208.256:100): prog-id=16 op=LOAD
Sep 13 02:23:29.553242 kernel: audit: type=1334 audit(1757730208.276:101): prog-id=17 op=LOAD
Sep 13 02:23:29.553249 systemd[1]: iscsid.service: Deactivated successfully.
Sep 13 02:23:29.553255 systemd[1]: Stopped iscsid.service.
Sep 13 02:23:29.553261 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 02:23:29.553268 systemd[1]: Stopped initrd-switch-root.service.
Sep 13 02:23:29.553274 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 02:23:29.553281 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 13 02:23:29.553289 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 13 02:23:29.553296 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Sep 13 02:23:29.553303 systemd[1]: Created slice system-getty.slice.
Sep 13 02:23:29.553310 systemd[1]: Created slice system-modprobe.slice.
Sep 13 02:23:29.553317 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 13 02:23:29.553323 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 13 02:23:29.553330 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 13 02:23:29.553337 systemd[1]: Created slice user.slice.
Sep 13 02:23:29.553343 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 02:23:29.553350 systemd[1]: Started systemd-ask-password-wall.path.
Sep 13 02:23:29.553356 systemd[1]: Set up automount boot.automount.
Sep 13 02:23:29.553364 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 13 02:23:29.553370 systemd[1]: Stopped target initrd-switch-root.target.
Sep 13 02:23:29.553377 systemd[1]: Stopped target initrd-fs.target.
Sep 13 02:23:29.553384 systemd[1]: Stopped target initrd-root-fs.target.
Sep 13 02:23:29.553390 systemd[1]: Reached target integritysetup.target.
Sep 13 02:23:29.553397 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 02:23:29.553407 systemd[1]: Reached target remote-fs.target.
Sep 13 02:23:29.553416 systemd[1]: Reached target slices.target.
Sep 13 02:23:29.553423 systemd[1]: Reached target swap.target.
Sep 13 02:23:29.553429 systemd[1]: Reached target torcx.target.
Sep 13 02:23:29.553436 systemd[1]: Reached target veritysetup.target.
Sep 13 02:23:29.553443 systemd[1]: Listening on systemd-coredump.socket.
Sep 13 02:23:29.553450 systemd[1]: Listening on systemd-initctl.socket.
Sep 13 02:23:29.553457 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 02:23:29.553464 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 02:23:29.553471 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 02:23:29.553478 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 02:23:29.553485 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 02:23:29.553492 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 02:23:29.553498 systemd[1]: Mounting media.mount...
Sep 13 02:23:29.553505 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 02:23:29.553513 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 02:23:29.553519 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 02:23:29.553526 systemd[1]: Mounting tmp.mount...
Sep 13 02:23:29.553533 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 02:23:29.553540 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 02:23:29.553546 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 02:23:29.553553 systemd[1]: Starting modprobe@configfs.service...
Sep 13 02:23:29.553560 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 02:23:29.553567 systemd[1]: Starting modprobe@drm.service...
Sep 13 02:23:29.553574 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 02:23:29.553581 systemd[1]: Starting modprobe@fuse.service...
Sep 13 02:23:29.553588 kernel: fuse: init (API version 7.34)
Sep 13 02:23:29.553594 systemd[1]: Starting modprobe@loop.service...
Sep 13 02:23:29.553601 kernel: loop: module loaded
Sep 13 02:23:29.553607 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 02:23:29.553614 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 02:23:29.553621 systemd[1]: Stopped systemd-fsck-root.service.
Sep 13 02:23:29.553628 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 02:23:29.553636 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 02:23:29.553642 systemd[1]: Stopped systemd-journald.service.
Sep 13 02:23:29.553649 systemd[1]: Starting systemd-journald.service...
Sep 13 02:23:29.553656 systemd[1]: Starting systemd-modules-load.service...
Sep 13 02:23:29.553664 systemd-journald[1238]: Journal started
Sep 13 02:23:29.553691 systemd-journald[1238]: Runtime Journal (/run/log/journal/61dd7b27e04a4142b73e5e46ee6d9dba) is 8.0M, max 639.3M, 631.3M free.
Sep 13 02:23:26.099000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 02:23:26.413000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 02:23:26.416000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 02:23:26.416000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 02:23:26.416000 audit: BPF prog-id=10 op=LOAD
Sep 13 02:23:26.416000 audit: BPF prog-id=10 op=UNLOAD
Sep 13 02:23:26.416000 audit: BPF prog-id=11 op=LOAD
Sep 13 02:23:26.416000 audit: BPF prog-id=11 op=UNLOAD
Sep 13 02:23:26.481000 audit[1129]: AVC avc: denied { associate } for pid=1129 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 13 02:23:26.481000 audit[1129]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a58e4 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1112 pid=1129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 02:23:26.481000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 02:23:26.508000 audit[1129]: AVC avc: denied { associate } for pid=1129 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 13 02:23:26.508000 audit[1129]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a59c9 a2=1ed a3=0 items=2 ppid=1112 pid=1129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 02:23:26.508000 audit: CWD cwd="/"
Sep 13 02:23:26.508000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:26.508000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:26.508000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 02:23:28.060000 audit: BPF prog-id=12 op=LOAD
Sep 13 02:23:28.060000 audit: BPF prog-id=3 op=UNLOAD
Sep 13 02:23:28.105000 audit: BPF prog-id=13 op=LOAD
Sep 13 02:23:28.150000 audit: BPF prog-id=14 op=LOAD
Sep 13 02:23:28.150000 audit: BPF prog-id=4 op=UNLOAD
Sep 13 02:23:28.150000 audit: BPF prog-id=5 op=UNLOAD
Sep 13 02:23:28.194000 audit: BPF prog-id=15 op=LOAD
Sep 13 02:23:28.194000 audit: BPF prog-id=12 op=UNLOAD
Sep 13 02:23:28.256000 audit: BPF prog-id=16 op=LOAD
Sep 13 02:23:28.276000 audit: BPF prog-id=17 op=LOAD
Sep 13 02:23:28.276000 audit: BPF prog-id=13 op=UNLOAD
Sep 13 02:23:28.276000 audit: BPF prog-id=14 op=UNLOAD
Sep 13 02:23:28.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:28.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:28.339000 audit: BPF prog-id=15 op=UNLOAD
Sep 13 02:23:28.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:28.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.524000 audit: BPF prog-id=18 op=LOAD
Sep 13 02:23:29.524000 audit: BPF prog-id=19 op=LOAD
Sep 13 02:23:29.525000 audit: BPF prog-id=20 op=LOAD
Sep 13 02:23:29.525000 audit: BPF prog-id=16 op=UNLOAD
Sep 13 02:23:29.525000 audit: BPF prog-id=17 op=UNLOAD
Sep 13 02:23:29.549000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 02:23:29.549000 audit[1238]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd2612a620 a2=4000 a3=7ffd2612a6bc items=0 ppid=1 pid=1238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 02:23:29.549000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 02:23:26.480324 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 02:23:28.060248 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 02:23:26.480744 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 02:23:28.060255 systemd[1]: Unnecessary job was removed for dev-sdb6.device.
Sep 13 02:23:26.480758 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 02:23:28.278571 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 02:23:26.480779 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:26Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Sep 13 02:23:26.480786 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:26Z" level=debug msg="skipped missing lower profile" missing profile=oem
Sep 13 02:23:26.480805 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:26Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Sep 13 02:23:26.480813 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:26Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Sep 13 02:23:26.480938 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:26Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Sep 13 02:23:26.480964 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 02:23:26.480973 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 02:23:26.481878 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Sep 13 02:23:26.481904 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Sep 13 02:23:26.481918 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Sep 13 02:23:26.481928 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Sep 13 02:23:26.481940 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Sep 13 02:23:26.481949 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Sep 13 02:23:27.694622 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:27Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 02:23:27.694766 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:27Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 02:23:27.694824 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:27Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 02:23:27.694919 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:27Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 02:23:27.694950 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:27Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Sep 13 02:23:27.694986 /usr/lib/systemd/system-generators/torcx-generator[1129]: time="2025-09-13T02:23:27Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Sep 13 02:23:29.583563 systemd[1]: Starting systemd-network-generator.service...
Sep 13 02:23:29.605463 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 02:23:29.627448 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 02:23:29.659908 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 02:23:29.659930 systemd[1]: Stopped verity-setup.service.
Sep 13 02:23:29.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.694448 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 02:23:29.708492 systemd[1]: Started systemd-journald.service.
Sep 13 02:23:29.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.716929 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 02:23:29.723663 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 02:23:29.730648 systemd[1]: Mounted media.mount.
Sep 13 02:23:29.737646 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 02:23:29.745637 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 02:23:29.753618 systemd[1]: Mounted tmp.mount.
Sep 13 02:23:29.760676 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 02:23:29.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.768700 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 02:23:29.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.776708 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 02:23:29.776832 systemd[1]: Finished modprobe@configfs.service.
Sep 13 02:23:29.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.785774 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 02:23:29.785892 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 02:23:29.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.794942 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 02:23:29.795103 systemd[1]: Finished modprobe@drm.service.
Sep 13 02:23:29.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.804081 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 02:23:29.804335 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 02:23:29.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.813249 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 02:23:29.813596 systemd[1]: Finished modprobe@fuse.service.
Sep 13 02:23:29.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.822223 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 02:23:29.822557 systemd[1]: Finished modprobe@loop.service.
Sep 13 02:23:29.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.831237 systemd[1]: Finished systemd-modules-load.service.
Sep 13 02:23:29.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.840263 systemd[1]: Finished systemd-network-generator.service.
Sep 13 02:23:29.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.849275 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 02:23:29.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.858198 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 02:23:29.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.867748 systemd[1]: Reached target network-pre.target.
Sep 13 02:23:29.878191 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 02:23:29.887103 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 02:23:29.893579 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 02:23:29.894595 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 02:23:29.902061 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 02:23:29.905947 systemd-journald[1238]: Time spent on flushing to /var/log/journal/61dd7b27e04a4142b73e5e46ee6d9dba is 17.214ms for 1613 entries.
Sep 13 02:23:29.905947 systemd-journald[1238]: System Journal (/var/log/journal/61dd7b27e04a4142b73e5e46ee6d9dba) is 8.0M, max 195.6M, 187.6M free.
Sep 13 02:23:29.949114 systemd-journald[1238]: Received client request to flush runtime journal.
Sep 13 02:23:29.918508 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 02:23:29.919096 systemd[1]: Starting systemd-random-seed.service...
Sep 13 02:23:29.929518 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 02:23:29.930143 systemd[1]: Starting systemd-sysctl.service...
Sep 13 02:23:29.937094 systemd[1]: Starting systemd-sysusers.service...
Sep 13 02:23:29.944016 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 02:23:29.951624 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 02:23:29.959581 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 02:23:29.967622 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 02:23:29.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.975707 systemd[1]: Finished systemd-random-seed.service.
Sep 13 02:23:29.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.983705 systemd[1]: Finished systemd-sysctl.service.
Sep 13 02:23:29.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:29.991644 systemd[1]: Finished systemd-sysusers.service.
Sep 13 02:23:29.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:30.000666 systemd[1]: Reached target first-boot-complete.target.
Sep 13 02:23:30.009159 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 02:23:30.018627 udevadm[1254]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 13 02:23:30.027369 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 02:23:30.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:30.212434 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 02:23:30.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:30.219000 audit: BPF prog-id=21 op=LOAD
Sep 13 02:23:30.219000 audit: BPF prog-id=22 op=LOAD
Sep 13 02:23:30.219000 audit: BPF prog-id=7 op=UNLOAD
Sep 13 02:23:30.219000 audit: BPF prog-id=8 op=UNLOAD
Sep 13 02:23:30.221687 systemd[1]: Starting systemd-udevd.service...
Sep 13 02:23:30.233249 systemd-udevd[1257]: Using default interface naming scheme 'v252'.
Sep 13 02:23:30.251001 systemd[1]: Started systemd-udevd.service.
Sep 13 02:23:30.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:30.261966 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped.
Sep 13 02:23:30.261000 audit: BPF prog-id=23 op=LOAD
Sep 13 02:23:30.263222 systemd[1]: Starting systemd-networkd.service...
Sep 13 02:23:30.281000 audit: BPF prog-id=24 op=LOAD
Sep 13 02:23:30.296046 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Sep 13 02:23:30.296135 kernel: ACPI: button: Sleep Button [SLPB]
Sep 13 02:23:30.296165 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 13 02:23:30.294000 audit: BPF prog-id=25 op=LOAD
Sep 13 02:23:30.311000 audit: BPF prog-id=26 op=LOAD
Sep 13 02:23:30.313393 systemd[1]: Starting systemd-userdbd.service...
Sep 13 02:23:30.313463 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 02:23:30.328405 kernel: ACPI: button: Power Button [PWRF]
Sep 13 02:23:30.356411 kernel: IPMI message handler: version 39.2
Sep 13 02:23:30.299000 audit[1265]: AVC avc: denied { confidentiality } for pid=1265 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 02:23:30.371414 kernel: ipmi device interface
Sep 13 02:23:30.373595 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 02:23:30.299000 audit[1265]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55fb056b5210 a1=4d9cc a2=7f5dbd7f2bc5 a3=5 items=42 ppid=1257 pid=1265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 02:23:30.299000 audit: CWD cwd="/"
Sep 13 02:23:30.299000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=1 name=(null) inode=22689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=2 name=(null) inode=22689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=3 name=(null) inode=22690 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=4 name=(null) inode=22689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=5 name=(null) inode=22691 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=6 name=(null) inode=22689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=7 name=(null) inode=22692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=8 name=(null) inode=22692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=9 name=(null) inode=22693 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=10 name=(null) inode=22692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=11 name=(null) inode=22694 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=12 name=(null) inode=22692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.395424 kernel: ipmi_si: IPMI System Interface driver
Sep 13 02:23:30.299000 audit: PATH item=13 name=(null) inode=22695 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=14 name=(null) inode=22692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=15 name=(null) inode=22696 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=16 name=(null) inode=22692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=17 name=(null) inode=22697 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=18 name=(null) inode=22689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=19 name=(null) inode=22698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=20 name=(null) inode=22698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=21 name=(null) inode=22699 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=22 name=(null) inode=22698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=23 name=(null) inode=22700 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=24 name=(null) inode=22698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=25 name=(null) inode=22701 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=26 name=(null) inode=22698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=27 name=(null) inode=22702 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=28 name=(null) inode=22698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=29 name=(null) inode=22703 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=30 name=(null) inode=22689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=31 name=(null) inode=22704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=32 name=(null) inode=22704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=33 name=(null) inode=22705 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=34 name=(null) inode=22704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=35 name=(null) inode=22706 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=36 name=(null) inode=22704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=37 name=(null) inode=22707 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=38 name=(null) inode=22704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=39 name=(null) inode=22708 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=40 name=(null) inode=22704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PATH item=41 name=(null) inode=22709 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 02:23:30.299000 audit: PROCTITLE proctitle="(udev-worker)"
Sep 13 02:23:30.428440 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Sep 13 02:23:30.477697 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Sep 13 02:23:30.495648 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Sep 13 02:23:30.495663 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Sep 13 02:23:30.495742 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Sep 13 02:23:30.495754 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Sep 13 02:23:30.495841 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Sep 13 02:23:30.593014 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Sep 13 02:23:30.593120 kernel: iTCO_vendor_support: vendor-support=0
Sep 13 02:23:30.593135 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Sep 13 02:23:30.593200 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Sep 13 02:23:30.593211 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Sep 13 02:23:30.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:30.536315 systemd[1]: Started systemd-userdbd.service.
Sep 13 02:23:30.645415 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Sep 13 02:23:30.645622 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Sep 13 02:23:30.661912 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Sep 13 02:23:30.715419 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS
Sep 13 02:23:30.738320 systemd-networkd[1301]: bond0: netdev ready
Sep 13 02:23:30.740986 systemd-networkd[1301]: lo: Link UP
Sep 13 02:23:30.740989 systemd-networkd[1301]: lo: Gained carrier
Sep 13 02:23:30.741525 systemd-networkd[1301]: Enumeration completed
Sep 13 02:23:30.741655 systemd[1]: Started systemd-networkd.service.
Sep 13 02:23:30.741848 systemd-networkd[1301]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Sep 13 02:23:30.742496 systemd-networkd[1301]: enp2s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:8f:96:a7.network.
Sep 13 02:23:30.746406 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20)
Sep 13 02:23:30.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:30.788846 kernel: intel_rapl_common: Found RAPL domain package
Sep 13 02:23:30.788894 kernel: intel_rapl_common: Found RAPL domain core
Sep 13 02:23:30.805239 kernel: intel_rapl_common: Found RAPL domain uncore
Sep 13 02:23:30.805265 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Sep 13 02:23:30.805374 kernel: intel_rapl_common: Found RAPL domain dram
Sep 13 02:23:30.855406 kernel: ipmi_ssif: IPMI SSIF Interface driver
Sep 13 02:23:30.909435 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Sep 13 02:23:30.931452 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link
Sep 13 02:23:30.931494 systemd-networkd[1301]: enp2s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:8f:96:a6.network.
Sep 13 02:23:30.973434 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Sep 13 02:23:31.103504 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Sep 13 02:23:31.138444 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up
Sep 13 02:23:31.160416 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link
Sep 13 02:23:31.179450 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
Sep 13 02:23:31.190099 systemd-networkd[1301]: bond0: Link UP
Sep 13 02:23:31.190479 systemd-networkd[1301]: enp2s0f1np1: Link UP
Sep 13 02:23:31.190731 systemd-networkd[1301]: enp2s0f1np1: Gained carrier
Sep 13 02:23:31.192472 systemd-networkd[1301]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:8f:96:a6.network.
Sep 13 02:23:31.228394 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex
Sep 13 02:23:31.228429 kernel: bond0: active interface up!
Sep 13 02:23:31.253407 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex
Sep 13 02:23:31.267673 systemd[1]: Finished systemd-udev-settle.service.
Sep 13 02:23:31.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:31.277168 systemd[1]: Starting lvm2-activation-early.service...
Sep 13 02:23:31.293029 lvm[1361]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 02:23:31.322018 systemd[1]: Finished lvm2-activation-early.service.
Sep 13 02:23:31.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:31.331570 systemd[1]: Reached target cryptsetup.target.
Sep 13 02:23:31.342060 systemd[1]: Starting lvm2-activation.service...
Sep 13 02:23:31.348363 lvm[1362]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 02:23:31.380429 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.408419 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.435420 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.457407 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.459095 systemd[1]: Finished lvm2-activation.service.
Sep 13 02:23:31.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:31.477512 systemd[1]: Reached target local-fs-pre.target.
Sep 13 02:23:31.480407 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.496489 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 02:23:31.496506 systemd[1]: Reached target local-fs.target.
Sep 13 02:23:31.501406 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.518477 systemd[1]: Reached target machines.target.
Sep 13 02:23:31.523404 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.540134 systemd[1]: Starting ldconfig.service...
Sep 13 02:23:31.544424 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.566151 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 02:23:31.566173 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 02:23:31.566406 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.566737 systemd[1]: Starting systemd-boot-update.service...
Sep 13 02:23:31.582922 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 13 02:23:31.588406 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.606010 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 13 02:23:31.610178 systemd[1]: Starting systemd-sysext.service...
Sep 13 02:23:31.610377 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1364 (bootctl)
Sep 13 02:23:31.610449 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.611031 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 13 02:23:31.626443 systemd[1]: Unmounting usr-share-oem.mount...
Sep 13 02:23:31.631413 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.651415 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.651678 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 13 02:23:31.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:31.651858 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 13 02:23:31.651941 systemd[1]: Unmounted usr-share-oem.mount.
Sep 13 02:23:31.671492 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.672935 kernel: loop0: detected capacity change from 0 to 224512
Sep 13 02:23:31.672952 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.723466 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.742494 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.742669 systemd-networkd[1301]: enp2s0f0np0: Link UP
Sep 13 02:23:31.742868 systemd-networkd[1301]: bond0: Gained carrier
Sep 13 02:23:31.742963 systemd-networkd[1301]: enp2s0f0np0: Gained carrier
Sep 13 02:23:31.774424 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 13 02:23:31.774452 kernel: bond0: (slave enp2s0f1np1): invalid new link 1 on slave
Sep 13 02:23:31.776766 systemd-networkd[1301]: enp2s0f1np1: Link DOWN
Sep 13 02:23:31.776769 systemd-networkd[1301]: enp2s0f1np1: Lost carrier
Sep 13 02:23:31.786470 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 02:23:31.786796 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 13 02:23:31.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:31.815407 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 02:23:31.829812 systemd-fsck[1374]: fsck.fat 4.2 (2021-01-31)
Sep 13 02:23:31.829812 systemd-fsck[1374]: /dev/sdb1: 790 files, 120761/258078 clusters
Sep 13 02:23:31.830588 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 13 02:23:31.847406 kernel: loop1: detected capacity change from 0 to 224512
Sep 13 02:23:31.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:31.848170 systemd[1]: Mounting boot.mount...
Sep 13 02:23:31.861758 systemd[1]: Mounted boot.mount.
Sep 13 02:23:31.863142 (sd-sysext)[1377]: Using extensions 'kubernetes'.
Sep 13 02:23:31.863330 (sd-sysext)[1377]: Merged extensions into '/usr'.
Sep 13 02:23:31.877509 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 02:23:31.878311 systemd[1]: Mounting usr-share-oem.mount...
Sep 13 02:23:31.885638 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 02:23:31.886463 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 02:23:31.895276 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 02:23:31.904470 systemd[1]: Starting modprobe@loop.service...
Sep 13 02:23:31.912528 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 02:23:31.912654 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 02:23:31.912788 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 02:23:31.916056 systemd[1]: Finished systemd-boot-update.service.
Sep 13 02:23:31.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:31.928724 systemd[1]: Mounted usr-share-oem.mount.
Sep 13 02:23:31.938452 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Sep 13 02:23:31.953077 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 02:23:31.953335 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 02:23:31.959420 kernel: bond0: (slave enp2s0f1np1): speed changed to 0 on port 1
Sep 13 02:23:31.960496 systemd-networkd[1301]: enp2s0f1np1: Link UP
Sep 13 02:23:31.960999 systemd-networkd[1301]: enp2s0f1np1: Gained carrier
Sep 13 02:23:31.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:31.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:31.967359 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 02:23:31.967628 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 02:23:31.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:31.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:31.977126 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 02:23:31.977309 systemd[1]: Finished modprobe@loop.service.
Sep 13 02:23:31.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:31.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:31.986223 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 02:23:31.986395 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 02:23:31.987687 systemd[1]: Finished systemd-sysext.service.
Sep 13 02:23:32.002448 kernel: bond0: (slave enp2s0f1np1): link status up again after 200 ms
Sep 13 02:23:32.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 02:23:32.019422 systemd[1]: Starting ensure-sysext.service...
Sep 13 02:23:32.020433 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex
Sep 13 02:23:32.027100 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 13 02:23:32.034310 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 13 02:23:32.036106 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 02:23:32.036956 systemd[1]: Reloading.
Sep 13 02:23:32.037359 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 02:23:32.067899 /usr/lib/systemd/system-generators/torcx-generator[1404]: time="2025-09-13T02:23:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 02:23:32.067918 /usr/lib/systemd/system-generators/torcx-generator[1404]: time="2025-09-13T02:23:32Z" level=info msg="torcx already run" Sep 13 02:23:32.090143 ldconfig[1363]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 02:23:32.130490 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 02:23:32.130498 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 02:23:32.141743 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 13 02:23:32.182000 audit: BPF prog-id=27 op=LOAD Sep 13 02:23:32.182000 audit: BPF prog-id=18 op=UNLOAD Sep 13 02:23:32.183000 audit: BPF prog-id=28 op=LOAD Sep 13 02:23:32.183000 audit: BPF prog-id=29 op=LOAD Sep 13 02:23:32.183000 audit: BPF prog-id=19 op=UNLOAD Sep 13 02:23:32.183000 audit: BPF prog-id=20 op=UNLOAD Sep 13 02:23:32.183000 audit: BPF prog-id=30 op=LOAD Sep 13 02:23:32.183000 audit: BPF prog-id=23 op=UNLOAD Sep 13 02:23:32.184000 audit: BPF prog-id=31 op=LOAD Sep 13 02:23:32.184000 audit: BPF prog-id=24 op=UNLOAD Sep 13 02:23:32.184000 audit: BPF prog-id=32 op=LOAD Sep 13 02:23:32.184000 audit: BPF prog-id=33 op=LOAD Sep 13 02:23:32.184000 audit: BPF prog-id=25 op=UNLOAD Sep 13 02:23:32.184000 audit: BPF prog-id=26 op=UNLOAD Sep 13 02:23:32.184000 audit: BPF prog-id=34 op=LOAD Sep 13 02:23:32.184000 audit: BPF prog-id=35 op=LOAD Sep 13 02:23:32.184000 audit: BPF prog-id=21 op=UNLOAD Sep 13 02:23:32.184000 audit: BPF prog-id=22 op=UNLOAD Sep 13 02:23:32.187587 systemd[1]: Finished ldconfig.service. Sep 13 02:23:32.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:32.194999 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 02:23:32.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:23:32.205890 systemd[1]: Starting audit-rules.service... Sep 13 02:23:32.213073 systemd[1]: Starting clean-ca-certificates.service... Sep 13 02:23:32.222162 systemd[1]: Starting systemd-journal-catalog-update.service... 
Sep 13 02:23:32.222000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 02:23:32.222000 audit[1483]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff05b8a130 a2=420 a3=0 items=0 ppid=1466 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 02:23:32.222000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 02:23:32.224341 augenrules[1483]: No rules Sep 13 02:23:32.231549 systemd[1]: Starting systemd-resolved.service... Sep 13 02:23:32.239497 systemd[1]: Starting systemd-timesyncd.service... Sep 13 02:23:32.247077 systemd[1]: Starting systemd-update-utmp.service... Sep 13 02:23:32.254055 systemd[1]: Finished audit-rules.service. Sep 13 02:23:32.260759 systemd[1]: Finished clean-ca-certificates.service. Sep 13 02:23:32.268774 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 02:23:32.282955 systemd[1]: Finished systemd-update-utmp.service. Sep 13 02:23:32.292323 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 02:23:32.292954 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 02:23:32.300016 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 02:23:32.307989 systemd[1]: Starting modprobe@loop.service... Sep 13 02:23:32.314527 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 02:23:32.314603 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 02:23:32.314621 systemd-resolved[1488]: Positive Trust Anchors: Sep 13 02:23:32.314627 systemd-resolved[1488]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 02:23:32.314645 systemd-resolved[1488]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 02:23:32.315315 systemd[1]: Starting systemd-update-done.service... Sep 13 02:23:32.318573 systemd-resolved[1488]: Using system hostname 'ci-3510.3.8-n-78f707d8f3'. Sep 13 02:23:32.322527 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 02:23:32.323020 systemd[1]: Started systemd-timesyncd.service. Sep 13 02:23:32.331832 systemd[1]: Started systemd-resolved.service. Sep 13 02:23:32.339642 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 02:23:32.339713 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 02:23:32.347742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 02:23:32.347815 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 02:23:32.356753 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 02:23:32.356835 systemd[1]: Finished modprobe@loop.service. Sep 13 02:23:32.365831 systemd[1]: Finished systemd-update-done.service. Sep 13 02:23:32.373911 systemd[1]: Reached target network.target. Sep 13 02:23:32.382686 systemd[1]: Reached target nss-lookup.target. Sep 13 02:23:32.390749 systemd[1]: Reached target time-set.target. Sep 13 02:23:32.398829 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 13 02:23:32.399156 systemd[1]: Reached target sysinit.target. Sep 13 02:23:32.408069 systemd[1]: Started motdgen.path. Sep 13 02:23:32.414993 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 02:23:32.425265 systemd[1]: Started logrotate.timer. Sep 13 02:23:32.433131 systemd[1]: Started mdadm.timer. Sep 13 02:23:32.439935 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 02:23:32.448801 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 02:23:32.449109 systemd[1]: Reached target paths.target. Sep 13 02:23:32.455882 systemd[1]: Reached target timers.target. Sep 13 02:23:32.463470 systemd[1]: Listening on dbus.socket. Sep 13 02:23:32.473363 systemd[1]: Starting docker.socket... Sep 13 02:23:32.486504 systemd[1]: Listening on sshd.socket. Sep 13 02:23:32.494093 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 02:23:32.494445 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 02:23:32.498520 systemd[1]: Listening on docker.socket. Sep 13 02:23:32.510122 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 02:23:32.510457 systemd[1]: Reached target sockets.target. Sep 13 02:23:32.518876 systemd[1]: Reached target basic.target. Sep 13 02:23:32.525850 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 02:23:32.526133 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 02:23:32.529006 systemd[1]: Starting containerd.service... Sep 13 02:23:32.538189 systemd[1]: Starting coreos-metadata-sshkeys@core.service... 
Sep 13 02:23:32.547017 systemd[1]: Starting coreos-metadata.service... Sep 13 02:23:32.554055 systemd[1]: Starting dbus.service... Sep 13 02:23:32.560173 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 02:23:32.564921 jq[1507]: false Sep 13 02:23:32.567075 systemd[1]: Starting extend-filesystems.service... Sep 13 02:23:32.571387 coreos-metadata[1500]: Sep 13 02:23:32.571 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 13 02:23:32.572131 dbus-daemon[1506]: [system] SELinux support is enabled Sep 13 02:23:32.573487 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 02:23:32.574243 systemd[1]: Starting modprobe@drm.service... Sep 13 02:23:32.574654 extend-filesystems[1508]: Found loop1 Sep 13 02:23:32.594489 extend-filesystems[1508]: Found sda Sep 13 02:23:32.594489 extend-filesystems[1508]: Found sdb Sep 13 02:23:32.594489 extend-filesystems[1508]: Found sdb1 Sep 13 02:23:32.594489 extend-filesystems[1508]: Found sdb2 Sep 13 02:23:32.594489 extend-filesystems[1508]: Found sdb3 Sep 13 02:23:32.594489 extend-filesystems[1508]: Found usr Sep 13 02:23:32.594489 extend-filesystems[1508]: Found sdb4 Sep 13 02:23:32.594489 extend-filesystems[1508]: Found sdb6 Sep 13 02:23:32.594489 extend-filesystems[1508]: Found sdb7 Sep 13 02:23:32.594489 extend-filesystems[1508]: Found sdb9 Sep 13 02:23:32.594489 extend-filesystems[1508]: Checking size of /dev/sdb9 Sep 13 02:23:32.594489 extend-filesystems[1508]: Resized partition /dev/sdb9 Sep 13 02:23:32.732454 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Sep 13 02:23:32.732493 coreos-metadata[1503]: Sep 13 02:23:32.577 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 13 02:23:32.581186 systemd[1]: Starting motdgen.service... 
Sep 13 02:23:32.732743 extend-filesystems[1518]: resize2fs 1.46.5 (30-Dec-2021) Sep 13 02:23:32.610306 systemd[1]: Starting prepare-helm.service... Sep 13 02:23:32.643291 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 02:23:32.651397 systemd[1]: Starting sshd-keygen.service... Sep 13 02:23:32.671125 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 02:23:32.676653 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 02:23:32.747879 update_engine[1538]: I0913 02:23:32.745313 1538 main.cc:92] Flatcar Update Engine starting Sep 13 02:23:32.678780 systemd[1]: Starting tcsd.service... Sep 13 02:23:32.748055 jq[1539]: true Sep 13 02:23:32.689238 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 02:23:32.690438 systemd[1]: Starting update-engine.service... Sep 13 02:23:32.707463 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 02:23:32.725971 systemd[1]: Started dbus.service. Sep 13 02:23:32.741321 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 02:23:32.741448 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 02:23:32.741689 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 02:23:32.741756 systemd[1]: Finished modprobe@drm.service. Sep 13 02:23:32.748814 update_engine[1538]: I0913 02:23:32.748804 1538 update_check_scheduler.cc:74] Next update check in 9m3s Sep 13 02:23:32.755825 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 02:23:32.755908 systemd[1]: Finished motdgen.service. Sep 13 02:23:32.763071 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 02:23:32.763159 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Sep 13 02:23:32.769491 systemd-networkd[1301]: bond0: Gained IPv6LL Sep 13 02:23:32.769706 systemd-timesyncd[1489]: Network configuration changed, trying to establish connection. Sep 13 02:23:32.774206 jq[1543]: true Sep 13 02:23:32.774954 systemd[1]: Finished ensure-sysext.service. Sep 13 02:23:32.783126 env[1544]: time="2025-09-13T02:23:32.783078148Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 02:23:32.783651 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Sep 13 02:23:32.783763 systemd[1]: Condition check resulted in tcsd.service being skipped. Sep 13 02:23:32.784613 tar[1541]: linux-amd64/LICENSE Sep 13 02:23:32.784838 tar[1541]: linux-amd64/helm Sep 13 02:23:32.791367 env[1544]: time="2025-09-13T02:23:32.791345021Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 02:23:32.791440 env[1544]: time="2025-09-13T02:23:32.791429913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 02:23:32.792014 env[1544]: time="2025-09-13T02:23:32.791998240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 02:23:32.792014 env[1544]: time="2025-09-13T02:23:32.792012964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 02:23:32.792130 env[1544]: time="2025-09-13T02:23:32.792118905Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 02:23:32.792130 env[1544]: time="2025-09-13T02:23:32.792129146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 02:23:32.792189 env[1544]: time="2025-09-13T02:23:32.792136680Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 02:23:32.792189 env[1544]: time="2025-09-13T02:23:32.792142000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 02:23:32.792243 env[1544]: time="2025-09-13T02:23:32.792187216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 02:23:32.792322 env[1544]: time="2025-09-13T02:23:32.792312772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 02:23:32.792403 env[1544]: time="2025-09-13T02:23:32.792388722Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 02:23:32.792436 env[1544]: time="2025-09-13T02:23:32.792399366Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 02:23:32.792469 env[1544]: time="2025-09-13T02:23:32.792432987Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 02:23:32.792469 env[1544]: time="2025-09-13T02:23:32.792441039Z" level=info msg="metadata content store policy set" policy=shared Sep 13 02:23:32.793353 systemd[1]: Started update-engine.service. 
Sep 13 02:23:32.801742 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 02:23:32.802646 systemd[1]: Started locksmithd.service. Sep 13 02:23:32.807657 env[1544]: time="2025-09-13T02:23:32.807640942Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 02:23:32.807694 env[1544]: time="2025-09-13T02:23:32.807667730Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 02:23:32.807694 env[1544]: time="2025-09-13T02:23:32.807681682Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 02:23:32.810265 env[1544]: time="2025-09-13T02:23:32.807708258Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 02:23:32.810265 env[1544]: time="2025-09-13T02:23:32.807722049Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 02:23:32.810265 env[1544]: time="2025-09-13T02:23:32.807737074Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 02:23:32.810265 env[1544]: time="2025-09-13T02:23:32.807745934Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 02:23:32.810265 env[1544]: time="2025-09-13T02:23:32.807754899Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 02:23:32.810265 env[1544]: time="2025-09-13T02:23:32.807763089Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 02:23:32.810265 env[1544]: time="2025-09-13T02:23:32.807771667Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Sep 13 02:23:32.810265 env[1544]: time="2025-09-13T02:23:32.807778746Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 02:23:32.810265 env[1544]: time="2025-09-13T02:23:32.807785579Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 02:23:32.809473 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 02:23:32.810459 env[1544]: time="2025-09-13T02:23:32.810307404Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 02:23:32.810459 env[1544]: time="2025-09-13T02:23:32.810359570Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 02:23:32.809489 systemd[1]: Reached target system-config.target. Sep 13 02:23:32.810521 env[1544]: time="2025-09-13T02:23:32.810502560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 02:23:32.810538 env[1544]: time="2025-09-13T02:23:32.810520610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 02:23:32.810538 env[1544]: time="2025-09-13T02:23:32.810528132Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 02:23:32.810567 env[1544]: time="2025-09-13T02:23:32.810552487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 02:23:32.810567 env[1544]: time="2025-09-13T02:23:32.810560083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Sep 13 02:23:32.810601 env[1544]: time="2025-09-13T02:23:32.810566973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 02:23:32.810601 env[1544]: time="2025-09-13T02:23:32.810572915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 02:23:32.810601 env[1544]: time="2025-09-13T02:23:32.810579741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 02:23:32.810601 env[1544]: time="2025-09-13T02:23:32.810586186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 02:23:32.810601 env[1544]: time="2025-09-13T02:23:32.810592254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 02:23:32.810601 env[1544]: time="2025-09-13T02:23:32.810598511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 02:23:32.810698 env[1544]: time="2025-09-13T02:23:32.810606992Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 02:23:32.810725 env[1544]: time="2025-09-13T02:23:32.810704295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 02:23:32.810725 env[1544]: time="2025-09-13T02:23:32.810717003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 02:23:32.810781 env[1544]: time="2025-09-13T02:23:32.810726290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 02:23:32.810781 env[1544]: time="2025-09-13T02:23:32.810732706Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Sep 13 02:23:32.810781 env[1544]: time="2025-09-13T02:23:32.810741200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 02:23:32.810781 env[1544]: time="2025-09-13T02:23:32.810747394Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 02:23:32.810781 env[1544]: time="2025-09-13T02:23:32.810757971Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 02:23:32.810781 env[1544]: time="2025-09-13T02:23:32.810783673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 02:23:32.810953 env[1544]: time="2025-09-13T02:23:32.810894941Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} 
ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 02:23:32.810953 env[1544]: time="2025-09-13T02:23:32.810931306Z" level=info msg="Connect containerd service" Sep 13 02:23:32.810953 env[1544]: time="2025-09-13T02:23:32.810949186Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 02:23:32.818604 env[1544]: time="2025-09-13T02:23:32.811217873Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 02:23:32.818604 env[1544]: time="2025-09-13T02:23:32.811304449Z" level=info msg="Start subscribing containerd event" Sep 13 02:23:32.818604 env[1544]: time="2025-09-13T02:23:32.811474753Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 13 02:23:32.818604 env[1544]: time="2025-09-13T02:23:32.811483504Z" level=info msg="Start recovering state" Sep 13 02:23:32.818604 env[1544]: time="2025-09-13T02:23:32.811502674Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 02:23:32.818604 env[1544]: time="2025-09-13T02:23:32.811534803Z" level=info msg="containerd successfully booted in 0.028809s" Sep 13 02:23:32.818604 env[1544]: time="2025-09-13T02:23:32.811540629Z" level=info msg="Start event monitor" Sep 13 02:23:32.818604 env[1544]: time="2025-09-13T02:23:32.811589518Z" level=info msg="Start snapshots syncer" Sep 13 02:23:32.818604 env[1544]: time="2025-09-13T02:23:32.811648416Z" level=info msg="Start cni network conf syncer for default" Sep 13 02:23:32.818604 env[1544]: time="2025-09-13T02:23:32.811672727Z" level=info msg="Start streaming server" Sep 13 02:23:32.819003 systemd[1]: Starting systemd-logind.service... Sep 13 02:23:32.823396 bash[1576]: Updated "/home/core/.ssh/authorized_keys" Sep 13 02:23:32.825485 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 02:23:32.825512 systemd[1]: Reached target user-config.target. Sep 13 02:23:32.833448 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 02:23:32.833623 systemd[1]: Started containerd.service. Sep 13 02:23:32.840627 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 02:23:32.844622 systemd-logind[1581]: Watching system buttons on /dev/input/event3 (Power Button) Sep 13 02:23:32.844633 systemd-logind[1581]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 13 02:23:32.844644 systemd-logind[1581]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Sep 13 02:23:32.844744 systemd-logind[1581]: New seat seat0. 
Sep 13 02:23:32.850684 systemd[1]: Started systemd-logind.service. Sep 13 02:23:32.864698 locksmithd[1578]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 02:23:33.075187 tar[1541]: linux-amd64/README.md Sep 13 02:23:33.077809 systemd[1]: Finished prepare-helm.service. Sep 13 02:23:33.089633 systemd-timesyncd[1489]: Network configuration changed, trying to establish connection. Sep 13 02:23:33.089722 systemd-timesyncd[1489]: Network configuration changed, trying to establish connection. Sep 13 02:23:33.090372 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 02:23:33.105634 systemd[1]: Reached target network-online.target. Sep 13 02:23:33.109444 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Sep 13 02:23:33.121615 systemd[1]: Starting kubelet.service... Sep 13 02:23:33.140327 extend-filesystems[1518]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Sep 13 02:23:33.140327 extend-filesystems[1518]: old_desc_blocks = 1, new_desc_blocks = 56 Sep 13 02:23:33.140327 extend-filesystems[1518]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Sep 13 02:23:33.180502 extend-filesystems[1508]: Resized filesystem in /dev/sdb9 Sep 13 02:23:33.140893 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 02:23:33.140977 systemd[1]: Finished extend-filesystems.service. Sep 13 02:23:33.384466 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Sep 13 02:23:33.847311 systemd[1]: Started kubelet.service. Sep 13 02:23:33.927069 sshd_keygen[1535]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 02:23:33.938910 systemd[1]: Finished sshd-keygen.service. Sep 13 02:23:33.947373 systemd[1]: Starting issuegen.service... Sep 13 02:23:33.955812 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 02:23:33.955889 systemd[1]: Finished issuegen.service. Sep 13 02:23:33.964196 systemd[1]: Starting systemd-user-sessions.service... 
Sep 13 02:23:33.972741 systemd[1]: Finished systemd-user-sessions.service. Sep 13 02:23:33.981203 systemd[1]: Started getty@tty1.service. Sep 13 02:23:33.989104 systemd[1]: Started serial-getty@ttyS1.service. Sep 13 02:23:33.997624 systemd[1]: Reached target getty.target. Sep 13 02:23:34.326246 kubelet[1597]: E0913 02:23:34.326158 1597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 02:23:34.327289 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 02:23:34.327372 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 02:23:38.722361 coreos-metadata[1500]: Sep 13 02:23:38.722 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Sep 13 02:23:38.723054 coreos-metadata[1503]: Sep 13 02:23:38.722 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Sep 13 02:23:39.011172 login[1618]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 02:23:39.017981 login[1617]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 02:23:39.019791 systemd-logind[1581]: New session 1 of user core. Sep 13 02:23:39.020516 systemd[1]: Created slice user-500.slice. Sep 13 02:23:39.021130 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 02:23:39.022427 systemd-logind[1581]: New session 2 of user core. Sep 13 02:23:39.026745 systemd[1]: Finished user-runtime-dir@500.service. 
Sep 13 02:23:39.027446 systemd[1]: Starting user@500.service... Sep 13 02:23:39.029445 (systemd)[1630]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:23:39.116882 systemd[1630]: Queued start job for default target default.target. Sep 13 02:23:39.117121 systemd[1630]: Reached target paths.target. Sep 13 02:23:39.117133 systemd[1630]: Reached target sockets.target. Sep 13 02:23:39.117141 systemd[1630]: Reached target timers.target. Sep 13 02:23:39.117148 systemd[1630]: Reached target basic.target. Sep 13 02:23:39.117168 systemd[1630]: Reached target default.target. Sep 13 02:23:39.117183 systemd[1630]: Startup finished in 84ms. Sep 13 02:23:39.117229 systemd[1]: Started user@500.service. Sep 13 02:23:39.117821 systemd[1]: Started session-1.scope. Sep 13 02:23:39.118221 systemd[1]: Started session-2.scope. Sep 13 02:23:39.304455 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2 Sep 13 02:23:39.311436 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2 Sep 13 02:23:39.548876 systemd[1]: Created slice system-sshd.slice. Sep 13 02:23:39.549567 systemd[1]: Started sshd@0-147.75.203.133:22-139.178.89.65:47802.service. Sep 13 02:23:39.591177 sshd[1651]: Accepted publickey for core from 139.178.89.65 port 47802 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:23:39.594076 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:23:39.603742 systemd-logind[1581]: New session 3 of user core. Sep 13 02:23:39.606221 systemd[1]: Started session-3.scope. Sep 13 02:23:39.672141 systemd[1]: Started sshd@1-147.75.203.133:22-139.178.89.65:52778.service. 
Sep 13 02:23:39.700375 sshd[1656]: Accepted publickey for core from 139.178.89.65 port 52778 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:23:39.701092 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:23:39.703428 systemd-logind[1581]: New session 4 of user core. Sep 13 02:23:39.703915 systemd[1]: Started session-4.scope. Sep 13 02:23:39.722330 coreos-metadata[1503]: Sep 13 02:23:39.722 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Sep 13 02:23:39.722380 coreos-metadata[1500]: Sep 13 02:23:39.722 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Sep 13 02:23:39.756354 sshd[1656]: pam_unix(sshd:session): session closed for user core Sep 13 02:23:39.760367 systemd[1]: sshd@1-147.75.203.133:22-139.178.89.65:52778.service: Deactivated successfully. Sep 13 02:23:39.761574 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 02:23:39.762716 systemd-logind[1581]: Session 4 logged out. Waiting for processes to exit. Sep 13 02:23:39.764621 systemd[1]: Started sshd@2-147.75.203.133:22-139.178.89.65:52780.service. Sep 13 02:23:39.766508 systemd-logind[1581]: Removed session 4. Sep 13 02:23:39.797196 sshd[1662]: Accepted publickey for core from 139.178.89.65 port 52780 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:23:39.798099 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:23:39.801098 systemd-logind[1581]: New session 5 of user core. Sep 13 02:23:39.801805 systemd[1]: Started session-5.scope. Sep 13 02:23:39.865350 sshd[1662]: pam_unix(sshd:session): session closed for user core Sep 13 02:23:39.870688 systemd[1]: sshd@2-147.75.203.133:22-139.178.89.65:52780.service: Deactivated successfully. Sep 13 02:23:39.872205 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 02:23:39.873640 systemd-logind[1581]: Session 5 logged out. Waiting for processes to exit. 
Sep 13 02:23:39.875658 systemd-logind[1581]: Removed session 5. Sep 13 02:23:40.845497 coreos-metadata[1500]: Sep 13 02:23:40.845 INFO Fetch successful Sep 13 02:23:40.851099 coreos-metadata[1503]: Sep 13 02:23:40.851 INFO Fetch successful Sep 13 02:23:40.927687 systemd[1]: Finished coreos-metadata.service. Sep 13 02:23:40.928457 systemd[1]: Started packet-phone-home.service. Sep 13 02:23:40.931909 unknown[1500]: wrote ssh authorized keys file for user: core Sep 13 02:23:40.937130 curl[1670]: % Total % Received % Xferd Average Speed Time Time Time Current Sep 13 02:23:40.937284 curl[1670]: Dload Upload Total Spent Left Speed Sep 13 02:23:40.945597 update-ssh-keys[1671]: Updated "/home/core/.ssh/authorized_keys" Sep 13 02:23:40.945796 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Sep 13 02:23:40.945958 systemd[1]: Reached target multi-user.target. Sep 13 02:23:40.946643 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 02:23:40.950740 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 02:23:40.950811 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 02:23:40.950948 systemd[1]: Startup finished in 2.041s (kernel) + 25.908s (initrd) + 15.204s (userspace) = 43.154s. Sep 13 02:23:41.360698 curl[1670]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Sep 13 02:23:41.362722 systemd[1]: packet-phone-home.service: Deactivated successfully. Sep 13 02:23:44.390944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 02:23:44.391421 systemd[1]: Stopped kubelet.service. Sep 13 02:23:44.394461 systemd[1]: Starting kubelet.service... Sep 13 02:23:44.644793 systemd[1]: Started kubelet.service. 
Sep 13 02:23:44.725491 kubelet[1682]: E0913 02:23:44.725384 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 02:23:44.729431 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 02:23:44.729586 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 02:23:49.876630 systemd[1]: Started sshd@3-147.75.203.133:22-139.178.89.65:48136.service. Sep 13 02:23:49.911795 sshd[1701]: Accepted publickey for core from 139.178.89.65 port 48136 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:23:49.912503 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:23:49.914872 systemd-logind[1581]: New session 6 of user core. Sep 13 02:23:49.915338 systemd[1]: Started session-6.scope. Sep 13 02:23:49.966109 sshd[1701]: pam_unix(sshd:session): session closed for user core Sep 13 02:23:49.967649 systemd[1]: sshd@3-147.75.203.133:22-139.178.89.65:48136.service: Deactivated successfully. Sep 13 02:23:49.967935 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 02:23:49.968213 systemd-logind[1581]: Session 6 logged out. Waiting for processes to exit. Sep 13 02:23:49.968798 systemd[1]: Started sshd@4-147.75.203.133:22-139.178.89.65:48138.service. Sep 13 02:23:49.969249 systemd-logind[1581]: Removed session 6. Sep 13 02:23:50.009871 sshd[1707]: Accepted publickey for core from 139.178.89.65 port 48138 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:23:50.010947 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:23:50.014338 systemd-logind[1581]: New session 7 of user core. Sep 13 02:23:50.015088 systemd[1]: Started session-7.scope. 
Sep 13 02:23:50.067719 sshd[1707]: pam_unix(sshd:session): session closed for user core Sep 13 02:23:50.069196 systemd[1]: sshd@4-147.75.203.133:22-139.178.89.65:48138.service: Deactivated successfully. Sep 13 02:23:50.069490 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 02:23:50.069797 systemd-logind[1581]: Session 7 logged out. Waiting for processes to exit. Sep 13 02:23:50.070349 systemd[1]: Started sshd@5-147.75.203.133:22-139.178.89.65:48140.service. Sep 13 02:23:50.070824 systemd-logind[1581]: Removed session 7. Sep 13 02:23:50.148904 sshd[1713]: Accepted publickey for core from 139.178.89.65 port 48140 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:23:50.151093 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:23:50.158135 systemd-logind[1581]: New session 8 of user core. Sep 13 02:23:50.159656 systemd[1]: Started session-8.scope. Sep 13 02:23:50.235184 sshd[1713]: pam_unix(sshd:session): session closed for user core Sep 13 02:23:50.241961 systemd[1]: sshd@5-147.75.203.133:22-139.178.89.65:48140.service: Deactivated successfully. Sep 13 02:23:50.242546 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 02:23:50.242918 systemd-logind[1581]: Session 8 logged out. Waiting for processes to exit. Sep 13 02:23:50.243373 systemd[1]: Started sshd@6-147.75.203.133:22-139.178.89.65:48148.service. Sep 13 02:23:50.243857 systemd-logind[1581]: Removed session 8. Sep 13 02:23:50.284404 sshd[1720]: Accepted publickey for core from 139.178.89.65 port 48148 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:23:50.285220 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:23:50.287878 systemd-logind[1581]: New session 9 of user core. Sep 13 02:23:50.288434 systemd[1]: Started session-9.scope. 
Sep 13 02:23:50.364937 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 02:23:50.365578 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 02:23:50.416015 systemd[1]: Starting docker.service... Sep 13 02:23:50.443895 env[1737]: time="2025-09-13T02:23:50.443826241Z" level=info msg="Starting up" Sep 13 02:23:50.444605 env[1737]: time="2025-09-13T02:23:50.444560837Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 02:23:50.444605 env[1737]: time="2025-09-13T02:23:50.444574170Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 02:23:50.444605 env[1737]: time="2025-09-13T02:23:50.444589114Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 02:23:50.444605 env[1737]: time="2025-09-13T02:23:50.444597061Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 02:23:50.445647 env[1737]: time="2025-09-13T02:23:50.445608521Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 02:23:50.445647 env[1737]: time="2025-09-13T02:23:50.445620530Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 02:23:50.445647 env[1737]: time="2025-09-13T02:23:50.445631419Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 02:23:50.445647 env[1737]: time="2025-09-13T02:23:50.445638870Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 02:23:50.459639 env[1737]: time="2025-09-13T02:23:50.459594600Z" level=info msg="Loading containers: start." 
Sep 13 02:23:50.712468 kernel: Initializing XFRM netlink socket Sep 13 02:23:50.777287 env[1737]: time="2025-09-13T02:23:50.777242046Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 13 02:23:50.778011 systemd-timesyncd[1489]: Network configuration changed, trying to establish connection. Sep 13 02:23:50.833149 systemd-networkd[1301]: docker0: Link UP Sep 13 02:23:50.849921 env[1737]: time="2025-09-13T02:23:50.849862520Z" level=info msg="Loading containers: done." Sep 13 02:23:50.862831 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck795268458-merged.mount: Deactivated successfully. Sep 13 02:23:50.879801 env[1737]: time="2025-09-13T02:23:50.879688198Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 02:23:50.880106 env[1737]: time="2025-09-13T02:23:50.880073718Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 13 02:23:50.880325 env[1737]: time="2025-09-13T02:23:50.880283995Z" level=info msg="Daemon has completed initialization" Sep 13 02:23:50.905056 systemd[1]: Started docker.service. Sep 13 02:23:50.919352 env[1737]: time="2025-09-13T02:23:50.919253770Z" level=info msg="API listen on /run/docker.sock" Sep 13 02:23:51.761810 systemd-resolved[1488]: Clock change detected. Flushing caches. Sep 13 02:23:51.761974 systemd-timesyncd[1489]: Contacted time server [2604:4300:a:299::164]:123 (2.flatcar.pool.ntp.org). Sep 13 02:23:51.762096 systemd-timesyncd[1489]: Initial clock synchronization to Sat 2025-09-13 02:23:51.761667 UTC. 
Sep 13 02:23:52.959598 env[1544]: time="2025-09-13T02:23:52.959461186Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 13 02:23:53.737818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597894240.mount: Deactivated successfully. Sep 13 02:23:54.971746 env[1544]: time="2025-09-13T02:23:54.971689315Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:23:54.972335 env[1544]: time="2025-09-13T02:23:54.972294026Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:23:54.973527 env[1544]: time="2025-09-13T02:23:54.973478456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:23:54.974758 env[1544]: time="2025-09-13T02:23:54.974711085Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:23:54.976135 env[1544]: time="2025-09-13T02:23:54.976106253Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 13 02:23:54.976779 env[1544]: time="2025-09-13T02:23:54.976747628Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 13 02:23:55.482847 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 02:23:55.483428 systemd[1]: Stopped kubelet.service. Sep 13 02:23:55.486489 systemd[1]: Starting kubelet.service... 
Sep 13 02:23:55.682953 systemd[1]: Started kubelet.service. Sep 13 02:23:55.725355 kubelet[1891]: E0913 02:23:55.725306 1891 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 02:23:55.726343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 02:23:55.726414 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 02:23:56.531670 env[1544]: time="2025-09-13T02:23:56.531606953Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:23:56.532347 env[1544]: time="2025-09-13T02:23:56.532295732Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:23:56.533421 env[1544]: time="2025-09-13T02:23:56.533380281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:23:56.534568 env[1544]: time="2025-09-13T02:23:56.534527057Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:23:56.535051 env[1544]: time="2025-09-13T02:23:56.535008300Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 13 
02:23:56.535420 env[1544]: time="2025-09-13T02:23:56.535385797Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 13 02:23:57.823850 env[1544]: time="2025-09-13T02:23:57.823792265Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:23:57.825358 env[1544]: time="2025-09-13T02:23:57.825322924Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:23:57.827953 env[1544]: time="2025-09-13T02:23:57.827884925Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:23:57.830146 env[1544]: time="2025-09-13T02:23:57.830076415Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:23:57.831251 env[1544]: time="2025-09-13T02:23:57.831180102Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 13 02:23:57.831669 env[1544]: time="2025-09-13T02:23:57.831607568Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 13 02:23:58.774565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount256296347.mount: Deactivated successfully. 
Sep 13 02:23:59.192384 env[1544]: time="2025-09-13T02:23:59.192305274Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:23:59.192918 env[1544]: time="2025-09-13T02:23:59.192905248Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:23:59.193469 env[1544]: time="2025-09-13T02:23:59.193455399Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:23:59.194416 env[1544]: time="2025-09-13T02:23:59.194403128Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:23:59.194565 env[1544]: time="2025-09-13T02:23:59.194547355Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 13 02:23:59.194990 env[1544]: time="2025-09-13T02:23:59.194949391Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 02:23:59.793259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3725820743.mount: Deactivated successfully. 
Sep 13 02:24:00.538472 env[1544]: time="2025-09-13T02:24:00.538394279Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:00.539129 env[1544]: time="2025-09-13T02:24:00.539081263Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:00.540521 env[1544]: time="2025-09-13T02:24:00.540477846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:00.541411 env[1544]: time="2025-09-13T02:24:00.541352463Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:00.541918 env[1544]: time="2025-09-13T02:24:00.541875189Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 02:24:00.542307 env[1544]: time="2025-09-13T02:24:00.542244988Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 02:24:01.121661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount203856835.mount: Deactivated successfully. 
Sep 13 02:24:01.122940 env[1544]: time="2025-09-13T02:24:01.122924115Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:01.123589 env[1544]: time="2025-09-13T02:24:01.123574851Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:01.124410 env[1544]: time="2025-09-13T02:24:01.124376773Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:01.125087 env[1544]: time="2025-09-13T02:24:01.125074105Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:01.125427 env[1544]: time="2025-09-13T02:24:01.125412111Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 02:24:01.125773 env[1544]: time="2025-09-13T02:24:01.125744347Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 13 02:24:01.793543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3757358593.mount: Deactivated successfully. 
Sep 13 02:24:03.443705 env[1544]: time="2025-09-13T02:24:03.443650918Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:03.444342 env[1544]: time="2025-09-13T02:24:03.444295322Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:03.446431 env[1544]: time="2025-09-13T02:24:03.446394321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:03.447741 env[1544]: time="2025-09-13T02:24:03.447678323Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:03.448313 env[1544]: time="2025-09-13T02:24:03.448262578Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 13 02:24:05.434215 systemd[1]: Stopped kubelet.service. Sep 13 02:24:05.435461 systemd[1]: Starting kubelet.service... Sep 13 02:24:05.450654 systemd[1]: Reloading. 
Sep 13 02:24:05.485279 /usr/lib/systemd/system-generators/torcx-generator[1981]: time="2025-09-13T02:24:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 02:24:05.485294 /usr/lib/systemd/system-generators/torcx-generator[1981]: time="2025-09-13T02:24:05Z" level=info msg="torcx already run" Sep 13 02:24:05.542902 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 02:24:05.542912 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 02:24:05.556585 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 02:24:05.624486 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 02:24:05.624527 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 02:24:05.624632 systemd[1]: Stopped kubelet.service. Sep 13 02:24:05.625466 systemd[1]: Starting kubelet.service... Sep 13 02:24:05.862203 systemd[1]: Started kubelet.service. Sep 13 02:24:05.898184 kubelet[2046]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 02:24:05.898184 kubelet[2046]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 13 02:24:05.898184 kubelet[2046]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 02:24:05.898416 kubelet[2046]: I0913 02:24:05.898189 2046 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 02:24:06.202345 kubelet[2046]: I0913 02:24:06.202259 2046 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 02:24:06.202345 kubelet[2046]: I0913 02:24:06.202273 2046 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 02:24:06.202425 kubelet[2046]: I0913 02:24:06.202420 2046 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 02:24:06.232113 kubelet[2046]: E0913 02:24:06.232100 2046 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.75.203.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.203.133:6443: connect: connection refused" logger="UnhandledError" Sep 13 02:24:06.234885 kubelet[2046]: I0913 02:24:06.234874 2046 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 02:24:06.240980 kubelet[2046]: E0913 02:24:06.240963 2046 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 02:24:06.240980 kubelet[2046]: I0913 02:24:06.240981 2046 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 13 02:24:06.263524 kubelet[2046]: I0913 02:24:06.263482 2046 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 02:24:06.265150 kubelet[2046]: I0913 02:24:06.265093 2046 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 02:24:06.265312 kubelet[2046]: I0913 02:24:06.265121 2046 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-78f707d8f3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":
null,"CgroupVersion":2}
Sep 13 02:24:06.265312 kubelet[2046]: I0913 02:24:06.265293 2046 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 02:24:06.265312 kubelet[2046]: I0913 02:24:06.265304 2046 container_manager_linux.go:304] "Creating device plugin manager"
Sep 13 02:24:06.265503 kubelet[2046]: I0913 02:24:06.265408 2046 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 02:24:06.269732 kubelet[2046]: I0913 02:24:06.269690 2046 kubelet.go:446] "Attempting to sync node with API server"
Sep 13 02:24:06.269732 kubelet[2046]: I0913 02:24:06.269718 2046 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 02:24:06.269732 kubelet[2046]: I0913 02:24:06.269736 2046 kubelet.go:352] "Adding apiserver pod source"
Sep 13 02:24:06.269912 kubelet[2046]: I0913 02:24:06.269746 2046 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 02:24:06.291241 kubelet[2046]: W0913 02:24:06.291115 2046 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.203.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.203.133:6443: connect: connection refused
Sep 13 02:24:06.291241 kubelet[2046]: W0913 02:24:06.291120 2046 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.203.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-78f707d8f3&limit=500&resourceVersion=0": dial tcp 147.75.203.133:6443: connect: connection refused
Sep 13 02:24:06.291241 kubelet[2046]: E0913 02:24:06.291230 2046 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.203.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.203.133:6443: connect: connection refused" logger="UnhandledError"
Sep 13 02:24:06.291542 kubelet[2046]: E0913 02:24:06.291239 2046 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.203.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-78f707d8f3&limit=500&resourceVersion=0\": dial tcp 147.75.203.133:6443: connect: connection refused" logger="UnhandledError"
Sep 13 02:24:06.294494 kubelet[2046]: I0913 02:24:06.294431 2046 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 02:24:06.295448 kubelet[2046]: I0913 02:24:06.295376 2046 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 02:24:06.304231 kubelet[2046]: W0913 02:24:06.304114 2046 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 02:24:06.316863 kubelet[2046]: I0913 02:24:06.316812 2046 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 02:24:06.317045 kubelet[2046]: I0913 02:24:06.316907 2046 server.go:1287] "Started kubelet"
Sep 13 02:24:06.317284 kubelet[2046]: I0913 02:24:06.317202 2046 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 02:24:06.320565 kubelet[2046]: I0913 02:24:06.320510 2046 server.go:479] "Adding debug handlers to kubelet server"
Sep 13 02:24:06.329982 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Sep 13 02:24:06.330180 kubelet[2046]: I0913 02:24:06.330119 2046 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 02:24:06.330387 kubelet[2046]: I0913 02:24:06.330183 2046 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 02:24:06.330387 kubelet[2046]: I0913 02:24:06.330359 2046 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 02:24:06.330729 kubelet[2046]: I0913 02:24:06.330476 2046 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 02:24:06.330729 kubelet[2046]: I0913 02:24:06.330563 2046 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 02:24:06.330729 kubelet[2046]: E0913 02:24:06.330660 2046 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-78f707d8f3\" not found"
Sep 13 02:24:06.331387 kubelet[2046]: E0913 02:24:06.331213 2046 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-78f707d8f3?timeout=10s\": dial tcp 147.75.203.133:6443: connect: connection refused" interval="200ms"
Sep 13 02:24:06.331578 kubelet[2046]: W0913 02:24:06.331366 2046 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.203.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.203.133:6443: connect: connection refused
Sep 13 02:24:06.331578 kubelet[2046]: E0913 02:24:06.331507 2046 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.203.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.203.133:6443: connect: connection refused" logger="UnhandledError"
Sep 13 02:24:06.331871 kubelet[2046]: I0913 02:24:06.331576 2046 factory.go:221] Registration of the systemd container factory successfully
Sep 13 02:24:06.331871 kubelet[2046]: I0913 02:24:06.331790 2046 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 02:24:06.334037 kubelet[2046]: I0913 02:24:06.333993 2046 factory.go:221] Registration of the containerd container factory successfully
Sep 13 02:24:06.350502 kubelet[2046]: I0913 02:24:06.349654 2046 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 02:24:06.351494 kubelet[2046]: E0913 02:24:06.351419 2046 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 02:24:06.351958 kubelet[2046]: I0913 02:24:06.351453 2046 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 02:24:06.360581 kubelet[2046]: E0913 02:24:06.357069 2046 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.203.133:6443/api/v1/namespaces/default/events\": dial tcp 147.75.203.133:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-78f707d8f3.1864b6589ead1118 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-78f707d8f3,UID:ci-3510.3.8-n-78f707d8f3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-78f707d8f3,},FirstTimestamp:2025-09-13 02:24:06.316855576 +0000 UTC m=+0.449892890,LastTimestamp:2025-09-13 02:24:06.316855576 +0000 UTC m=+0.449892890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-78f707d8f3,}"
Sep 13 02:24:06.378491 kubelet[2046]: I0913 02:24:06.378398 2046 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 02:24:06.380253 kubelet[2046]: I0913 02:24:06.380213 2046 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 02:24:06.380253 kubelet[2046]: I0913 02:24:06.380255 2046 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 13 02:24:06.380454 kubelet[2046]: I0913 02:24:06.380284 2046 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 02:24:06.380454 kubelet[2046]: I0913 02:24:06.380298 2046 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 13 02:24:06.380454 kubelet[2046]: E0913 02:24:06.380413 2046 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 02:24:06.380877 kubelet[2046]: W0913 02:24:06.380830 2046 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.203.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.203.133:6443: connect: connection refused
Sep 13 02:24:06.381016 kubelet[2046]: E0913 02:24:06.380896 2046 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.203.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.203.133:6443: connect: connection refused" logger="UnhandledError"
Sep 13 02:24:06.384460 kubelet[2046]: I0913 02:24:06.384389 2046 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 02:24:06.384460 kubelet[2046]: I0913 02:24:06.384418 2046 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 02:24:06.384460 kubelet[2046]: I0913 02:24:06.384447 2046 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 02:24:06.397899 kubelet[2046]: I0913 02:24:06.397834 2046 policy_none.go:49] "None policy: Start"
Sep 13 02:24:06.397899 kubelet[2046]: I0913 02:24:06.397869 2046 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 02:24:06.397899 kubelet[2046]: I0913 02:24:06.397891 2046 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 02:24:06.404420 systemd[1]: Created slice kubepods.slice.
Sep 13 02:24:06.411968 systemd[1]: Created slice kubepods-burstable.slice.
Sep 13 02:24:06.417748 systemd[1]: Created slice kubepods-besteffort.slice.
Sep 13 02:24:06.431265 kubelet[2046]: E0913 02:24:06.431200 2046 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-78f707d8f3\" not found"
Sep 13 02:24:06.431411 kubelet[2046]: I0913 02:24:06.431366 2046 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 02:24:06.431704 kubelet[2046]: I0913 02:24:06.431642 2046 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 02:24:06.431704 kubelet[2046]: I0913 02:24:06.431674 2046 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 02:24:06.431937 kubelet[2046]: I0913 02:24:06.431911 2046 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 02:24:06.433005 kubelet[2046]: E0913 02:24:06.432968 2046 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 02:24:06.433160 kubelet[2046]: E0913 02:24:06.433046 2046 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-78f707d8f3\" not found"
Sep 13 02:24:06.500541 systemd[1]: Created slice kubepods-burstable-pod5af573f9019db832c8a09f46f06b636b.slice.
Sep 13 02:24:06.519048 kubelet[2046]: E0913 02:24:06.518970 2046 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-78f707d8f3\" not found" node="ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:06.525231 systemd[1]: Created slice kubepods-burstable-pod617cfb16944f05cdeca57ae67c8bb421.slice.
Sep 13 02:24:06.531437 kubelet[2046]: I0913 02:24:06.531357 2046 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5af573f9019db832c8a09f46f06b636b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-78f707d8f3\" (UID: \"5af573f9019db832c8a09f46f06b636b\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:06.532212 kubelet[2046]: E0913 02:24:06.532074 2046 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-78f707d8f3?timeout=10s\": dial tcp 147.75.203.133:6443: connect: connection refused" interval="400ms"
Sep 13 02:24:06.535079 kubelet[2046]: I0913 02:24:06.535029 2046 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:06.535810 kubelet[2046]: E0913 02:24:06.535699 2046 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.75.203.133:6443/api/v1/nodes\": dial tcp 147.75.203.133:6443: connect: connection refused" node="ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:06.543010 kubelet[2046]: E0913 02:24:06.542935 2046 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-78f707d8f3\" not found" node="ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:06.549467 systemd[1]: Created slice kubepods-burstable-pod3f9c2b3e869807c18dd7e80fb25a2793.slice.
Sep 13 02:24:06.553373 kubelet[2046]: E0913 02:24:06.553297 2046 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-78f707d8f3\" not found" node="ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:06.631642 kubelet[2046]: I0913 02:24:06.631549 2046 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f9c2b3e869807c18dd7e80fb25a2793-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-78f707d8f3\" (UID: \"3f9c2b3e869807c18dd7e80fb25a2793\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:06.631846 kubelet[2046]: I0913 02:24:06.631652 2046 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f9c2b3e869807c18dd7e80fb25a2793-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-78f707d8f3\" (UID: \"3f9c2b3e869807c18dd7e80fb25a2793\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:06.631846 kubelet[2046]: I0913 02:24:06.631713 2046 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/617cfb16944f05cdeca57ae67c8bb421-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-78f707d8f3\" (UID: \"617cfb16944f05cdeca57ae67c8bb421\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:06.631846 kubelet[2046]: I0913 02:24:06.631758 2046 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f9c2b3e869807c18dd7e80fb25a2793-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-78f707d8f3\" (UID: \"3f9c2b3e869807c18dd7e80fb25a2793\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:06.632118 kubelet[2046]: I0913 02:24:06.631897 2046 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/617cfb16944f05cdeca57ae67c8bb421-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-78f707d8f3\" (UID: \"617cfb16944f05cdeca57ae67c8bb421\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:06.632118 kubelet[2046]: I0913 02:24:06.632001 2046 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f9c2b3e869807c18dd7e80fb25a2793-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-78f707d8f3\" (UID: \"3f9c2b3e869807c18dd7e80fb25a2793\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:06.632118 kubelet[2046]: I0913 02:24:06.632069 2046 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f9c2b3e869807c18dd7e80fb25a2793-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-78f707d8f3\" (UID: \"3f9c2b3e869807c18dd7e80fb25a2793\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:06.632442 kubelet[2046]: I0913 02:24:06.632226 2046 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/617cfb16944f05cdeca57ae67c8bb421-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-78f707d8f3\" (UID: \"617cfb16944f05cdeca57ae67c8bb421\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:06.739911 kubelet[2046]: I0913 02:24:06.739829 2046 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:06.740443 kubelet[2046]: E0913 02:24:06.740392 2046 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.75.203.133:6443/api/v1/nodes\": dial tcp 147.75.203.133:6443: connect: connection refused" node="ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:06.821610 env[1544]: time="2025-09-13T02:24:06.821399684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-78f707d8f3,Uid:5af573f9019db832c8a09f46f06b636b,Namespace:kube-system,Attempt:0,}"
Sep 13 02:24:06.844666 env[1544]: time="2025-09-13T02:24:06.844549576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-78f707d8f3,Uid:617cfb16944f05cdeca57ae67c8bb421,Namespace:kube-system,Attempt:0,}"
Sep 13 02:24:06.854720 env[1544]: time="2025-09-13T02:24:06.854638379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-78f707d8f3,Uid:3f9c2b3e869807c18dd7e80fb25a2793,Namespace:kube-system,Attempt:0,}"
Sep 13 02:24:06.932964 kubelet[2046]: E0913 02:24:06.932895 2046 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-78f707d8f3?timeout=10s\": dial tcp 147.75.203.133:6443: connect: connection refused" interval="800ms"
Sep 13 02:24:07.144528 kubelet[2046]: I0913 02:24:07.144363 2046 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:07.145186 kubelet[2046]: E0913 02:24:07.145028 2046 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.75.203.133:6443/api/v1/nodes\": dial tcp 147.75.203.133:6443: connect: connection refused" node="ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:07.263454 kubelet[2046]: W0913 02:24:07.263294 2046 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.203.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.203.133:6443: connect: connection refused
Sep 13 02:24:07.263664 kubelet[2046]: E0913 02:24:07.263442 2046 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.203.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.203.133:6443: connect: connection refused" logger="UnhandledError"
Sep 13 02:24:07.358430 kubelet[2046]: W0913 02:24:07.358335 2046 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.203.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.203.133:6443: connect: connection refused
Sep 13 02:24:07.358430 kubelet[2046]: E0913 02:24:07.358414 2046 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.203.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.203.133:6443: connect: connection refused" logger="UnhandledError"
Sep 13 02:24:07.428618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4040144343.mount: Deactivated successfully.
Sep 13 02:24:07.429791 env[1544]: time="2025-09-13T02:24:07.429745355Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 02:24:07.430645 env[1544]: time="2025-09-13T02:24:07.430588170Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 02:24:07.431451 env[1544]: time="2025-09-13T02:24:07.431402670Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 02:24:07.432081 env[1544]: time="2025-09-13T02:24:07.432045812Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 02:24:07.432555 env[1544]: time="2025-09-13T02:24:07.432515649Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 02:24:07.433761 env[1544]: time="2025-09-13T02:24:07.433725983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 02:24:07.435051 env[1544]: time="2025-09-13T02:24:07.435014779Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 02:24:07.436195 env[1544]: time="2025-09-13T02:24:07.436122350Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 02:24:07.437088 env[1544]: time="2025-09-13T02:24:07.437052825Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 02:24:07.438021 env[1544]: time="2025-09-13T02:24:07.438008346Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 02:24:07.438502 env[1544]: time="2025-09-13T02:24:07.438474697Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 02:24:07.438889 env[1544]: time="2025-09-13T02:24:07.438876590Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 02:24:07.443208 env[1544]: time="2025-09-13T02:24:07.443178617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 02:24:07.443208 env[1544]: time="2025-09-13T02:24:07.443199730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 02:24:07.443316 env[1544]: time="2025-09-13T02:24:07.443209170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 02:24:07.443316 env[1544]: time="2025-09-13T02:24:07.443284481Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d691e60ebefc36795ebddcea3e0c28a15040664da8720a1c09c8c07ae916b806 pid=2098 runtime=io.containerd.runc.v2
Sep 13 02:24:07.444008 env[1544]: time="2025-09-13T02:24:07.443983887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 02:24:07.444008 env[1544]: time="2025-09-13T02:24:07.444001564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 02:24:07.444067 env[1544]: time="2025-09-13T02:24:07.444008758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 02:24:07.444161 env[1544]: time="2025-09-13T02:24:07.444133026Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0dd95aff2bab01f2dfc11f1fe2e30d33e4e989741355bf232ddc098b46f1ac23 pid=2109 runtime=io.containerd.runc.v2
Sep 13 02:24:07.446038 env[1544]: time="2025-09-13T02:24:07.445991905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 02:24:07.446038 env[1544]: time="2025-09-13T02:24:07.446017364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 02:24:07.446038 env[1544]: time="2025-09-13T02:24:07.446024644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 02:24:07.446175 env[1544]: time="2025-09-13T02:24:07.446093587Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d34397a75989bc6639c04b27589960b332e435c620c701e32ea64166da992e2 pid=2134 runtime=io.containerd.runc.v2
Sep 13 02:24:07.449650 systemd[1]: Started cri-containerd-d691e60ebefc36795ebddcea3e0c28a15040664da8720a1c09c8c07ae916b806.scope.
Sep 13 02:24:07.451096 systemd[1]: Started cri-containerd-0dd95aff2bab01f2dfc11f1fe2e30d33e4e989741355bf232ddc098b46f1ac23.scope.
Sep 13 02:24:07.452541 systemd[1]: Started cri-containerd-8d34397a75989bc6639c04b27589960b332e435c620c701e32ea64166da992e2.scope.
Sep 13 02:24:07.471932 env[1544]: time="2025-09-13T02:24:07.471897733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-78f707d8f3,Uid:5af573f9019db832c8a09f46f06b636b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d691e60ebefc36795ebddcea3e0c28a15040664da8720a1c09c8c07ae916b806\""
Sep 13 02:24:07.472450 env[1544]: time="2025-09-13T02:24:07.472430171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-78f707d8f3,Uid:617cfb16944f05cdeca57ae67c8bb421,Namespace:kube-system,Attempt:0,} returns sandbox id \"0dd95aff2bab01f2dfc11f1fe2e30d33e4e989741355bf232ddc098b46f1ac23\""
Sep 13 02:24:07.474592 env[1544]: time="2025-09-13T02:24:07.474569354Z" level=info msg="CreateContainer within sandbox \"d691e60ebefc36795ebddcea3e0c28a15040664da8720a1c09c8c07ae916b806\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 02:24:07.475183 env[1544]: time="2025-09-13T02:24:07.475163677Z" level=info msg="CreateContainer within sandbox \"0dd95aff2bab01f2dfc11f1fe2e30d33e4e989741355bf232ddc098b46f1ac23\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 02:24:07.475286 env[1544]: time="2025-09-13T02:24:07.475271934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-78f707d8f3,Uid:3f9c2b3e869807c18dd7e80fb25a2793,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d34397a75989bc6639c04b27589960b332e435c620c701e32ea64166da992e2\""
Sep 13 02:24:07.476147 env[1544]: time="2025-09-13T02:24:07.476130562Z" level=info msg="CreateContainer within sandbox \"8d34397a75989bc6639c04b27589960b332e435c620c701e32ea64166da992e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 02:24:07.480320 env[1544]: time="2025-09-13T02:24:07.480305540Z" level=info msg="CreateContainer within sandbox \"d691e60ebefc36795ebddcea3e0c28a15040664da8720a1c09c8c07ae916b806\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8a8c749f19e3ba66c2b8fe6dc3951eaed8828cfb2865af527213eabfadda1a64\""
Sep 13 02:24:07.480540 env[1544]: time="2025-09-13T02:24:07.480527474Z" level=info msg="StartContainer for \"8a8c749f19e3ba66c2b8fe6dc3951eaed8828cfb2865af527213eabfadda1a64\""
Sep 13 02:24:07.482254 env[1544]: time="2025-09-13T02:24:07.482236136Z" level=info msg="CreateContainer within sandbox \"0dd95aff2bab01f2dfc11f1fe2e30d33e4e989741355bf232ddc098b46f1ac23\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"97a936960a7c9082001dd680193cf6a953d7d01ac24a208c648d2845cbbfcff5\""
Sep 13 02:24:07.482446 env[1544]: time="2025-09-13T02:24:07.482435074Z" level=info msg="StartContainer for \"97a936960a7c9082001dd680193cf6a953d7d01ac24a208c648d2845cbbfcff5\""
Sep 13 02:24:07.483092 env[1544]: time="2025-09-13T02:24:07.483048390Z" level=info msg="CreateContainer within sandbox \"8d34397a75989bc6639c04b27589960b332e435c620c701e32ea64166da992e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"13fdf7ab1b0c84577a478c9fa7c477fceecaf3a450e4b364547ea5f0687e56ab\""
Sep 13 02:24:07.483206 env[1544]: time="2025-09-13T02:24:07.483193533Z" level=info msg="StartContainer for \"13fdf7ab1b0c84577a478c9fa7c477fceecaf3a450e4b364547ea5f0687e56ab\""
Sep 13 02:24:07.489462 systemd[1]: Started cri-containerd-8a8c749f19e3ba66c2b8fe6dc3951eaed8828cfb2865af527213eabfadda1a64.scope.
Sep 13 02:24:07.491431 systemd[1]: Started cri-containerd-13fdf7ab1b0c84577a478c9fa7c477fceecaf3a450e4b364547ea5f0687e56ab.scope.
Sep 13 02:24:07.492071 systemd[1]: Started cri-containerd-97a936960a7c9082001dd680193cf6a953d7d01ac24a208c648d2845cbbfcff5.scope.
Sep 13 02:24:07.500814 kubelet[2046]: W0913 02:24:07.500775 2046 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.203.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.203.133:6443: connect: connection refused
Sep 13 02:24:07.500903 kubelet[2046]: E0913 02:24:07.500822 2046 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.203.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.203.133:6443: connect: connection refused" logger="UnhandledError"
Sep 13 02:24:07.514128 env[1544]: time="2025-09-13T02:24:07.514099945Z" level=info msg="StartContainer for \"8a8c749f19e3ba66c2b8fe6dc3951eaed8828cfb2865af527213eabfadda1a64\" returns successfully"
Sep 13 02:24:07.514940 env[1544]: time="2025-09-13T02:24:07.514918068Z" level=info msg="StartContainer for \"97a936960a7c9082001dd680193cf6a953d7d01ac24a208c648d2845cbbfcff5\" returns successfully"
Sep 13 02:24:07.515407 env[1544]: time="2025-09-13T02:24:07.515396120Z" level=info msg="StartContainer for \"13fdf7ab1b0c84577a478c9fa7c477fceecaf3a450e4b364547ea5f0687e56ab\" returns successfully"
Sep 13 02:24:07.947935 kubelet[2046]: I0913 02:24:07.947439 2046 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:08.116825 kubelet[2046]: E0913 02:24:08.116803 2046 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-78f707d8f3\" not found" node="ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:08.218100 kubelet[2046]: I0913 02:24:08.218045 2046 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:08.218100 kubelet[2046]: E0913 02:24:08.218065 2046 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-78f707d8f3\": node \"ci-3510.3.8-n-78f707d8f3\" not found"
Sep 13 02:24:08.230628 kubelet[2046]: I0913 02:24:08.230615 2046 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:08.233205 kubelet[2046]: E0913 02:24:08.233158 2046 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-78f707d8f3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:08.233205 kubelet[2046]: I0913 02:24:08.233172 2046 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:08.234043 kubelet[2046]: E0913 02:24:08.234031 2046 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-78f707d8f3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:08.234043 kubelet[2046]: I0913 02:24:08.234042 2046 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:08.234786 kubelet[2046]: E0913 02:24:08.234773 2046 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-78f707d8f3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:08.270155 kubelet[2046]: I0913 02:24:08.270116 2046 apiserver.go:52] "Watching apiserver"
Sep 13 02:24:08.330603 kubelet[2046]: I0913 02:24:08.330557 2046 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 02:24:08.386498 kubelet[2046]: I0913 02:24:08.386457 2046 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:08.388356 kubelet[2046]: I0913 02:24:08.388330 2046 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:08.389202 kubelet[2046]: E0913 02:24:08.389166 2046 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-78f707d8f3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:08.390123 kubelet[2046]: I0913 02:24:08.390097 2046 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:08.391022 kubelet[2046]: E0913 02:24:08.390995 2046 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-78f707d8f3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:08.392804 kubelet[2046]: E0913 02:24:08.392720 2046 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-78f707d8f3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:09.392484 kubelet[2046]: I0913 02:24:09.392397 2046 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:09.393338 kubelet[2046]: I0913 02:24:09.392508 2046 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-78f707d8f3"
Sep 13 02:24:09.406741 kubelet[2046]: W0913 02:24:09.406674 2046 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 13 02:24:09.407004 kubelet[2046]: W0913 02:24:09.406811 2046 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 13 02:24:10.351565 systemd[1]: Reloading.
Sep 13 02:24:10.408016 /usr/lib/systemd/system-generators/torcx-generator[2389]: time="2025-09-13T02:24:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 02:24:10.408037 /usr/lib/systemd/system-generators/torcx-generator[2389]: time="2025-09-13T02:24:10Z" level=info msg="torcx already run"
Sep 13 02:24:10.492330 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 02:24:10.492341 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 02:24:10.506803 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 02:24:10.577742 systemd[1]: Stopping kubelet.service...
Sep 13 02:24:10.599498 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 02:24:10.599603 systemd[1]: Stopped kubelet.service.
Sep 13 02:24:10.599627 systemd[1]: kubelet.service: Consumed 1.131s CPU time. Sep 13 02:24:10.600526 systemd[1]: Starting kubelet.service... Sep 13 02:24:10.823755 systemd[1]: Started kubelet.service. Sep 13 02:24:10.846003 kubelet[2454]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 02:24:10.846003 kubelet[2454]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 02:24:10.846003 kubelet[2454]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 02:24:10.846279 kubelet[2454]: I0913 02:24:10.846033 2454 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 02:24:10.850137 kubelet[2454]: I0913 02:24:10.850123 2454 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 02:24:10.850137 kubelet[2454]: I0913 02:24:10.850135 2454 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 02:24:10.850279 kubelet[2454]: I0913 02:24:10.850272 2454 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 02:24:10.850949 kubelet[2454]: I0913 02:24:10.850941 2454 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 13 02:24:10.852132 kubelet[2454]: I0913 02:24:10.852123 2454 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 02:24:10.853573 kubelet[2454]: E0913 02:24:10.853554 2454 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 02:24:10.853619 kubelet[2454]: I0913 02:24:10.853575 2454 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 02:24:10.871743 kubelet[2454]: I0913 02:24:10.871697 2454 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 02:24:10.871823 kubelet[2454]: I0913 02:24:10.871807 2454 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 02:24:10.871947 kubelet[2454]: I0913 02:24:10.871821 2454 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510.3.8-n-78f707d8f3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 02:24:10.871947 kubelet[2454]: I0913 02:24:10.871928 2454 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 02:24:10.871947 kubelet[2454]: I0913 02:24:10.871936 2454 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 02:24:10.872050 kubelet[2454]: I0913 02:24:10.871964 2454 state_mem.go:36] "Initialized new in-memory state store" Sep 13 02:24:10.872071 kubelet[2454]: I0913 02:24:10.872056 2454 
kubelet.go:446] "Attempting to sync node with API server" Sep 13 02:24:10.872071 kubelet[2454]: I0913 02:24:10.872067 2454 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 02:24:10.872109 kubelet[2454]: I0913 02:24:10.872076 2454 kubelet.go:352] "Adding apiserver pod source" Sep 13 02:24:10.872109 kubelet[2454]: I0913 02:24:10.872082 2454 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 02:24:10.872496 kubelet[2454]: I0913 02:24:10.872473 2454 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 02:24:10.872742 kubelet[2454]: I0913 02:24:10.872706 2454 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 02:24:10.872975 kubelet[2454]: I0913 02:24:10.872944 2454 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 02:24:10.872975 kubelet[2454]: I0913 02:24:10.872960 2454 server.go:1287] "Started kubelet" Sep 13 02:24:10.873022 kubelet[2454]: I0913 02:24:10.872997 2454 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 02:24:10.873055 kubelet[2454]: I0913 02:24:10.873000 2454 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 02:24:10.873214 kubelet[2454]: I0913 02:24:10.873172 2454 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 02:24:10.874018 kubelet[2454]: I0913 02:24:10.874007 2454 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 02:24:10.874018 kubelet[2454]: I0913 02:24:10.874013 2454 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 02:24:10.874115 kubelet[2454]: E0913 02:24:10.874037 2454 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 02:24:10.874115 kubelet[2454]: I0913 02:24:10.874080 2454 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 02:24:10.874115 kubelet[2454]: E0913 02:24:10.874070 2454 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-78f707d8f3\" not found" Sep 13 02:24:10.874115 kubelet[2454]: I0913 02:24:10.874104 2454 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 02:24:10.874263 kubelet[2454]: I0913 02:24:10.874198 2454 reconciler.go:26] "Reconciler: start to sync state" Sep 13 02:24:10.874526 kubelet[2454]: I0913 02:24:10.874508 2454 server.go:479] "Adding debug handlers to kubelet server" Sep 13 02:24:10.874628 kubelet[2454]: I0913 02:24:10.874605 2454 factory.go:221] Registration of the systemd container factory successfully Sep 13 02:24:10.874788 kubelet[2454]: I0913 02:24:10.874765 2454 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 02:24:10.876701 kubelet[2454]: I0913 02:24:10.876686 2454 factory.go:221] Registration of the containerd container factory successfully Sep 13 02:24:10.880040 kubelet[2454]: I0913 02:24:10.880017 2454 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 02:24:10.880732 kubelet[2454]: I0913 02:24:10.880613 2454 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 02:24:10.880820 kubelet[2454]: I0913 02:24:10.880810 2454 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 02:24:10.880927 kubelet[2454]: I0913 02:24:10.880917 2454 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 02:24:10.880987 kubelet[2454]: I0913 02:24:10.880978 2454 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 02:24:10.881096 kubelet[2454]: E0913 02:24:10.881078 2454 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 02:24:10.889869 kubelet[2454]: I0913 02:24:10.889824 2454 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 02:24:10.889869 kubelet[2454]: I0913 02:24:10.889833 2454 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 02:24:10.889869 kubelet[2454]: I0913 02:24:10.889842 2454 state_mem.go:36] "Initialized new in-memory state store" Sep 13 02:24:10.889986 kubelet[2454]: I0913 02:24:10.889929 2454 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 02:24:10.889986 kubelet[2454]: I0913 02:24:10.889936 2454 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 02:24:10.889986 kubelet[2454]: I0913 02:24:10.889947 2454 policy_none.go:49] "None policy: Start" Sep 13 02:24:10.889986 kubelet[2454]: I0913 02:24:10.889951 2454 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 02:24:10.889986 kubelet[2454]: I0913 02:24:10.889957 2454 state_mem.go:35] "Initializing new in-memory state store" Sep 13 02:24:10.890076 kubelet[2454]: I0913 02:24:10.890012 2454 state_mem.go:75] "Updated machine memory state" Sep 13 02:24:10.891670 kubelet[2454]: I0913 02:24:10.891633 2454 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 02:24:10.891765 kubelet[2454]: I0913 02:24:10.891717 2454 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 02:24:10.891765 kubelet[2454]: I0913 02:24:10.891724 2454 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 02:24:10.891819 kubelet[2454]: I0913 02:24:10.891804 2454 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 02:24:10.892118 kubelet[2454]: E0913 02:24:10.892108 2454 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 02:24:10.982802 kubelet[2454]: I0913 02:24:10.982676 2454 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:10.983020 kubelet[2454]: I0913 02:24:10.982902 2454 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:10.983020 kubelet[2454]: I0913 02:24:10.982957 2454 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:10.990910 kubelet[2454]: W0913 02:24:10.990839 2454 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 02:24:10.990910 kubelet[2454]: W0913 02:24:10.990847 2454 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 02:24:10.991226 kubelet[2454]: E0913 02:24:10.991001 2454 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-78f707d8f3\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:10.991879 kubelet[2454]: W0913 02:24:10.991806 2454 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 02:24:10.992081 kubelet[2454]: E0913 02:24:10.991894 2454 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-78f707d8f3\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-78f707d8f3" Sep 13 
02:24:10.997564 kubelet[2454]: I0913 02:24:10.997478 2454 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.007713 kubelet[2454]: I0913 02:24:11.007627 2454 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.007921 kubelet[2454]: I0913 02:24:11.007754 2454 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.175762 kubelet[2454]: I0913 02:24:11.175541 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/617cfb16944f05cdeca57ae67c8bb421-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-78f707d8f3\" (UID: \"617cfb16944f05cdeca57ae67c8bb421\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.175762 kubelet[2454]: I0913 02:24:11.175637 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/617cfb16944f05cdeca57ae67c8bb421-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-78f707d8f3\" (UID: \"617cfb16944f05cdeca57ae67c8bb421\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.176098 kubelet[2454]: I0913 02:24:11.175781 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f9c2b3e869807c18dd7e80fb25a2793-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-78f707d8f3\" (UID: \"3f9c2b3e869807c18dd7e80fb25a2793\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.176098 kubelet[2454]: I0913 02:24:11.175904 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f9c2b3e869807c18dd7e80fb25a2793-k8s-certs\") 
pod \"kube-controller-manager-ci-3510.3.8-n-78f707d8f3\" (UID: \"3f9c2b3e869807c18dd7e80fb25a2793\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.176098 kubelet[2454]: I0913 02:24:11.175959 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/617cfb16944f05cdeca57ae67c8bb421-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-78f707d8f3\" (UID: \"617cfb16944f05cdeca57ae67c8bb421\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.176098 kubelet[2454]: I0913 02:24:11.176014 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f9c2b3e869807c18dd7e80fb25a2793-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-78f707d8f3\" (UID: \"3f9c2b3e869807c18dd7e80fb25a2793\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.176098 kubelet[2454]: I0913 02:24:11.176065 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f9c2b3e869807c18dd7e80fb25a2793-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-78f707d8f3\" (UID: \"3f9c2b3e869807c18dd7e80fb25a2793\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.176878 kubelet[2454]: I0913 02:24:11.176115 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f9c2b3e869807c18dd7e80fb25a2793-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-78f707d8f3\" (UID: \"3f9c2b3e869807c18dd7e80fb25a2793\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.176878 kubelet[2454]: I0913 02:24:11.176215 2454 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5af573f9019db832c8a09f46f06b636b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-78f707d8f3\" (UID: \"5af573f9019db832c8a09f46f06b636b\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.342845 sudo[2499]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 02:24:11.343494 sudo[2499]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 02:24:11.676940 sudo[2499]: pam_unix(sudo:session): session closed for user root Sep 13 02:24:11.873088 kubelet[2454]: I0913 02:24:11.873072 2454 apiserver.go:52] "Watching apiserver" Sep 13 02:24:11.874222 kubelet[2454]: I0913 02:24:11.874213 2454 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 02:24:11.885097 kubelet[2454]: I0913 02:24:11.885088 2454 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.885156 kubelet[2454]: I0913 02:24:11.885125 2454 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.885194 kubelet[2454]: I0913 02:24:11.885157 2454 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.903385 kubelet[2454]: W0913 02:24:11.903348 2454 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 02:24:11.903467 kubelet[2454]: E0913 02:24:11.903408 2454 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-78f707d8f3\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.903971 kubelet[2454]: W0913 
02:24:11.903963 2454 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 02:24:11.903971 kubelet[2454]: W0913 02:24:11.903971 2454 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 02:24:11.904034 kubelet[2454]: E0913 02:24:11.903984 2454 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-78f707d8f3\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.904034 kubelet[2454]: E0913 02:24:11.903989 2454 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-78f707d8f3\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3" Sep 13 02:24:11.908195 kubelet[2454]: I0913 02:24:11.908130 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-78f707d8f3" podStartSLOduration=2.908122363 podStartE2EDuration="2.908122363s" podCreationTimestamp="2025-09-13 02:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:24:11.903455971 +0000 UTC m=+1.075913615" watchObservedRunningTime="2025-09-13 02:24:11.908122363 +0000 UTC m=+1.080580006" Sep 13 02:24:11.908276 kubelet[2454]: I0913 02:24:11.908198 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-78f707d8f3" podStartSLOduration=1.90819483 podStartE2EDuration="1.90819483s" podCreationTimestamp="2025-09-13 02:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:24:11.908176646 +0000 UTC m=+1.080634291" 
watchObservedRunningTime="2025-09-13 02:24:11.90819483 +0000 UTC m=+1.080652470" Sep 13 02:24:11.912633 kubelet[2454]: I0913 02:24:11.912600 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-78f707d8f3" podStartSLOduration=2.912595133 podStartE2EDuration="2.912595133s" podCreationTimestamp="2025-09-13 02:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:24:11.912414786 +0000 UTC m=+1.084872430" watchObservedRunningTime="2025-09-13 02:24:11.912595133 +0000 UTC m=+1.085052773" Sep 13 02:24:13.170038 sudo[1723]: pam_unix(sudo:session): session closed for user root Sep 13 02:24:13.170975 sshd[1720]: pam_unix(sshd:session): session closed for user core Sep 13 02:24:13.172483 systemd[1]: sshd@6-147.75.203.133:22-139.178.89.65:48148.service: Deactivated successfully. Sep 13 02:24:13.172926 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 02:24:13.173009 systemd[1]: session-9.scope: Consumed 3.565s CPU time. Sep 13 02:24:13.173303 systemd-logind[1581]: Session 9 logged out. Waiting for processes to exit. Sep 13 02:24:13.173853 systemd-logind[1581]: Removed session 9. Sep 13 02:24:16.966908 kubelet[2454]: I0913 02:24:16.966842 2454 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 02:24:16.967967 env[1544]: time="2025-09-13T02:24:16.967561840Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 02:24:16.968605 kubelet[2454]: I0913 02:24:16.967986 2454 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 02:24:18.006701 systemd[1]: Created slice kubepods-besteffort-pod181a5af0_7d43_4539_bddd_d7a73401c6f3.slice. 
Sep 13 02:24:18.025010 kubelet[2454]: I0913 02:24:18.024926 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-cilium-cgroup\") pod \"cilium-mzfw9\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " pod="kube-system/cilium-mzfw9" Sep 13 02:24:18.025010 kubelet[2454]: I0913 02:24:18.024991 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a521111b-2ecd-4d41-a2d5-bf26b3b73592-clustermesh-secrets\") pod \"cilium-mzfw9\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " pod="kube-system/cilium-mzfw9" Sep 13 02:24:18.025683 kubelet[2454]: I0913 02:24:18.025029 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/181a5af0-7d43-4539-bddd-d7a73401c6f3-lib-modules\") pod \"kube-proxy-k65b2\" (UID: \"181a5af0-7d43-4539-bddd-d7a73401c6f3\") " pod="kube-system/kube-proxy-k65b2" Sep 13 02:24:18.025683 kubelet[2454]: I0913 02:24:18.025063 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-etc-cni-netd\") pod \"cilium-mzfw9\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " pod="kube-system/cilium-mzfw9" Sep 13 02:24:18.025683 kubelet[2454]: I0913 02:24:18.025138 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-lib-modules\") pod \"cilium-mzfw9\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " pod="kube-system/cilium-mzfw9" Sep 13 02:24:18.025683 kubelet[2454]: I0913 02:24:18.025219 2454 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-xtables-lock\") pod \"cilium-mzfw9\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " pod="kube-system/cilium-mzfw9" Sep 13 02:24:18.025683 kubelet[2454]: I0913 02:24:18.025252 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a521111b-2ecd-4d41-a2d5-bf26b3b73592-hubble-tls\") pod \"cilium-mzfw9\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " pod="kube-system/cilium-mzfw9" Sep 13 02:24:18.025683 kubelet[2454]: I0913 02:24:18.025288 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-host-proc-sys-net\") pod \"cilium-mzfw9\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " pod="kube-system/cilium-mzfw9" Sep 13 02:24:18.026263 kubelet[2454]: I0913 02:24:18.025320 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/181a5af0-7d43-4539-bddd-d7a73401c6f3-xtables-lock\") pod \"kube-proxy-k65b2\" (UID: \"181a5af0-7d43-4539-bddd-d7a73401c6f3\") " pod="kube-system/kube-proxy-k65b2" Sep 13 02:24:18.026263 kubelet[2454]: I0913 02:24:18.025353 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xqgs\" (UniqueName: \"kubernetes.io/projected/181a5af0-7d43-4539-bddd-d7a73401c6f3-kube-api-access-6xqgs\") pod \"kube-proxy-k65b2\" (UID: \"181a5af0-7d43-4539-bddd-d7a73401c6f3\") " pod="kube-system/kube-proxy-k65b2" Sep 13 02:24:18.026263 kubelet[2454]: I0913 02:24:18.025431 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-hostproc\") pod \"cilium-mzfw9\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " pod="kube-system/cilium-mzfw9" Sep 13 02:24:18.026263 kubelet[2454]: I0913 02:24:18.025492 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-cni-path\") pod \"cilium-mzfw9\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " pod="kube-system/cilium-mzfw9" Sep 13 02:24:18.026263 kubelet[2454]: I0913 02:24:18.025531 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/181a5af0-7d43-4539-bddd-d7a73401c6f3-kube-proxy\") pod \"kube-proxy-k65b2\" (UID: \"181a5af0-7d43-4539-bddd-d7a73401c6f3\") " pod="kube-system/kube-proxy-k65b2" Sep 13 02:24:18.026263 kubelet[2454]: I0913 02:24:18.025562 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-cilium-run\") pod \"cilium-mzfw9\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " pod="kube-system/cilium-mzfw9" Sep 13 02:24:18.026609 kubelet[2454]: I0913 02:24:18.025592 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-bpf-maps\") pod \"cilium-mzfw9\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " pod="kube-system/cilium-mzfw9" Sep 13 02:24:18.026609 kubelet[2454]: I0913 02:24:18.025621 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-host-proc-sys-kernel\") pod \"cilium-mzfw9\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") 
" pod="kube-system/cilium-mzfw9" Sep 13 02:24:18.026609 kubelet[2454]: I0913 02:24:18.025686 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2g96\" (UniqueName: \"kubernetes.io/projected/a521111b-2ecd-4d41-a2d5-bf26b3b73592-kube-api-access-q2g96\") pod \"cilium-mzfw9\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " pod="kube-system/cilium-mzfw9" Sep 13 02:24:18.026609 kubelet[2454]: I0913 02:24:18.025765 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a521111b-2ecd-4d41-a2d5-bf26b3b73592-cilium-config-path\") pod \"cilium-mzfw9\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " pod="kube-system/cilium-mzfw9" Sep 13 02:24:18.029597 systemd[1]: Created slice kubepods-burstable-poda521111b_2ecd_4d41_a2d5_bf26b3b73592.slice. Sep 13 02:24:18.092672 systemd[1]: Created slice kubepods-besteffort-pod859ad558_0a9c_47fc_9412_b545b356a61e.slice. 
Sep 13 02:24:18.126306 kubelet[2454]: I0913 02:24:18.126207 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/859ad558-0a9c-47fc-9412-b545b356a61e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-k7hs8\" (UID: \"859ad558-0a9c-47fc-9412-b545b356a61e\") " pod="kube-system/cilium-operator-6c4d7847fc-k7hs8" Sep 13 02:24:18.126512 kubelet[2454]: I0913 02:24:18.126307 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6bsd\" (UniqueName: \"kubernetes.io/projected/859ad558-0a9c-47fc-9412-b545b356a61e-kube-api-access-k6bsd\") pod \"cilium-operator-6c4d7847fc-k7hs8\" (UID: \"859ad558-0a9c-47fc-9412-b545b356a61e\") " pod="kube-system/cilium-operator-6c4d7847fc-k7hs8" Sep 13 02:24:18.126927 kubelet[2454]: I0913 02:24:18.126859 2454 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 02:24:18.327648 env[1544]: time="2025-09-13T02:24:18.327431940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k65b2,Uid:181a5af0-7d43-4539-bddd-d7a73401c6f3,Namespace:kube-system,Attempt:0,}" Sep 13 02:24:18.335014 env[1544]: time="2025-09-13T02:24:18.334910959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mzfw9,Uid:a521111b-2ecd-4d41-a2d5-bf26b3b73592,Namespace:kube-system,Attempt:0,}" Sep 13 02:24:18.352133 env[1544]: time="2025-09-13T02:24:18.351976074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 02:24:18.352133 env[1544]: time="2025-09-13T02:24:18.352089821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 02:24:18.352536 env[1544]: time="2025-09-13T02:24:18.352159885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 02:24:18.352654 env[1544]: time="2025-09-13T02:24:18.352546357Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a82dca54e925488866eb93a4165997bea3411cb285786311fea5564acce13f4 pid=2608 runtime=io.containerd.runc.v2 Sep 13 02:24:18.363339 env[1544]: time="2025-09-13T02:24:18.363185074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 02:24:18.363339 env[1544]: time="2025-09-13T02:24:18.363286360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 02:24:18.363339 env[1544]: time="2025-09-13T02:24:18.363324918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 02:24:18.363893 env[1544]: time="2025-09-13T02:24:18.363774007Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf pid=2624 runtime=io.containerd.runc.v2 Sep 13 02:24:18.379064 systemd[1]: Started cri-containerd-4a82dca54e925488866eb93a4165997bea3411cb285786311fea5564acce13f4.scope. Sep 13 02:24:18.386072 systemd[1]: Started cri-containerd-dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf.scope. Sep 13 02:24:18.388481 update_engine[1538]: I0913 02:24:18.388453 1538 update_attempter.cc:509] Updating boot flags... 
Sep 13 02:24:18.396973 env[1544]: time="2025-09-13T02:24:18.396940693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-k7hs8,Uid:859ad558-0a9c-47fc-9412-b545b356a61e,Namespace:kube-system,Attempt:0,}" Sep 13 02:24:18.401047 env[1544]: time="2025-09-13T02:24:18.400990150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k65b2,Uid:181a5af0-7d43-4539-bddd-d7a73401c6f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a82dca54e925488866eb93a4165997bea3411cb285786311fea5564acce13f4\"" Sep 13 02:24:18.403251 env[1544]: time="2025-09-13T02:24:18.403219729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mzfw9,Uid:a521111b-2ecd-4d41-a2d5-bf26b3b73592,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf\"" Sep 13 02:24:18.403762 env[1544]: time="2025-09-13T02:24:18.403738677Z" level=info msg="CreateContainer within sandbox \"4a82dca54e925488866eb93a4165997bea3411cb285786311fea5564acce13f4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 02:24:18.404248 env[1544]: time="2025-09-13T02:24:18.404224826Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 02:24:18.406564 env[1544]: time="2025-09-13T02:24:18.406529513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 02:24:18.406564 env[1544]: time="2025-09-13T02:24:18.406553392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 02:24:18.406564 env[1544]: time="2025-09-13T02:24:18.406561887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 02:24:18.406843 env[1544]: time="2025-09-13T02:24:18.406641845Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8143cc75114723e7c216ba80e96d5aad6ee63ad5d6583c11dde54b92767ef6f0 pid=2695 runtime=io.containerd.runc.v2 Sep 13 02:24:18.410586 env[1544]: time="2025-09-13T02:24:18.410548575Z" level=info msg="CreateContainer within sandbox \"4a82dca54e925488866eb93a4165997bea3411cb285786311fea5564acce13f4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"92f4768d9fd9ba1852109e67811aee5ff5856afe08773f7a6e33666c2afce8be\"" Sep 13 02:24:18.410949 env[1544]: time="2025-09-13T02:24:18.410933560Z" level=info msg="StartContainer for \"92f4768d9fd9ba1852109e67811aee5ff5856afe08773f7a6e33666c2afce8be\"" Sep 13 02:24:18.421398 systemd[1]: Started cri-containerd-8143cc75114723e7c216ba80e96d5aad6ee63ad5d6583c11dde54b92767ef6f0.scope. Sep 13 02:24:18.438257 systemd[1]: Started cri-containerd-92f4768d9fd9ba1852109e67811aee5ff5856afe08773f7a6e33666c2afce8be.scope. 
Sep 13 02:24:18.460261 env[1544]: time="2025-09-13T02:24:18.460201680Z" level=info msg="StartContainer for \"92f4768d9fd9ba1852109e67811aee5ff5856afe08773f7a6e33666c2afce8be\" returns successfully" Sep 13 02:24:18.464607 env[1544]: time="2025-09-13T02:24:18.464577895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-k7hs8,Uid:859ad558-0a9c-47fc-9412-b545b356a61e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8143cc75114723e7c216ba80e96d5aad6ee63ad5d6583c11dde54b92767ef6f0\"" Sep 13 02:24:18.924774 kubelet[2454]: I0913 02:24:18.924663 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k65b2" podStartSLOduration=1.9246269360000001 podStartE2EDuration="1.924626936s" podCreationTimestamp="2025-09-13 02:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:24:18.924574514 +0000 UTC m=+8.097032221" watchObservedRunningTime="2025-09-13 02:24:18.924626936 +0000 UTC m=+8.097084624" Sep 13 02:24:22.622462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3887209619.mount: Deactivated successfully. 
Sep 13 02:24:24.318557 env[1544]: time="2025-09-13T02:24:24.318533112Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:24.319172 env[1544]: time="2025-09-13T02:24:24.319145590Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:24.319897 env[1544]: time="2025-09-13T02:24:24.319885927Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:24.321426 env[1544]: time="2025-09-13T02:24:24.321388581Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 02:24:24.322167 env[1544]: time="2025-09-13T02:24:24.322152717Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 02:24:24.322718 env[1544]: time="2025-09-13T02:24:24.322701784Z" level=info msg="CreateContainer within sandbox \"dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 02:24:24.327740 env[1544]: time="2025-09-13T02:24:24.327718938Z" level=info msg="CreateContainer within sandbox \"dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1\"" Sep 13 
02:24:24.328036 env[1544]: time="2025-09-13T02:24:24.328020253Z" level=info msg="StartContainer for \"28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1\"" Sep 13 02:24:24.353651 systemd[1]: Started cri-containerd-28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1.scope. Sep 13 02:24:24.365122 env[1544]: time="2025-09-13T02:24:24.365061249Z" level=info msg="StartContainer for \"28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1\" returns successfully" Sep 13 02:24:24.370615 systemd[1]: cri-containerd-28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1.scope: Deactivated successfully. Sep 13 02:24:25.329775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1-rootfs.mount: Deactivated successfully. Sep 13 02:24:25.460701 env[1544]: time="2025-09-13T02:24:25.460611443Z" level=info msg="shim disconnected" id=28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1 Sep 13 02:24:25.461503 env[1544]: time="2025-09-13T02:24:25.460700231Z" level=warning msg="cleaning up after shim disconnected" id=28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1 namespace=k8s.io Sep 13 02:24:25.461503 env[1544]: time="2025-09-13T02:24:25.460736897Z" level=info msg="cleaning up dead shim" Sep 13 02:24:25.472950 env[1544]: time="2025-09-13T02:24:25.472910039Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:24:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2966 runtime=io.containerd.runc.v2\n" Sep 13 02:24:25.920054 env[1544]: time="2025-09-13T02:24:25.920035844Z" level=info msg="CreateContainer within sandbox \"dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 02:24:25.924395 env[1544]: time="2025-09-13T02:24:25.924374198Z" level=info msg="CreateContainer within sandbox 
\"dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319\"" Sep 13 02:24:25.924617 env[1544]: time="2025-09-13T02:24:25.924603761Z" level=info msg="StartContainer for \"4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319\"" Sep 13 02:24:25.925605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2922558015.mount: Deactivated successfully. Sep 13 02:24:25.933949 systemd[1]: Started cri-containerd-4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319.scope. Sep 13 02:24:25.945966 env[1544]: time="2025-09-13T02:24:25.945940109Z" level=info msg="StartContainer for \"4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319\" returns successfully" Sep 13 02:24:25.953702 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 02:24:25.953860 systemd[1]: Stopped systemd-sysctl.service. Sep 13 02:24:25.953997 systemd[1]: Stopping systemd-sysctl.service... Sep 13 02:24:25.955031 systemd[1]: Starting systemd-sysctl.service... Sep 13 02:24:25.955270 systemd[1]: cri-containerd-4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319.scope: Deactivated successfully. Sep 13 02:24:25.959504 systemd[1]: Finished systemd-sysctl.service. 
Sep 13 02:24:25.965938 env[1544]: time="2025-09-13T02:24:25.965912372Z" level=info msg="shim disconnected" id=4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319 Sep 13 02:24:25.966025 env[1544]: time="2025-09-13T02:24:25.965939518Z" level=warning msg="cleaning up after shim disconnected" id=4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319 namespace=k8s.io Sep 13 02:24:25.966025 env[1544]: time="2025-09-13T02:24:25.965945882Z" level=info msg="cleaning up dead shim" Sep 13 02:24:25.970159 env[1544]: time="2025-09-13T02:24:25.970132988Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:24:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3028 runtime=io.containerd.runc.v2\n" Sep 13 02:24:26.326961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319-rootfs.mount: Deactivated successfully. Sep 13 02:24:26.592094 env[1544]: time="2025-09-13T02:24:26.592017506Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:26.592646 env[1544]: time="2025-09-13T02:24:26.592633202Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:26.593730 env[1544]: time="2025-09-13T02:24:26.593681938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:24:26.593942 env[1544]: time="2025-09-13T02:24:26.593896993Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 13 02:24:26.595232 env[1544]: time="2025-09-13T02:24:26.595216778Z" level=info msg="CreateContainer within sandbox \"8143cc75114723e7c216ba80e96d5aad6ee63ad5d6583c11dde54b92767ef6f0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 02:24:26.599842 env[1544]: time="2025-09-13T02:24:26.599801370Z" level=info msg="CreateContainer within sandbox \"8143cc75114723e7c216ba80e96d5aad6ee63ad5d6583c11dde54b92767ef6f0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486\"" Sep 13 02:24:26.600070 env[1544]: time="2025-09-13T02:24:26.600033744Z" level=info msg="StartContainer for \"13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486\"" Sep 13 02:24:26.608675 systemd[1]: Started cri-containerd-13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486.scope. 
Sep 13 02:24:26.619999 env[1544]: time="2025-09-13T02:24:26.619976016Z" level=info msg="StartContainer for \"13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486\" returns successfully" Sep 13 02:24:26.927952 env[1544]: time="2025-09-13T02:24:26.927884786Z" level=info msg="CreateContainer within sandbox \"dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 02:24:26.932513 kubelet[2454]: I0913 02:24:26.932441 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-k7hs8" podStartSLOduration=0.803067963 podStartE2EDuration="8.932411914s" podCreationTimestamp="2025-09-13 02:24:18 +0000 UTC" firstStartedPulling="2025-09-13 02:24:18.465176222 +0000 UTC m=+7.637633863" lastFinishedPulling="2025-09-13 02:24:26.594520169 +0000 UTC m=+15.766977814" observedRunningTime="2025-09-13 02:24:26.932039862 +0000 UTC m=+16.104497505" watchObservedRunningTime="2025-09-13 02:24:26.932411914 +0000 UTC m=+16.104869554" Sep 13 02:24:26.951814 env[1544]: time="2025-09-13T02:24:26.951788430Z" level=info msg="CreateContainer within sandbox \"dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e\"" Sep 13 02:24:26.952236 env[1544]: time="2025-09-13T02:24:26.952224058Z" level=info msg="StartContainer for \"3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e\"" Sep 13 02:24:26.960914 systemd[1]: Started cri-containerd-3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e.scope. 
Sep 13 02:24:26.974076 env[1544]: time="2025-09-13T02:24:26.974047350Z" level=info msg="StartContainer for \"3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e\" returns successfully" Sep 13 02:24:26.975589 systemd[1]: cri-containerd-3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e.scope: Deactivated successfully. Sep 13 02:24:27.139753 env[1544]: time="2025-09-13T02:24:27.139720487Z" level=info msg="shim disconnected" id=3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e Sep 13 02:24:27.139882 env[1544]: time="2025-09-13T02:24:27.139753883Z" level=warning msg="cleaning up after shim disconnected" id=3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e namespace=k8s.io Sep 13 02:24:27.139882 env[1544]: time="2025-09-13T02:24:27.139762356Z" level=info msg="cleaning up dead shim" Sep 13 02:24:27.144489 env[1544]: time="2025-09-13T02:24:27.144436752Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:24:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3135 runtime=io.containerd.runc.v2\n" Sep 13 02:24:27.929622 env[1544]: time="2025-09-13T02:24:27.929598930Z" level=info msg="CreateContainer within sandbox \"dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 02:24:27.935857 env[1544]: time="2025-09-13T02:24:27.935802788Z" level=info msg="CreateContainer within sandbox \"dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841\"" Sep 13 02:24:27.936152 env[1544]: time="2025-09-13T02:24:27.936128256Z" level=info msg="StartContainer for \"a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841\"" Sep 13 02:24:27.947945 systemd[1]: Started cri-containerd-a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841.scope. 
Sep 13 02:24:27.966518 env[1544]: time="2025-09-13T02:24:27.966455323Z" level=info msg="StartContainer for \"a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841\" returns successfully" Sep 13 02:24:27.967297 systemd[1]: cri-containerd-a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841.scope: Deactivated successfully. Sep 13 02:24:27.982217 env[1544]: time="2025-09-13T02:24:27.982175888Z" level=info msg="shim disconnected" id=a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841 Sep 13 02:24:27.982373 env[1544]: time="2025-09-13T02:24:27.982223181Z" level=warning msg="cleaning up after shim disconnected" id=a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841 namespace=k8s.io Sep 13 02:24:27.982373 env[1544]: time="2025-09-13T02:24:27.982234263Z" level=info msg="cleaning up dead shim" Sep 13 02:24:27.988224 env[1544]: time="2025-09-13T02:24:27.988171414Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:24:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3189 runtime=io.containerd.runc.v2\n" Sep 13 02:24:28.331042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841-rootfs.mount: Deactivated successfully. 
Sep 13 02:24:28.939932 env[1544]: time="2025-09-13T02:24:28.939854298Z" level=info msg="CreateContainer within sandbox \"dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 02:24:28.957850 env[1544]: time="2025-09-13T02:24:28.957722476Z" level=info msg="CreateContainer within sandbox \"dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0\"" Sep 13 02:24:28.958780 env[1544]: time="2025-09-13T02:24:28.958679026Z" level=info msg="StartContainer for \"6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0\"" Sep 13 02:24:28.996570 systemd[1]: Started cri-containerd-6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0.scope. Sep 13 02:24:29.027343 env[1544]: time="2025-09-13T02:24:29.027292234Z" level=info msg="StartContainer for \"6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0\" returns successfully" Sep 13 02:24:29.116082 kubelet[2454]: I0913 02:24:29.116040 2454 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 02:24:29.134986 systemd[1]: Created slice kubepods-burstable-pod9a87ae18_d7a7_4ff0_b56d_ecb9ecf7ccae.slice. Sep 13 02:24:29.137152 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Sep 13 02:24:29.137876 systemd[1]: Created slice kubepods-burstable-pod6512d400_f24f_4049_8bb5_46b05855c606.slice. 
Sep 13 02:24:29.202806 kubelet[2454]: I0913 02:24:29.202700 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t2kx\" (UniqueName: \"kubernetes.io/projected/6512d400-f24f-4049-8bb5-46b05855c606-kube-api-access-4t2kx\") pod \"coredns-668d6bf9bc-nts7l\" (UID: \"6512d400-f24f-4049-8bb5-46b05855c606\") " pod="kube-system/coredns-668d6bf9bc-nts7l" Sep 13 02:24:29.202806 kubelet[2454]: I0913 02:24:29.202735 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwfrp\" (UniqueName: \"kubernetes.io/projected/9a87ae18-d7a7-4ff0-b56d-ecb9ecf7ccae-kube-api-access-hwfrp\") pod \"coredns-668d6bf9bc-rq52r\" (UID: \"9a87ae18-d7a7-4ff0-b56d-ecb9ecf7ccae\") " pod="kube-system/coredns-668d6bf9bc-rq52r" Sep 13 02:24:29.202806 kubelet[2454]: I0913 02:24:29.202749 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6512d400-f24f-4049-8bb5-46b05855c606-config-volume\") pod \"coredns-668d6bf9bc-nts7l\" (UID: \"6512d400-f24f-4049-8bb5-46b05855c606\") " pod="kube-system/coredns-668d6bf9bc-nts7l" Sep 13 02:24:29.202806 kubelet[2454]: I0913 02:24:29.202766 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a87ae18-d7a7-4ff0-b56d-ecb9ecf7ccae-config-volume\") pod \"coredns-668d6bf9bc-rq52r\" (UID: \"9a87ae18-d7a7-4ff0-b56d-ecb9ecf7ccae\") " pod="kube-system/coredns-668d6bf9bc-rq52r" Sep 13 02:24:29.404199 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Sep 13 02:24:29.437302 env[1544]: time="2025-09-13T02:24:29.437279075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rq52r,Uid:9a87ae18-d7a7-4ff0-b56d-ecb9ecf7ccae,Namespace:kube-system,Attempt:0,}" Sep 13 02:24:29.440638 env[1544]: time="2025-09-13T02:24:29.440622702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nts7l,Uid:6512d400-f24f-4049-8bb5-46b05855c606,Namespace:kube-system,Attempt:0,}" Sep 13 02:24:29.950856 kubelet[2454]: I0913 02:24:29.950807 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mzfw9" podStartSLOduration=7.032723221 podStartE2EDuration="12.950790242s" podCreationTimestamp="2025-09-13 02:24:17 +0000 UTC" firstStartedPulling="2025-09-13 02:24:18.403931249 +0000 UTC m=+7.576388894" lastFinishedPulling="2025-09-13 02:24:24.321998274 +0000 UTC m=+13.494455915" observedRunningTime="2025-09-13 02:24:29.950571526 +0000 UTC m=+19.123029180" watchObservedRunningTime="2025-09-13 02:24:29.950790242 +0000 UTC m=+19.123247896" Sep 13 02:24:31.006017 systemd-networkd[1301]: cilium_host: Link UP Sep 13 02:24:31.006120 systemd-networkd[1301]: cilium_net: Link UP Sep 13 02:24:31.013177 systemd-networkd[1301]: cilium_net: Gained carrier Sep 13 02:24:31.020322 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 13 02:24:31.020363 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 02:24:31.020385 systemd-networkd[1301]: cilium_host: Gained carrier Sep 13 02:24:31.065941 systemd-networkd[1301]: cilium_vxlan: Link UP Sep 13 02:24:31.065944 systemd-networkd[1301]: cilium_vxlan: Gained carrier Sep 13 02:24:31.200200 kernel: NET: Registered PF_ALG protocol family Sep 13 02:24:31.409278 systemd-networkd[1301]: cilium_net: Gained IPv6LL Sep 13 02:24:31.693507 systemd-networkd[1301]: lxc_health: Link UP Sep 13 02:24:31.719161 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 
02:24:31.719330 systemd-networkd[1301]: lxc_health: Gained carrier Sep 13 02:24:31.793208 systemd-networkd[1301]: cilium_host: Gained IPv6LL Sep 13 02:24:31.962162 systemd-networkd[1301]: lxc7e38525eaa67: Link UP Sep 13 02:24:31.962248 systemd-networkd[1301]: lxc859b987c1279: Link UP Sep 13 02:24:31.986196 kernel: eth0: renamed from tmp3db5f Sep 13 02:24:32.001194 kernel: eth0: renamed from tmp222dc Sep 13 02:24:32.022151 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 02:24:32.022226 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7e38525eaa67: link becomes ready Sep 13 02:24:32.036249 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 02:24:32.043822 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc859b987c1279: link becomes ready Sep 13 02:24:32.044180 systemd-networkd[1301]: lxc7e38525eaa67: Gained carrier Sep 13 02:24:32.044294 systemd-networkd[1301]: lxc859b987c1279: Gained carrier Sep 13 02:24:32.881281 systemd-networkd[1301]: cilium_vxlan: Gained IPv6LL Sep 13 02:24:33.393268 systemd-networkd[1301]: lxc7e38525eaa67: Gained IPv6LL Sep 13 02:24:33.393466 systemd-networkd[1301]: lxc859b987c1279: Gained IPv6LL Sep 13 02:24:33.521297 systemd-networkd[1301]: lxc_health: Gained IPv6LL Sep 13 02:24:34.290522 env[1544]: time="2025-09-13T02:24:34.290483398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 02:24:34.290522 env[1544]: time="2025-09-13T02:24:34.290509028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 02:24:34.290522 env[1544]: time="2025-09-13T02:24:34.290516543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 02:24:34.290826 env[1544]: time="2025-09-13T02:24:34.290551144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 02:24:34.290826 env[1544]: time="2025-09-13T02:24:34.290578359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 02:24:34.290826 env[1544]: time="2025-09-13T02:24:34.290588434Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3db5f1df0dd75ab372d806d9c6c5f37894fd2cff447a19f2bc746acdab87a707 pid=3875 runtime=io.containerd.runc.v2 Sep 13 02:24:34.290826 env[1544]: time="2025-09-13T02:24:34.290594304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 02:24:34.290826 env[1544]: time="2025-09-13T02:24:34.290699910Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/222dc360e1ec11a0b2c0dde190748173c721f8ebc90977cd41a7a9e02a83ac9b pid=3876 runtime=io.containerd.runc.v2 Sep 13 02:24:34.298917 systemd[1]: Started cri-containerd-222dc360e1ec11a0b2c0dde190748173c721f8ebc90977cd41a7a9e02a83ac9b.scope. Sep 13 02:24:34.299572 systemd[1]: Started cri-containerd-3db5f1df0dd75ab372d806d9c6c5f37894fd2cff447a19f2bc746acdab87a707.scope. 
Sep 13 02:24:34.320203 env[1544]: time="2025-09-13T02:24:34.320172908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nts7l,Uid:6512d400-f24f-4049-8bb5-46b05855c606,Namespace:kube-system,Attempt:0,} returns sandbox id \"3db5f1df0dd75ab372d806d9c6c5f37894fd2cff447a19f2bc746acdab87a707\"" Sep 13 02:24:34.320324 env[1544]: time="2025-09-13T02:24:34.320173912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rq52r,Uid:9a87ae18-d7a7-4ff0-b56d-ecb9ecf7ccae,Namespace:kube-system,Attempt:0,} returns sandbox id \"222dc360e1ec11a0b2c0dde190748173c721f8ebc90977cd41a7a9e02a83ac9b\"" Sep 13 02:24:34.321268 env[1544]: time="2025-09-13T02:24:34.321254085Z" level=info msg="CreateContainer within sandbox \"3db5f1df0dd75ab372d806d9c6c5f37894fd2cff447a19f2bc746acdab87a707\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 02:24:34.321268 env[1544]: time="2025-09-13T02:24:34.321254939Z" level=info msg="CreateContainer within sandbox \"222dc360e1ec11a0b2c0dde190748173c721f8ebc90977cd41a7a9e02a83ac9b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 02:24:34.326941 env[1544]: time="2025-09-13T02:24:34.326911855Z" level=info msg="CreateContainer within sandbox \"3db5f1df0dd75ab372d806d9c6c5f37894fd2cff447a19f2bc746acdab87a707\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"037388b0e0840e67f30dfd4a75f9a3d65dff1e4a579701edfdd84b469efece8d\"" Sep 13 02:24:34.327226 env[1544]: time="2025-09-13T02:24:34.327202197Z" level=info msg="StartContainer for \"037388b0e0840e67f30dfd4a75f9a3d65dff1e4a579701edfdd84b469efece8d\"" Sep 13 02:24:34.327744 env[1544]: time="2025-09-13T02:24:34.327727212Z" level=info msg="CreateContainer within sandbox \"222dc360e1ec11a0b2c0dde190748173c721f8ebc90977cd41a7a9e02a83ac9b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f273ac146c8a58909ba520e3d16c4ba12004bbbfb041b21bdb7855a8b688fac4\"" Sep 13 02:24:34.327889 env[1544]: 
time="2025-09-13T02:24:34.327878685Z" level=info msg="StartContainer for \"f273ac146c8a58909ba520e3d16c4ba12004bbbfb041b21bdb7855a8b688fac4\"" Sep 13 02:24:34.335230 systemd[1]: Started cri-containerd-037388b0e0840e67f30dfd4a75f9a3d65dff1e4a579701edfdd84b469efece8d.scope. Sep 13 02:24:34.335898 systemd[1]: Started cri-containerd-f273ac146c8a58909ba520e3d16c4ba12004bbbfb041b21bdb7855a8b688fac4.scope. Sep 13 02:24:34.356434 env[1544]: time="2025-09-13T02:24:34.356380422Z" level=info msg="StartContainer for \"037388b0e0840e67f30dfd4a75f9a3d65dff1e4a579701edfdd84b469efece8d\" returns successfully" Sep 13 02:24:34.356434 env[1544]: time="2025-09-13T02:24:34.356380381Z" level=info msg="StartContainer for \"f273ac146c8a58909ba520e3d16c4ba12004bbbfb041b21bdb7855a8b688fac4\" returns successfully" Sep 13 02:24:34.963063 kubelet[2454]: I0913 02:24:34.963025 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rq52r" podStartSLOduration=16.963011429 podStartE2EDuration="16.963011429s" podCreationTimestamp="2025-09-13 02:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:24:34.962932904 +0000 UTC m=+24.135390552" watchObservedRunningTime="2025-09-13 02:24:34.963011429 +0000 UTC m=+24.135469070" Sep 13 02:24:34.969233 kubelet[2454]: I0913 02:24:34.969170 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nts7l" podStartSLOduration=16.969159081 podStartE2EDuration="16.969159081s" podCreationTimestamp="2025-09-13 02:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:24:34.968964568 +0000 UTC m=+24.141422213" watchObservedRunningTime="2025-09-13 02:24:34.969159081 +0000 UTC m=+24.141616722" Sep 13 02:28:13.635982 systemd[1]: Started 
sshd@7-147.75.203.133:22-194.0.234.19:55558.service. Sep 13 02:28:15.804657 sshd[4063]: Connection closed by authenticating user nobody 194.0.234.19 port 55558 [preauth] Sep 13 02:28:15.807718 systemd[1]: sshd@7-147.75.203.133:22-194.0.234.19:55558.service: Deactivated successfully. Sep 13 02:29:09.055800 systemd[1]: Started sshd@8-147.75.203.133:22-92.118.39.62:35434.service. Sep 13 02:29:09.759484 sshd[4075]: Invalid user cyberpanel from 92.118.39.62 port 35434 Sep 13 02:29:09.934165 sshd[4075]: pam_faillock(sshd:auth): User unknown Sep 13 02:29:09.934450 sshd[4075]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:29:09.934493 sshd[4075]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.118.39.62 Sep 13 02:29:09.934727 sshd[4075]: pam_faillock(sshd:auth): User unknown Sep 13 02:29:11.829944 sshd[4075]: Failed password for invalid user cyberpanel from 92.118.39.62 port 35434 ssh2 Sep 13 02:29:12.130968 sshd[4075]: Connection closed by invalid user cyberpanel 92.118.39.62 port 35434 [preauth] Sep 13 02:29:12.133350 systemd[1]: sshd@8-147.75.203.133:22-92.118.39.62:35434.service: Deactivated successfully. Sep 13 02:30:29.644427 systemd[1]: Started sshd@9-147.75.203.133:22-139.178.89.65:49070.service. Sep 13 02:30:29.735978 sshd[4090]: Accepted publickey for core from 139.178.89.65 port 49070 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:30:29.737007 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:30:29.740360 systemd-logind[1581]: New session 10 of user core. Sep 13 02:30:29.741166 systemd[1]: Started session-10.scope. Sep 13 02:30:29.832001 sshd[4090]: pam_unix(sshd:session): session closed for user core Sep 13 02:30:29.833605 systemd[1]: sshd@9-147.75.203.133:22-139.178.89.65:49070.service: Deactivated successfully. Sep 13 02:30:29.834040 systemd[1]: session-10.scope: Deactivated successfully. 
Sep 13 02:30:29.834422 systemd-logind[1581]: Session 10 logged out. Waiting for processes to exit. Sep 13 02:30:29.834913 systemd-logind[1581]: Removed session 10. Sep 13 02:30:34.842622 systemd[1]: Started sshd@10-147.75.203.133:22-139.178.89.65:42702.service. Sep 13 02:30:34.941051 sshd[4117]: Accepted publickey for core from 139.178.89.65 port 42702 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:30:34.942353 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:30:34.946375 systemd-logind[1581]: New session 11 of user core. Sep 13 02:30:34.947433 systemd[1]: Started session-11.scope. Sep 13 02:30:35.078654 sshd[4117]: pam_unix(sshd:session): session closed for user core Sep 13 02:30:35.080266 systemd[1]: sshd@10-147.75.203.133:22-139.178.89.65:42702.service: Deactivated successfully. Sep 13 02:30:35.080707 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 02:30:35.081049 systemd-logind[1581]: Session 11 logged out. Waiting for processes to exit. Sep 13 02:30:35.081521 systemd-logind[1581]: Removed session 11. Sep 13 02:30:40.087370 systemd[1]: Started sshd@11-147.75.203.133:22-139.178.89.65:50322.service. Sep 13 02:30:40.128544 sshd[4144]: Accepted publickey for core from 139.178.89.65 port 50322 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:30:40.129493 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:30:40.132804 systemd-logind[1581]: New session 12 of user core. Sep 13 02:30:40.133489 systemd[1]: Started session-12.scope. Sep 13 02:30:40.223494 sshd[4144]: pam_unix(sshd:session): session closed for user core Sep 13 02:30:40.224990 systemd[1]: sshd@11-147.75.203.133:22-139.178.89.65:50322.service: Deactivated successfully. Sep 13 02:30:40.225425 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 02:30:40.225783 systemd-logind[1581]: Session 12 logged out. Waiting for processes to exit. 
Sep 13 02:30:40.226292 systemd-logind[1581]: Removed session 12. Sep 13 02:30:45.233291 systemd[1]: Started sshd@12-147.75.203.133:22-139.178.89.65:50330.service. Sep 13 02:30:45.268536 sshd[4171]: Accepted publickey for core from 139.178.89.65 port 50330 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:30:45.269237 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:30:45.271675 systemd-logind[1581]: New session 13 of user core. Sep 13 02:30:45.272138 systemd[1]: Started session-13.scope. Sep 13 02:30:45.357323 sshd[4171]: pam_unix(sshd:session): session closed for user core Sep 13 02:30:45.359020 systemd[1]: sshd@12-147.75.203.133:22-139.178.89.65:50330.service: Deactivated successfully. Sep 13 02:30:45.359519 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 02:30:45.359863 systemd-logind[1581]: Session 13 logged out. Waiting for processes to exit. Sep 13 02:30:45.360349 systemd-logind[1581]: Removed session 13. Sep 13 02:30:50.366479 systemd[1]: Started sshd@13-147.75.203.133:22-139.178.89.65:45232.service. Sep 13 02:30:50.402677 sshd[4199]: Accepted publickey for core from 139.178.89.65 port 45232 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:30:50.403353 sshd[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:30:50.405497 systemd-logind[1581]: New session 14 of user core. Sep 13 02:30:50.406029 systemd[1]: Started session-14.scope. Sep 13 02:30:50.490115 sshd[4199]: pam_unix(sshd:session): session closed for user core Sep 13 02:30:50.491893 systemd[1]: sshd@13-147.75.203.133:22-139.178.89.65:45232.service: Deactivated successfully. Sep 13 02:30:50.492238 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 02:30:50.492630 systemd-logind[1581]: Session 14 logged out. Waiting for processes to exit. Sep 13 02:30:50.493218 systemd[1]: Started sshd@14-147.75.203.133:22-139.178.89.65:45238.service. 
Sep 13 02:30:50.493703 systemd-logind[1581]: Removed session 14. Sep 13 02:30:50.534297 sshd[4225]: Accepted publickey for core from 139.178.89.65 port 45238 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:30:50.535123 sshd[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:30:50.537882 systemd-logind[1581]: New session 15 of user core. Sep 13 02:30:50.538526 systemd[1]: Started session-15.scope. Sep 13 02:30:50.643417 sshd[4225]: pam_unix(sshd:session): session closed for user core Sep 13 02:30:50.645212 systemd[1]: sshd@14-147.75.203.133:22-139.178.89.65:45238.service: Deactivated successfully. Sep 13 02:30:50.645609 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 02:30:50.645970 systemd-logind[1581]: Session 15 logged out. Waiting for processes to exit. Sep 13 02:30:50.646636 systemd[1]: Started sshd@15-147.75.203.133:22-139.178.89.65:45248.service. Sep 13 02:30:50.647077 systemd-logind[1581]: Removed session 15. Sep 13 02:30:50.676564 sshd[4249]: Accepted publickey for core from 139.178.89.65 port 45248 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:30:50.677386 sshd[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:30:50.679605 systemd-logind[1581]: New session 16 of user core. Sep 13 02:30:50.680113 systemd[1]: Started session-16.scope. Sep 13 02:30:50.766929 sshd[4249]: pam_unix(sshd:session): session closed for user core Sep 13 02:30:50.768492 systemd[1]: sshd@15-147.75.203.133:22-139.178.89.65:45248.service: Deactivated successfully. Sep 13 02:30:50.768989 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 02:30:50.769493 systemd-logind[1581]: Session 16 logged out. Waiting for processes to exit. Sep 13 02:30:50.770007 systemd-logind[1581]: Removed session 16. Sep 13 02:30:55.775424 systemd[1]: Started sshd@16-147.75.203.133:22-139.178.89.65:45260.service. 
Sep 13 02:30:55.805712 sshd[4276]: Accepted publickey for core from 139.178.89.65 port 45260 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:30:55.806456 sshd[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:30:55.808973 systemd-logind[1581]: New session 17 of user core. Sep 13 02:30:55.809582 systemd[1]: Started session-17.scope. Sep 13 02:30:55.897929 sshd[4276]: pam_unix(sshd:session): session closed for user core Sep 13 02:30:55.899428 systemd[1]: sshd@16-147.75.203.133:22-139.178.89.65:45260.service: Deactivated successfully. Sep 13 02:30:55.899922 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 02:30:55.900345 systemd-logind[1581]: Session 17 logged out. Waiting for processes to exit. Sep 13 02:30:55.900796 systemd-logind[1581]: Removed session 17. Sep 13 02:31:00.905713 systemd[1]: Started sshd@17-147.75.203.133:22-139.178.89.65:43520.service. Sep 13 02:31:00.947312 sshd[4301]: Accepted publickey for core from 139.178.89.65 port 43520 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:00.948199 sshd[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:00.951161 systemd-logind[1581]: New session 18 of user core. Sep 13 02:31:00.951984 systemd[1]: Started session-18.scope. Sep 13 02:31:01.041889 sshd[4301]: pam_unix(sshd:session): session closed for user core Sep 13 02:31:01.043692 systemd[1]: sshd@17-147.75.203.133:22-139.178.89.65:43520.service: Deactivated successfully. Sep 13 02:31:01.044040 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 02:31:01.044406 systemd-logind[1581]: Session 18 logged out. Waiting for processes to exit. Sep 13 02:31:01.044983 systemd[1]: Started sshd@18-147.75.203.133:22-139.178.89.65:43528.service. Sep 13 02:31:01.045487 systemd-logind[1581]: Removed session 18. 
Sep 13 02:31:01.075895 sshd[4325]: Accepted publickey for core from 139.178.89.65 port 43528 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:01.076627 sshd[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:01.079233 systemd-logind[1581]: New session 19 of user core. Sep 13 02:31:01.079780 systemd[1]: Started session-19.scope. Sep 13 02:31:01.357254 sshd[4325]: pam_unix(sshd:session): session closed for user core Sep 13 02:31:01.364613 systemd[1]: sshd@18-147.75.203.133:22-139.178.89.65:43528.service: Deactivated successfully. Sep 13 02:31:01.364941 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 02:31:01.365395 systemd-logind[1581]: Session 19 logged out. Waiting for processes to exit. Sep 13 02:31:01.365990 systemd[1]: Started sshd@19-147.75.203.133:22-139.178.89.65:43530.service. Sep 13 02:31:01.366496 systemd-logind[1581]: Removed session 19. Sep 13 02:31:01.407395 sshd[4347]: Accepted publickey for core from 139.178.89.65 port 43530 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:01.408262 sshd[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:01.411059 systemd-logind[1581]: New session 20 of user core. Sep 13 02:31:01.411713 systemd[1]: Started session-20.scope. Sep 13 02:31:02.071369 sshd[4347]: pam_unix(sshd:session): session closed for user core Sep 13 02:31:02.074773 systemd[1]: sshd@19-147.75.203.133:22-139.178.89.65:43530.service: Deactivated successfully. Sep 13 02:31:02.075468 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 02:31:02.076065 systemd-logind[1581]: Session 20 logged out. Waiting for processes to exit. Sep 13 02:31:02.077274 systemd[1]: Started sshd@20-147.75.203.133:22-139.178.89.65:43540.service. Sep 13 02:31:02.078136 systemd-logind[1581]: Removed session 20. 
Sep 13 02:31:02.127633 sshd[4378]: Accepted publickey for core from 139.178.89.65 port 43540 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:02.128952 sshd[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:02.132564 systemd-logind[1581]: New session 21 of user core. Sep 13 02:31:02.133368 systemd[1]: Started session-21.scope. Sep 13 02:31:02.358718 sshd[4378]: pam_unix(sshd:session): session closed for user core Sep 13 02:31:02.362751 systemd[1]: sshd@20-147.75.203.133:22-139.178.89.65:43540.service: Deactivated successfully. Sep 13 02:31:02.363557 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 02:31:02.364357 systemd-logind[1581]: Session 21 logged out. Waiting for processes to exit. Sep 13 02:31:02.365637 systemd[1]: Started sshd@21-147.75.203.133:22-139.178.89.65:43554.service. Sep 13 02:31:02.366643 systemd-logind[1581]: Removed session 21. Sep 13 02:31:02.442861 sshd[4404]: Accepted publickey for core from 139.178.89.65 port 43554 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:02.444376 sshd[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:02.449548 systemd-logind[1581]: New session 22 of user core. Sep 13 02:31:02.450637 systemd[1]: Started session-22.scope. Sep 13 02:31:02.582783 sshd[4404]: pam_unix(sshd:session): session closed for user core Sep 13 02:31:02.584346 systemd[1]: sshd@21-147.75.203.133:22-139.178.89.65:43554.service: Deactivated successfully. Sep 13 02:31:02.584764 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 02:31:02.585070 systemd-logind[1581]: Session 22 logged out. Waiting for processes to exit. Sep 13 02:31:02.585658 systemd-logind[1581]: Removed session 22. Sep 13 02:31:07.592061 systemd[1]: Started sshd@22-147.75.203.133:22-139.178.89.65:43560.service. 
Sep 13 02:31:07.627523 sshd[4434]: Accepted publickey for core from 139.178.89.65 port 43560 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:07.628209 sshd[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:07.630305 systemd-logind[1581]: New session 23 of user core. Sep 13 02:31:07.630795 systemd[1]: Started session-23.scope. Sep 13 02:31:07.712564 sshd[4434]: pam_unix(sshd:session): session closed for user core Sep 13 02:31:07.713969 systemd[1]: sshd@22-147.75.203.133:22-139.178.89.65:43560.service: Deactivated successfully. Sep 13 02:31:07.714412 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 02:31:07.714794 systemd-logind[1581]: Session 23 logged out. Waiting for processes to exit. Sep 13 02:31:07.715318 systemd-logind[1581]: Removed session 23. Sep 13 02:31:12.720909 systemd[1]: Started sshd@23-147.75.203.133:22-139.178.89.65:50356.service. Sep 13 02:31:12.800649 sshd[4461]: Accepted publickey for core from 139.178.89.65 port 50356 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:12.802075 sshd[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:12.806735 systemd-logind[1581]: New session 24 of user core. Sep 13 02:31:12.807773 systemd[1]: Started session-24.scope. Sep 13 02:31:12.891796 sshd[4461]: pam_unix(sshd:session): session closed for user core Sep 13 02:31:12.893317 systemd[1]: sshd@23-147.75.203.133:22-139.178.89.65:50356.service: Deactivated successfully. Sep 13 02:31:12.893713 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 02:31:12.894002 systemd-logind[1581]: Session 24 logged out. Waiting for processes to exit. Sep 13 02:31:12.894613 systemd-logind[1581]: Removed session 24. Sep 13 02:31:17.902793 systemd[1]: Started sshd@24-147.75.203.133:22-139.178.89.65:50368.service. 
Sep 13 02:31:17.986128 sshd[4486]: Accepted publickey for core from 139.178.89.65 port 50368 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:17.987954 sshd[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:17.993918 systemd-logind[1581]: New session 25 of user core. Sep 13 02:31:17.995323 systemd[1]: Started session-25.scope. Sep 13 02:31:18.100208 sshd[4486]: pam_unix(sshd:session): session closed for user core Sep 13 02:31:18.102024 systemd[1]: Started sshd@25-147.75.203.133:22-139.178.89.65:50380.service. Sep 13 02:31:18.102330 systemd[1]: sshd@24-147.75.203.133:22-139.178.89.65:50368.service: Deactivated successfully. Sep 13 02:31:18.102666 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 02:31:18.102964 systemd-logind[1581]: Session 25 logged out. Waiting for processes to exit. Sep 13 02:31:18.103508 systemd-logind[1581]: Removed session 25. Sep 13 02:31:18.150605 sshd[4510]: Accepted publickey for core from 139.178.89.65 port 50380 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:18.151995 sshd[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:18.156461 systemd-logind[1581]: New session 26 of user core. Sep 13 02:31:18.157419 systemd[1]: Started session-26.scope. Sep 13 02:31:19.526834 env[1544]: time="2025-09-13T02:31:19.526790894Z" level=info msg="StopContainer for \"13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486\" with timeout 30 (s)" Sep 13 02:31:19.527400 env[1544]: time="2025-09-13T02:31:19.527378678Z" level=info msg="Stop container \"13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486\" with signal terminated" Sep 13 02:31:19.532043 systemd[1]: cri-containerd-13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486.scope: Deactivated successfully. 
Sep 13 02:31:19.537044 env[1544]: time="2025-09-13T02:31:19.537009569Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 02:31:19.539682 env[1544]: time="2025-09-13T02:31:19.539666333Z" level=info msg="StopContainer for \"6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0\" with timeout 2 (s)" Sep 13 02:31:19.539785 env[1544]: time="2025-09-13T02:31:19.539773611Z" level=info msg="Stop container \"6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0\" with signal terminated" Sep 13 02:31:19.540941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486-rootfs.mount: Deactivated successfully. Sep 13 02:31:19.542750 systemd-networkd[1301]: lxc_health: Link DOWN Sep 13 02:31:19.542753 systemd-networkd[1301]: lxc_health: Lost carrier Sep 13 02:31:19.584333 env[1544]: time="2025-09-13T02:31:19.584219976Z" level=info msg="shim disconnected" id=13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486 Sep 13 02:31:19.584700 env[1544]: time="2025-09-13T02:31:19.584335414Z" level=warning msg="cleaning up after shim disconnected" id=13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486 namespace=k8s.io Sep 13 02:31:19.584700 env[1544]: time="2025-09-13T02:31:19.584398949Z" level=info msg="cleaning up dead shim" Sep 13 02:31:19.600941 env[1544]: time="2025-09-13T02:31:19.600867219Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:31:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4581 runtime=io.containerd.runc.v2\n" Sep 13 02:31:19.603227 env[1544]: time="2025-09-13T02:31:19.603133496Z" level=info msg="StopContainer for \"13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486\" returns successfully" Sep 13 
02:31:19.604407 env[1544]: time="2025-09-13T02:31:19.604309034Z" level=info msg="StopPodSandbox for \"8143cc75114723e7c216ba80e96d5aad6ee63ad5d6583c11dde54b92767ef6f0\"" Sep 13 02:31:19.604603 env[1544]: time="2025-09-13T02:31:19.604470000Z" level=info msg="Container to stop \"13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 02:31:19.610179 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8143cc75114723e7c216ba80e96d5aad6ee63ad5d6583c11dde54b92767ef6f0-shm.mount: Deactivated successfully. Sep 13 02:31:19.619196 systemd[1]: cri-containerd-8143cc75114723e7c216ba80e96d5aad6ee63ad5d6583c11dde54b92767ef6f0.scope: Deactivated successfully. Sep 13 02:31:19.637089 systemd[1]: cri-containerd-6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0.scope: Deactivated successfully. Sep 13 02:31:19.637690 systemd[1]: cri-containerd-6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0.scope: Consumed 6.342s CPU time. Sep 13 02:31:19.655473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8143cc75114723e7c216ba80e96d5aad6ee63ad5d6583c11dde54b92767ef6f0-rootfs.mount: Deactivated successfully. Sep 13 02:31:19.662789 env[1544]: time="2025-09-13T02:31:19.662729905Z" level=info msg="shim disconnected" id=6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0 Sep 13 02:31:19.662984 env[1544]: time="2025-09-13T02:31:19.662793262Z" level=warning msg="cleaning up after shim disconnected" id=6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0 namespace=k8s.io Sep 13 02:31:19.662984 env[1544]: time="2025-09-13T02:31:19.662809630Z" level=info msg="cleaning up dead shim" Sep 13 02:31:19.663276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0-rootfs.mount: Deactivated successfully. 
Sep 13 02:31:19.668768 env[1544]: time="2025-09-13T02:31:19.668695701Z" level=info msg="shim disconnected" id=8143cc75114723e7c216ba80e96d5aad6ee63ad5d6583c11dde54b92767ef6f0 Sep 13 02:31:19.668998 env[1544]: time="2025-09-13T02:31:19.668954855Z" level=warning msg="cleaning up after shim disconnected" id=8143cc75114723e7c216ba80e96d5aad6ee63ad5d6583c11dde54b92767ef6f0 namespace=k8s.io Sep 13 02:31:19.668998 env[1544]: time="2025-09-13T02:31:19.668991311Z" level=info msg="cleaning up dead shim" Sep 13 02:31:19.671932 env[1544]: time="2025-09-13T02:31:19.671890575Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:31:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4628 runtime=io.containerd.runc.v2\n" Sep 13 02:31:19.673068 env[1544]: time="2025-09-13T02:31:19.673033121Z" level=info msg="StopContainer for \"6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0\" returns successfully" Sep 13 02:31:19.673651 env[1544]: time="2025-09-13T02:31:19.673619657Z" level=info msg="StopPodSandbox for \"dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf\"" Sep 13 02:31:19.673740 env[1544]: time="2025-09-13T02:31:19.673694937Z" level=info msg="Container to stop \"28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 02:31:19.673740 env[1544]: time="2025-09-13T02:31:19.673717293Z" level=info msg="Container to stop \"3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 02:31:19.673848 env[1544]: time="2025-09-13T02:31:19.673735528Z" level=info msg="Container to stop \"a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 02:31:19.673848 env[1544]: time="2025-09-13T02:31:19.673751423Z" level=info msg="Container to stop 
\"6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 02:31:19.673848 env[1544]: time="2025-09-13T02:31:19.673770969Z" level=info msg="Container to stop \"4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 02:31:19.678469 env[1544]: time="2025-09-13T02:31:19.678404012Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:31:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4641 runtime=io.containerd.runc.v2\n" Sep 13 02:31:19.678840 env[1544]: time="2025-09-13T02:31:19.678786181Z" level=info msg="TearDown network for sandbox \"8143cc75114723e7c216ba80e96d5aad6ee63ad5d6583c11dde54b92767ef6f0\" successfully" Sep 13 02:31:19.678840 env[1544]: time="2025-09-13T02:31:19.678815452Z" level=info msg="StopPodSandbox for \"8143cc75114723e7c216ba80e96d5aad6ee63ad5d6583c11dde54b92767ef6f0\" returns successfully" Sep 13 02:31:19.681401 systemd[1]: cri-containerd-dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf.scope: Deactivated successfully. 
Sep 13 02:31:19.701296 env[1544]: time="2025-09-13T02:31:19.701221309Z" level=info msg="shim disconnected" id=dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf Sep 13 02:31:19.701513 env[1544]: time="2025-09-13T02:31:19.701303519Z" level=warning msg="cleaning up after shim disconnected" id=dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf namespace=k8s.io Sep 13 02:31:19.701513 env[1544]: time="2025-09-13T02:31:19.701329436Z" level=info msg="cleaning up dead shim" Sep 13 02:31:19.709756 env[1544]: time="2025-09-13T02:31:19.709693999Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:31:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4671 runtime=io.containerd.runc.v2\n" Sep 13 02:31:19.710100 env[1544]: time="2025-09-13T02:31:19.710045218Z" level=info msg="TearDown network for sandbox \"dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf\" successfully" Sep 13 02:31:19.710100 env[1544]: time="2025-09-13T02:31:19.710076162Z" level=info msg="StopPodSandbox for \"dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf\" returns successfully" Sep 13 02:31:19.732942 kubelet[2454]: I0913 02:31:19.732876 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/859ad558-0a9c-47fc-9412-b545b356a61e-cilium-config-path\") pod \"859ad558-0a9c-47fc-9412-b545b356a61e\" (UID: \"859ad558-0a9c-47fc-9412-b545b356a61e\") " Sep 13 02:31:19.732942 kubelet[2454]: I0913 02:31:19.732940 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6bsd\" (UniqueName: \"kubernetes.io/projected/859ad558-0a9c-47fc-9412-b545b356a61e-kube-api-access-k6bsd\") pod \"859ad558-0a9c-47fc-9412-b545b356a61e\" (UID: \"859ad558-0a9c-47fc-9412-b545b356a61e\") " Sep 13 02:31:19.735811 kubelet[2454]: I0913 02:31:19.735769 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/859ad558-0a9c-47fc-9412-b545b356a61e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "859ad558-0a9c-47fc-9412-b545b356a61e" (UID: "859ad558-0a9c-47fc-9412-b545b356a61e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 02:31:19.736813 kubelet[2454]: I0913 02:31:19.736777 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859ad558-0a9c-47fc-9412-b545b356a61e-kube-api-access-k6bsd" (OuterVolumeSpecName: "kube-api-access-k6bsd") pod "859ad558-0a9c-47fc-9412-b545b356a61e" (UID: "859ad558-0a9c-47fc-9412-b545b356a61e"). InnerVolumeSpecName "kube-api-access-k6bsd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 02:31:19.833562 kubelet[2454]: I0913 02:31:19.833369 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-bpf-maps\") pod \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " Sep 13 02:31:19.833562 kubelet[2454]: I0913 02:31:19.833436 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-hostproc\") pod \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " Sep 13 02:31:19.833562 kubelet[2454]: I0913 02:31:19.833496 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a521111b-2ecd-4d41-a2d5-bf26b3b73592-clustermesh-secrets\") pod \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " Sep 13 02:31:19.833562 kubelet[2454]: I0913 02:31:19.833549 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/a521111b-2ecd-4d41-a2d5-bf26b3b73592-cilium-config-path\") pod \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " Sep 13 02:31:19.834408 kubelet[2454]: I0913 02:31:19.833560 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a521111b-2ecd-4d41-a2d5-bf26b3b73592" (UID: "a521111b-2ecd-4d41-a2d5-bf26b3b73592"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:19.834408 kubelet[2454]: I0913 02:31:19.833591 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-xtables-lock\") pod \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " Sep 13 02:31:19.834408 kubelet[2454]: I0913 02:31:19.833620 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-hostproc" (OuterVolumeSpecName: "hostproc") pod "a521111b-2ecd-4d41-a2d5-bf26b3b73592" (UID: "a521111b-2ecd-4d41-a2d5-bf26b3b73592"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:19.834408 kubelet[2454]: I0913 02:31:19.833640 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a521111b-2ecd-4d41-a2d5-bf26b3b73592" (UID: "a521111b-2ecd-4d41-a2d5-bf26b3b73592"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:19.834408 kubelet[2454]: I0913 02:31:19.833701 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-cilium-run\") pod \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " Sep 13 02:31:19.835190 kubelet[2454]: I0913 02:31:19.833753 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-host-proc-sys-kernel\") pod \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " Sep 13 02:31:19.835190 kubelet[2454]: I0913 02:31:19.833804 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a521111b-2ecd-4d41-a2d5-bf26b3b73592-hubble-tls\") pod \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " Sep 13 02:31:19.835190 kubelet[2454]: I0913 02:31:19.833847 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-cni-path\") pod \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " Sep 13 02:31:19.835190 kubelet[2454]: I0913 02:31:19.833845 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a521111b-2ecd-4d41-a2d5-bf26b3b73592" (UID: "a521111b-2ecd-4d41-a2d5-bf26b3b73592"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:19.835190 kubelet[2454]: I0913 02:31:19.833882 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a521111b-2ecd-4d41-a2d5-bf26b3b73592" (UID: "a521111b-2ecd-4d41-a2d5-bf26b3b73592"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:19.835744 kubelet[2454]: I0913 02:31:19.833900 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2g96\" (UniqueName: \"kubernetes.io/projected/a521111b-2ecd-4d41-a2d5-bf26b3b73592-kube-api-access-q2g96\") pod \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " Sep 13 02:31:19.835744 kubelet[2454]: I0913 02:31:19.834004 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-cni-path" (OuterVolumeSpecName: "cni-path") pod "a521111b-2ecd-4d41-a2d5-bf26b3b73592" (UID: "a521111b-2ecd-4d41-a2d5-bf26b3b73592"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:19.835744 kubelet[2454]: I0913 02:31:19.834051 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-lib-modules\") pod \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " Sep 13 02:31:19.835744 kubelet[2454]: I0913 02:31:19.834199 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-etc-cni-netd\") pod \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " Sep 13 02:31:19.835744 kubelet[2454]: I0913 02:31:19.834249 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a521111b-2ecd-4d41-a2d5-bf26b3b73592" (UID: "a521111b-2ecd-4d41-a2d5-bf26b3b73592"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:19.835744 kubelet[2454]: I0913 02:31:19.834287 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-cilium-cgroup\") pod \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " Sep 13 02:31:19.836424 kubelet[2454]: I0913 02:31:19.834329 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a521111b-2ecd-4d41-a2d5-bf26b3b73592" (UID: "a521111b-2ecd-4d41-a2d5-bf26b3b73592"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:19.836424 kubelet[2454]: I0913 02:31:19.834364 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a521111b-2ecd-4d41-a2d5-bf26b3b73592" (UID: "a521111b-2ecd-4d41-a2d5-bf26b3b73592"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:19.836424 kubelet[2454]: I0913 02:31:19.834457 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-host-proc-sys-net\") pod \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\" (UID: \"a521111b-2ecd-4d41-a2d5-bf26b3b73592\") " Sep 13 02:31:19.836424 kubelet[2454]: I0913 02:31:19.834528 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a521111b-2ecd-4d41-a2d5-bf26b3b73592" (UID: "a521111b-2ecd-4d41-a2d5-bf26b3b73592"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:19.836424 kubelet[2454]: I0913 02:31:19.834658 2454 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-bpf-maps\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:19.836424 kubelet[2454]: I0913 02:31:19.834724 2454 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-hostproc\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:19.837041 kubelet[2454]: I0913 02:31:19.834775 2454 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-xtables-lock\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:19.837041 kubelet[2454]: I0913 02:31:19.834827 2454 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-cilium-run\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:19.837041 kubelet[2454]: I0913 02:31:19.834869 2454 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:19.837041 kubelet[2454]: I0913 02:31:19.834903 2454 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-cni-path\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:19.837041 kubelet[2454]: I0913 02:31:19.834954 2454 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k6bsd\" (UniqueName: \"kubernetes.io/projected/859ad558-0a9c-47fc-9412-b545b356a61e-kube-api-access-k6bsd\") on node 
\"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:19.837041 kubelet[2454]: I0913 02:31:19.835001 2454 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-lib-modules\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:19.837041 kubelet[2454]: I0913 02:31:19.835055 2454 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/859ad558-0a9c-47fc-9412-b545b356a61e-cilium-config-path\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:19.837041 kubelet[2454]: I0913 02:31:19.835091 2454 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-etc-cni-netd\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:19.837890 kubelet[2454]: I0913 02:31:19.835117 2454 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-cilium-cgroup\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:19.839426 kubelet[2454]: I0913 02:31:19.839329 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a521111b-2ecd-4d41-a2d5-bf26b3b73592-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a521111b-2ecd-4d41-a2d5-bf26b3b73592" (UID: "a521111b-2ecd-4d41-a2d5-bf26b3b73592"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 02:31:19.840736 kubelet[2454]: I0913 02:31:19.840606 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a521111b-2ecd-4d41-a2d5-bf26b3b73592-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a521111b-2ecd-4d41-a2d5-bf26b3b73592" (UID: "a521111b-2ecd-4d41-a2d5-bf26b3b73592"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 02:31:19.840968 kubelet[2454]: I0913 02:31:19.840867 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a521111b-2ecd-4d41-a2d5-bf26b3b73592-kube-api-access-q2g96" (OuterVolumeSpecName: "kube-api-access-q2g96") pod "a521111b-2ecd-4d41-a2d5-bf26b3b73592" (UID: "a521111b-2ecd-4d41-a2d5-bf26b3b73592"). InnerVolumeSpecName "kube-api-access-q2g96". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 02:31:19.841415 kubelet[2454]: I0913 02:31:19.841309 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a521111b-2ecd-4d41-a2d5-bf26b3b73592-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a521111b-2ecd-4d41-a2d5-bf26b3b73592" (UID: "a521111b-2ecd-4d41-a2d5-bf26b3b73592"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 02:31:19.936352 kubelet[2454]: I0913 02:31:19.936256 2454 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q2g96\" (UniqueName: \"kubernetes.io/projected/a521111b-2ecd-4d41-a2d5-bf26b3b73592-kube-api-access-q2g96\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:19.936352 kubelet[2454]: I0913 02:31:19.936324 2454 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a521111b-2ecd-4d41-a2d5-bf26b3b73592-hubble-tls\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:19.936352 kubelet[2454]: I0913 02:31:19.936359 2454 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a521111b-2ecd-4d41-a2d5-bf26b3b73592-host-proc-sys-net\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:19.936880 kubelet[2454]: I0913 02:31:19.936388 2454 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/a521111b-2ecd-4d41-a2d5-bf26b3b73592-clustermesh-secrets\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:19.936880 kubelet[2454]: I0913 02:31:19.936417 2454 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a521111b-2ecd-4d41-a2d5-bf26b3b73592-cilium-config-path\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:19.986170 kubelet[2454]: I0913 02:31:19.986111 2454 scope.go:117] "RemoveContainer" containerID="13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486" Sep 13 02:31:19.988597 env[1544]: time="2025-09-13T02:31:19.988517351Z" level=info msg="RemoveContainer for \"13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486\"" Sep 13 02:31:19.994653 env[1544]: time="2025-09-13T02:31:19.994579799Z" level=info msg="RemoveContainer for \"13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486\" returns successfully" Sep 13 02:31:19.995138 kubelet[2454]: I0913 02:31:19.995085 2454 scope.go:117] "RemoveContainer" containerID="13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486" Sep 13 02:31:19.995799 env[1544]: time="2025-09-13T02:31:19.995607951Z" level=error msg="ContainerStatus for \"13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486\": not found" Sep 13 02:31:19.996204 kubelet[2454]: E0913 02:31:19.996107 2454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486\": not found" containerID="13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486" Sep 13 02:31:19.996534 kubelet[2454]: I0913 02:31:19.996261 2454 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486"} err="failed to get container status \"13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486\": rpc error: code = NotFound desc = an error occurred when try to find container \"13892e1efae5b1abc80a355bff0c9599be5045d43e9dbcea50ac50de1f469486\": not found" Sep 13 02:31:19.996688 kubelet[2454]: I0913 02:31:19.996552 2454 scope.go:117] "RemoveContainer" containerID="6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0" Sep 13 02:31:19.997701 systemd[1]: Removed slice kubepods-besteffort-pod859ad558_0a9c_47fc_9412_b545b356a61e.slice. Sep 13 02:31:19.999484 env[1544]: time="2025-09-13T02:31:19.999372590Z" level=info msg="RemoveContainer for \"6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0\"" Sep 13 02:31:20.002929 systemd[1]: Removed slice kubepods-burstable-poda521111b_2ecd_4d41_a2d5_bf26b3b73592.slice. Sep 13 02:31:20.003250 systemd[1]: kubepods-burstable-poda521111b_2ecd_4d41_a2d5_bf26b3b73592.slice: Consumed 6.402s CPU time. 
Sep 13 02:31:20.003530 env[1544]: time="2025-09-13T02:31:20.003447000Z" level=info msg="RemoveContainer for \"6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0\" returns successfully" Sep 13 02:31:20.003955 kubelet[2454]: I0913 02:31:20.003890 2454 scope.go:117] "RemoveContainer" containerID="a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841" Sep 13 02:31:20.006588 env[1544]: time="2025-09-13T02:31:20.006463229Z" level=info msg="RemoveContainer for \"a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841\"" Sep 13 02:31:20.010658 env[1544]: time="2025-09-13T02:31:20.010549210Z" level=info msg="RemoveContainer for \"a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841\" returns successfully" Sep 13 02:31:20.011038 kubelet[2454]: I0913 02:31:20.010960 2454 scope.go:117] "RemoveContainer" containerID="3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e" Sep 13 02:31:20.013773 env[1544]: time="2025-09-13T02:31:20.013700932Z" level=info msg="RemoveContainer for \"3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e\"" Sep 13 02:31:20.018009 env[1544]: time="2025-09-13T02:31:20.017940095Z" level=info msg="RemoveContainer for \"3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e\" returns successfully" Sep 13 02:31:20.018326 kubelet[2454]: I0913 02:31:20.018284 2454 scope.go:117] "RemoveContainer" containerID="4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319" Sep 13 02:31:20.020861 env[1544]: time="2025-09-13T02:31:20.020798168Z" level=info msg="RemoveContainer for \"4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319\"" Sep 13 02:31:20.025409 env[1544]: time="2025-09-13T02:31:20.025305430Z" level=info msg="RemoveContainer for \"4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319\" returns successfully" Sep 13 02:31:20.025804 kubelet[2454]: I0913 02:31:20.025707 2454 scope.go:117] "RemoveContainer" 
containerID="28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1" Sep 13 02:31:20.028405 env[1544]: time="2025-09-13T02:31:20.028305895Z" level=info msg="RemoveContainer for \"28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1\"" Sep 13 02:31:20.037199 env[1544]: time="2025-09-13T02:31:20.037084209Z" level=info msg="RemoveContainer for \"28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1\" returns successfully" Sep 13 02:31:20.037576 kubelet[2454]: I0913 02:31:20.037519 2454 scope.go:117] "RemoveContainer" containerID="6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0" Sep 13 02:31:20.038301 env[1544]: time="2025-09-13T02:31:20.038083507Z" level=error msg="ContainerStatus for \"6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0\": not found" Sep 13 02:31:20.038566 kubelet[2454]: E0913 02:31:20.038502 2454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0\": not found" containerID="6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0" Sep 13 02:31:20.038706 kubelet[2454]: I0913 02:31:20.038565 2454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0"} err="failed to get container status \"6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f3c5b84b2f405a6ad29fd8b86983883ea8b335cf60f272e56a8973567d59ca0\": not found" Sep 13 02:31:20.038706 kubelet[2454]: I0913 02:31:20.038620 2454 scope.go:117] "RemoveContainer" 
containerID="a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841" Sep 13 02:31:20.039163 env[1544]: time="2025-09-13T02:31:20.039024683Z" level=error msg="ContainerStatus for \"a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841\": not found" Sep 13 02:31:20.039593 kubelet[2454]: E0913 02:31:20.039471 2454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841\": not found" containerID="a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841" Sep 13 02:31:20.039803 kubelet[2454]: I0913 02:31:20.039566 2454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841"} err="failed to get container status \"a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841\": rpc error: code = NotFound desc = an error occurred when try to find container \"a1b74e477d8d088021dea1599b60e0549c1e82404a5acf78a043b4c8ee7f3841\": not found" Sep 13 02:31:20.039803 kubelet[2454]: I0913 02:31:20.039640 2454 scope.go:117] "RemoveContainer" containerID="3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e" Sep 13 02:31:20.040304 env[1544]: time="2025-09-13T02:31:20.040118169Z" level=error msg="ContainerStatus for \"3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e\": not found" Sep 13 02:31:20.040560 kubelet[2454]: E0913 02:31:20.040495 2454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e\": not found" containerID="3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e" Sep 13 02:31:20.040724 kubelet[2454]: I0913 02:31:20.040558 2454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e"} err="failed to get container status \"3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e\": rpc error: code = NotFound desc = an error occurred when try to find container \"3805e6aeaade70b3c3710fd8c657c03e2f5a8250af11917635fdfffbec8bd88e\": not found" Sep 13 02:31:20.040724 kubelet[2454]: I0913 02:31:20.040614 2454 scope.go:117] "RemoveContainer" containerID="4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319" Sep 13 02:31:20.041221 env[1544]: time="2025-09-13T02:31:20.041048611Z" level=error msg="ContainerStatus for \"4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319\": not found" Sep 13 02:31:20.041540 kubelet[2454]: E0913 02:31:20.041449 2454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319\": not found" containerID="4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319" Sep 13 02:31:20.041675 kubelet[2454]: I0913 02:31:20.041551 2454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319"} err="failed to get container status \"4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"4c5e53f9a601f1922e6e0ad2b3bfa9df4cf9f52002f34f8c5c97dbaa21d12319\": not found" Sep 13 02:31:20.041675 kubelet[2454]: I0913 02:31:20.041623 2454 scope.go:117] "RemoveContainer" containerID="28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1" Sep 13 02:31:20.042281 env[1544]: time="2025-09-13T02:31:20.042089725Z" level=error msg="ContainerStatus for \"28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1\": not found" Sep 13 02:31:20.042562 kubelet[2454]: E0913 02:31:20.042515 2454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1\": not found" containerID="28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1" Sep 13 02:31:20.042682 kubelet[2454]: I0913 02:31:20.042581 2454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1"} err="failed to get container status \"28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"28137c78a5f9db532f00ecd603017fb76649cdcc9070a6b99f5f6e03b52c82a1\": not found" Sep 13 02:31:20.534287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf-rootfs.mount: Deactivated successfully. Sep 13 02:31:20.534381 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc0012246cb75a5013f35c569aec41447f51529111092b7804e97d481c22bccf-shm.mount: Deactivated successfully. 
Sep 13 02:31:20.534417 systemd[1]: var-lib-kubelet-pods-859ad558\x2d0a9c\x2d47fc\x2d9412\x2db545b356a61e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk6bsd.mount: Deactivated successfully. Sep 13 02:31:20.534452 systemd[1]: var-lib-kubelet-pods-a521111b\x2d2ecd\x2d4d41\x2da2d5\x2dbf26b3b73592-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq2g96.mount: Deactivated successfully. Sep 13 02:31:20.534485 systemd[1]: var-lib-kubelet-pods-a521111b\x2d2ecd\x2d4d41\x2da2d5\x2dbf26b3b73592-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 02:31:20.534517 systemd[1]: var-lib-kubelet-pods-a521111b\x2d2ecd\x2d4d41\x2da2d5\x2dbf26b3b73592-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 02:31:20.887071 kubelet[2454]: I0913 02:31:20.886859 2454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="859ad558-0a9c-47fc-9412-b545b356a61e" path="/var/lib/kubelet/pods/859ad558-0a9c-47fc-9412-b545b356a61e/volumes" Sep 13 02:31:20.888187 kubelet[2454]: I0913 02:31:20.888098 2454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a521111b-2ecd-4d41-a2d5-bf26b3b73592" path="/var/lib/kubelet/pods/a521111b-2ecd-4d41-a2d5-bf26b3b73592/volumes" Sep 13 02:31:20.988515 kubelet[2454]: E0913 02:31:20.988443 2454 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 02:31:21.479746 sshd[4510]: pam_unix(sshd:session): session closed for user core Sep 13 02:31:21.487113 systemd[1]: sshd@25-147.75.203.133:22-139.178.89.65:50380.service: Deactivated successfully. Sep 13 02:31:21.487702 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 02:31:21.488078 systemd-logind[1581]: Session 26 logged out. Waiting for processes to exit. 
Sep 13 02:31:21.488816 systemd[1]: Started sshd@26-147.75.203.133:22-139.178.89.65:44778.service. Sep 13 02:31:21.489363 systemd-logind[1581]: Removed session 26. Sep 13 02:31:21.523815 sshd[4690]: Accepted publickey for core from 139.178.89.65 port 44778 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:21.524498 sshd[4690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:21.526826 systemd-logind[1581]: New session 27 of user core. Sep 13 02:31:21.527333 systemd[1]: Started session-27.scope. Sep 13 02:31:22.130715 sshd[4690]: pam_unix(sshd:session): session closed for user core Sep 13 02:31:22.137777 systemd[1]: sshd@26-147.75.203.133:22-139.178.89.65:44778.service: Deactivated successfully. Sep 13 02:31:22.139444 systemd[1]: session-27.scope: Deactivated successfully. Sep 13 02:31:22.141669 systemd-logind[1581]: Session 27 logged out. Waiting for processes to exit. Sep 13 02:31:22.145020 systemd[1]: Started sshd@27-147.75.203.133:22-139.178.89.65:44782.service. Sep 13 02:31:22.147368 systemd-logind[1581]: Removed session 27. Sep 13 02:31:22.152052 kubelet[2454]: I0913 02:31:22.151980 2454 memory_manager.go:355] "RemoveStaleState removing state" podUID="859ad558-0a9c-47fc-9412-b545b356a61e" containerName="cilium-operator" Sep 13 02:31:22.152052 kubelet[2454]: I0913 02:31:22.152039 2454 memory_manager.go:355] "RemoveStaleState removing state" podUID="a521111b-2ecd-4d41-a2d5-bf26b3b73592" containerName="cilium-agent" Sep 13 02:31:22.164820 systemd[1]: Created slice kubepods-burstable-pod81abed54_98cc_455c_9ae5_da1f1537e418.slice. Sep 13 02:31:22.207010 sshd[4713]: Accepted publickey for core from 139.178.89.65 port 44782 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:22.207925 sshd[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:22.210641 systemd-logind[1581]: New session 28 of user core. 
Sep 13 02:31:22.211150 systemd[1]: Started session-28.scope. Sep 13 02:31:22.248757 kubelet[2454]: I0913 02:31:22.248689 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-cni-path\") pod \"cilium-qmqxb\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " pod="kube-system/cilium-qmqxb" Sep 13 02:31:22.248983 kubelet[2454]: I0913 02:31:22.248772 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-xtables-lock\") pod \"cilium-qmqxb\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " pod="kube-system/cilium-qmqxb" Sep 13 02:31:22.248983 kubelet[2454]: I0913 02:31:22.248823 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81abed54-98cc-455c-9ae5-da1f1537e418-clustermesh-secrets\") pod \"cilium-qmqxb\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " pod="kube-system/cilium-qmqxb" Sep 13 02:31:22.248983 kubelet[2454]: I0913 02:31:22.248866 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81abed54-98cc-455c-9ae5-da1f1537e418-hubble-tls\") pod \"cilium-qmqxb\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " pod="kube-system/cilium-qmqxb" Sep 13 02:31:22.248983 kubelet[2454]: I0913 02:31:22.248949 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-host-proc-sys-kernel\") pod \"cilium-qmqxb\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " pod="kube-system/cilium-qmqxb" Sep 13 02:31:22.249427 kubelet[2454]: I0913 02:31:22.248995 2454 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-bpf-maps\") pod \"cilium-qmqxb\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " pod="kube-system/cilium-qmqxb" Sep 13 02:31:22.249427 kubelet[2454]: I0913 02:31:22.249156 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81abed54-98cc-455c-9ae5-da1f1537e418-cilium-config-path\") pod \"cilium-qmqxb\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " pod="kube-system/cilium-qmqxb" Sep 13 02:31:22.249427 kubelet[2454]: I0913 02:31:22.249243 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-hostproc\") pod \"cilium-qmqxb\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " pod="kube-system/cilium-qmqxb" Sep 13 02:31:22.249427 kubelet[2454]: I0913 02:31:22.249290 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-etc-cni-netd\") pod \"cilium-qmqxb\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " pod="kube-system/cilium-qmqxb" Sep 13 02:31:22.249427 kubelet[2454]: I0913 02:31:22.249337 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/81abed54-98cc-455c-9ae5-da1f1537e418-cilium-ipsec-secrets\") pod \"cilium-qmqxb\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " pod="kube-system/cilium-qmqxb" Sep 13 02:31:22.249427 kubelet[2454]: I0913 02:31:22.249386 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-cilium-cgroup\") pod \"cilium-qmqxb\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " pod="kube-system/cilium-qmqxb" Sep 13 02:31:22.249961 kubelet[2454]: I0913 02:31:22.249460 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-lib-modules\") pod \"cilium-qmqxb\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " pod="kube-system/cilium-qmqxb" Sep 13 02:31:22.249961 kubelet[2454]: I0913 02:31:22.249532 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-host-proc-sys-net\") pod \"cilium-qmqxb\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " pod="kube-system/cilium-qmqxb" Sep 13 02:31:22.249961 kubelet[2454]: I0913 02:31:22.249578 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4jzp\" (UniqueName: \"kubernetes.io/projected/81abed54-98cc-455c-9ae5-da1f1537e418-kube-api-access-v4jzp\") pod \"cilium-qmqxb\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " pod="kube-system/cilium-qmqxb" Sep 13 02:31:22.249961 kubelet[2454]: I0913 02:31:22.249621 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-cilium-run\") pod \"cilium-qmqxb\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " pod="kube-system/cilium-qmqxb" Sep 13 02:31:22.342709 sshd[4713]: pam_unix(sshd:session): session closed for user core Sep 13 02:31:22.344614 systemd[1]: sshd@27-147.75.203.133:22-139.178.89.65:44782.service: Deactivated successfully. Sep 13 02:31:22.344964 systemd[1]: session-28.scope: Deactivated successfully. 
Sep 13 02:31:22.345304 systemd-logind[1581]: Session 28 logged out. Waiting for processes to exit. Sep 13 02:31:22.345944 systemd[1]: Started sshd@28-147.75.203.133:22-139.178.89.65:44790.service. Sep 13 02:31:22.346366 systemd-logind[1581]: Removed session 28. Sep 13 02:31:22.350238 kubelet[2454]: E0913 02:31:22.350206 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls kube-api-access-v4jzp], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-qmqxb" podUID="81abed54-98cc-455c-9ae5-da1f1537e418" Sep 13 02:31:22.376748 sshd[4738]: Accepted publickey for core from 139.178.89.65 port 44790 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:22.379995 sshd[4738]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:22.390178 systemd-logind[1581]: New session 29 of user core. Sep 13 02:31:22.392825 systemd[1]: Started session-29.scope. 
Sep 13 02:31:23.054368 kubelet[2454]: I0913 02:31:23.054270 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-host-proc-sys-kernel\") pod \"81abed54-98cc-455c-9ae5-da1f1537e418\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " Sep 13 02:31:23.054368 kubelet[2454]: I0913 02:31:23.054360 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/81abed54-98cc-455c-9ae5-da1f1537e418-cilium-ipsec-secrets\") pod \"81abed54-98cc-455c-9ae5-da1f1537e418\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " Sep 13 02:31:23.054833 kubelet[2454]: I0913 02:31:23.054382 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "81abed54-98cc-455c-9ae5-da1f1537e418" (UID: "81abed54-98cc-455c-9ae5-da1f1537e418"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:23.054833 kubelet[2454]: I0913 02:31:23.054404 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-cilium-cgroup\") pod \"81abed54-98cc-455c-9ae5-da1f1537e418\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " Sep 13 02:31:23.054833 kubelet[2454]: I0913 02:31:23.054456 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "81abed54-98cc-455c-9ae5-da1f1537e418" (UID: "81abed54-98cc-455c-9ae5-da1f1537e418"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:23.054833 kubelet[2454]: I0913 02:31:23.054524 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4jzp\" (UniqueName: \"kubernetes.io/projected/81abed54-98cc-455c-9ae5-da1f1537e418-kube-api-access-v4jzp\") pod \"81abed54-98cc-455c-9ae5-da1f1537e418\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " Sep 13 02:31:23.054833 kubelet[2454]: I0913 02:31:23.054593 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-xtables-lock\") pod \"81abed54-98cc-455c-9ae5-da1f1537e418\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " Sep 13 02:31:23.055676 kubelet[2454]: I0913 02:31:23.054644 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-etc-cni-netd\") pod \"81abed54-98cc-455c-9ae5-da1f1537e418\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " Sep 13 02:31:23.055676 kubelet[2454]: I0913 02:31:23.054693 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-cilium-run\") pod \"81abed54-98cc-455c-9ae5-da1f1537e418\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " Sep 13 02:31:23.055676 kubelet[2454]: I0913 02:31:23.054712 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "81abed54-98cc-455c-9ae5-da1f1537e418" (UID: "81abed54-98cc-455c-9ae5-da1f1537e418"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:23.055676 kubelet[2454]: I0913 02:31:23.054740 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-cni-path\") pod \"81abed54-98cc-455c-9ae5-da1f1537e418\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " Sep 13 02:31:23.055676 kubelet[2454]: I0913 02:31:23.054782 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-cni-path" (OuterVolumeSpecName: "cni-path") pod "81abed54-98cc-455c-9ae5-da1f1537e418" (UID: "81abed54-98cc-455c-9ae5-da1f1537e418"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:23.056221 kubelet[2454]: I0913 02:31:23.054775 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "81abed54-98cc-455c-9ae5-da1f1537e418" (UID: "81abed54-98cc-455c-9ae5-da1f1537e418"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:23.056221 kubelet[2454]: I0913 02:31:23.054839 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-bpf-maps\") pod \"81abed54-98cc-455c-9ae5-da1f1537e418\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " Sep 13 02:31:23.056221 kubelet[2454]: I0913 02:31:23.054819 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "81abed54-98cc-455c-9ae5-da1f1537e418" (UID: "81abed54-98cc-455c-9ae5-da1f1537e418"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:23.056221 kubelet[2454]: I0913 02:31:23.054901 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81abed54-98cc-455c-9ae5-da1f1537e418-cilium-config-path\") pod \"81abed54-98cc-455c-9ae5-da1f1537e418\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " Sep 13 02:31:23.056221 kubelet[2454]: I0913 02:31:23.054893 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "81abed54-98cc-455c-9ae5-da1f1537e418" (UID: "81abed54-98cc-455c-9ae5-da1f1537e418"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:23.056800 kubelet[2454]: I0913 02:31:23.054947 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-hostproc\") pod \"81abed54-98cc-455c-9ae5-da1f1537e418\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " Sep 13 02:31:23.056800 kubelet[2454]: I0913 02:31:23.054993 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-host-proc-sys-net\") pod \"81abed54-98cc-455c-9ae5-da1f1537e418\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " Sep 13 02:31:23.056800 kubelet[2454]: I0913 02:31:23.055042 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81abed54-98cc-455c-9ae5-da1f1537e418-clustermesh-secrets\") pod \"81abed54-98cc-455c-9ae5-da1f1537e418\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " Sep 13 02:31:23.056800 kubelet[2454]: I0913 02:31:23.055114 2454 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81abed54-98cc-455c-9ae5-da1f1537e418-hubble-tls\") pod \"81abed54-98cc-455c-9ae5-da1f1537e418\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " Sep 13 02:31:23.056800 kubelet[2454]: I0913 02:31:23.055038 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-hostproc" (OuterVolumeSpecName: "hostproc") pod "81abed54-98cc-455c-9ae5-da1f1537e418" (UID: "81abed54-98cc-455c-9ae5-da1f1537e418"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:23.057326 kubelet[2454]: I0913 02:31:23.055119 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "81abed54-98cc-455c-9ae5-da1f1537e418" (UID: "81abed54-98cc-455c-9ae5-da1f1537e418"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:23.057326 kubelet[2454]: I0913 02:31:23.055191 2454 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-lib-modules\") pod \"81abed54-98cc-455c-9ae5-da1f1537e418\" (UID: \"81abed54-98cc-455c-9ae5-da1f1537e418\") " Sep 13 02:31:23.057326 kubelet[2454]: I0913 02:31:23.055261 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "81abed54-98cc-455c-9ae5-da1f1537e418" (UID: "81abed54-98cc-455c-9ae5-da1f1537e418"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 02:31:23.057326 kubelet[2454]: I0913 02:31:23.055423 2454 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-bpf-maps\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:23.057326 kubelet[2454]: I0913 02:31:23.055498 2454 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-cni-path\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:23.057326 kubelet[2454]: I0913 02:31:23.055548 2454 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-hostproc\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:23.057925 kubelet[2454]: I0913 02:31:23.055602 2454 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-host-proc-sys-net\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:23.057925 kubelet[2454]: I0913 02:31:23.055658 2454 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-lib-modules\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:23.057925 kubelet[2454]: I0913 02:31:23.055711 2454 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-cilium-cgroup\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:23.057925 kubelet[2454]: I0913 02:31:23.055761 2454 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-host-proc-sys-kernel\") on node 
\"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:23.057925 kubelet[2454]: I0913 02:31:23.055815 2454 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-xtables-lock\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:23.057925 kubelet[2454]: I0913 02:31:23.055867 2454 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-etc-cni-netd\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:23.057925 kubelet[2454]: I0913 02:31:23.055913 2454 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81abed54-98cc-455c-9ae5-da1f1537e418-cilium-run\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:23.059366 kubelet[2454]: I0913 02:31:23.059332 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81abed54-98cc-455c-9ae5-da1f1537e418-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "81abed54-98cc-455c-9ae5-da1f1537e418" (UID: "81abed54-98cc-455c-9ae5-da1f1537e418"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 02:31:23.059933 kubelet[2454]: I0913 02:31:23.059901 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81abed54-98cc-455c-9ae5-da1f1537e418-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "81abed54-98cc-455c-9ae5-da1f1537e418" (UID: "81abed54-98cc-455c-9ae5-da1f1537e418"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 02:31:23.059933 kubelet[2454]: I0913 02:31:23.059909 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81abed54-98cc-455c-9ae5-da1f1537e418-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "81abed54-98cc-455c-9ae5-da1f1537e418" (UID: "81abed54-98cc-455c-9ae5-da1f1537e418"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 02:31:23.060011 kubelet[2454]: I0913 02:31:23.059940 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81abed54-98cc-455c-9ae5-da1f1537e418-kube-api-access-v4jzp" (OuterVolumeSpecName: "kube-api-access-v4jzp") pod "81abed54-98cc-455c-9ae5-da1f1537e418" (UID: "81abed54-98cc-455c-9ae5-da1f1537e418"). InnerVolumeSpecName "kube-api-access-v4jzp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 02:31:23.060011 kubelet[2454]: I0913 02:31:23.059982 2454 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81abed54-98cc-455c-9ae5-da1f1537e418-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "81abed54-98cc-455c-9ae5-da1f1537e418" (UID: "81abed54-98cc-455c-9ae5-da1f1537e418"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 02:31:23.060906 systemd[1]: var-lib-kubelet-pods-81abed54\x2d98cc\x2d455c\x2d9ae5\x2dda1f1537e418-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv4jzp.mount: Deactivated successfully. Sep 13 02:31:23.060960 systemd[1]: var-lib-kubelet-pods-81abed54\x2d98cc\x2d455c\x2d9ae5\x2dda1f1537e418-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Sep 13 02:31:23.061000 systemd[1]: var-lib-kubelet-pods-81abed54\x2d98cc\x2d455c\x2d9ae5\x2dda1f1537e418-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 02:31:23.061034 systemd[1]: var-lib-kubelet-pods-81abed54\x2d98cc\x2d455c\x2d9ae5\x2dda1f1537e418-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 02:31:23.156706 kubelet[2454]: I0913 02:31:23.156613 2454 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81abed54-98cc-455c-9ae5-da1f1537e418-cilium-config-path\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:23.156706 kubelet[2454]: I0913 02:31:23.156674 2454 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81abed54-98cc-455c-9ae5-da1f1537e418-hubble-tls\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:23.156706 kubelet[2454]: I0913 02:31:23.156706 2454 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81abed54-98cc-455c-9ae5-da1f1537e418-clustermesh-secrets\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:23.157680 kubelet[2454]: I0913 02:31:23.156734 2454 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/81abed54-98cc-455c-9ae5-da1f1537e418-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:23.157680 kubelet[2454]: I0913 02:31:23.156764 2454 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v4jzp\" (UniqueName: \"kubernetes.io/projected/81abed54-98cc-455c-9ae5-da1f1537e418-kube-api-access-v4jzp\") on node \"ci-3510.3.8-n-78f707d8f3\" DevicePath \"\"" Sep 13 02:31:24.011421 systemd[1]: Removed slice kubepods-burstable-pod81abed54_98cc_455c_9ae5_da1f1537e418.slice. 
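The mount unit names deactivated above (`var-lib-kubelet-pods-…\x2d…-volumes-…`) are systemd-escaped filesystem paths: `-` encodes `/`, and `\xNN` encodes the byte `0xNN` (so `\x2d` is a literal `-` and `\x7e` is `~`). A minimal decoder, written as a sketch assuming that escaping scheme (equivalent to `systemd-escape --unescape --path`):

```python
def systemd_unescape_path(unit: str) -> str:
    """Reverse systemd path escaping for a .mount unit name:
    '-' encodes '/', and '\\xNN' encodes the byte 0xNN."""
    name = unit.removesuffix(".mount")
    out = []
    i = 0
    while i < len(name):
        if name.startswith("\\x", i) and i + 4 <= len(name):
            # \xNN escape: decode the two hex digits
            out.append(chr(int(name[i + 2:i + 4], 16)))
            i += 4
        elif name[i] == "-":
            out.append("/")
            i += 1
        else:
            out.append(name[i])
            i += 1
    return "/" + "".join(out)
```

Applied to the units in the log, this recovers paths under `/var/lib/kubelet/pods/<pod-uid>/volumes/`, matching the orphaned-volumes directory the kubelet reports cleaning up below.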
Sep 13 02:31:24.032435 systemd[1]: Created slice kubepods-burstable-pod4ee86093_f9d4_424f_9955_fd46c07c4054.slice. Sep 13 02:31:24.062882 kubelet[2454]: I0913 02:31:24.062776 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4ee86093-f9d4-424f-9955-fd46c07c4054-cilium-ipsec-secrets\") pod \"cilium-cjvk9\" (UID: \"4ee86093-f9d4-424f-9955-fd46c07c4054\") " pod="kube-system/cilium-cjvk9" Sep 13 02:31:24.062882 kubelet[2454]: I0913 02:31:24.062864 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ee86093-f9d4-424f-9955-fd46c07c4054-hubble-tls\") pod \"cilium-cjvk9\" (UID: \"4ee86093-f9d4-424f-9955-fd46c07c4054\") " pod="kube-system/cilium-cjvk9" Sep 13 02:31:24.063320 kubelet[2454]: I0913 02:31:24.062930 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ee86093-f9d4-424f-9955-fd46c07c4054-cilium-cgroup\") pod \"cilium-cjvk9\" (UID: \"4ee86093-f9d4-424f-9955-fd46c07c4054\") " pod="kube-system/cilium-cjvk9" Sep 13 02:31:24.063320 kubelet[2454]: I0913 02:31:24.062976 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ee86093-f9d4-424f-9955-fd46c07c4054-cni-path\") pod \"cilium-cjvk9\" (UID: \"4ee86093-f9d4-424f-9955-fd46c07c4054\") " pod="kube-system/cilium-cjvk9" Sep 13 02:31:24.063320 kubelet[2454]: I0913 02:31:24.063087 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ee86093-f9d4-424f-9955-fd46c07c4054-etc-cni-netd\") pod \"cilium-cjvk9\" (UID: \"4ee86093-f9d4-424f-9955-fd46c07c4054\") " pod="kube-system/cilium-cjvk9" Sep 13 02:31:24.063320 
kubelet[2454]: I0913 02:31:24.063202 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ee86093-f9d4-424f-9955-fd46c07c4054-host-proc-sys-net\") pod \"cilium-cjvk9\" (UID: \"4ee86093-f9d4-424f-9955-fd46c07c4054\") " pod="kube-system/cilium-cjvk9" Sep 13 02:31:24.063320 kubelet[2454]: I0913 02:31:24.063267 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ee86093-f9d4-424f-9955-fd46c07c4054-hostproc\") pod \"cilium-cjvk9\" (UID: \"4ee86093-f9d4-424f-9955-fd46c07c4054\") " pod="kube-system/cilium-cjvk9" Sep 13 02:31:24.063320 kubelet[2454]: I0913 02:31:24.063323 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ee86093-f9d4-424f-9955-fd46c07c4054-clustermesh-secrets\") pod \"cilium-cjvk9\" (UID: \"4ee86093-f9d4-424f-9955-fd46c07c4054\") " pod="kube-system/cilium-cjvk9" Sep 13 02:31:24.063952 kubelet[2454]: I0913 02:31:24.063376 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q52hp\" (UniqueName: \"kubernetes.io/projected/4ee86093-f9d4-424f-9955-fd46c07c4054-kube-api-access-q52hp\") pod \"cilium-cjvk9\" (UID: \"4ee86093-f9d4-424f-9955-fd46c07c4054\") " pod="kube-system/cilium-cjvk9" Sep 13 02:31:24.063952 kubelet[2454]: I0913 02:31:24.063428 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ee86093-f9d4-424f-9955-fd46c07c4054-lib-modules\") pod \"cilium-cjvk9\" (UID: \"4ee86093-f9d4-424f-9955-fd46c07c4054\") " pod="kube-system/cilium-cjvk9" Sep 13 02:31:24.063952 kubelet[2454]: I0913 02:31:24.063478 2454 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ee86093-f9d4-424f-9955-fd46c07c4054-cilium-config-path\") pod \"cilium-cjvk9\" (UID: \"4ee86093-f9d4-424f-9955-fd46c07c4054\") " pod="kube-system/cilium-cjvk9" Sep 13 02:31:24.063952 kubelet[2454]: I0913 02:31:24.063521 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ee86093-f9d4-424f-9955-fd46c07c4054-bpf-maps\") pod \"cilium-cjvk9\" (UID: \"4ee86093-f9d4-424f-9955-fd46c07c4054\") " pod="kube-system/cilium-cjvk9" Sep 13 02:31:24.063952 kubelet[2454]: I0913 02:31:24.063621 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ee86093-f9d4-424f-9955-fd46c07c4054-host-proc-sys-kernel\") pod \"cilium-cjvk9\" (UID: \"4ee86093-f9d4-424f-9955-fd46c07c4054\") " pod="kube-system/cilium-cjvk9" Sep 13 02:31:24.063952 kubelet[2454]: I0913 02:31:24.063727 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ee86093-f9d4-424f-9955-fd46c07c4054-cilium-run\") pod \"cilium-cjvk9\" (UID: \"4ee86093-f9d4-424f-9955-fd46c07c4054\") " pod="kube-system/cilium-cjvk9" Sep 13 02:31:24.064570 kubelet[2454]: I0913 02:31:24.063814 2454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ee86093-f9d4-424f-9955-fd46c07c4054-xtables-lock\") pod \"cilium-cjvk9\" (UID: \"4ee86093-f9d4-424f-9955-fd46c07c4054\") " pod="kube-system/cilium-cjvk9" Sep 13 02:31:24.336111 env[1544]: time="2025-09-13T02:31:24.335906466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cjvk9,Uid:4ee86093-f9d4-424f-9955-fd46c07c4054,Namespace:kube-system,Attempt:0,}" 
Sep 13 02:31:24.353941 env[1544]: time="2025-09-13T02:31:24.353871380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 02:31:24.353941 env[1544]: time="2025-09-13T02:31:24.353909929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 02:31:24.353941 env[1544]: time="2025-09-13T02:31:24.353919874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 02:31:24.354057 env[1544]: time="2025-09-13T02:31:24.354015682Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f97021e8016ba07f495aa4695b84de9f42140378f9b2d6b173681645d23dd461 pid=4779 runtime=io.containerd.runc.v2 Sep 13 02:31:24.360578 systemd[1]: Started cri-containerd-f97021e8016ba07f495aa4695b84de9f42140378f9b2d6b173681645d23dd461.scope. 
Sep 13 02:31:24.371681 env[1544]: time="2025-09-13T02:31:24.371651953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cjvk9,Uid:4ee86093-f9d4-424f-9955-fd46c07c4054,Namespace:kube-system,Attempt:0,} returns sandbox id \"f97021e8016ba07f495aa4695b84de9f42140378f9b2d6b173681645d23dd461\"" Sep 13 02:31:24.372959 env[1544]: time="2025-09-13T02:31:24.372939414Z" level=info msg="CreateContainer within sandbox \"f97021e8016ba07f495aa4695b84de9f42140378f9b2d6b173681645d23dd461\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 02:31:24.378201 env[1544]: time="2025-09-13T02:31:24.378144429Z" level=info msg="CreateContainer within sandbox \"f97021e8016ba07f495aa4695b84de9f42140378f9b2d6b173681645d23dd461\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e82ec6dee1f2d521e0b0952568d7471c8a1e578c4b2bb20b8529a32d728c6b58\"" Sep 13 02:31:24.378404 env[1544]: time="2025-09-13T02:31:24.378385056Z" level=info msg="StartContainer for \"e82ec6dee1f2d521e0b0952568d7471c8a1e578c4b2bb20b8529a32d728c6b58\"" Sep 13 02:31:24.388292 systemd[1]: Started cri-containerd-e82ec6dee1f2d521e0b0952568d7471c8a1e578c4b2bb20b8529a32d728c6b58.scope. Sep 13 02:31:24.405161 env[1544]: time="2025-09-13T02:31:24.405124191Z" level=info msg="StartContainer for \"e82ec6dee1f2d521e0b0952568d7471c8a1e578c4b2bb20b8529a32d728c6b58\" returns successfully" Sep 13 02:31:24.411723 systemd[1]: cri-containerd-e82ec6dee1f2d521e0b0952568d7471c8a1e578c4b2bb20b8529a32d728c6b58.scope: Deactivated successfully. 
Sep 13 02:31:24.448753 env[1544]: time="2025-09-13T02:31:24.448700898Z" level=info msg="shim disconnected" id=e82ec6dee1f2d521e0b0952568d7471c8a1e578c4b2bb20b8529a32d728c6b58 Sep 13 02:31:24.448919 env[1544]: time="2025-09-13T02:31:24.448754399Z" level=warning msg="cleaning up after shim disconnected" id=e82ec6dee1f2d521e0b0952568d7471c8a1e578c4b2bb20b8529a32d728c6b58 namespace=k8s.io Sep 13 02:31:24.448919 env[1544]: time="2025-09-13T02:31:24.448767382Z" level=info msg="cleaning up dead shim" Sep 13 02:31:24.456966 env[1544]: time="2025-09-13T02:31:24.456895918Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:31:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4864 runtime=io.containerd.runc.v2\n" Sep 13 02:31:24.886858 kubelet[2454]: I0913 02:31:24.886764 2454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81abed54-98cc-455c-9ae5-da1f1537e418" path="/var/lib/kubelet/pods/81abed54-98cc-455c-9ae5-da1f1537e418/volumes" Sep 13 02:31:25.014430 env[1544]: time="2025-09-13T02:31:25.014176167Z" level=info msg="CreateContainer within sandbox \"f97021e8016ba07f495aa4695b84de9f42140378f9b2d6b173681645d23dd461\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 02:31:25.027719 env[1544]: time="2025-09-13T02:31:25.027597555Z" level=info msg="CreateContainer within sandbox \"f97021e8016ba07f495aa4695b84de9f42140378f9b2d6b173681645d23dd461\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"86035b7ed8b388bdee61630148d0b69151559c5df41c495b558f28a89638f2ae\"" Sep 13 02:31:25.028498 env[1544]: time="2025-09-13T02:31:25.028387770Z" level=info msg="StartContainer for \"86035b7ed8b388bdee61630148d0b69151559c5df41c495b558f28a89638f2ae\"" Sep 13 02:31:25.066739 systemd[1]: Started cri-containerd-86035b7ed8b388bdee61630148d0b69151559c5df41c495b558f28a89638f2ae.scope. 
Sep 13 02:31:25.123614 env[1544]: time="2025-09-13T02:31:25.123509613Z" level=info msg="StartContainer for \"86035b7ed8b388bdee61630148d0b69151559c5df41c495b558f28a89638f2ae\" returns successfully" Sep 13 02:31:25.142996 systemd[1]: cri-containerd-86035b7ed8b388bdee61630148d0b69151559c5df41c495b558f28a89638f2ae.scope: Deactivated successfully. Sep 13 02:31:25.190821 env[1544]: time="2025-09-13T02:31:25.190704046Z" level=info msg="shim disconnected" id=86035b7ed8b388bdee61630148d0b69151559c5df41c495b558f28a89638f2ae Sep 13 02:31:25.191345 env[1544]: time="2025-09-13T02:31:25.190828516Z" level=warning msg="cleaning up after shim disconnected" id=86035b7ed8b388bdee61630148d0b69151559c5df41c495b558f28a89638f2ae namespace=k8s.io Sep 13 02:31:25.191345 env[1544]: time="2025-09-13T02:31:25.190880187Z" level=info msg="cleaning up dead shim" Sep 13 02:31:25.191974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86035b7ed8b388bdee61630148d0b69151559c5df41c495b558f28a89638f2ae-rootfs.mount: Deactivated successfully. 
Sep 13 02:31:25.208760 env[1544]: time="2025-09-13T02:31:25.208676372Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:31:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4924 runtime=io.containerd.runc.v2\n" Sep 13 02:31:25.989602 kubelet[2454]: E0913 02:31:25.989516 2454 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 02:31:26.020726 env[1544]: time="2025-09-13T02:31:26.020627583Z" level=info msg="CreateContainer within sandbox \"f97021e8016ba07f495aa4695b84de9f42140378f9b2d6b173681645d23dd461\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 02:31:26.039787 env[1544]: time="2025-09-13T02:31:26.039734069Z" level=info msg="CreateContainer within sandbox \"f97021e8016ba07f495aa4695b84de9f42140378f9b2d6b173681645d23dd461\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0ba29db301f729f27bfc51b81cbf3f53d18274435e29fba184d5229e623de864\"" Sep 13 02:31:26.040100 env[1544]: time="2025-09-13T02:31:26.040056004Z" level=info msg="StartContainer for \"0ba29db301f729f27bfc51b81cbf3f53d18274435e29fba184d5229e623de864\"" Sep 13 02:31:26.048224 systemd[1]: Started cri-containerd-0ba29db301f729f27bfc51b81cbf3f53d18274435e29fba184d5229e623de864.scope. Sep 13 02:31:26.061986 env[1544]: time="2025-09-13T02:31:26.061964788Z" level=info msg="StartContainer for \"0ba29db301f729f27bfc51b81cbf3f53d18274435e29fba184d5229e623de864\" returns successfully" Sep 13 02:31:26.063408 systemd[1]: cri-containerd-0ba29db301f729f27bfc51b81cbf3f53d18274435e29fba184d5229e623de864.scope: Deactivated successfully. 
Sep 13 02:31:26.091564 env[1544]: time="2025-09-13T02:31:26.091530452Z" level=info msg="shim disconnected" id=0ba29db301f729f27bfc51b81cbf3f53d18274435e29fba184d5229e623de864
Sep 13 02:31:26.091691 env[1544]: time="2025-09-13T02:31:26.091565210Z" level=warning msg="cleaning up after shim disconnected" id=0ba29db301f729f27bfc51b81cbf3f53d18274435e29fba184d5229e623de864 namespace=k8s.io
Sep 13 02:31:26.091691 env[1544]: time="2025-09-13T02:31:26.091574204Z" level=info msg="cleaning up dead shim"
Sep 13 02:31:26.096569 env[1544]: time="2025-09-13T02:31:26.096516378Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:31:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4980 runtime=io.containerd.runc.v2\n"
Sep 13 02:31:26.176231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ba29db301f729f27bfc51b81cbf3f53d18274435e29fba184d5229e623de864-rootfs.mount: Deactivated successfully.
Sep 13 02:31:27.028037 env[1544]: time="2025-09-13T02:31:27.027941515Z" level=info msg="CreateContainer within sandbox \"f97021e8016ba07f495aa4695b84de9f42140378f9b2d6b173681645d23dd461\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 02:31:27.038712 env[1544]: time="2025-09-13T02:31:27.038691165Z" level=info msg="CreateContainer within sandbox \"f97021e8016ba07f495aa4695b84de9f42140378f9b2d6b173681645d23dd461\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"788a35e11d11b5e586848ec7f7aed9c8e99041f1fffbcf9aaf1e2854f068a73f\""
Sep 13 02:31:27.039049 env[1544]: time="2025-09-13T02:31:27.039014553Z" level=info msg="StartContainer for \"788a35e11d11b5e586848ec7f7aed9c8e99041f1fffbcf9aaf1e2854f068a73f\""
Sep 13 02:31:27.048518 systemd[1]: Started cri-containerd-788a35e11d11b5e586848ec7f7aed9c8e99041f1fffbcf9aaf1e2854f068a73f.scope.
Sep 13 02:31:27.060013 env[1544]: time="2025-09-13T02:31:27.059988374Z" level=info msg="StartContainer for \"788a35e11d11b5e586848ec7f7aed9c8e99041f1fffbcf9aaf1e2854f068a73f\" returns successfully"
Sep 13 02:31:27.060180 systemd[1]: cri-containerd-788a35e11d11b5e586848ec7f7aed9c8e99041f1fffbcf9aaf1e2854f068a73f.scope: Deactivated successfully.
Sep 13 02:31:27.069507 env[1544]: time="2025-09-13T02:31:27.069453667Z" level=info msg="shim disconnected" id=788a35e11d11b5e586848ec7f7aed9c8e99041f1fffbcf9aaf1e2854f068a73f
Sep 13 02:31:27.069507 env[1544]: time="2025-09-13T02:31:27.069479766Z" level=warning msg="cleaning up after shim disconnected" id=788a35e11d11b5e586848ec7f7aed9c8e99041f1fffbcf9aaf1e2854f068a73f namespace=k8s.io
Sep 13 02:31:27.069507 env[1544]: time="2025-09-13T02:31:27.069485273Z" level=info msg="cleaning up dead shim"
Sep 13 02:31:27.073252 env[1544]: time="2025-09-13T02:31:27.073209691Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:31:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5032 runtime=io.containerd.runc.v2\n"
Sep 13 02:31:27.176111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-788a35e11d11b5e586848ec7f7aed9c8e99041f1fffbcf9aaf1e2854f068a73f-rootfs.mount: Deactivated successfully.
Sep 13 02:31:28.036272 env[1544]: time="2025-09-13T02:31:28.036122944Z" level=info msg="CreateContainer within sandbox \"f97021e8016ba07f495aa4695b84de9f42140378f9b2d6b173681645d23dd461\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 02:31:28.054484 env[1544]: time="2025-09-13T02:31:28.054434412Z" level=info msg="CreateContainer within sandbox \"f97021e8016ba07f495aa4695b84de9f42140378f9b2d6b173681645d23dd461\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"158815f9dda9d6ea771a34dae14feb48b801fe06325313a1d0f44fdb05852491\""
Sep 13 02:31:28.054938 env[1544]: time="2025-09-13T02:31:28.054860505Z" level=info msg="StartContainer for \"158815f9dda9d6ea771a34dae14feb48b801fe06325313a1d0f44fdb05852491\""
Sep 13 02:31:28.056271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3568344943.mount: Deactivated successfully.
Sep 13 02:31:28.064488 systemd[1]: Started cri-containerd-158815f9dda9d6ea771a34dae14feb48b801fe06325313a1d0f44fdb05852491.scope.
Sep 13 02:31:28.077243 env[1544]: time="2025-09-13T02:31:28.077216703Z" level=info msg="StartContainer for \"158815f9dda9d6ea771a34dae14feb48b801fe06325313a1d0f44fdb05852491\" returns successfully"
Sep 13 02:31:28.237205 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 13 02:31:29.074311 kubelet[2454]: I0913 02:31:29.074139 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cjvk9" podStartSLOduration=5.074102813 podStartE2EDuration="5.074102813s" podCreationTimestamp="2025-09-13 02:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:31:29.073571349 +0000 UTC m=+438.246029052" watchObservedRunningTime="2025-09-13 02:31:29.074102813 +0000 UTC m=+438.246560501"
Sep 13 02:31:31.346033 systemd-networkd[1301]: lxc_health: Link UP
Sep 13 02:31:31.369161 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 02:31:31.369171 systemd-networkd[1301]: lxc_health: Gained carrier
Sep 13 02:31:33.105263 systemd-networkd[1301]: lxc_health: Gained IPv6LL
Sep 13 02:31:37.194468 sshd[4738]: pam_unix(sshd:session): session closed for user core
Sep 13 02:31:37.196100 systemd[1]: sshd@28-147.75.203.133:22-139.178.89.65:44790.service: Deactivated successfully.
Sep 13 02:31:37.196581 systemd[1]: session-29.scope: Deactivated successfully.
Sep 13 02:31:37.197026 systemd-logind[1581]: Session 29 logged out. Waiting for processes to exit.
Sep 13 02:31:37.197609 systemd-logind[1581]: Removed session 29.