May 17 01:08:21.562406 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025 May 17 01:08:21.562419 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 01:08:21.562426 kernel: BIOS-provided physical RAM map: May 17 01:08:21.562430 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable May 17 01:08:21.562433 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved May 17 01:08:21.562437 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved May 17 01:08:21.562442 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable May 17 01:08:21.562446 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved May 17 01:08:21.562449 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000008266efff] usable May 17 01:08:21.562453 kernel: BIOS-e820: [mem 0x000000008266f000-0x000000008266ffff] ACPI NVS May 17 01:08:21.562458 kernel: BIOS-e820: [mem 0x0000000082670000-0x0000000082670fff] reserved May 17 01:08:21.562462 kernel: BIOS-e820: [mem 0x0000000082671000-0x000000008afcdfff] usable May 17 01:08:21.562466 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved May 17 01:08:21.562470 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable May 17 01:08:21.562475 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS May 17 01:08:21.562480 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved May 17 01:08:21.562484 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable May 17 01:08:21.562488 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved May 17 01:08:21.562492 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 17 01:08:21.562496 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved May 17 01:08:21.562500 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved May 17 01:08:21.562504 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved May 17 01:08:21.562509 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved May 17 01:08:21.562513 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable May 17 01:08:21.562517 kernel: NX (Execute Disable) protection: active May 17 01:08:21.562521 kernel: SMBIOS 3.2.1 present. 
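
The BIOS-e820 entries above are the firmware's view of physical memory as handed to the kernel at boot. As an aside (not part of the captured log), a minimal sketch for comparing this with the runtime resource map a standard Linux kernel exposes through /proc/iomem; non-root readers see zeroed addresses, so run it as root for real values:

# Sketch: print the top-level physical memory ranges from /proc/iomem,
# the runtime counterpart of the BIOS-e820 map in the boot log.
with open("/proc/iomem") as f:
    for line in f:
        if line.startswith(" "):      # skip nested (child) resources
            continue
        span, name = line.rstrip("\n").split(" : ", 1)
        print(f"{span}  {name}")
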
May 17 01:08:21.562526 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 2.6 12/03/2024 May 17 01:08:21.562530 kernel: tsc: Detected 3400.000 MHz processor May 17 01:08:21.562534 kernel: tsc: Detected 3399.906 MHz TSC May 17 01:08:21.562539 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 01:08:21.562543 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 01:08:21.562548 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 May 17 01:08:21.562552 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 01:08:21.562556 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 May 17 01:08:21.562561 kernel: Using GB pages for direct mapping May 17 01:08:21.562565 kernel: ACPI: Early table checksum verification disabled May 17 01:08:21.562570 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) May 17 01:08:21.562574 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) May 17 01:08:21.562579 kernel: ACPI: FACP 0x000000008C58B670 000114 (v06 01072009 AMI 00010013) May 17 01:08:21.562583 kernel: ACPI: DSDT 0x000000008C54F268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) May 17 01:08:21.562589 kernel: ACPI: FACS 0x000000008C66DF80 000040 May 17 01:08:21.562594 kernel: ACPI: APIC 0x000000008C58B788 00012C (v04 01072009 AMI 00010013) May 17 01:08:21.562599 kernel: ACPI: FPDT 0x000000008C58B8B8 000044 (v01 01072009 AMI 00010013) May 17 01:08:21.562604 kernel: ACPI: FIDT 0x000000008C58B900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) May 17 01:08:21.562609 kernel: ACPI: MCFG 0x000000008C58B9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) May 17 01:08:21.562614 kernel: ACPI: SPMI 0x000000008C58B9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) May 17 01:08:21.562618 kernel: ACPI: SSDT 0x000000008C58BA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) May 17 01:08:21.562623 kernel: ACPI: SSDT 0x000000008C58D548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) May 17 01:08:21.562627 kernel: ACPI: SSDT 0x000000008C590710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) May 17 01:08:21.562632 kernel: ACPI: HPET 0x000000008C592A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) May 17 01:08:21.562637 kernel: ACPI: SSDT 0x000000008C592A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) May 17 01:08:21.562642 kernel: ACPI: SSDT 0x000000008C593A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) May 17 01:08:21.562647 kernel: ACPI: UEFI 0x000000008C594320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) May 17 01:08:21.562652 kernel: ACPI: LPIT 0x000000008C594368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) May 17 01:08:21.562656 kernel: ACPI: SSDT 0x000000008C594400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) May 17 01:08:21.562661 kernel: ACPI: SSDT 0x000000008C596BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) May 17 01:08:21.562666 kernel: ACPI: DBGP 0x000000008C5980C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) May 17 01:08:21.562670 kernel: ACPI: DBG2 0x000000008C598100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) May 17 01:08:21.562676 kernel: ACPI: SSDT 0x000000008C598158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) May 17 01:08:21.562680 kernel: ACPI: DMAR 0x000000008C599CC0 000070 (v01 INTEL EDK2 00000002 01000013) May 17 01:08:21.562685 kernel: ACPI: SSDT 0x000000008C599D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) May 17 01:08:21.562690 kernel: ACPI: TPM2 0x000000008C599E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) May 17 
01:08:21.562694 kernel: ACPI: SSDT 0x000000008C599EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) May 17 01:08:21.562699 kernel: ACPI: WSMT 0x000000008C59AC40 000028 (v01 SUPERM 01072009 AMI 00010013) May 17 01:08:21.562703 kernel: ACPI: EINJ 0x000000008C59AC68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) May 17 01:08:21.562708 kernel: ACPI: ERST 0x000000008C59AD98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) May 17 01:08:21.562713 kernel: ACPI: BERT 0x000000008C59AFC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) May 17 01:08:21.562718 kernel: ACPI: HEST 0x000000008C59AFF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) May 17 01:08:21.562723 kernel: ACPI: SSDT 0x000000008C59B278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) May 17 01:08:21.562727 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b670-0x8c58b783] May 17 01:08:21.562732 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b66b] May 17 01:08:21.562737 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf] May 17 01:08:21.562742 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b788-0x8c58b8b3] May 17 01:08:21.562746 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b8b8-0x8c58b8fb] May 17 01:08:21.562751 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b900-0x8c58b99b] May 17 01:08:21.562756 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b9a0-0x8c58b9db] May 17 01:08:21.562761 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b9e0-0x8c58ba20] May 17 01:08:21.562765 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58ba28-0x8c58d543] May 17 01:08:21.562770 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d548-0x8c59070d] May 17 01:08:21.562775 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590710-0x8c592a3a] May 17 01:08:21.562779 kernel: ACPI: Reserving HPET table memory at [mem 0x8c592a40-0x8c592a77] May 17 01:08:21.562784 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a78-0x8c593a25] May 17 01:08:21.562788 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593a28-0x8c59431b] May 17 01:08:21.562793 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c594320-0x8c594361] May 17 01:08:21.562798 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c594368-0x8c5943fb] May 17 01:08:21.562803 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594400-0x8c596bdd] May 17 01:08:21.562807 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596be0-0x8c5980c1] May 17 01:08:21.562812 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5980c8-0x8c5980fb] May 17 01:08:21.562817 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598100-0x8c598153] May 17 01:08:21.562821 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598158-0x8c599cbe] May 17 01:08:21.562826 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599cc0-0x8c599d2f] May 17 01:08:21.562830 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599d30-0x8c599e73] May 17 01:08:21.562835 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599e78-0x8c599eab] May 17 01:08:21.562840 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599eb0-0x8c59ac3e] May 17 01:08:21.562845 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59ac40-0x8c59ac67] May 17 01:08:21.562849 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59ac68-0x8c59ad97] May 17 01:08:21.562854 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad98-0x8c59afc7] May 17 01:08:21.562859 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59afc8-0x8c59aff7] May 17 
01:08:21.562863 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59aff8-0x8c59b273] May 17 01:08:21.562868 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b278-0x8c59b3d9] May 17 01:08:21.562873 kernel: No NUMA configuration found May 17 01:08:21.562877 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] May 17 01:08:21.562883 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] May 17 01:08:21.562887 kernel: Zone ranges: May 17 01:08:21.562892 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 01:08:21.562897 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 17 01:08:21.562901 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] May 17 01:08:21.562906 kernel: Movable zone start for each node May 17 01:08:21.562910 kernel: Early memory node ranges May 17 01:08:21.562915 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] May 17 01:08:21.562920 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] May 17 01:08:21.562924 kernel: node 0: [mem 0x0000000040400000-0x000000008266efff] May 17 01:08:21.562930 kernel: node 0: [mem 0x0000000082671000-0x000000008afcdfff] May 17 01:08:21.562934 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff] May 17 01:08:21.562939 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] May 17 01:08:21.562944 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] May 17 01:08:21.562948 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] May 17 01:08:21.562953 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 01:08:21.562961 kernel: On node 0, zone DMA: 103 pages in unavailable ranges May 17 01:08:21.562966 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges May 17 01:08:21.562971 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges May 17 01:08:21.562976 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges May 17 01:08:21.562982 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges May 17 01:08:21.562987 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges May 17 01:08:21.562992 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges May 17 01:08:21.562997 kernel: ACPI: PM-Timer IO Port: 0x1808 May 17 01:08:21.563002 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 17 01:08:21.563007 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 17 01:08:21.563012 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 17 01:08:21.563018 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 17 01:08:21.563023 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 17 01:08:21.563028 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) May 17 01:08:21.563033 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 17 01:08:21.563038 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 17 01:08:21.563043 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 17 01:08:21.563048 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 17 01:08:21.563053 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 17 01:08:21.563058 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 17 01:08:21.563063 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 17 01:08:21.563068 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 17 01:08:21.563073 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 17 01:08:21.563078 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x10] high edge lint[0x1]) May 17 01:08:21.563083 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 May 17 01:08:21.563088 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 17 01:08:21.563093 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 01:08:21.563098 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 01:08:21.563103 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 01:08:21.563109 kernel: TSC deadline timer available May 17 01:08:21.563114 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs May 17 01:08:21.563119 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices May 17 01:08:21.563124 kernel: Booting paravirtualized kernel on bare hardware May 17 01:08:21.563129 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 01:08:21.563134 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 May 17 01:08:21.563139 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 May 17 01:08:21.563144 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 May 17 01:08:21.563149 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 May 17 01:08:21.563154 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416 May 17 01:08:21.563159 kernel: Policy zone: Normal May 17 01:08:21.563165 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 01:08:21.563170 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 01:08:21.563175 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) May 17 01:08:21.563180 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) May 17 01:08:21.563185 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 01:08:21.563190 kernel: Memory: 32722608K/33452984K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 730116K reserved, 0K cma-reserved) May 17 01:08:21.563196 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 May 17 01:08:21.563201 kernel: ftrace: allocating 34585 entries in 136 pages May 17 01:08:21.563206 kernel: ftrace: allocated 136 pages with 2 groups May 17 01:08:21.563211 kernel: rcu: Hierarchical RCU implementation. May 17 01:08:21.563216 kernel: rcu: RCU event tracing is enabled. May 17 01:08:21.563222 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. May 17 01:08:21.563248 kernel: Rude variant of Tasks RCU enabled. May 17 01:08:21.563253 kernel: Tracing variant of Tasks RCU enabled. May 17 01:08:21.563259 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
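
The "Kernel command line:" entry above is what /proc/cmdline exposes on the running system (note the log also warns that BOOT_IMAGE is simply passed through to user space). A rough sketch, assuming plain whitespace-separated parameters as in this log; quoted values are not handled and repeated keys such as console or rootflags keep only the last occurrence:

# Sketch: split /proc/cmdline into flags and key=value parameters.
params = {}
with open("/proc/cmdline") as f:
    for tok in f.read().split():
        key, sep, val = tok.partition("=")
        params[key] = val if sep else True

print(params.get("root"))             # e.g. LABEL=ROOT in the log above
print(params.get("verity.usrhash"))   # dm-verity root hash for the /usr partition
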
May 17 01:08:21.563264 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 May 17 01:08:21.563269 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 May 17 01:08:21.563290 kernel: random: crng init done May 17 01:08:21.563295 kernel: Console: colour dummy device 80x25 May 17 01:08:21.563300 kernel: printk: console [tty0] enabled May 17 01:08:21.563305 kernel: printk: console [ttyS1] enabled May 17 01:08:21.563310 kernel: ACPI: Core revision 20210730 May 17 01:08:21.563315 kernel: hpet: HPET dysfunctional in PC10. Force disabled. May 17 01:08:21.563320 kernel: APIC: Switch to symmetric I/O mode setup May 17 01:08:21.563326 kernel: DMAR: Host address width 39 May 17 01:08:21.563331 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 May 17 01:08:21.563336 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da May 17 01:08:21.563341 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff May 17 01:08:21.563346 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 May 17 01:08:21.563351 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 May 17 01:08:21.563356 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. May 17 01:08:21.563361 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode May 17 01:08:21.563366 kernel: x2apic enabled May 17 01:08:21.563371 kernel: Switched APIC routing to cluster x2apic. May 17 01:08:21.563376 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns May 17 01:08:21.563382 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) May 17 01:08:21.563387 kernel: CPU0: Thermal monitoring enabled (TM1) May 17 01:08:21.563391 kernel: process: using mwait in idle threads May 17 01:08:21.563396 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 17 01:08:21.563401 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 17 01:08:21.563406 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 01:08:21.563411 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
May 17 01:08:21.563417 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 17 01:08:21.563422 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 17 01:08:21.563427 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 17 01:08:21.563432 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 17 01:08:21.563436 kernel: RETBleed: Mitigation: Enhanced IBRS May 17 01:08:21.563441 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 01:08:21.563446 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 17 01:08:21.563451 kernel: TAA: Mitigation: TSX disabled May 17 01:08:21.563456 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers May 17 01:08:21.563461 kernel: SRBDS: Mitigation: Microcode May 17 01:08:21.563466 kernel: GDS: Mitigation: Microcode May 17 01:08:21.563472 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 01:08:21.563476 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 01:08:21.563482 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 01:08:21.563486 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' May 17 01:08:21.563491 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' May 17 01:08:21.563496 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 01:08:21.563501 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 May 17 01:08:21.563506 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 May 17 01:08:21.563511 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. May 17 01:08:21.563516 kernel: Freeing SMP alternatives memory: 32K May 17 01:08:21.563521 kernel: pid_max: default: 32768 minimum: 301 May 17 01:08:21.563526 kernel: LSM: Security Framework initializing May 17 01:08:21.563531 kernel: SELinux: Initializing. May 17 01:08:21.563536 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 01:08:21.563541 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 01:08:21.563546 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 May 17 01:08:21.563551 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 17 01:08:21.563556 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. May 17 01:08:21.563561 kernel: ... version: 4 May 17 01:08:21.563566 kernel: ... bit width: 48 May 17 01:08:21.563571 kernel: ... generic registers: 4 May 17 01:08:21.563576 kernel: ... value mask: 0000ffffffffffff May 17 01:08:21.563582 kernel: ... max period: 00007fffffffffff May 17 01:08:21.563587 kernel: ... fixed-purpose events: 3 May 17 01:08:21.563592 kernel: ... event mask: 000000070000000f May 17 01:08:21.563596 kernel: signal: max sigframe size: 2032 May 17 01:08:21.563601 kernel: rcu: Hierarchical SRCU implementation. May 17 01:08:21.563606 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. May 17 01:08:21.563611 kernel: smp: Bringing up secondary CPUs ... May 17 01:08:21.563616 kernel: x86: Booting SMP configuration: May 17 01:08:21.563621 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 May 17 01:08:21.563627 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. May 17 01:08:21.563632 kernel: #9 #10 #11 #12 #13 #14 #15 May 17 01:08:21.563637 kernel: smp: Brought up 1 node, 16 CPUs May 17 01:08:21.563642 kernel: smpboot: Max logical packages: 1 May 17 01:08:21.563647 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) May 17 01:08:21.563652 kernel: devtmpfs: initialized May 17 01:08:21.563657 kernel: x86/mm: Memory block size: 128MB May 17 01:08:21.563662 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8266f000-0x8266ffff] (4096 bytes) May 17 01:08:21.563667 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes) May 17 01:08:21.563673 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 01:08:21.563678 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) May 17 01:08:21.563683 kernel: pinctrl core: initialized pinctrl subsystem May 17 01:08:21.563688 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 01:08:21.563693 kernel: audit: initializing netlink subsys (disabled) May 17 01:08:21.563698 kernel: audit: type=2000 audit(1747444096.041:1): state=initialized audit_enabled=0 res=1 May 17 01:08:21.563703 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 01:08:21.563708 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 01:08:21.563713 kernel: cpuidle: using governor menu May 17 01:08:21.563718 kernel: ACPI: bus type PCI registered May 17 01:08:21.563723 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 01:08:21.563728 kernel: dca service started, version 1.12.1 May 17 01:08:21.563733 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) May 17 01:08:21.563738 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 May 17 01:08:21.563743 kernel: PCI: Using configuration type 1 for base access May 17 01:08:21.563748 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' May 17 01:08:21.563753 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
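
The Spectre, TAA, MMIO Stale Data, SRBDS and GDS lines above are the kernel's mitigation report for this Xeon E-2278G. The same status is exported under /sys/devices/system/cpu/vulnerabilities/ on current kernels; a small sketch (illustrative only, not part of the boot flow) to dump it:

# Sketch: dump the per-vulnerability mitigation status reported by the kernel.
import os

vuln_dir = "/sys/devices/system/cpu/vulnerabilities"
for name in sorted(os.listdir(vuln_dir)):
    with open(os.path.join(vuln_dir, name)) as f:
        print(f"{name:25s} {f.read().strip()}")
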
May 17 01:08:21.563759 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 17 01:08:21.563764 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 17 01:08:21.563769 kernel: ACPI: Added _OSI(Module Device) May 17 01:08:21.563774 kernel: ACPI: Added _OSI(Processor Device) May 17 01:08:21.563779 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 01:08:21.563784 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 01:08:21.563789 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 17 01:08:21.563794 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 17 01:08:21.563799 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 17 01:08:21.563804 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded May 17 01:08:21.563809 kernel: ACPI: Dynamic OEM Table Load: May 17 01:08:21.563814 kernel: ACPI: SSDT 0xFFFF8BF58021B400 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) May 17 01:08:21.563820 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked May 17 01:08:21.563825 kernel: ACPI: Dynamic OEM Table Load: May 17 01:08:21.563829 kernel: ACPI: SSDT 0xFFFF8BF581AE1000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) May 17 01:08:21.563834 kernel: ACPI: Dynamic OEM Table Load: May 17 01:08:21.563839 kernel: ACPI: SSDT 0xFFFF8BF581A5E000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) May 17 01:08:21.563844 kernel: ACPI: Dynamic OEM Table Load: May 17 01:08:21.563849 kernel: ACPI: SSDT 0xFFFF8BF581B4E800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) May 17 01:08:21.563855 kernel: ACPI: Dynamic OEM Table Load: May 17 01:08:21.563860 kernel: ACPI: SSDT 0xFFFF8BF58014E000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) May 17 01:08:21.563864 kernel: ACPI: Dynamic OEM Table Load: May 17 01:08:21.563869 kernel: ACPI: SSDT 0xFFFF8BF581AE0800 00030A (v02 PmRef ApCst 00003000 INTL 20160527) May 17 01:08:21.563874 kernel: ACPI: Interpreter enabled May 17 01:08:21.563879 kernel: ACPI: PM: (supports S0 S5) May 17 01:08:21.563884 kernel: ACPI: Using IOAPIC for interrupt routing May 17 01:08:21.563889 kernel: HEST: Enabling Firmware First mode for corrected errors. May 17 01:08:21.563894 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. May 17 01:08:21.563900 kernel: HEST: Table parsing has been initialized. May 17 01:08:21.563905 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
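
The ACPI tables enumerated and reserved earlier in the log (XSDT, DSDT, the various SSDTs, DMAR, TPM2, and so on), plus the dynamically loaded SSDTs above, are exported as binary blobs under /sys/firmware/acpi/tables on a running system. A minimal sketch that lists them with their sizes; reading the table contents typically requires root:

# Sketch: list the ACPI tables the firmware handed to the kernel.
import os

tables_dir = "/sys/firmware/acpi/tables"
for entry in sorted(os.listdir(tables_dir)):
    path = os.path.join(tables_dir, entry)
    if os.path.isfile(path):
        print(f"{entry:10s} {os.path.getsize(path)} bytes")
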
May 17 01:08:21.563910 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 01:08:21.563915 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F May 17 01:08:21.563920 kernel: ACPI: PM: Power Resource [USBC] May 17 01:08:21.563925 kernel: ACPI: PM: Power Resource [V0PR] May 17 01:08:21.563930 kernel: ACPI: PM: Power Resource [V1PR] May 17 01:08:21.563935 kernel: ACPI: PM: Power Resource [V2PR] May 17 01:08:21.563940 kernel: ACPI: PM: Power Resource [WRST] May 17 01:08:21.563945 kernel: ACPI: PM: Power Resource [FN00] May 17 01:08:21.563950 kernel: ACPI: PM: Power Resource [FN01] May 17 01:08:21.563955 kernel: ACPI: PM: Power Resource [FN02] May 17 01:08:21.563960 kernel: ACPI: PM: Power Resource [FN03] May 17 01:08:21.563965 kernel: ACPI: PM: Power Resource [FN04] May 17 01:08:21.563970 kernel: ACPI: PM: Power Resource [PIN] May 17 01:08:21.563975 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) May 17 01:08:21.564040 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:08:21.564086 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] May 17 01:08:21.564130 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] May 17 01:08:21.564138 kernel: PCI host bridge to bus 0000:00 May 17 01:08:21.564181 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 01:08:21.564219 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 01:08:21.564295 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 01:08:21.564333 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] May 17 01:08:21.564370 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] May 17 01:08:21.564409 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] May 17 01:08:21.564461 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 May 17 01:08:21.564513 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 May 17 01:08:21.564558 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold May 17 01:08:21.564605 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 May 17 01:08:21.564651 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] May 17 01:08:21.564699 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 May 17 01:08:21.564743 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] May 17 01:08:21.564789 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 May 17 01:08:21.564832 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] May 17 01:08:21.564876 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold May 17 01:08:21.564922 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 May 17 01:08:21.564968 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] May 17 01:08:21.565009 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] May 17 01:08:21.565058 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 May 17 01:08:21.565100 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] May 17 01:08:21.565149 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 May 17 01:08:21.565191 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] May 17 01:08:21.565262 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 May 17 01:08:21.565305 kernel: pci 0000:00:16.0: reg 0x10: [mem 
0x9551a000-0x9551afff 64bit] May 17 01:08:21.565348 kernel: pci 0000:00:16.0: PME# supported from D3hot May 17 01:08:21.565395 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 May 17 01:08:21.565437 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] May 17 01:08:21.565481 kernel: pci 0000:00:16.1: PME# supported from D3hot May 17 01:08:21.565526 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 May 17 01:08:21.565572 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] May 17 01:08:21.565614 kernel: pci 0000:00:16.4: PME# supported from D3hot May 17 01:08:21.565661 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 May 17 01:08:21.565704 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] May 17 01:08:21.565746 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] May 17 01:08:21.565790 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] May 17 01:08:21.565839 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] May 17 01:08:21.565884 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] May 17 01:08:21.565927 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] May 17 01:08:21.565970 kernel: pci 0000:00:17.0: PME# supported from D3hot May 17 01:08:21.566017 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 May 17 01:08:21.566062 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold May 17 01:08:21.566110 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 May 17 01:08:21.566155 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold May 17 01:08:21.566207 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 May 17 01:08:21.566253 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold May 17 01:08:21.566302 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 May 17 01:08:21.566345 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold May 17 01:08:21.566394 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 May 17 01:08:21.566440 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold May 17 01:08:21.566489 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 May 17 01:08:21.566534 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] May 17 01:08:21.566582 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 May 17 01:08:21.566632 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 May 17 01:08:21.566675 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] May 17 01:08:21.566720 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] May 17 01:08:21.566769 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 May 17 01:08:21.566814 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] May 17 01:08:21.566864 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 May 17 01:08:21.566912 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] May 17 01:08:21.566958 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] May 17 01:08:21.567002 kernel: pci 0000:01:00.0: PME# supported from D3cold May 17 01:08:21.567048 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] May 17 01:08:21.567093 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) May 17 01:08:21.567144 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 May 17 01:08:21.567189 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit 
pref] May 17 01:08:21.567240 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] May 17 01:08:21.567285 kernel: pci 0000:01:00.1: PME# supported from D3cold May 17 01:08:21.567330 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] May 17 01:08:21.567375 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) May 17 01:08:21.567419 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 01:08:21.567484 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] May 17 01:08:21.567525 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] May 17 01:08:21.567572 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] May 17 01:08:21.567620 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect May 17 01:08:21.567666 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 May 17 01:08:21.567710 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] May 17 01:08:21.567754 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] May 17 01:08:21.567798 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] May 17 01:08:21.567843 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 17 01:08:21.567888 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] May 17 01:08:21.567932 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] May 17 01:08:21.567974 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] May 17 01:08:21.568022 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect May 17 01:08:21.568067 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 May 17 01:08:21.568114 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] May 17 01:08:21.568215 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] May 17 01:08:21.568283 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] May 17 01:08:21.568330 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold May 17 01:08:21.568374 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] May 17 01:08:21.568417 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] May 17 01:08:21.568460 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] May 17 01:08:21.568505 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] May 17 01:08:21.568553 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 May 17 01:08:21.568599 kernel: pci 0000:06:00.0: enabling Extended Tags May 17 01:08:21.568645 kernel: pci 0000:06:00.0: supports D1 D2 May 17 01:08:21.568689 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 01:08:21.568732 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] May 17 01:08:21.568775 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] May 17 01:08:21.568820 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] May 17 01:08:21.568869 kernel: pci_bus 0000:07: extended config space not accessible May 17 01:08:21.568924 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 May 17 01:08:21.568971 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] May 17 01:08:21.569021 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] May 17 01:08:21.569066 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] May 17 01:08:21.569113 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 01:08:21.569158 kernel: pci 0000:07:00.0: supports D1 D2 May 17 01:08:21.569206 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 
D3hot D3cold May 17 01:08:21.569275 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] May 17 01:08:21.569340 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] May 17 01:08:21.569388 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] May 17 01:08:21.569396 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 May 17 01:08:21.569402 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 May 17 01:08:21.569407 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 May 17 01:08:21.569412 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 May 17 01:08:21.569417 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 May 17 01:08:21.569423 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 May 17 01:08:21.569428 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 May 17 01:08:21.569433 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 May 17 01:08:21.569440 kernel: iommu: Default domain type: Translated May 17 01:08:21.569445 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 01:08:21.569491 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device May 17 01:08:21.569537 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 01:08:21.569584 kernel: pci 0000:07:00.0: vgaarb: bridge control possible May 17 01:08:21.569592 kernel: vgaarb: loaded May 17 01:08:21.569598 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 01:08:21.569603 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 01:08:21.569609 kernel: PTP clock support registered May 17 01:08:21.569615 kernel: PCI: Using ACPI for IRQ routing May 17 01:08:21.569620 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 01:08:21.569625 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] May 17 01:08:21.569631 kernel: e820: reserve RAM buffer [mem 0x8266f000-0x83ffffff] May 17 01:08:21.569636 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] May 17 01:08:21.569641 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] May 17 01:08:21.569646 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] May 17 01:08:21.569651 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] May 17 01:08:21.569657 kernel: clocksource: Switched to clocksource tsc-early May 17 01:08:21.569663 kernel: VFS: Disk quotas dquot_6.6.0 May 17 01:08:21.569668 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 01:08:21.569673 kernel: pnp: PnP ACPI init May 17 01:08:21.569717 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved May 17 01:08:21.569760 kernel: pnp 00:02: [dma 0 disabled] May 17 01:08:21.569802 kernel: pnp 00:03: [dma 0 disabled] May 17 01:08:21.569848 kernel: system 00:04: [io 0x0680-0x069f] has been reserved May 17 01:08:21.569888 kernel: system 00:04: [io 0x164e-0x164f] has been reserved May 17 01:08:21.569930 kernel: system 00:05: [io 0x1854-0x1857] has been reserved May 17 01:08:21.569973 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved May 17 01:08:21.570011 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved May 17 01:08:21.570050 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved May 17 01:08:21.570088 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved May 17 01:08:21.570127 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved May 17 01:08:21.570166 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be 
reserved May 17 01:08:21.570204 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved May 17 01:08:21.570267 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved May 17 01:08:21.570329 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved May 17 01:08:21.570368 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved May 17 01:08:21.570409 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved May 17 01:08:21.570447 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved May 17 01:08:21.570484 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved May 17 01:08:21.570522 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved May 17 01:08:21.570561 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved May 17 01:08:21.570602 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved May 17 01:08:21.570610 kernel: pnp: PnP ACPI: found 10 devices May 17 01:08:21.570616 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 01:08:21.570623 kernel: NET: Registered PF_INET protocol family May 17 01:08:21.570628 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 01:08:21.570634 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) May 17 01:08:21.570640 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 01:08:21.570645 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 01:08:21.570651 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 17 01:08:21.570656 kernel: TCP: Hash tables configured (established 262144 bind 65536) May 17 01:08:21.570661 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) May 17 01:08:21.570667 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) May 17 01:08:21.570673 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 01:08:21.570678 kernel: NET: Registered PF_XDP protocol family May 17 01:08:21.570722 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] May 17 01:08:21.570765 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] May 17 01:08:21.570808 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] May 17 01:08:21.570853 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] May 17 01:08:21.570898 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] May 17 01:08:21.570942 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] May 17 01:08:21.570988 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] May 17 01:08:21.571032 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 01:08:21.571075 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] May 17 01:08:21.571118 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] May 17 01:08:21.571162 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] May 17 01:08:21.571208 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] May 17 01:08:21.571278 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] May 17 01:08:21.571322 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] May 17 01:08:21.571366 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] May 17 01:08:21.571409 kernel: pci 0000:00:1b.5: bridge 
window [io 0x4000-0x4fff] May 17 01:08:21.571454 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] May 17 01:08:21.571497 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] May 17 01:08:21.571543 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] May 17 01:08:21.571590 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] May 17 01:08:21.571636 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] May 17 01:08:21.571679 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] May 17 01:08:21.571723 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] May 17 01:08:21.571766 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] May 17 01:08:21.571806 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc May 17 01:08:21.571845 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 01:08:21.571883 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 01:08:21.571924 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 01:08:21.571961 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] May 17 01:08:21.571999 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] May 17 01:08:21.572044 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] May 17 01:08:21.572085 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] May 17 01:08:21.572132 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] May 17 01:08:21.572172 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] May 17 01:08:21.572219 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] May 17 01:08:21.572262 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] May 17 01:08:21.572307 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] May 17 01:08:21.572349 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] May 17 01:08:21.572391 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] May 17 01:08:21.572434 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] May 17 01:08:21.572442 kernel: PCI: CLS 64 bytes, default 64 May 17 01:08:21.572449 kernel: DMAR: No ATSR found May 17 01:08:21.572455 kernel: DMAR: No SATC found May 17 01:08:21.572460 kernel: DMAR: dmar0: Using Queued invalidation May 17 01:08:21.572523 kernel: pci 0000:00:00.0: Adding to iommu group 0 May 17 01:08:21.572568 kernel: pci 0000:00:01.0: Adding to iommu group 1 May 17 01:08:21.572611 kernel: pci 0000:00:08.0: Adding to iommu group 2 May 17 01:08:21.572655 kernel: pci 0000:00:12.0: Adding to iommu group 3 May 17 01:08:21.572697 kernel: pci 0000:00:14.0: Adding to iommu group 4 May 17 01:08:21.572742 kernel: pci 0000:00:14.2: Adding to iommu group 4 May 17 01:08:21.572787 kernel: pci 0000:00:15.0: Adding to iommu group 5 May 17 01:08:21.572829 kernel: pci 0000:00:15.1: Adding to iommu group 5 May 17 01:08:21.572873 kernel: pci 0000:00:16.0: Adding to iommu group 6 May 17 01:08:21.572915 kernel: pci 0000:00:16.1: Adding to iommu group 6 May 17 01:08:21.572958 kernel: pci 0000:00:16.4: Adding to iommu group 6 May 17 01:08:21.573000 kernel: pci 0000:00:17.0: Adding to iommu group 7 May 17 01:08:21.573045 kernel: pci 0000:00:1b.0: Adding to iommu group 8 May 17 01:08:21.573089 kernel: pci 0000:00:1b.4: Adding to iommu group 9 May 17 01:08:21.573132 kernel: pci 0000:00:1b.5: Adding to iommu group 10 May 17 01:08:21.573175 kernel: pci 0000:00:1c.0: Adding to iommu group 11 May 17 01:08:21.573217 kernel: pci 0000:00:1c.3: 
Adding to iommu group 12 May 17 01:08:21.573299 kernel: pci 0000:00:1e.0: Adding to iommu group 13 May 17 01:08:21.573342 kernel: pci 0000:00:1f.0: Adding to iommu group 14 May 17 01:08:21.573385 kernel: pci 0000:00:1f.4: Adding to iommu group 14 May 17 01:08:21.573428 kernel: pci 0000:00:1f.5: Adding to iommu group 14 May 17 01:08:21.573475 kernel: pci 0000:01:00.0: Adding to iommu group 1 May 17 01:08:21.573520 kernel: pci 0000:01:00.1: Adding to iommu group 1 May 17 01:08:21.573564 kernel: pci 0000:03:00.0: Adding to iommu group 15 May 17 01:08:21.573609 kernel: pci 0000:04:00.0: Adding to iommu group 16 May 17 01:08:21.573653 kernel: pci 0000:06:00.0: Adding to iommu group 17 May 17 01:08:21.573700 kernel: pci 0000:07:00.0: Adding to iommu group 17 May 17 01:08:21.573708 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O May 17 01:08:21.573713 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 17 01:08:21.573719 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB) May 17 01:08:21.573726 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer May 17 01:08:21.573731 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules May 17 01:08:21.573737 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules May 17 01:08:21.573742 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules May 17 01:08:21.573788 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) May 17 01:08:21.573796 kernel: Initialise system trusted keyrings May 17 01:08:21.573802 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 May 17 01:08:21.573808 kernel: Key type asymmetric registered May 17 01:08:21.573814 kernel: Asymmetric key parser 'x509' registered May 17 01:08:21.573819 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 17 01:08:21.573825 kernel: io scheduler mq-deadline registered May 17 01:08:21.573830 kernel: io scheduler kyber registered May 17 01:08:21.573835 kernel: io scheduler bfq registered May 17 01:08:21.573880 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 May 17 01:08:21.573923 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 May 17 01:08:21.573967 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 May 17 01:08:21.574012 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 May 17 01:08:21.574056 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 May 17 01:08:21.574099 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 May 17 01:08:21.574146 kernel: thermal LNXTHERM:00: registered as thermal_zone0 May 17 01:08:21.574154 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) May 17 01:08:21.574159 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. May 17 01:08:21.574165 kernel: pstore: Registered erst as persistent store backend May 17 01:08:21.574170 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 01:08:21.574177 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 01:08:21.574182 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 01:08:21.574188 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A May 17 01:08:21.574193 kernel: hpet_acpi_add: no address or irqs in _CRS May 17 01:08:21.574265 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) May 17 01:08:21.574273 kernel: i8042: PNP: No PS/2 controller found. 
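
The "Adding to iommu group N" lines above describe the DMA isolation boundaries enforced by the Intel IOMMU (VT-d); the resulting grouping is visible under /sys/kernel/iommu_groups on the running system. A sketch, not taken from the log, that reprints group membership with each device's vendor:device IDs:

# Sketch: list IOMMU groups and the IDs of their member devices, mirroring
# the "Adding to iommu group N" lines in the boot log.
import os

groups_dir = "/sys/kernel/iommu_groups"
for group in sorted(os.listdir(groups_dir), key=int):
    dev_dir = os.path.join(groups_dir, group, "devices")
    for dev in sorted(os.listdir(dev_dir)):
        ids = []
        for attr in ("vendor", "device"):
            try:
                with open(os.path.join(dev_dir, dev, attr)) as f:
                    ids.append(f.read().strip().removeprefix("0x"))
            except OSError:
                ids.append("????")
        print(f"group {group:>3}: {dev} [{':'.join(ids)}]")
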
May 17 01:08:21.574332 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 May 17 01:08:21.574373 kernel: rtc_cmos rtc_cmos: registered as rtc0 May 17 01:08:21.574414 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-05-17T01:08:20 UTC (1747444100) May 17 01:08:21.574454 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram May 17 01:08:21.574461 kernel: intel_pstate: Intel P-state driver initializing May 17 01:08:21.574467 kernel: intel_pstate: Disabling energy efficiency optimization May 17 01:08:21.574472 kernel: intel_pstate: HWP enabled May 17 01:08:21.574478 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 May 17 01:08:21.574483 kernel: vesafb: scrolling: redraw May 17 01:08:21.574488 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 May 17 01:08:21.574494 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000006e8317e6, using 768k, total 768k May 17 01:08:21.574500 kernel: Console: switching to colour frame buffer device 128x48 May 17 01:08:21.574506 kernel: fb0: VESA VGA frame buffer device May 17 01:08:21.574511 kernel: NET: Registered PF_INET6 protocol family May 17 01:08:21.574516 kernel: Segment Routing with IPv6 May 17 01:08:21.574522 kernel: In-situ OAM (IOAM) with IPv6 May 17 01:08:21.574527 kernel: NET: Registered PF_PACKET protocol family May 17 01:08:21.574532 kernel: Key type dns_resolver registered May 17 01:08:21.574537 kernel: microcode: sig=0x906ed, pf=0x2, revision=0x102 May 17 01:08:21.574543 kernel: microcode: Microcode Update Driver: v2.2. May 17 01:08:21.574549 kernel: IPI shorthand broadcast: enabled May 17 01:08:21.574554 kernel: sched_clock: Marking stable (1681154698, 1340008558)->(4454111542, -1432948286) May 17 01:08:21.574559 kernel: registered taskstats version 1 May 17 01:08:21.574565 kernel: Loading compiled-in X.509 certificates May 17 01:08:21.574570 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c' May 17 01:08:21.574575 kernel: Key type .fscrypt registered May 17 01:08:21.574581 kernel: Key type fscrypt-provisioning registered May 17 01:08:21.574586 kernel: pstore: Using crash dump compression: deflate May 17 01:08:21.574592 kernel: ima: Allocated hash algorithm: sha1 May 17 01:08:21.574597 kernel: ima: No architecture policies found May 17 01:08:21.574603 kernel: clk: Disabling unused clocks May 17 01:08:21.574608 kernel: Freeing unused kernel image (initmem) memory: 47472K May 17 01:08:21.574613 kernel: Write protecting the kernel read-only data: 28672k May 17 01:08:21.574619 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 17 01:08:21.574624 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 17 01:08:21.574629 kernel: Run /init as init process May 17 01:08:21.574635 kernel: with arguments: May 17 01:08:21.574640 kernel: /init May 17 01:08:21.574646 kernel: with environment: May 17 01:08:21.574651 kernel: HOME=/ May 17 01:08:21.574656 kernel: TERM=linux May 17 01:08:21.574662 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 01:08:21.574668 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 01:08:21.574675 systemd[1]: Detected architecture x86-64. 
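
The rtc_cmos line above records the hardware clock both as an ISO timestamp and as a Unix epoch (1747444100), and the audit(...) records elsewhere in the log use the same epoch notation, e.g. audit(1747444101.582:2). A quick check (illustrative, using Python's standard datetime) that the two notations agree:

# Sketch: confirm that epoch 1747444100 is the 2025-05-17T01:08:20 UTC printed
# by rtc_cmos, and decode an audit(...) timestamp the same way.
from datetime import datetime, timezone

print(datetime.fromtimestamp(1747444100, tz=timezone.utc))      # 2025-05-17 01:08:20+00:00
print(datetime.fromtimestamp(1747444101.582, tz=timezone.utc))  # audit(1747444101.582:2)
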
May 17 01:08:21.574680 systemd[1]: Running in initrd. May 17 01:08:21.574687 systemd[1]: No hostname configured, using default hostname. May 17 01:08:21.574692 systemd[1]: Hostname set to . May 17 01:08:21.574697 systemd[1]: Initializing machine ID from random generator. May 17 01:08:21.574703 systemd[1]: Queued start job for default target initrd.target. May 17 01:08:21.574709 systemd[1]: Started systemd-ask-password-console.path. May 17 01:08:21.574714 systemd[1]: Reached target cryptsetup.target. May 17 01:08:21.574720 systemd[1]: Reached target paths.target. May 17 01:08:21.574725 systemd[1]: Reached target slices.target. May 17 01:08:21.574731 systemd[1]: Reached target swap.target. May 17 01:08:21.574737 systemd[1]: Reached target timers.target. May 17 01:08:21.574742 systemd[1]: Listening on iscsid.socket. May 17 01:08:21.574748 systemd[1]: Listening on iscsiuio.socket. May 17 01:08:21.574753 systemd[1]: Listening on systemd-journald-audit.socket. May 17 01:08:21.574759 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 01:08:21.574764 systemd[1]: Listening on systemd-journald.socket. May 17 01:08:21.574770 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz May 17 01:08:21.574776 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns May 17 01:08:21.574782 kernel: clocksource: Switched to clocksource tsc May 17 01:08:21.574787 systemd[1]: Listening on systemd-networkd.socket. May 17 01:08:21.574793 systemd[1]: Listening on systemd-udevd-control.socket. May 17 01:08:21.574798 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 01:08:21.574804 systemd[1]: Reached target sockets.target. May 17 01:08:21.574809 systemd[1]: Starting kmod-static-nodes.service... May 17 01:08:21.574815 systemd[1]: Finished network-cleanup.service. May 17 01:08:21.574820 systemd[1]: Starting systemd-fsck-usr.service... May 17 01:08:21.574826 systemd[1]: Starting systemd-journald.service... May 17 01:08:21.574832 systemd[1]: Starting systemd-modules-load.service... May 17 01:08:21.574840 systemd-journald[268]: Journal started May 17 01:08:21.574865 systemd-journald[268]: Runtime Journal (/run/log/journal/7dfb18a59f6b46bdbdcfac8d5a8b2134) is 8.0M, max 640.1M, 632.1M free. May 17 01:08:21.576753 systemd-modules-load[269]: Inserted module 'overlay' May 17 01:08:21.582000 audit: BPF prog-id=6 op=LOAD May 17 01:08:21.600273 kernel: audit: type=1334 audit(1747444101.582:2): prog-id=6 op=LOAD May 17 01:08:21.600289 systemd[1]: Starting systemd-resolved.service... May 17 01:08:21.649259 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 01:08:21.649276 systemd[1]: Starting systemd-vconsole-setup.service... May 17 01:08:21.682271 kernel: Bridge firewalling registered May 17 01:08:21.682287 systemd[1]: Started systemd-journald.service. May 17 01:08:21.696687 systemd-modules-load[269]: Inserted module 'br_netfilter' May 17 01:08:21.745635 kernel: audit: type=1130 audit(1747444101.704:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:21.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:08:21.699188 systemd-resolved[271]: Positive Trust Anchors: May 17 01:08:21.802933 kernel: SCSI subsystem initialized May 17 01:08:21.802943 kernel: audit: type=1130 audit(1747444101.757:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:21.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:21.699195 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 01:08:21.924313 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 01:08:21.924326 kernel: audit: type=1130 audit(1747444101.828:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:21.924334 kernel: device-mapper: uevent: version 1.0.3 May 17 01:08:21.924340 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 17 01:08:21.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:21.699217 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 01:08:21.996502 kernel: audit: type=1130 audit(1747444101.924:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:21.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:21.700799 systemd-resolved[271]: Defaulting to hostname 'linux'. May 17 01:08:22.050290 kernel: audit: type=1130 audit(1747444102.005:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:22.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:21.704454 systemd[1]: Started systemd-resolved.service. May 17 01:08:22.104261 kernel: audit: type=1130 audit(1747444102.058:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:22.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 01:08:21.757413 systemd[1]: Finished kmod-static-nodes.service. May 17 01:08:21.828390 systemd[1]: Finished systemd-fsck-usr.service. May 17 01:08:21.924702 systemd[1]: Finished systemd-vconsole-setup.service. May 17 01:08:21.969670 systemd-modules-load[269]: Inserted module 'dm_multipath' May 17 01:08:22.005616 systemd[1]: Finished systemd-modules-load.service. May 17 01:08:22.058529 systemd[1]: Reached target nss-lookup.target. May 17 01:08:22.112832 systemd[1]: Starting dracut-cmdline-ask.service... May 17 01:08:22.132775 systemd[1]: Starting systemd-sysctl.service... May 17 01:08:22.133076 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 01:08:22.135873 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 01:08:22.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:22.136654 systemd[1]: Finished systemd-sysctl.service. May 17 01:08:22.185243 kernel: audit: type=1130 audit(1747444102.135:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:22.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:22.198613 systemd[1]: Finished dracut-cmdline-ask.service. May 17 01:08:22.263329 kernel: audit: type=1130 audit(1747444102.198:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:22.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:22.255833 systemd[1]: Starting dracut-cmdline.service... May 17 01:08:22.279335 dracut-cmdline[294]: dracut-dracut-053 May 17 01:08:22.279335 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA May 17 01:08:22.279335 dracut-cmdline[294]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 01:08:22.350269 kernel: Loading iSCSI transport class v2.0-870. May 17 01:08:22.350282 kernel: iscsi: registered transport (tcp) May 17 01:08:22.404845 kernel: iscsi: registered transport (qla4xxx) May 17 01:08:22.404861 kernel: QLogic iSCSI HBA Driver May 17 01:08:22.422400 systemd[1]: Finished dracut-cmdline.service. May 17 01:08:22.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:22.433041 systemd[1]: Starting dracut-pre-udev.service... 
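The dracut-cmdline entries above echo the kernel command line dracut will act on (the log wraps it mid-token, which is why root=LABEL=ROOT appears split into root=LA and BEL=ROOT). A small sketch of how such a line breaks down into parameters; this is plain string handling, and on a running system the same data can be read from /proc/cmdline:

```python
def parse_kernel_cmdline(cmdline: str):
    """Split a kernel command line into (key, value) pairs.

    Bare flags such as "flatcar.autologin" get a value of None, and duplicate
    keys are kept in order, which matters here because rootflags= and console=
    both appear more than once on the logged command line.
    """
    params = []
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params.append((key, value if sep else None))
    return params

# Abbreviated version of the command line shown above
cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
           "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 "
           "flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin")
for key, value in parse_kernel_cmdline(cmdline):
    print(key, value)
```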
May 17 01:08:22.489300 kernel: raid6: avx2x4 gen() 47138 MB/s May 17 01:08:22.525312 kernel: raid6: avx2x4 xor() 16923 MB/s May 17 01:08:22.560302 kernel: raid6: avx2x2 gen() 53676 MB/s May 17 01:08:22.595267 kernel: raid6: avx2x2 xor() 31989 MB/s May 17 01:08:22.630299 kernel: raid6: avx2x1 gen() 45108 MB/s May 17 01:08:22.664258 kernel: raid6: avx2x1 xor() 27791 MB/s May 17 01:08:22.698259 kernel: raid6: sse2x4 gen() 21278 MB/s May 17 01:08:22.732258 kernel: raid6: sse2x4 xor() 11951 MB/s May 17 01:08:22.766258 kernel: raid6: sse2x2 gen() 21583 MB/s May 17 01:08:22.800302 kernel: raid6: sse2x2 xor() 13408 MB/s May 17 01:08:22.834298 kernel: raid6: sse2x1 gen() 18218 MB/s May 17 01:08:22.886028 kernel: raid6: sse2x1 xor() 9010 MB/s May 17 01:08:22.886045 kernel: raid6: using algorithm avx2x2 gen() 53676 MB/s May 17 01:08:22.886053 kernel: raid6: .... xor() 31989 MB/s, rmw enabled May 17 01:08:22.904220 kernel: raid6: using avx2x2 recovery algorithm May 17 01:08:22.950296 kernel: xor: automatically using best checksumming function avx May 17 01:08:23.030236 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 17 01:08:23.035220 systemd[1]: Finished dracut-pre-udev.service. May 17 01:08:23.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:23.035000 audit: BPF prog-id=7 op=LOAD May 17 01:08:23.035000 audit: BPF prog-id=8 op=LOAD May 17 01:08:23.036053 systemd[1]: Starting systemd-udevd.service... May 17 01:08:23.044373 systemd-udevd[473]: Using default interface naming scheme 'v252'. May 17 01:08:23.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:23.057701 systemd[1]: Started systemd-udevd.service. May 17 01:08:23.096302 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation May 17 01:08:23.074203 systemd[1]: Starting dracut-pre-trigger.service... May 17 01:08:23.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:23.102443 systemd[1]: Finished dracut-pre-trigger.service. May 17 01:08:23.115130 systemd[1]: Starting systemd-udev-trigger.service... May 17 01:08:23.188719 systemd[1]: Finished systemd-udev-trigger.service. May 17 01:08:23.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:23.218264 kernel: cryptd: max_cpu_qlen set to 1000 May 17 01:08:23.260920 kernel: AVX2 version of gcm_enc/dec engaged. May 17 01:08:23.260971 kernel: AES CTR mode by8 optimization enabled May 17 01:08:23.261232 kernel: ACPI: bus type USB registered May 17 01:08:23.278271 kernel: libata version 3.00 loaded. May 17 01:08:23.278289 kernel: usbcore: registered new interface driver usbfs May 17 01:08:23.295835 kernel: usbcore: registered new interface driver hub May 17 01:08:23.313334 kernel: usbcore: registered new device driver usb May 17 01:08:23.333230 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 17 01:08:23.364566 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
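The raid6 lines above benchmark each available gen()/xor() implementation and then settle on avx2x2, the one with the highest gen() throughput. A tiny sketch of that selection rule using the numbers from the log; the kernel's real benchmark is more elaborate, so treat this as an illustration only:

```python
# gen() throughput in MB/s, copied from the raid6 benchmark output above
gen_results = {
    "avx2x4": 47138,
    "avx2x2": 53676,
    "avx2x1": 45108,
    "sse2x4": 21278,
    "sse2x2": 21583,
    "sse2x1": 18218,
}

best = max(gen_results, key=gen_results.get)
print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
# -> raid6: using algorithm avx2x2 gen() 53676 MB/s
```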
May 17 01:08:23.405162 kernel: mlx5_core 0000:01:00.0: firmware version: 14.28.2006 May 17 01:08:23.987861 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) May 17 01:08:23.987932 kernel: igb 0000:03:00.0: added PHC on eth0 May 17 01:08:23.987989 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 17 01:08:23.988040 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d7:e8 May 17 01:08:23.988091 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 May 17 01:08:23.988142 kernel: ahci 0000:00:17.0: version 3.0 May 17 01:08:23.988194 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) May 17 01:08:23.988247 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode May 17 01:08:23.988299 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst May 17 01:08:23.988348 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller May 17 01:08:23.988397 kernel: igb 0000:04:00.0: added PHC on eth1 May 17 01:08:23.988451 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 May 17 01:08:23.988499 kernel: scsi host0: ahci May 17 01:08:23.988556 kernel: scsi host1: ahci May 17 01:08:23.988610 kernel: scsi host2: ahci May 17 01:08:23.988662 kernel: scsi host3: ahci May 17 01:08:23.988719 kernel: scsi host4: ahci May 17 01:08:23.988770 kernel: scsi host5: ahci May 17 01:08:23.988825 kernel: scsi host6: ahci May 17 01:08:23.988875 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132 May 17 01:08:23.988883 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132 May 17 01:08:23.988890 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132 May 17 01:08:23.988898 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132 May 17 01:08:23.988905 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132 May 17 01:08:23.988911 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132 May 17 01:08:23.988918 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132 May 17 01:08:23.988924 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection May 17 01:08:23.988977 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 May 17 01:08:23.989026 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d7:e9 May 17 01:08:23.989076 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 May 17 01:08:23.989127 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller May 17 01:08:23.989176 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) May 17 01:08:23.989228 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 May 17 01:08:23.989278 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) May 17 01:08:23.989329 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed May 17 01:08:23.989378 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) May 17 01:08:23.989428 kernel: hub 1-0:1.0: USB hub found May 17 01:08:23.989489 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 01:08:23.989498 kernel: hub 1-0:1.0: 16 ports detected May 17 01:08:23.989552 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 01:08:23.989559 kernel: hub 2-0:1.0: USB hub found May 17 01:08:23.989619 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 01:08:23.989627 kernel: hub 2-0:1.0: 10 ports detected May 17 01:08:23.989679 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 17 01:08:23.989686 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 May 17 01:08:23.989739 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) May 17 01:08:23.989746 kernel: mlx5_core 0000:01:00.1: firmware version: 14.28.2006 May 17 01:08:24.902239 kernel: ata7: SATA link down (SStatus 0 SControl 300) May 17 01:08:24.902252 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) May 17 01:08:24.902317 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 May 17 01:08:24.902326 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) May 17 01:08:24.902332 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 May 17 01:08:24.902339 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA May 17 01:08:24.902348 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd May 17 01:08:24.902453 kernel: ata1.00: Features: NCQ-prio May 17 01:08:24.902460 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA May 17 01:08:24.902467 kernel: ata2.00: Features: NCQ-prio May 17 01:08:24.902474 kernel: ata1.00: configured for UDMA/133 May 17 01:08:24.902481 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 May 17 01:08:24.902549 kernel: ata2.00: configured for UDMA/133 May 17 01:08:24.902556 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 May 17 01:08:24.902619 kernel: igb 0000:03:00.0 eno1: renamed from eth0 May 17 01:08:24.902680 kernel: ata1.00: Enabling discard_zeroes_data May 17 01:08:24.902687 kernel: hub 1-14:1.0: USB hub found May 17 01:08:24.902754 kernel: ata2.00: Enabling discard_zeroes_data May 17 01:08:24.902761 kernel: hub 1-14:1.0: 4 ports detected May 17 01:08:24.902823 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) May 17 01:08:24.902881 kernel: port_module: 9 callbacks suppressed May 17 01:08:24.902891 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged May 17 01:08:24.902948 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) May 17 01:08:24.903009 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 17 01:08:24.903067 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 01:08:24.903121 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 May 17 01:08:24.903175 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) May 17 01:08:24.903233 
kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 01:08:24.903291 kernel: ata1.00: Enabling discard_zeroes_data May 17 01:08:24.903299 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 01:08:24.903305 kernel: GPT:9289727 != 937703087 May 17 01:08:24.903312 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 01:08:24.903318 kernel: GPT:9289727 != 937703087 May 17 01:08:24.903324 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 01:08:24.903331 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 01:08:24.903337 kernel: ata1.00: Enabling discard_zeroes_data May 17 01:08:24.903344 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 01:08:24.903397 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks May 17 01:08:24.903453 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd May 17 01:08:24.903550 kernel: sd 1:0:0:0: [sdb] Write Protect is off May 17 01:08:24.903608 kernel: igb 0000:04:00.0 eno2: renamed from eth1 May 17 01:08:24.903662 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) May 17 01:08:24.903716 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 May 17 01:08:24.903771 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 01:08:24.903828 kernel: ata2.00: Enabling discard_zeroes_data May 17 01:08:24.903835 kernel: ata2.00: Enabling discard_zeroes_data May 17 01:08:24.903842 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk May 17 01:08:24.903896 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 01:08:24.903904 kernel: usbcore: registered new interface driver usbhid May 17 01:08:24.903911 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (543) May 17 01:08:24.903917 kernel: usbhid: USB HID core driver May 17 01:08:24.903924 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 May 17 01:08:24.903931 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 May 17 01:08:24.794651 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 17 01:08:24.943352 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 May 17 01:08:24.943422 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 May 17 01:08:24.859338 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 17 01:08:25.045388 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 May 17 01:08:25.045409 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 May 17 01:08:25.045524 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 May 17 01:08:24.907483 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 17 01:08:24.972011 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 17 01:08:25.061207 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 01:08:25.096609 kernel: ata1.00: Enabling discard_zeroes_data May 17 01:08:25.075887 systemd[1]: Starting disk-uuid.service... 
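The GPT warnings above mean the backup GPT header is not in the last sector of the 937703088-sector drive: the primary header says it sits at LBA 9289727, where a much smaller source image ended, and the kernel suggests GNU Parted to repair it. A worked check of those numbers, assuming the 512-byte logical sectors reported for sda:

```python
SECTOR_BYTES = 512
total_sectors = 937_703_088          # "[sda] 937703088 512-byte logical blocks"

# Capacity as the kernel prints it: decimal GB and binary GiB
print(round(total_sectors * SECTOR_BYTES / 10**9))   # 480  -> "(480 GB/447 GiB)"
print(round(total_sectors * SECTOR_BYTES / 2**30))   # 447

# A well-formed GPT keeps its backup header in the very last LBA of the disk.
reported_backup_lba = 9_289_727      # where the primary header says it is
expected_backup_lba = total_sectors - 1
print(f"GPT:{reported_backup_lba} != {expected_backup_lba}")   # GPT:9289727 != 937703087
```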
May 17 01:08:25.126316 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 01:08:25.126327 kernel: ata1.00: Enabling discard_zeroes_data May 17 01:08:25.126376 disk-uuid[692]: Primary Header is updated. May 17 01:08:25.126376 disk-uuid[692]: Secondary Entries is updated. May 17 01:08:25.126376 disk-uuid[692]: Secondary Header is updated. May 17 01:08:25.184325 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 01:08:25.184337 kernel: ata1.00: Enabling discard_zeroes_data May 17 01:08:25.184345 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 01:08:26.170373 kernel: ata1.00: Enabling discard_zeroes_data May 17 01:08:26.189255 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 01:08:26.189285 disk-uuid[693]: The operation has completed successfully. May 17 01:08:26.226309 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 01:08:26.339354 kernel: audit: type=1130 audit(1747444106.234:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:26.339369 kernel: audit: type=1131 audit(1747444106.234:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:26.339376 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 17 01:08:26.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:26.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:26.226371 systemd[1]: Finished disk-uuid.service. May 17 01:08:26.235391 systemd[1]: Starting verity-setup.service... May 17 01:08:26.381050 systemd[1]: Found device dev-mapper-usr.device. May 17 01:08:26.381808 systemd[1]: Mounting sysusr-usr.mount... May 17 01:08:26.406457 systemd[1]: Finished verity-setup.service. May 17 01:08:26.475414 kernel: audit: type=1130 audit(1747444106.414:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:26.475429 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 17 01:08:26.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:26.489143 systemd[1]: Mounted sysusr-usr.mount. May 17 01:08:26.497533 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 17 01:08:26.586638 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 01:08:26.586654 kernel: BTRFS info (device sda6): using free space tree May 17 01:08:26.586662 kernel: BTRFS info (device sda6): has skinny extents May 17 01:08:26.586674 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 01:08:26.497936 systemd[1]: Starting ignition-setup.service... May 17 01:08:26.521693 systemd[1]: Starting parse-ip-for-networkd.service... 
May 17 01:08:26.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:26.595686 systemd[1]: Finished ignition-setup.service. May 17 01:08:26.724000 kernel: audit: type=1130 audit(1747444106.612:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:26.724015 kernel: audit: type=1130 audit(1747444106.674:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:26.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:26.612724 systemd[1]: Finished parse-ip-for-networkd.service. May 17 01:08:26.759247 kernel: audit: type=1334 audit(1747444106.736:24): prog-id=9 op=LOAD May 17 01:08:26.736000 audit: BPF prog-id=9 op=LOAD May 17 01:08:26.675315 systemd[1]: Starting ignition-fetch-offline.service... May 17 01:08:26.737303 systemd[1]: Starting systemd-networkd.service... May 17 01:08:26.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:26.773791 systemd-networkd[879]: lo: Link UP May 17 01:08:26.850350 kernel: audit: type=1130 audit(1747444106.782:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:26.797962 ignition[867]: Ignition 2.14.0 May 17 01:08:26.773794 systemd-networkd[879]: lo: Gained carrier May 17 01:08:26.797967 ignition[867]: Stage: fetch-offline May 17 01:08:26.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:26.774127 systemd-networkd[879]: Enumeration completed May 17 01:08:27.009311 kernel: audit: type=1130 audit(1747444106.876:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:27.009326 kernel: audit: type=1130 audit(1747444106.935:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:27.009334 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up May 17 01:08:26.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:26.797993 ignition[867]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 01:08:27.034684 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready May 17 01:08:26.774172 systemd[1]: Started systemd-networkd.service. 
May 17 01:08:26.798006 ignition[867]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 May 17 01:08:26.775081 systemd-networkd[879]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 01:08:27.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:26.806003 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:08:26.782470 systemd[1]: Reached target network.target. May 17 01:08:27.091427 iscsid[900]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 17 01:08:27.091427 iscsid[900]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 17 01:08:27.091427 iscsid[900]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 17 01:08:27.091427 iscsid[900]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 17 01:08:27.091427 iscsid[900]: If using hardware iscsi like qla4xxx this message can be ignored. May 17 01:08:27.091427 iscsid[900]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 17 01:08:27.091427 iscsid[900]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 17 01:08:27.262418 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up May 17 01:08:27.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:27.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:26.806072 ignition[867]: parsed url from cmdline: "" May 17 01:08:26.810126 unknown[867]: fetched base config from "system" May 17 01:08:26.806074 ignition[867]: no config URL provided May 17 01:08:26.810130 unknown[867]: fetched user config from "system" May 17 01:08:26.806078 ignition[867]: reading system config file "/usr/lib/ignition/user.ign" May 17 01:08:26.843933 systemd[1]: Starting iscsiuio.service... May 17 01:08:26.806099 ignition[867]: parsing config with SHA512: 8d1e3ab22112aa292ad74424266a2624eb3ffbabc3a5e805feac698de7b2f87496dc0be708eead1c918559cf0a07594a113e27553cfe048e3417d6ba83b6f94b May 17 01:08:26.857540 systemd[1]: Started iscsiuio.service. May 17 01:08:26.810493 ignition[867]: fetch-offline: fetch-offline passed May 17 01:08:26.876523 systemd[1]: Finished ignition-fetch-offline.service. May 17 01:08:26.810496 ignition[867]: POST message to Packet Timeline May 17 01:08:26.935496 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 01:08:26.810500 ignition[867]: POST Status error: resource requires networking May 17 01:08:26.935943 systemd[1]: Starting ignition-kargs.service... 
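iscsid above reports that /etc/iscsi/initiatorname.iscsi is missing and describes the expected contents: a single line of the form InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier], as in its own example iqn.2001-04.com.redhat:fc6. A hedged sketch that assembles such a line; the date and domain passed in are illustrative, not values from this machine:

```python
def make_initiatorname(year: int, month: int, domain: str, identifier: str = "") -> str:
    """Build an iSCSI qualified name: iqn.yyyy-mm.<reversed domain>[:identifier]."""
    reversed_domain = ".".join(reversed(domain.split(".")))
    iqn = f"iqn.{year:04d}-{month:02d}.{reversed_domain}"
    if identifier:
        iqn += f":{identifier}"
    return f"InitiatorName={iqn}"

# Reproduces the example iscsid quotes in its warning
print(make_initiatorname(2001, 4, "redhat.com", "fc6"))
# -> InitiatorName=iqn.2001-04.com.redhat:fc6
```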
May 17 01:08:26.810539 ignition[867]: Ignition finished successfully May 17 01:08:27.010184 systemd-networkd[879]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 01:08:27.013902 ignition[889]: Ignition 2.14.0 May 17 01:08:27.023870 systemd[1]: Starting iscsid.service... May 17 01:08:27.013906 ignition[889]: Stage: kargs May 17 01:08:27.048402 systemd[1]: Started iscsid.service. May 17 01:08:27.013963 ignition[889]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 01:08:27.062718 systemd[1]: Starting dracut-initqueue.service... May 17 01:08:27.013973 ignition[889]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 May 17 01:08:27.082422 systemd[1]: Finished dracut-initqueue.service. May 17 01:08:27.016552 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:08:27.103333 systemd[1]: Reached target remote-fs-pre.target. May 17 01:08:27.017705 ignition[889]: kargs: kargs passed May 17 01:08:27.114490 systemd[1]: Reached target remote-cryptsetup.target. May 17 01:08:27.017719 ignition[889]: POST message to Packet Timeline May 17 01:08:27.155487 systemd[1]: Reached target remote-fs.target. May 17 01:08:27.017750 ignition[889]: GET https://metadata.packet.net/metadata: attempt #1 May 17 01:08:27.177654 systemd[1]: Starting dracut-pre-mount.service... May 17 01:08:27.021399 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47919->[::1]:53: read: connection refused May 17 01:08:27.207517 systemd[1]: Finished dracut-pre-mount.service. May 17 01:08:27.221878 ignition[889]: GET https://metadata.packet.net/metadata: attempt #2 May 17 01:08:27.251940 systemd-networkd[879]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 01:08:27.222355 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36909->[::1]:53: read: connection refused May 17 01:08:27.280632 systemd-networkd[879]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
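The ignition-kargs stage above keeps failing its GET to https://metadata.packet.net/metadata because every DNS lookup against [::1]:53 fails while the network is still coming up; it simply retries until a later attempt succeeds. A rough sketch of such a retry loop, assuming urllib and a capped exponential backoff rather than Ignition's actual (Go) implementation:

```python
import time
import urllib.error
import urllib.request

def fetch_with_retry(url: str, attempts: int = 6, base_delay: float = 1.0) -> bytes:
    """GET a URL, retrying on DNS/network errors with capped exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            print(f"GET {url}: attempt #{attempt}")
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            print(f"GET error: {err}")
            if attempt == attempts:
                raise
            time.sleep(min(base_delay * 2 ** (attempt - 1), 30.0))

# metadata = fetch_with_retry("https://metadata.packet.net/metadata")
```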
May 17 01:08:27.311502 systemd-networkd[879]: enp1s0f1np1: Link UP May 17 01:08:27.311945 systemd-networkd[879]: enp1s0f1np1: Gained carrier May 17 01:08:27.325749 systemd-networkd[879]: enp1s0f0np0: Link UP May 17 01:08:27.326122 systemd-networkd[879]: eno2: Link UP May 17 01:08:27.326487 systemd-networkd[879]: eno1: Link UP May 17 01:08:27.623046 ignition[889]: GET https://metadata.packet.net/metadata: attempt #3 May 17 01:08:27.624107 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39104->[::1]:53: read: connection refused May 17 01:08:28.053014 systemd-networkd[879]: enp1s0f0np0: Gained carrier May 17 01:08:28.061487 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready May 17 01:08:28.099417 systemd-networkd[879]: enp1s0f0np0: DHCPv4 address 147.28.180.193/31, gateway 147.28.180.192 acquired from 145.40.83.140 May 17 01:08:28.349797 systemd-networkd[879]: enp1s0f1np1: Gained IPv6LL May 17 01:08:28.424530 ignition[889]: GET https://metadata.packet.net/metadata: attempt #4 May 17 01:08:28.425686 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39151->[::1]:53: read: connection refused May 17 01:08:29.821845 systemd-networkd[879]: enp1s0f0np0: Gained IPv6LL May 17 01:08:30.027622 ignition[889]: GET https://metadata.packet.net/metadata: attempt #5 May 17 01:08:30.028784 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34772->[::1]:53: read: connection refused May 17 01:08:33.232221 ignition[889]: GET https://metadata.packet.net/metadata: attempt #6 May 17 01:08:34.207020 ignition[889]: GET result: OK May 17 01:08:34.562675 ignition[889]: Ignition finished successfully May 17 01:08:34.566935 systemd[1]: Finished ignition-kargs.service. May 17 01:08:34.655879 kernel: kauditd_printk_skb: 3 callbacks suppressed May 17 01:08:34.655899 kernel: audit: type=1130 audit(1747444114.580:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:34.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:34.590448 ignition[916]: Ignition 2.14.0 May 17 01:08:34.582517 systemd[1]: Starting ignition-disks.service... May 17 01:08:34.590452 ignition[916]: Stage: disks May 17 01:08:34.590527 ignition[916]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 01:08:34.590536 ignition[916]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 May 17 01:08:34.593219 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:08:34.593897 ignition[916]: disks: disks passed May 17 01:08:34.593900 ignition[916]: POST message to Packet Timeline May 17 01:08:34.593909 ignition[916]: GET https://metadata.packet.net/metadata: attempt #1 May 17 01:08:35.538603 ignition[916]: GET result: OK May 17 01:08:35.880856 ignition[916]: Ignition finished successfully May 17 01:08:35.883064 systemd[1]: Finished ignition-disks.service. 
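The DHCP lease above is a /31: 147.28.180.193/31 with gateway 147.28.180.192, a point-to-point subnet whose only two addresses are the host and its gateway. The standard ipaddress module confirms that reading:

```python
import ipaddress

net = ipaddress.ip_network("147.28.180.192/31")
print(list(net))   # [IPv4Address('147.28.180.192'), IPv4Address('147.28.180.193')]

host = ipaddress.ip_address("147.28.180.193")      # address from the DHCPv4 lease
gateway = ipaddress.ip_address("147.28.180.192")   # gateway from the same lease
assert host in net and gateway in net
assert net.num_addresses == 2
```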
May 17 01:08:35.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:35.896820 systemd[1]: Reached target initrd-root-device.target. May 17 01:08:35.974503 kernel: audit: type=1130 audit(1747444115.896:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:35.960461 systemd[1]: Reached target local-fs-pre.target. May 17 01:08:35.960497 systemd[1]: Reached target local-fs.target. May 17 01:08:35.983465 systemd[1]: Reached target sysinit.target. May 17 01:08:35.997460 systemd[1]: Reached target basic.target. May 17 01:08:36.011127 systemd[1]: Starting systemd-fsck-root.service... May 17 01:08:36.029814 systemd-fsck[932]: ROOT: clean, 619/553520 files, 56023/553472 blocks May 17 01:08:36.047548 systemd[1]: Finished systemd-fsck-root.service. May 17 01:08:36.137606 kernel: audit: type=1130 audit(1747444116.057:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:36.137622 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 17 01:08:36.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:36.057843 systemd[1]: Mounting sysroot.mount... May 17 01:08:36.144880 systemd[1]: Mounted sysroot.mount. May 17 01:08:36.158512 systemd[1]: Reached target initrd-root-fs.target. May 17 01:08:36.175949 systemd[1]: Mounting sysroot-usr.mount... May 17 01:08:36.191684 systemd[1]: Starting flatcar-metadata-hostname.service... May 17 01:08:36.205184 systemd[1]: Starting flatcar-static-network.service... May 17 01:08:36.220431 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 01:08:36.220522 systemd[1]: Reached target ignition-diskful.target. May 17 01:08:36.240683 systemd[1]: Mounted sysroot-usr.mount. May 17 01:08:36.264572 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 01:08:36.404841 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (941) May 17 01:08:36.404859 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 01:08:36.404870 kernel: BTRFS info (device sda6): using free space tree May 17 01:08:36.404882 kernel: BTRFS info (device sda6): has skinny extents May 17 01:08:36.404889 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 01:08:36.404954 coreos-metadata[939]: May 17 01:08:36.350 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 01:08:36.466347 kernel: audit: type=1130 audit(1747444116.413:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:36.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:08:36.466384 coreos-metadata[940]: May 17 01:08:36.350 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 01:08:36.276565 systemd[1]: Starting initrd-setup-root.service... May 17 01:08:36.316699 systemd[1]: Finished initrd-setup-root.service. May 17 01:08:36.517329 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory May 17 01:08:36.589460 kernel: audit: type=1130 audit(1747444116.526:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:36.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:36.414563 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 17 01:08:36.598439 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory May 17 01:08:36.475854 systemd[1]: Starting ignition-mount.service... May 17 01:08:36.615432 initrd-setup-root[966]: cut: /sysroot/etc/shadow: No such file or directory May 17 01:08:36.625449 ignition[1014]: INFO : Ignition 2.14.0 May 17 01:08:36.625449 ignition[1014]: INFO : Stage: mount May 17 01:08:36.625449 ignition[1014]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 01:08:36.625449 ignition[1014]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 May 17 01:08:36.625449 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:08:36.625449 ignition[1014]: INFO : mount: mount passed May 17 01:08:36.625449 ignition[1014]: INFO : POST message to Packet Timeline May 17 01:08:36.625449 ignition[1014]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 01:08:36.495814 systemd[1]: Starting sysroot-boot.service... May 17 01:08:36.716494 initrd-setup-root[974]: cut: /sysroot/etc/gshadow: No such file or directory May 17 01:08:36.510676 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 17 01:08:36.510721 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 17 01:08:36.514410 systemd[1]: Finished sysroot-boot.service. May 17 01:08:37.334830 coreos-metadata[940]: May 17 01:08:37.334 INFO Fetch successful May 17 01:08:37.413028 systemd[1]: flatcar-static-network.service: Deactivated successfully. May 17 01:08:37.544501 kernel: audit: type=1130 audit(1747444117.422:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:37.544520 kernel: audit: type=1131 audit(1747444117.422:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:37.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:37.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:08:37.413079 systemd[1]: Finished flatcar-static-network.service. May 17 01:08:37.590014 ignition[1014]: INFO : GET result: OK May 17 01:08:37.773671 coreos-metadata[939]: May 17 01:08:37.773 INFO Fetch successful May 17 01:08:37.802863 coreos-metadata[939]: May 17 01:08:37.802 INFO wrote hostname ci-3510.3.7-n-b3aec2dc90 to /sysroot/etc/hostname May 17 01:08:37.803498 systemd[1]: Finished flatcar-metadata-hostname.service. May 17 01:08:37.890460 kernel: audit: type=1130 audit(1747444117.824:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:37.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:38.012520 ignition[1014]: INFO : Ignition finished successfully May 17 01:08:38.013196 systemd[1]: Finished ignition-mount.service. May 17 01:08:38.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:38.031354 systemd[1]: Starting ignition-files.service... May 17 01:08:38.101441 kernel: audit: type=1130 audit(1747444118.030:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:38.096193 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 01:08:38.148352 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1030) May 17 01:08:38.148365 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 01:08:38.183659 kernel: BTRFS info (device sda6): using free space tree May 17 01:08:38.183678 kernel: BTRFS info (device sda6): has skinny extents May 17 01:08:38.233229 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 01:08:38.234535 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
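Each Ignition stage in the journal above ("fetch-offline", "kargs", "disks", "mount") logs "parsing config with SHA512: 0131bd50…" for the built-in base.ign it reads, and the fetch-offline stage logs a second digest (8d1e3ab2…) for user.ign. A minimal sketch of producing such a digest with hashlib; the assumption here is that the digest covers the raw config bytes as read, and the payload below is only a placeholder, not the real base.ign:

```python
import hashlib

def config_digest(raw_config: bytes) -> str:
    """Return the hex SHA-512 of a config blob, the form of digest Ignition logs."""
    return hashlib.sha512(raw_config).hexdigest()

# Placeholder bytes; hashing the bytes actually read from
# /usr/lib/ignition/base.d/base.ign is what would be expected to
# reproduce the 0131bd50... value seen above.
print(config_digest(b"placeholder config, not the real base.ign"))
```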
May 17 01:08:38.250372 ignition[1049]: INFO : Ignition 2.14.0 May 17 01:08:38.250372 ignition[1049]: INFO : Stage: files May 17 01:08:38.250372 ignition[1049]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 01:08:38.250372 ignition[1049]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 May 17 01:08:38.250372 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:08:38.250372 ignition[1049]: DEBUG : files: compiled without relabeling support, skipping May 17 01:08:38.250372 ignition[1049]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 01:08:38.250372 ignition[1049]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 01:08:38.252931 unknown[1049]: wrote ssh authorized keys file for user: core May 17 01:08:38.352499 ignition[1049]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 01:08:38.352499 ignition[1049]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 01:08:38.352499 ignition[1049]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 01:08:38.352499 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 01:08:38.352499 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 01:08:38.352499 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 01:08:38.352499 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 17 01:08:38.455575 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 01:08:38.755947 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 01:08:38.772555 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 01:08:38.772555 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 17 01:08:39.358758 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK May 17 01:08:39.420843 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 01:08:39.420843 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" May 17 01:08:39.451505 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" May 17 01:08:39.451505 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 01:08:39.451505 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 01:08:39.451505 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: 
op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 01:08:39.451505 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 01:08:39.451505 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 01:08:39.451505 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 01:08:39.451505 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 01:08:39.451505 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 01:08:39.451505 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 01:08:39.451505 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 01:08:39.451505 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" May 17 01:08:39.451505 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(c): oem config not found in "/usr/share/oem", looking on oem partition May 17 01:08:39.451505 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2335773702" May 17 01:08:39.451505 ignition[1049]: CRITICAL : files: createFilesystemsFiles: createFiles: op(c): op(d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2335773702": device or resource busy May 17 01:08:39.428200 systemd[1]: mnt-oem2335773702.mount: Deactivated successfully. 
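As the op(c)/op(d) lines above and the retry just below show, Ignition mounts /dev/disk/by-label/OEM by trying ext4 first, logging the "device or resource busy" failure, then succeeding on a btrfs attempt before unmounting again. A rough sketch of that try-each-filesystem fallback, assuming it shells out to mount; Ignition's real code path is its own and may differ:

```python
import subprocess

def mount_with_fallback(device: str, mountpoint: str, fstypes=("ext4", "btrfs")) -> str:
    """Try mounting a device with each filesystem type in turn; return the type that worked."""
    last_error = "no filesystem types tried"
    for fstype in fstypes:
        result = subprocess.run(["mount", "-t", fstype, device, mountpoint],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return fstype
        last_error = result.stderr.strip()
        print(f"failed to mount {fstype} device {device}: {last_error}")
    raise RuntimeError(f"could not mount {device}: {last_error}")

# Example (needs root and an existing mount point):
# mount_with_fallback("/dev/disk/by-label/OEM", "/mnt/oem")
```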
May 17 01:08:39.724586 ignition[1049]: ERROR : files: createFilesystemsFiles: createFiles: op(c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2335773702", trying btrfs: device or resource busy May 17 01:08:39.724586 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2335773702" May 17 01:08:39.724586 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2335773702" May 17 01:08:39.724586 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [started] unmounting "/mnt/oem2335773702" May 17 01:08:39.724586 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [finished] unmounting "/mnt/oem2335773702" May 17 01:08:39.724586 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" May 17 01:08:39.724586 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 01:08:39.724586 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 01:08:40.035089 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET result: OK May 17 01:08:40.193693 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 01:08:40.193693 ignition[1049]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service" May 17 01:08:40.193693 ignition[1049]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service" May 17 01:08:40.193693 ignition[1049]: INFO : files: op(12): [started] processing unit "packet-phone-home.service" May 17 01:08:40.193693 ignition[1049]: INFO : files: op(12): [finished] processing unit "packet-phone-home.service" May 17 01:08:40.193693 ignition[1049]: INFO : files: op(13): [started] processing unit "containerd.service" May 17 01:08:40.193693 ignition[1049]: INFO : files: op(13): op(14): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 01:08:40.296589 ignition[1049]: INFO : files: op(13): op(14): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 01:08:40.296589 ignition[1049]: INFO : files: op(13): [finished] processing unit "containerd.service" May 17 01:08:40.296589 ignition[1049]: INFO : files: op(15): [started] processing unit "prepare-helm.service" May 17 01:08:40.296589 ignition[1049]: INFO : files: op(15): op(16): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 01:08:40.296589 ignition[1049]: INFO : files: op(15): op(16): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 01:08:40.296589 ignition[1049]: INFO : files: op(15): [finished] processing unit "prepare-helm.service" May 17 01:08:40.296589 ignition[1049]: INFO : files: op(17): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 17 01:08:40.296589 ignition[1049]: INFO : files: 
op(17): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 17 01:08:40.296589 ignition[1049]: INFO : files: op(18): [started] setting preset to enabled for "packet-phone-home.service" May 17 01:08:40.296589 ignition[1049]: INFO : files: op(18): [finished] setting preset to enabled for "packet-phone-home.service" May 17 01:08:40.296589 ignition[1049]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" May 17 01:08:40.296589 ignition[1049]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" May 17 01:08:40.296589 ignition[1049]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 01:08:40.296589 ignition[1049]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 01:08:40.296589 ignition[1049]: INFO : files: files passed May 17 01:08:40.296589 ignition[1049]: INFO : POST message to Packet Timeline May 17 01:08:40.296589 ignition[1049]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 01:08:41.055134 ignition[1049]: INFO : GET result: OK May 17 01:08:41.411144 ignition[1049]: INFO : Ignition finished successfully May 17 01:08:41.413883 systemd[1]: Finished ignition-files.service. May 17 01:08:41.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:41.436165 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 17 01:08:41.507531 kernel: audit: type=1130 audit(1747444121.429:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:41.497528 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 17 01:08:41.532525 initrd-setup-root-after-ignition[1081]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 01:08:41.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:41.497884 systemd[1]: Starting ignition-quench.service... May 17 01:08:41.728458 kernel: audit: type=1130 audit(1747444121.542:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:41.728474 kernel: audit: type=1130 audit(1747444121.608:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:41.728484 kernel: audit: type=1131 audit(1747444121.608:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:41.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:08:41.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:41.514658 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 17 01:08:41.542767 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 01:08:41.542893 systemd[1]: Finished ignition-quench.service. May 17 01:08:41.892355 kernel: audit: type=1130 audit(1747444121.769:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:41.892370 kernel: audit: type=1131 audit(1747444121.769:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:41.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:41.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:41.608581 systemd[1]: Reached target ignition-complete.target. May 17 01:08:41.737850 systemd[1]: Starting initrd-parse-etc.service... May 17 01:08:41.759251 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 01:08:42.006517 kernel: audit: type=1130 audit(1747444121.940:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:41.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:41.759297 systemd[1]: Finished initrd-parse-etc.service. May 17 01:08:41.770050 systemd[1]: Reached target initrd-fs.target. May 17 01:08:41.901455 systemd[1]: Reached target initrd.target. May 17 01:08:41.901513 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 17 01:08:41.901876 systemd[1]: Starting dracut-pre-pivot.service... May 17 01:08:42.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:41.922583 systemd[1]: Finished dracut-pre-pivot.service. May 17 01:08:42.152458 kernel: audit: type=1131 audit(1747444122.077:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:41.941231 systemd[1]: Starting initrd-cleanup.service... May 17 01:08:42.016447 systemd[1]: Stopped target nss-lookup.target. May 17 01:08:42.029496 systemd[1]: Stopped target remote-cryptsetup.target. May 17 01:08:42.045478 systemd[1]: Stopped target timers.target. May 17 01:08:42.052509 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 01:08:42.052584 systemd[1]: Stopped dracut-pre-pivot.service. May 17 01:08:42.077766 systemd[1]: Stopped target initrd.target. 
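The Ignition files stage above ends by writing /sysroot/etc/.ignition-result.json before the initrd hands control back to systemd. A minimal sketch for inspecting that record on the booted system, where the same path appears as /etc/.ignition-result.json; it assumes only that the file contains JSON and makes no assumptions about its field names, which vary between Ignition versions.

```python
#!/usr/bin/env python3
"""Dump the Ignition result record written at the end of the files stage.
Assumes only that the file exists and holds JSON; no field names are
hard-coded because they differ between Ignition versions."""
import json
from pathlib import Path

RESULT = Path("/etc/.ignition-result.json")  # written as /sysroot/etc/... inside the initrd

def main() -> None:
    try:
        data = json.loads(RESULT.read_text())
    except FileNotFoundError:
        print(f"{RESULT}: not present - Ignition did not run on this boot")
        return
    print(json.dumps(data, indent=2))

if __name__ == "__main__":
    main()
```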
May 17 01:08:42.145504 systemd[1]: Stopped target basic.target. May 17 01:08:42.152554 systemd[1]: Stopped target ignition-complete.target. May 17 01:08:42.174631 systemd[1]: Stopped target ignition-diskful.target. May 17 01:08:42.189534 systemd[1]: Stopped target initrd-root-device.target. May 17 01:08:42.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:42.205548 systemd[1]: Stopped target remote-fs.target. May 17 01:08:42.398447 kernel: audit: type=1131 audit(1747444122.314:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:42.220591 systemd[1]: Stopped target remote-fs-pre.target. May 17 01:08:42.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:42.236789 systemd[1]: Stopped target sysinit.target. May 17 01:08:42.483477 kernel: audit: type=1131 audit(1747444122.407:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:42.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:42.252817 systemd[1]: Stopped target local-fs.target. May 17 01:08:42.267836 systemd[1]: Stopped target local-fs-pre.target. May 17 01:08:42.282827 systemd[1]: Stopped target swap.target. May 17 01:08:42.297824 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 01:08:42.298197 systemd[1]: Stopped dracut-pre-mount.service. May 17 01:08:42.315143 systemd[1]: Stopped target cryptsetup.target. May 17 01:08:42.391473 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 01:08:42.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:42.391543 systemd[1]: Stopped dracut-initqueue.service. May 17 01:08:42.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:42.407619 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 01:08:42.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:42.407765 systemd[1]: Stopped ignition-fetch-offline.service. 
May 17 01:08:42.629467 ignition[1096]: INFO : Ignition 2.14.0 May 17 01:08:42.629467 ignition[1096]: INFO : Stage: umount May 17 01:08:42.629467 ignition[1096]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 01:08:42.629467 ignition[1096]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 May 17 01:08:42.629467 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:08:42.629467 ignition[1096]: INFO : umount: umount passed May 17 01:08:42.629467 ignition[1096]: INFO : POST message to Packet Timeline May 17 01:08:42.629467 ignition[1096]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 01:08:42.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:42.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:42.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:42.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:42.476609 systemd[1]: Stopped target paths.target. May 17 01:08:42.490467 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 01:08:42.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:42.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:42.495477 systemd[1]: Stopped systemd-ask-password-console.path. May 17 01:08:42.511465 systemd[1]: Stopped target slices.target. May 17 01:08:42.519519 systemd[1]: Stopped target sockets.target. May 17 01:08:42.542661 systemd[1]: iscsid.socket: Deactivated successfully. May 17 01:08:42.542751 systemd[1]: Closed iscsid.socket. May 17 01:08:42.556735 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 01:08:42.556909 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 17 01:08:42.573904 systemd[1]: ignition-files.service: Deactivated successfully. May 17 01:08:42.574279 systemd[1]: Stopped ignition-files.service. May 17 01:08:42.589900 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 01:08:42.590287 systemd[1]: Stopped flatcar-metadata-hostname.service. May 17 01:08:42.606976 systemd[1]: Stopping ignition-mount.service... May 17 01:08:42.621548 systemd[1]: Stopping iscsiuio.service... May 17 01:08:42.636960 systemd[1]: Stopping sysroot-boot.service... May 17 01:08:42.655474 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 01:08:42.655750 systemd[1]: Stopped systemd-udev-trigger.service. May 17 01:08:42.674891 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
May 17 01:08:42.675134 systemd[1]: Stopped dracut-pre-trigger.service. May 17 01:08:42.711592 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 01:08:42.713418 systemd[1]: iscsiuio.service: Deactivated successfully. May 17 01:08:42.713765 systemd[1]: Stopped iscsiuio.service. May 17 01:08:42.729751 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 01:08:42.729985 systemd[1]: Stopped sysroot-boot.service. May 17 01:08:42.745667 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 01:08:42.745932 systemd[1]: Closed iscsiuio.socket. May 17 01:08:42.762172 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 01:08:42.762415 systemd[1]: Finished initrd-cleanup.service. May 17 01:08:43.792606 ignition[1096]: INFO : GET result: OK May 17 01:08:44.139175 ignition[1096]: INFO : Ignition finished successfully May 17 01:08:44.141853 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 01:08:44.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.142090 systemd[1]: Stopped ignition-mount.service. May 17 01:08:44.155843 systemd[1]: Stopped target network.target. May 17 01:08:44.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.171441 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 01:08:44.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.171577 systemd[1]: Stopped ignition-disks.service. May 17 01:08:44.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.187605 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 01:08:44.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.187750 systemd[1]: Stopped ignition-kargs.service. May 17 01:08:44.203604 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 01:08:44.203751 systemd[1]: Stopped ignition-setup.service. May 17 01:08:44.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.220642 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 01:08:44.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.299000 audit: BPF prog-id=6 op=UNLOAD May 17 01:08:44.220787 systemd[1]: Stopped initrd-setup-root.service. May 17 01:08:44.236867 systemd[1]: Stopping systemd-networkd.service... 
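The umount stage's "POST message to Packet Timeline" above goes through https://metadata.packet.net/metadata, the same endpoint the files stage queried. A short sketch of that request as a standalone script; it assumes the endpoint is reachable (it is only served inside the provider's network) and that it returns JSON, both of which are provider-side behaviours rather than anything guaranteed by the log.

```python
#!/usr/bin/env python3
"""Fetch the Packet/Equinix Metal metadata document that Ignition
requests in the log above. Assumes the endpoint is reachable from the
instance and usually serves JSON; falls back to raw output otherwise."""
import json
import urllib.request

URL = "https://metadata.packet.net/metadata"

def main() -> None:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    try:
        print(json.dumps(json.loads(body), indent=2))
    except json.JSONDecodeError:
        print(body)  # not JSON: print as-is

if __name__ == "__main__":
    main()
```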
May 17 01:08:44.242361 systemd-networkd[879]: enp1s0f0np0: DHCPv6 lease lost May 17 01:08:44.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.251409 systemd-networkd[879]: enp1s0f1np1: DHCPv6 lease lost May 17 01:08:44.353000 audit: BPF prog-id=9 op=UNLOAD May 17 01:08:44.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.251686 systemd[1]: Stopping systemd-resolved.service... May 17 01:08:44.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.266125 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 01:08:44.266392 systemd[1]: Stopped systemd-resolved.service. May 17 01:08:44.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.282865 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 01:08:44.283206 systemd[1]: Stopped systemd-networkd.service. May 17 01:08:44.299328 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 01:08:44.299348 systemd[1]: Closed systemd-networkd.socket. May 17 01:08:44.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.309852 systemd[1]: Stopping network-cleanup.service... May 17 01:08:44.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.323468 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 01:08:44.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.323504 systemd[1]: Stopped parse-ip-for-networkd.service. May 17 01:08:44.344644 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 01:08:44.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.344714 systemd[1]: Stopped systemd-sysctl.service. May 17 01:08:44.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.361785 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 01:08:44.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.361892 systemd[1]: Stopped systemd-modules-load.service. 
May 17 01:08:44.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.378830 systemd[1]: Stopping systemd-udevd.service... May 17 01:08:44.397521 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 01:08:44.398124 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 01:08:44.398191 systemd[1]: Stopped systemd-udevd.service. May 17 01:08:44.411608 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 01:08:44.411639 systemd[1]: Closed systemd-udevd-control.socket. May 17 01:08:44.425430 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 01:08:44.425459 systemd[1]: Closed systemd-udevd-kernel.socket. May 17 01:08:44.441304 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 01:08:44.441329 systemd[1]: Stopped dracut-pre-udev.service. May 17 01:08:44.463504 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 01:08:44.463551 systemd[1]: Stopped dracut-cmdline.service. May 17 01:08:44.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:44.479433 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 01:08:44.479492 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 01:08:44.495631 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 01:08:44.512305 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 01:08:44.733000 audit: BPF prog-id=5 op=UNLOAD May 17 01:08:44.733000 audit: BPF prog-id=4 op=UNLOAD May 17 01:08:44.734000 audit: BPF prog-id=3 op=UNLOAD May 17 01:08:44.738000 audit: BPF prog-id=8 op=UNLOAD May 17 01:08:44.738000 audit: BPF prog-id=7 op=UNLOAD May 17 01:08:44.512336 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 17 01:08:44.527455 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 01:08:44.527485 systemd[1]: Stopped kmod-static-nodes.service. May 17 01:08:44.796265 systemd-journald[268]: Received SIGTERM from PID 1 (n/a). May 17 01:08:44.796307 iscsid[900]: iscsid shutting down. May 17 01:08:44.543406 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 01:08:44.543459 systemd[1]: Stopped systemd-vconsole-setup.service. May 17 01:08:44.561130 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 17 01:08:44.562044 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 01:08:44.562186 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 01:08:44.674821 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 01:08:44.675061 systemd[1]: Stopped network-cleanup.service. May 17 01:08:44.685833 systemd[1]: Reached target initrd-switch-root.target. May 17 01:08:44.702173 systemd[1]: Starting initrd-switch-root.service... May 17 01:08:44.723779 systemd[1]: Switching root. 
May 17 01:08:44.796785 systemd-journald[268]: Journal stopped May 17 01:08:48.668475 kernel: SELinux: Class mctp_socket not defined in policy. May 17 01:08:48.668490 kernel: SELinux: Class anon_inode not defined in policy. May 17 01:08:48.668499 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 01:08:48.668504 kernel: SELinux: policy capability network_peer_controls=1 May 17 01:08:48.668510 kernel: SELinux: policy capability open_perms=1 May 17 01:08:48.668515 kernel: SELinux: policy capability extended_socket_class=1 May 17 01:08:48.668521 kernel: SELinux: policy capability always_check_network=0 May 17 01:08:48.668527 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 01:08:48.668532 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 01:08:48.668539 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 01:08:48.668544 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 01:08:48.668550 systemd[1]: Successfully loaded SELinux policy in 319.277ms. May 17 01:08:48.668557 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.140ms. May 17 01:08:48.668564 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 01:08:48.668572 systemd[1]: Detected architecture x86-64. May 17 01:08:48.668578 systemd[1]: Detected first boot. May 17 01:08:48.668584 systemd[1]: Hostname set to . May 17 01:08:48.668595 systemd[1]: Initializing machine ID from random generator. May 17 01:08:48.668605 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 17 01:08:48.668613 systemd[1]: Populated /etc with preset unit settings. May 17 01:08:48.668622 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 01:08:48.668630 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 01:08:48.668637 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 01:08:48.668644 systemd[1]: Queued start job for default target multi-user.target. May 17 01:08:48.668650 systemd[1]: Unnecessary job was removed for dev-sda6.device. May 17 01:08:48.668656 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 01:08:48.668662 systemd[1]: Created slice system-addon\x2drun.slice. May 17 01:08:48.668670 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. May 17 01:08:48.668676 systemd[1]: Created slice system-getty.slice. May 17 01:08:48.668682 systemd[1]: Created slice system-modprobe.slice. May 17 01:08:48.668689 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 01:08:48.668695 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 01:08:48.668701 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 01:08:48.668707 systemd[1]: Created slice user.slice. May 17 01:08:48.668713 systemd[1]: Started systemd-ask-password-console.path. 
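After switch-root, systemd reports the SELinux policy load, detects a first boot, and initializes the machine ID from the random generator. A small sketch that reads the corresponding state on a running system, assuming the standard /etc/machine-id file and the selinuxfs mount at /sys/fs/selinux.

```python
#!/usr/bin/env python3
"""Report the machine ID and SELinux mode after boot. Assumes the
standard /etc/machine-id path and selinuxfs at /sys/fs/selinux; on a
first boot the machine ID is freshly generated, as in the log above."""
from pathlib import Path

def machine_id() -> str:
    return Path("/etc/machine-id").read_text().strip()

def selinux_mode() -> str:
    enforce = Path("/sys/fs/selinux/enforce")
    if not enforce.exists():
        return "selinux not enabled"
    return "enforcing" if enforce.read_text().strip() == "1" else "permissive"

if __name__ == "__main__":
    print(f"machine-id: {machine_id()}")
    print(f"selinux:    {selinux_mode()}")
```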
May 17 01:08:48.668719 systemd[1]: Started systemd-ask-password-wall.path. May 17 01:08:48.668726 systemd[1]: Set up automount boot.automount. May 17 01:08:48.668732 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 01:08:48.668738 systemd[1]: Reached target integritysetup.target. May 17 01:08:48.668745 systemd[1]: Reached target remote-cryptsetup.target. May 17 01:08:48.668752 systemd[1]: Reached target remote-fs.target. May 17 01:08:48.668759 systemd[1]: Reached target slices.target. May 17 01:08:48.668765 systemd[1]: Reached target swap.target. May 17 01:08:48.668772 systemd[1]: Reached target torcx.target. May 17 01:08:48.668779 systemd[1]: Reached target veritysetup.target. May 17 01:08:48.668791 systemd[1]: Listening on systemd-coredump.socket. May 17 01:08:48.668801 systemd[1]: Listening on systemd-initctl.socket. May 17 01:08:48.668808 kernel: kauditd_printk_skb: 49 callbacks suppressed May 17 01:08:48.668815 kernel: audit: type=1400 audit(1747444127.913:92): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 01:08:48.668824 systemd[1]: Listening on systemd-journald-audit.socket. May 17 01:08:48.668831 kernel: audit: type=1335 audit(1747444127.913:93): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 17 01:08:48.668837 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 01:08:48.668845 systemd[1]: Listening on systemd-journald.socket. May 17 01:08:48.668851 systemd[1]: Listening on systemd-networkd.socket. May 17 01:08:48.668858 systemd[1]: Listening on systemd-udevd-control.socket. May 17 01:08:48.668864 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 01:08:48.668872 systemd[1]: Listening on systemd-userdbd.socket. May 17 01:08:48.668879 systemd[1]: Mounting dev-hugepages.mount... May 17 01:08:48.668885 systemd[1]: Mounting dev-mqueue.mount... May 17 01:08:48.668892 systemd[1]: Mounting media.mount... May 17 01:08:48.668898 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:08:48.668905 systemd[1]: Mounting sys-kernel-debug.mount... May 17 01:08:48.668911 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 01:08:48.668918 systemd[1]: Mounting tmp.mount... May 17 01:08:48.668924 systemd[1]: Starting flatcar-tmpfiles.service... May 17 01:08:48.668931 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 01:08:48.668938 systemd[1]: Starting kmod-static-nodes.service... May 17 01:08:48.668945 systemd[1]: Starting modprobe@configfs.service... May 17 01:08:48.668952 systemd[1]: Starting modprobe@dm_mod.service... May 17 01:08:48.668959 systemd[1]: Starting modprobe@drm.service... May 17 01:08:48.668965 systemd[1]: Starting modprobe@efi_pstore.service... May 17 01:08:48.668971 systemd[1]: Starting modprobe@fuse.service... May 17 01:08:48.668978 kernel: fuse: init (API version 7.34) May 17 01:08:48.668984 systemd[1]: Starting modprobe@loop.service... May 17 01:08:48.668996 kernel: loop: module loaded May 17 01:08:48.669006 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
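The modprobe@ template units started above pull in configfs, dm_mod, drm, efi_pstore, fuse and loop (the kernel confirms fuse and loop directly). A quick sketch that checks which of those ended up as loadable modules, assuming /proc/modules is readable; drivers compiled into the kernel will legitimately be absent from that list.

```python
#!/usr/bin/env python3
"""Check which modules requested by the modprobe@ units are loaded.
Reads /proc/modules; anything built into the kernel (often configfs)
will not appear there even though the unit succeeded."""
WANTED = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

def loaded_modules() -> set[str]:
    with open("/proc/modules") as fh:
        return {line.split()[0] for line in fh}

if __name__ == "__main__":
    present = loaded_modules()
    for name in WANTED:
        state = "loaded" if name in present else "not listed (possibly built in)"
        print(f"{name:>10}: {state}")
```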
May 17 01:08:48.669014 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 17 01:08:48.669023 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 17 01:08:48.669029 systemd[1]: Starting systemd-journald.service... May 17 01:08:48.669036 systemd[1]: Starting systemd-modules-load.service... May 17 01:08:48.669043 kernel: audit: type=1305 audit(1747444128.666:94): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 01:08:48.669051 systemd-journald[1289]: Journal started May 17 01:08:48.669078 systemd-journald[1289]: Runtime Journal (/run/log/journal/7b2866fa8b044dcf8e4d28317cad7d8e) is 8.0M, max 640.1M, 632.1M free. May 17 01:08:47.913000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 01:08:47.913000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 17 01:08:48.666000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 01:08:48.666000 audit[1289]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc66ae3950 a2=4000 a3=7ffc66ae39ec items=0 ppid=1 pid=1289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:08:48.666000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 01:08:48.716275 kernel: audit: type=1300 audit(1747444128.666:94): arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc66ae3950 a2=4000 a3=7ffc66ae39ec items=0 ppid=1 pid=1289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:08:48.716294 kernel: audit: type=1327 audit(1747444128.666:94): proctitle="/usr/lib/systemd/systemd-journald" May 17 01:08:48.830445 systemd[1]: Starting systemd-network-generator.service... May 17 01:08:48.857276 systemd[1]: Starting systemd-remount-fs.service... May 17 01:08:48.883267 systemd[1]: Starting systemd-udev-trigger.service... May 17 01:08:48.928300 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:08:48.947443 systemd[1]: Started systemd-journald.service. May 17 01:08:48.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:48.956010 systemd[1]: Mounted dev-hugepages.mount. May 17 01:08:49.003438 kernel: audit: type=1130 audit(1747444128.955:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.010492 systemd[1]: Mounted dev-mqueue.mount. May 17 01:08:49.017493 systemd[1]: Mounted media.mount. May 17 01:08:49.024500 systemd[1]: Mounted sys-kernel-debug.mount. 
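systemd-journald caps the runtime journal at a fraction of /run, as the "Runtime Journal ... is 8.0M, max 640.1M" line above shows. A sketch that sums the on-disk size of the runtime journal files, assuming the conventional /run/log/journal/<machine-id>/*.journal layout.

```python
#!/usr/bin/env python3
"""Sum the size of the runtime journal files, mirroring the capacity
line journald prints at startup. Assumes the conventional layout of
/run/log/journal/<machine-id>/*.journal; returns 0 MiB if absent."""
from pathlib import Path

def runtime_journal_bytes(root: str = "/run/log/journal") -> int:
    return sum(p.stat().st_size for p in Path(root).glob("*/*.journal"))

if __name__ == "__main__":
    size = runtime_journal_bytes()
    print(f"runtime journal: {size / (1024 * 1024):.1f} MiB")
```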
May 17 01:08:49.033477 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 01:08:49.042482 systemd[1]: Mounted tmp.mount. May 17 01:08:49.049591 systemd[1]: Finished flatcar-tmpfiles.service. May 17 01:08:49.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.058597 systemd[1]: Finished kmod-static-nodes.service. May 17 01:08:49.106392 kernel: audit: type=1130 audit(1747444129.058:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.114561 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 01:08:49.114639 systemd[1]: Finished modprobe@configfs.service. May 17 01:08:49.163427 kernel: audit: type=1130 audit(1747444129.114:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.171574 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:08:49.171650 systemd[1]: Finished modprobe@dm_mod.service. May 17 01:08:49.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.222267 kernel: audit: type=1130 audit(1747444129.171:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.222288 kernel: audit: type=1131 audit(1747444129.171:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.281591 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 01:08:49.281667 systemd[1]: Finished modprobe@drm.service. May 17 01:08:49.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:08:49.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.290588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:08:49.290661 systemd[1]: Finished modprobe@efi_pstore.service. May 17 01:08:49.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.299573 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 01:08:49.299647 systemd[1]: Finished modprobe@fuse.service. May 17 01:08:49.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.308564 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:08:49.308644 systemd[1]: Finished modprobe@loop.service. May 17 01:08:49.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.318626 systemd[1]: Finished systemd-modules-load.service. May 17 01:08:49.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.327600 systemd[1]: Finished systemd-network-generator.service. May 17 01:08:49.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.336662 systemd[1]: Finished systemd-remount-fs.service. May 17 01:08:49.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.345654 systemd[1]: Finished systemd-udev-trigger.service. May 17 01:08:49.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.354848 systemd[1]: Reached target network-pre.target. 
May 17 01:08:49.365379 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 01:08:49.375923 systemd[1]: Mounting sys-kernel-config.mount... May 17 01:08:49.383461 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 01:08:49.384476 systemd[1]: Starting systemd-hwdb-update.service... May 17 01:08:49.391918 systemd[1]: Starting systemd-journal-flush.service... May 17 01:08:49.395703 systemd-journald[1289]: Time spent on flushing to /var/log/journal/7b2866fa8b044dcf8e4d28317cad7d8e is 14.653ms for 1527 entries. May 17 01:08:49.395703 systemd-journald[1289]: System Journal (/var/log/journal/7b2866fa8b044dcf8e4d28317cad7d8e) is 8.0M, max 195.6M, 187.6M free. May 17 01:08:49.442586 systemd-journald[1289]: Received client request to flush runtime journal. May 17 01:08:49.408369 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 01:08:49.408897 systemd[1]: Starting systemd-random-seed.service... May 17 01:08:49.426385 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 01:08:49.426951 systemd[1]: Starting systemd-sysctl.service... May 17 01:08:49.433899 systemd[1]: Starting systemd-sysusers.service... May 17 01:08:49.440923 systemd[1]: Starting systemd-udev-settle.service... May 17 01:08:49.448587 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 01:08:49.456317 systemd[1]: Mounted sys-kernel-config.mount. May 17 01:08:49.464507 systemd[1]: Finished systemd-journal-flush.service. May 17 01:08:49.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.472481 systemd[1]: Finished systemd-random-seed.service. May 17 01:08:49.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.480487 systemd[1]: Finished systemd-sysctl.service. May 17 01:08:49.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.488457 systemd[1]: Finished systemd-sysusers.service. May 17 01:08:49.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.497386 systemd[1]: Reached target first-boot-complete.target. May 17 01:08:49.506009 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 01:08:49.515561 udevadm[1315]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 01:08:49.524236 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 01:08:49.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.701952 systemd[1]: Finished systemd-hwdb-update.service. 
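The flush statistics reported above (14.653 ms for 1527 entries written to /var/log/journal) work out to roughly 9.6 microseconds per entry. A trivial sketch of that arithmetic using the figures from the log line.

```python
#!/usr/bin/env python3
"""Per-entry cost of the journal flush reported above:
14.653 ms spent flushing 1527 entries to /var/log/journal."""
FLUSH_MS = 14.653   # from the journald line in the log
ENTRIES = 1527      # from the same line

per_entry_us = FLUSH_MS * 1000 / ENTRIES
print(f"{per_entry_us:.1f} us per entry")   # roughly 9.6 us
```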
May 17 01:08:49.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.711168 systemd[1]: Starting systemd-udevd.service... May 17 01:08:49.723141 systemd-udevd[1323]: Using default interface naming scheme 'v252'. May 17 01:08:49.740516 systemd[1]: Started systemd-udevd.service. May 17 01:08:49.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:49.751026 systemd[1]: Found device dev-ttyS1.device. May 17 01:08:49.775238 systemd[1]: Starting systemd-networkd.service... May 17 01:08:49.797535 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 May 17 01:08:49.797599 kernel: ACPI: button: Sleep Button [SLPB] May 17 01:08:49.797617 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 17 01:08:49.813803 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 01:08:49.821238 kernel: IPMI message handler: version 39.2 May 17 01:08:49.846237 kernel: ACPI: button: Power Button [PWRF] May 17 01:08:49.853174 systemd[1]: Starting systemd-userdbd.service... May 17 01:08:49.813000 audit[1343]: AVC avc: denied { confidentiality } for pid=1343 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 01:08:49.813000 audit[1343]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557ce3127570 a1=4d9cc a2=7f34c5a51bc5 a3=5 items=42 ppid=1323 pid=1343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:08:49.813000 audit: CWD cwd="/" May 17 01:08:49.813000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=1 name=(null) inode=26632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=2 name=(null) inode=26632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=3 name=(null) inode=26633 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=4 name=(null) inode=26632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=5 name=(null) inode=26634 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=6 name=(null) inode=26632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
May 17 01:08:49.813000 audit: PATH item=7 name=(null) inode=26635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=8 name=(null) inode=26635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=9 name=(null) inode=26636 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=10 name=(null) inode=26635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=11 name=(null) inode=26637 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=12 name=(null) inode=26635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=13 name=(null) inode=26638 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=14 name=(null) inode=26635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=15 name=(null) inode=26639 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=16 name=(null) inode=26635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=17 name=(null) inode=26640 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=18 name=(null) inode=26632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=19 name=(null) inode=26641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=20 name=(null) inode=26641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=21 name=(null) inode=26642 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=22 name=(null) inode=26641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=23 name=(null) inode=26643 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=24 name=(null) inode=26641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=25 name=(null) inode=26644 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=26 name=(null) inode=26641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=27 name=(null) inode=26645 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=28 name=(null) inode=26641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=29 name=(null) inode=26646 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=30 name=(null) inode=26632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=31 name=(null) inode=26647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=32 name=(null) inode=26647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=33 name=(null) inode=26648 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=34 name=(null) inode=26647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=35 name=(null) inode=26649 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=36 name=(null) inode=26647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=37 name=(null) inode=26650 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=38 name=(null) inode=26647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=39 name=(null) inode=26651 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=40 name=(null) inode=26647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PATH item=41 name=(null) inode=26652 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:08:49.813000 audit: PROCTITLE proctitle="(udev-worker)" May 17 01:08:49.884243 kernel: mousedev: PS/2 mouse device common for all mice May 17 01:08:49.884324 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set May 17 01:08:49.950420 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt May 17 01:08:49.950540 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) May 17 01:08:49.950637 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface May 17 01:08:49.993293 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface May 17 01:08:50.014856 systemd[1]: Started systemd-userdbd.service. May 17 01:08:50.031234 kernel: ipmi device interface May 17 01:08:50.031268 kernel: iTCO_vendor_support: vendor-support=0 May 17 01:08:50.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:50.109054 kernel: ipmi_si: IPMI System Interface driver May 17 01:08:50.109106 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS May 17 01:08:50.153295 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 May 17 01:08:50.153317 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine May 17 01:08:50.153345 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI May 17 01:08:50.272150 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 May 17 01:08:50.272255 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) May 17 01:08:50.272334 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) May 17 01:08:50.272406 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI May 17 01:08:50.272474 kernel: ipmi_si: Adding ACPI-specified kcs state machine May 17 01:08:50.272487 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 May 17 01:08:50.261551 systemd-networkd[1378]: bond0: netdev ready May 17 01:08:50.263750 systemd-networkd[1378]: lo: Link UP May 17 01:08:50.263753 systemd-networkd[1378]: lo: Gained carrier May 17 01:08:50.264262 systemd-networkd[1378]: Enumeration completed May 17 01:08:50.264343 systemd[1]: Started systemd-networkd.service. May 17 01:08:50.264561 systemd-networkd[1378]: bond0: Configuring with /etc/systemd/network/05-bond0.network. May 17 01:08:50.265360 systemd-networkd[1378]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:97:fc:85.network. May 17 01:08:50.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:50.406277 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. 
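systemd-networkd is assembling bond0 from the two mlx5 ports using the 05-bond0.network file and the per-MAC 10-0c:42:a1:97:fc:8x.network files named above, and the kernel's 802.3ad warnings track the LACP negotiation. A sketch that summarises the bond's state from the kernel's bonding procfs interface, assuming the standard /proc/net/bonding/<name> file and the bond name bond0 seen in the log.

```python
#!/usr/bin/env python3
"""Summarise the 802.3ad bond that systemd-networkd assembles above.
Assumes the bonding module's standard /proc/net/bonding/bond0 file;
prints only the headline mode, slave and link-state lines."""
from pathlib import Path

BOND = Path("/proc/net/bonding/bond0")

def main() -> None:
    if not BOND.exists():
        print("bond0: no bonding procfs entry (module not loaded or bond absent)")
        return
    for line in BOND.read_text().splitlines():
        if line.strip().startswith(("Bonding Mode:", "Slave Interface:",
                                    "MII Status:", "Partner Mac Address:")):
            print(line.strip())

if __name__ == "__main__":
    main()
```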
May 17 01:08:50.454879 kernel: intel_rapl_common: Found RAPL domain package May 17 01:08:50.454999 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) May 17 01:08:50.455090 kernel: intel_rapl_common: Found RAPL domain core May 17 01:08:50.493155 kernel: intel_rapl_common: Found RAPL domain dram May 17 01:08:50.536274 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized May 17 01:08:50.557285 kernel: ipmi_ssif: IPMI SSIF Interface driver May 17 01:08:50.639344 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up May 17 01:08:50.665327 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link May 17 01:08:50.666940 systemd-networkd[1378]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:97:fc:84.network. May 17 01:08:50.701259 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 17 01:08:50.828442 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 17 01:08:50.883384 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up May 17 01:08:50.907239 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link May 17 01:08:50.928276 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready May 17 01:08:50.931535 systemd[1]: Finished systemd-udev-settle.service. May 17 01:08:50.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:50.950533 systemd[1]: Starting lvm2-activation-early.service... May 17 01:08:50.957229 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 17 01:08:50.957649 systemd-networkd[1378]: bond0: Link UP May 17 01:08:50.958000 systemd-networkd[1378]: enp1s0f1np1: Link UP May 17 01:08:50.958153 systemd-networkd[1378]: enp1s0f1np1: Gained carrier May 17 01:08:50.959564 systemd-networkd[1378]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:97:fc:84.network. May 17 01:08:50.967408 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 01:08:51.005891 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex May 17 01:08:51.005916 kernel: bond0: active interface up! May 17 01:08:51.031232 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex May 17 01:08:51.043708 systemd[1]: Finished lvm2-activation-early.service. May 17 01:08:51.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:51.052416 systemd[1]: Reached target cryptsetup.target. May 17 01:08:51.060946 systemd[1]: Starting lvm2-activation.service... May 17 01:08:51.063237 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 01:08:51.084275 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 17 01:08:51.113713 systemd[1]: Finished lvm2-activation.service. May 17 01:08:51.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:08:51.123437 systemd[1]: Reached target local-fs-pre.target. May 17 01:08:51.131346 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 01:08:51.131361 systemd[1]: Reached target local-fs.target. May 17 01:08:51.149272 systemd[1]: Reached target machines.target. May 17 01:08:51.156231 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.173969 systemd[1]: Starting ldconfig.service... May 17 01:08:51.179279 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.195784 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 01:08:51.195807 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 01:08:51.196505 systemd[1]: Starting systemd-boot-update.service... May 17 01:08:51.201272 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.217739 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 01:08:51.223234 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.241805 systemd[1]: Starting systemd-machine-id-commit.service... May 17 01:08:51.244263 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.244293 systemd[1]: Starting systemd-sysext.service... May 17 01:08:51.244483 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1434 (bootctl) May 17 01:08:51.245108 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 01:08:51.266274 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.272617 systemd[1]: Unmounting usr-share-oem.mount... May 17 01:08:51.288255 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.288553 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 01:08:51.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:51.288722 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 01:08:51.288848 systemd[1]: Unmounted usr-share-oem.mount. May 17 01:08:51.309230 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.330294 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.330322 kernel: loop0: detected capacity change from 0 to 221472 May 17 01:08:51.345231 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.385233 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.405233 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.425033 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
May 17 01:08:51.425230 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.425250 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 01:08:51.425412 systemd[1]: Finished systemd-machine-id-commit.service. May 17 01:08:51.440232 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:51.465251 systemd-fsck[1447]: fsck.fat 4.2 (2021-01-31) May 17 01:08:51.465251 systemd-fsck[1447]: /dev/sda1: 790 files, 120726/258078 clusters May 17 01:08:51.475626 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 01:08:51.480233 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:51.499134 systemd[1]: Mounting boot.mount... May 17 01:08:51.500238 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.515235 systemd[1]: Mounted boot.mount. May 17 01:08:51.521232 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.541233 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.547543 systemd[1]: Finished systemd-boot-update.service. May 17 01:08:51.557232 kernel: loop1: detected capacity change from 0 to 221472 May 17 01:08:51.557260 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.571181 (sd-sysext)[1455]: Using extensions 'kubernetes'. May 17 01:08:51.571365 (sd-sysext)[1455]: Merged extensions into '/usr'. May 17 01:08:51.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:51.595230 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.596278 systemd-networkd[1378]: enp1s0f0np0: Link UP May 17 01:08:51.596462 systemd-networkd[1378]: bond0: Gained carrier May 17 01:08:51.596551 systemd-networkd[1378]: enp1s0f0np0: Gained carrier May 17 01:08:51.599945 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:08:51.600692 systemd[1]: Mounting usr-share-oem.mount... May 17 01:08:51.614279 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:08:51.614310 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave May 17 01:08:51.632961 ldconfig[1433]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 01:08:51.634602 systemd-networkd[1378]: enp1s0f1np1: Link DOWN May 17 01:08:51.634604 systemd-networkd[1378]: enp1s0f1np1: Lost carrier May 17 01:08:51.635447 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
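The (sd-sysext) lines record systemd-sysext merging the 'kubernetes' system extension into /usr, which is how Flatcar layers additional binaries onto the otherwise read-only /usr tree. A short sketch for confirming the merge after boot; the extension image directories are the standard sysext search paths, not something this log states:

    # List merged system extensions; 'kubernetes' should appear here.
    systemd-sysext status
    # Typical locations for extension images (assumption: default sysext search paths).
    ls -l /etc/extensions /run/extensions /var/lib/extensions 2>/dev/null
    # Re-merge after adding or updating an extension image.
    systemd-sysext refresh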
May 17 01:08:51.636171 systemd[1]: Starting modprobe@dm_mod.service... May 17 01:08:51.643883 systemd[1]: Starting modprobe@efi_pstore.service... May 17 01:08:51.650849 systemd[1]: Starting modprobe@loop.service... May 17 01:08:51.657356 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 01:08:51.657431 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 01:08:51.657497 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:08:51.659477 systemd[1]: Finished ldconfig.service. May 17 01:08:51.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:51.666451 systemd[1]: Mounted usr-share-oem.mount. May 17 01:08:51.673486 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:08:51.673566 systemd[1]: Finished modprobe@dm_mod.service. May 17 01:08:51.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:51.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:51.681508 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:08:51.681585 systemd[1]: Finished modprobe@efi_pstore.service. May 17 01:08:51.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:51.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:51.689504 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:08:51.689580 systemd[1]: Finished modprobe@loop.service. May 17 01:08:51.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:51.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:51.697560 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 01:08:51.697624 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 01:08:51.698126 systemd[1]: Finished systemd-sysext.service. May 17 01:08:51.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:08:51.707045 systemd[1]: Starting ensure-sysext.service... May 17 01:08:51.713864 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 01:08:51.720326 systemd-tmpfiles[1472]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 01:08:51.720960 systemd-tmpfiles[1472]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 01:08:51.722008 systemd-tmpfiles[1472]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 01:08:51.723494 systemd[1]: Reloading. May 17 01:08:51.744482 /usr/lib/systemd/system-generators/torcx-generator[1491]: time="2025-05-17T01:08:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 01:08:51.744506 /usr/lib/systemd/system-generators/torcx-generator[1491]: time="2025-05-17T01:08:51Z" level=info msg="torcx already run" May 17 01:08:51.784238 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up May 17 01:08:51.801232 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 May 17 01:08:51.802705 systemd-networkd[1378]: enp1s0f1np1: Link UP May 17 01:08:51.802864 systemd-networkd[1378]: enp1s0f1np1: Gained carrier May 17 01:08:51.810858 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 01:08:51.810865 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 01:08:51.823721 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 01:08:51.853433 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms May 17 01:08:51.853468 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex May 17 01:08:51.867412 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 01:08:51.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:08:51.876914 systemd[1]: Starting audit-rules.service... May 17 01:08:51.883872 systemd[1]: Starting clean-ca-certificates.service... May 17 01:08:51.891000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 01:08:51.891000 audit[1576]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcdbd58660 a2=420 a3=0 items=0 ppid=1559 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:08:51.891000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 01:08:51.892374 augenrules[1576]: No rules May 17 01:08:51.892983 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 01:08:51.902082 systemd[1]: Starting systemd-resolved.service... 
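The audit records around audit-rules.service are easier to read once the hex-encoded PROCTITLE is decoded: it is auditctl loading /etc/audit/audit.rules, and augenrules then reports that no rules are defined. The decoding can be reproduced from the value logged above (assuming xxd is available on the host):

    # Decode the PROCTITLE field from the audit record (NUL separators become spaces).
    echo '2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573' \
      | xxd -r -p | tr '\0' ' '; echo
    # -> /sbin/auditctl -R /etc/audit/audit.rules
    # Show the rules currently loaded; matches the "No rules" line from augenrules.
    auditctl -l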
May 17 01:08:51.909049 systemd[1]: Starting systemd-timesyncd.service... May 17 01:08:51.915812 systemd[1]: Starting systemd-update-utmp.service... May 17 01:08:51.924443 systemd[1]: Finished audit-rules.service. May 17 01:08:51.931485 systemd[1]: Finished clean-ca-certificates.service. May 17 01:08:51.939463 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 01:08:51.952297 systemd[1]: Starting systemd-update-done.service... May 17 01:08:51.959322 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 01:08:51.959855 systemd[1]: Finished systemd-update-done.service. May 17 01:08:51.969811 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 01:08:51.970479 systemd[1]: Starting modprobe@dm_mod.service... May 17 01:08:51.977909 systemd[1]: Starting modprobe@efi_pstore.service... May 17 01:08:51.984879 systemd[1]: Starting modprobe@loop.service... May 17 01:08:51.985448 systemd-resolved[1583]: Positive Trust Anchors: May 17 01:08:51.985455 systemd-resolved[1583]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 01:08:51.985475 systemd-resolved[1583]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 01:08:51.989717 systemd-resolved[1583]: Using system hostname 'ci-3510.3.7-n-b3aec2dc90'. May 17 01:08:51.991364 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 01:08:51.991440 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 01:08:51.991503 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 01:08:51.992004 systemd[1]: Started systemd-timesyncd.service. May 17 01:08:52.001035 systemd[1]: Started systemd-resolved.service. May 17 01:08:52.009578 systemd[1]: Finished systemd-update-utmp.service. May 17 01:08:52.017528 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:08:52.017607 systemd[1]: Finished modprobe@dm_mod.service. May 17 01:08:52.025525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:08:52.025602 systemd[1]: Finished modprobe@efi_pstore.service. May 17 01:08:52.033507 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:08:52.033588 systemd[1]: Finished modprobe@loop.service. May 17 01:08:52.042530 systemd[1]: Reached target network.target. May 17 01:08:52.050356 systemd[1]: Reached target nss-lookup.target. May 17 01:08:52.058365 systemd[1]: Reached target time-set.target. May 17 01:08:52.067412 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 01:08:52.070989 systemd[1]: Starting modprobe@dm_mod.service... May 17 01:08:52.077897 systemd[1]: Starting modprobe@efi_pstore.service... 
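systemd-resolved logs the root-zone DNSSEC trust anchor, its built-in negative trust anchors for private and reverse zones, and the hostname it settled on, while systemd-timesyncd starts alongside it. A quick way to confirm both from a shell (standard systemd tooling, nothing specific to this host):

    resolvectl status            # DNS servers, DNSSEC setting and per-link resolver state
    hostnamectl                  # should report ci-3510.3.7-n-b3aec2dc90
    timedatectl timesync-status  # NTP server and sync state from systemd-timesyncd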
May 17 01:08:52.084865 systemd[1]: Starting modprobe@loop.service... May 17 01:08:52.091339 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 01:08:52.091409 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 01:08:52.091471 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 01:08:52.092036 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:08:52.092116 systemd[1]: Finished modprobe@dm_mod.service. May 17 01:08:52.100512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:08:52.100586 systemd[1]: Finished modprobe@efi_pstore.service. May 17 01:08:52.108521 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:08:52.108600 systemd[1]: Finished modprobe@loop.service. May 17 01:08:52.116514 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 01:08:52.116590 systemd[1]: Reached target sysinit.target. May 17 01:08:52.124418 systemd[1]: Started motdgen.path. May 17 01:08:52.131402 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 01:08:52.141457 systemd[1]: Started logrotate.timer. May 17 01:08:52.148423 systemd[1]: Started mdadm.timer. May 17 01:08:52.155491 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 01:08:52.163341 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 01:08:52.163404 systemd[1]: Reached target paths.target. May 17 01:08:52.170366 systemd[1]: Reached target timers.target. May 17 01:08:52.177534 systemd[1]: Listening on dbus.socket. May 17 01:08:52.184948 systemd[1]: Starting docker.socket... May 17 01:08:52.192104 systemd[1]: Listening on sshd.socket. May 17 01:08:52.199402 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 01:08:52.199471 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 01:08:52.200196 systemd[1]: Listening on docker.socket. May 17 01:08:52.208187 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 01:08:52.208255 systemd[1]: Reached target sockets.target. May 17 01:08:52.216322 systemd[1]: Reached target basic.target. May 17 01:08:52.223357 systemd[1]: System is tainted: cgroupsv1 May 17 01:08:52.223386 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 01:08:52.223442 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 01:08:52.224057 systemd[1]: Starting containerd.service... May 17 01:08:52.230772 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 17 01:08:52.239888 systemd[1]: Starting coreos-metadata.service... May 17 01:08:52.246929 systemd[1]: Starting dbus.service... May 17 01:08:52.253026 systemd[1]: Starting enable-oem-cloudinit.service... May 17 01:08:52.257446 jq[1616]: false May 17 01:08:52.260033 systemd[1]: Starting extend-filesystems.service... 
May 17 01:08:52.260246 coreos-metadata[1609]: May 17 01:08:52.260 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 01:08:52.265411 dbus-daemon[1615]: [system] SELinux support is enabled May 17 01:08:52.267327 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 01:08:52.267896 extend-filesystems[1618]: Found loop1 May 17 01:08:52.267896 extend-filesystems[1618]: Found sda May 17 01:08:52.293185 extend-filesystems[1618]: Found sda1 May 17 01:08:52.293185 extend-filesystems[1618]: Found sda2 May 17 01:08:52.293185 extend-filesystems[1618]: Found sda3 May 17 01:08:52.293185 extend-filesystems[1618]: Found usr May 17 01:08:52.293185 extend-filesystems[1618]: Found sda4 May 17 01:08:52.293185 extend-filesystems[1618]: Found sda6 May 17 01:08:52.293185 extend-filesystems[1618]: Found sda7 May 17 01:08:52.293185 extend-filesystems[1618]: Found sda9 May 17 01:08:52.293185 extend-filesystems[1618]: Checking size of /dev/sda9 May 17 01:08:52.293185 extend-filesystems[1618]: Resized partition /dev/sda9 May 17 01:08:52.394356 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks May 17 01:08:52.268019 systemd[1]: Starting modprobe@drm.service... May 17 01:08:52.394413 coreos-metadata[1612]: May 17 01:08:52.269 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 01:08:52.394560 extend-filesystems[1629]: resize2fs 1.46.5 (30-Dec-2021) May 17 01:08:52.285857 systemd[1]: Starting motdgen.service... May 17 01:08:52.305167 systemd[1]: Starting prepare-helm.service... May 17 01:08:52.320025 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 01:08:52.338929 systemd[1]: Starting sshd-keygen.service... May 17 01:08:52.357014 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 01:08:52.363349 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 01:08:52.364072 systemd[1]: Starting tcsd.service... May 17 01:08:52.386936 systemd[1]: Starting update-engine.service... May 17 01:08:52.414882 jq[1653]: true May 17 01:08:52.406914 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 01:08:52.424591 systemd[1]: Started dbus.service. May 17 01:08:52.433218 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 01:08:52.433347 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 01:08:52.433612 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 01:08:52.433700 systemd[1]: Finished modprobe@drm.service. May 17 01:08:52.435917 update_engine[1652]: I0517 01:08:52.435443 1652 main.cc:92] Flatcar Update Engine starting May 17 01:08:52.438986 update_engine[1652]: I0517 01:08:52.438947 1652 update_check_scheduler.cc:74] Next update check in 10m48s May 17 01:08:52.441572 systemd[1]: motdgen.service: Deactivated successfully. May 17 01:08:52.441685 systemd[1]: Finished motdgen.service. May 17 01:08:52.448883 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 01:08:52.449004 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 01:08:52.460591 jq[1660]: true May 17 01:08:52.460737 systemd[1]: Finished ensure-sysext.service. May 17 01:08:52.468596 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. 
May 17 01:08:52.468728 systemd[1]: Condition check resulted in tcsd.service being skipped. May 17 01:08:52.469489 tar[1658]: linux-amd64/helm May 17 01:08:52.469624 env[1661]: time="2025-05-17T01:08:52.469512407Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 01:08:52.480761 systemd[1]: Started update-engine.service. May 17 01:08:52.482876 env[1661]: time="2025-05-17T01:08:52.482096325Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 01:08:52.487930 env[1661]: time="2025-05-17T01:08:52.487890996Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 01:08:52.488562 env[1661]: time="2025-05-17T01:08:52.488512392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 01:08:52.488562 env[1661]: time="2025-05-17T01:08:52.488528095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 01:08:52.489538 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:08:52.490331 env[1661]: time="2025-05-17T01:08:52.490315018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 01:08:52.490369 env[1661]: time="2025-05-17T01:08:52.490331750Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 01:08:52.490369 env[1661]: time="2025-05-17T01:08:52.490342955Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 01:08:52.490369 env[1661]: time="2025-05-17T01:08:52.490352003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 01:08:52.490430 env[1661]: time="2025-05-17T01:08:52.490409177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 01:08:52.490506 systemd[1]: Started locksmithd.service. May 17 01:08:52.492454 env[1661]: time="2025-05-17T01:08:52.492444930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 01:08:52.492551 env[1661]: time="2025-05-17T01:08:52.492540634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 01:08:52.492575 env[1661]: time="2025-05-17T01:08:52.492551529Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 17 01:08:52.492596 env[1661]: time="2025-05-17T01:08:52.492578033Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 01:08:52.492596 env[1661]: time="2025-05-17T01:08:52.492585531Z" level=info msg="metadata content store policy set" policy=shared May 17 01:08:52.496781 bash[1694]: Updated "/home/core/.ssh/authorized_keys" May 17 01:08:52.497309 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 01:08:52.497327 systemd[1]: Reached target system-config.target. May 17 01:08:52.505044 env[1661]: time="2025-05-17T01:08:52.505003083Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 01:08:52.505044 env[1661]: time="2025-05-17T01:08:52.505022037Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 01:08:52.505044 env[1661]: time="2025-05-17T01:08:52.505030359Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 01:08:52.505119 env[1661]: time="2025-05-17T01:08:52.505048378Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 01:08:52.505119 env[1661]: time="2025-05-17T01:08:52.505057472Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 01:08:52.505119 env[1661]: time="2025-05-17T01:08:52.505065734Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 01:08:52.505119 env[1661]: time="2025-05-17T01:08:52.505074699Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 01:08:52.505119 env[1661]: time="2025-05-17T01:08:52.505083015Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 01:08:52.505119 env[1661]: time="2025-05-17T01:08:52.505090787Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 01:08:52.505119 env[1661]: time="2025-05-17T01:08:52.505099786Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 01:08:52.505119 env[1661]: time="2025-05-17T01:08:52.505110297Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 01:08:52.505119 env[1661]: time="2025-05-17T01:08:52.505118527Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 01:08:52.505255 env[1661]: time="2025-05-17T01:08:52.505169544Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 01:08:52.505255 env[1661]: time="2025-05-17T01:08:52.505218913Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 01:08:52.505490 env[1661]: time="2025-05-17T01:08:52.505432620Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 01:08:52.505490 env[1661]: time="2025-05-17T01:08:52.505457261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 May 17 01:08:52.505490 env[1661]: time="2025-05-17T01:08:52.505466174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 01:08:52.505566 env[1661]: time="2025-05-17T01:08:52.505497256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 01:08:52.505566 env[1661]: time="2025-05-17T01:08:52.505505410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 01:08:52.505566 env[1661]: time="2025-05-17T01:08:52.505513079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 01:08:52.505566 env[1661]: time="2025-05-17T01:08:52.505529928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 01:08:52.505566 env[1661]: time="2025-05-17T01:08:52.505542855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 01:08:52.505566 env[1661]: time="2025-05-17T01:08:52.505553038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 01:08:52.505566 env[1661]: time="2025-05-17T01:08:52.505562511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 01:08:52.505685 env[1661]: time="2025-05-17T01:08:52.505568837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 01:08:52.505685 env[1661]: time="2025-05-17T01:08:52.505648529Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 01:08:52.505816 env[1661]: time="2025-05-17T01:08:52.505797060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 01:08:52.505862 env[1661]: time="2025-05-17T01:08:52.505814194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 01:08:52.505862 env[1661]: time="2025-05-17T01:08:52.505826895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 01:08:52.505862 env[1661]: time="2025-05-17T01:08:52.505844140Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 01:08:52.505948 env[1661]: time="2025-05-17T01:08:52.505879187Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 01:08:52.505948 env[1661]: time="2025-05-17T01:08:52.505907978Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 01:08:52.506002 env[1661]: time="2025-05-17T01:08:52.505933427Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 01:08:52.506040 env[1661]: time="2025-05-17T01:08:52.506029539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 01:08:52.506216 env[1661]: time="2025-05-17T01:08:52.506189662Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 01:08:52.508413 env[1661]: time="2025-05-17T01:08:52.506223831Z" level=info msg="Connect containerd service" May 17 01:08:52.508413 env[1661]: time="2025-05-17T01:08:52.506249852Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 01:08:52.508413 env[1661]: time="2025-05-17T01:08:52.506524100Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 01:08:52.508413 env[1661]: time="2025-05-17T01:08:52.506620932Z" level=info msg="Start subscribing containerd event" May 17 01:08:52.508413 env[1661]: time="2025-05-17T01:08:52.506640222Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 01:08:52.508413 env[1661]: time="2025-05-17T01:08:52.506649800Z" level=info msg="Start recovering state" May 17 01:08:52.508413 env[1661]: time="2025-05-17T01:08:52.506666190Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 17 01:08:52.508413 env[1661]: time="2025-05-17T01:08:52.506683258Z" level=info msg="Start event monitor" May 17 01:08:52.508413 env[1661]: time="2025-05-17T01:08:52.506690237Z" level=info msg="Start snapshots syncer" May 17 01:08:52.508413 env[1661]: time="2025-05-17T01:08:52.506690413Z" level=info msg="containerd successfully booted in 0.037550s" May 17 01:08:52.508413 env[1661]: time="2025-05-17T01:08:52.506695507Z" level=info msg="Start cni network conf syncer for default" May 17 01:08:52.508413 env[1661]: time="2025-05-17T01:08:52.506699632Z" level=info msg="Start streaming server" May 17 01:08:52.506463 systemd[1]: Starting systemd-logind.service... May 17 01:08:52.513386 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 01:08:52.513413 systemd[1]: Reached target user-config.target. May 17 01:08:52.521329 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:08:52.521503 systemd[1]: Started containerd.service. May 17 01:08:52.528528 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 01:08:52.535854 systemd-logind[1703]: Watching system buttons on /dev/input/event3 (Power Button) May 17 01:08:52.535866 systemd-logind[1703]: Watching system buttons on /dev/input/event2 (Sleep Button) May 17 01:08:52.535876 systemd-logind[1703]: Watching system buttons on /dev/input/event0 (HID 0557:2419) May 17 01:08:52.535980 systemd-logind[1703]: New seat seat0. May 17 01:08:52.538523 systemd[1]: Started systemd-logind.service. May 17 01:08:52.555109 locksmithd[1696]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 01:08:52.572714 sshd_keygen[1649]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 01:08:52.584751 systemd[1]: Finished sshd-keygen.service. May 17 01:08:52.592310 systemd[1]: Starting issuegen.service... May 17 01:08:52.599552 systemd[1]: issuegen.service: Deactivated successfully. May 17 01:08:52.599664 systemd[1]: Finished issuegen.service. May 17 01:08:52.606379 systemd-networkd[1378]: bond0: Gained IPv6LL May 17 01:08:52.606603 systemd-timesyncd[1585]: Network configuration changed, trying to establish connection. May 17 01:08:52.607246 systemd[1]: Starting systemd-user-sessions.service... May 17 01:08:52.615633 systemd[1]: Finished systemd-user-sessions.service. May 17 01:08:52.625128 systemd[1]: Started getty@tty1.service. May 17 01:08:52.633038 systemd[1]: Started serial-getty@ttyS1.service. May 17 01:08:52.641418 systemd[1]: Reached target getty.target. May 17 01:08:52.732720 tar[1658]: linux-amd64/LICENSE May 17 01:08:52.732720 tar[1658]: linux-amd64/README.md May 17 01:08:52.735643 systemd[1]: Finished prepare-helm.service. May 17 01:08:52.786260 kernel: EXT4-fs (sda9): resized filesystem to 116605649 May 17 01:08:52.814252 extend-filesystems[1629]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 17 01:08:52.814252 extend-filesystems[1629]: old_desc_blocks = 1, new_desc_blocks = 56 May 17 01:08:52.814252 extend-filesystems[1629]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. May 17 01:08:52.853326 extend-filesystems[1618]: Resized filesystem in /dev/sda9 May 17 01:08:52.853326 extend-filesystems[1618]: Found sdb May 17 01:08:52.814686 systemd[1]: extend-filesystems.service: Deactivated successfully. 
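extend-filesystems.service grows the root filesystem online: resize2fs reports /dev/sda9 mounted on / with on-line resizing required, and the kernel logs the ext4 resize from 553472 to 116605649 blocks. The same operation can be reproduced by hand; this is a sketch of the equivalent commands, not the service's exact invocation:

    # Online-grow the mounted root filesystem to fill /dev/sda9 (ext4 can grow while mounted).
    resize2fs /dev/sda9
    # Verify the new size.
    df -h /
    dumpe2fs -h /dev/sda9 | grep -i 'block count'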
May 17 01:08:52.814799 systemd[1]: Finished extend-filesystems.service. May 17 01:08:53.630342 systemd-timesyncd[1585]: Network configuration changed, trying to establish connection. May 17 01:08:53.630890 systemd-timesyncd[1585]: Network configuration changed, trying to establish connection. May 17 01:08:53.634581 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 01:08:53.644650 systemd[1]: Reached target network-online.target. May 17 01:08:53.654425 systemd[1]: Starting kubelet.service... May 17 01:08:53.841299 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 May 17 01:08:53.924282 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:1 May 17 01:08:54.052295 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 May 17 01:08:54.463294 systemd[1]: Started kubelet.service. May 17 01:08:55.076001 kubelet[1751]: E0517 01:08:55.075942 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 01:08:55.077191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 01:08:55.077307 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 01:08:57.653218 login[1731]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 01:08:57.660309 systemd[1]: Created slice user-500.slice. May 17 01:08:57.660834 systemd[1]: Starting user-runtime-dir@500.service... May 17 01:08:57.662033 systemd-logind[1703]: New session 1 of user core. May 17 01:08:57.663179 login[1730]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 01:08:57.665434 systemd-logind[1703]: New session 2 of user core. May 17 01:08:57.667716 systemd[1]: Finished user-runtime-dir@500.service. May 17 01:08:57.668376 systemd[1]: Starting user@500.service... May 17 01:08:57.670419 (systemd)[1772]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 01:08:57.767296 systemd[1772]: Queued start job for default target default.target. May 17 01:08:57.767402 systemd[1772]: Reached target paths.target. May 17 01:08:57.767413 systemd[1772]: Reached target sockets.target. May 17 01:08:57.767421 systemd[1772]: Reached target timers.target. May 17 01:08:57.767428 systemd[1772]: Reached target basic.target. May 17 01:08:57.767448 systemd[1772]: Reached target default.target. May 17 01:08:57.767461 systemd[1772]: Startup finished in 93ms. May 17 01:08:57.767524 systemd[1]: Started user@500.service. May 17 01:08:57.768055 systemd[1]: Started session-1.scope. May 17 01:08:57.768407 systemd[1]: Started session-2.scope. 
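The kubelet crash above is expected at this point in the boot: the unit starts before /var/lib/kubelet/config.yaml exists, and that file is normally written by kubeadm during 'kubeadm init' or 'kubeadm join', after which systemd's restart logic brings the kubelet up cleanly. A minimal check sequence, assuming a kubeadm-style setup (which this log does not show explicitly):

    # The kubelet keeps exiting until its config file appears.
    ls -l /var/lib/kubelet/config.yaml 2>&1
    # Typically a kubeadm drop-in passes --config=/var/lib/kubelet/config.yaml (assumption).
    systemctl cat kubelet
    # The "failed to load Kubelet config file" errors seen above.
    journalctl -u kubelet -n 20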
May 17 01:08:58.430596 coreos-metadata[1609]: May 17 01:08:58.430 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known May 17 01:08:58.431528 coreos-metadata[1612]: May 17 01:08:58.430 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known May 17 01:08:59.043544 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 May 17 01:08:59.043703 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 May 17 01:08:59.430761 coreos-metadata[1609]: May 17 01:08:59.430 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 17 01:08:59.431689 coreos-metadata[1612]: May 17 01:08:59.430 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 17 01:08:59.697828 systemd[1]: Created slice system-sshd.slice. May 17 01:08:59.698492 systemd[1]: Started sshd@0-147.28.180.193:22-139.178.89.65:51552.service. May 17 01:08:59.745233 sshd[1794]: Accepted publickey for core from 139.178.89.65 port 51552 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:08:59.746113 sshd[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:08:59.749174 systemd-logind[1703]: New session 3 of user core. May 17 01:08:59.749835 systemd[1]: Started session-3.scope. May 17 01:08:59.801917 systemd[1]: Started sshd@1-147.28.180.193:22-139.178.89.65:51558.service. May 17 01:08:59.830006 sshd[1799]: Accepted publickey for core from 139.178.89.65 port 51558 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:08:59.830706 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:08:59.832900 systemd-logind[1703]: New session 4 of user core. May 17 01:08:59.833323 systemd[1]: Started session-4.scope. May 17 01:08:59.884940 sshd[1799]: pam_unix(sshd:session): session closed for user core May 17 01:08:59.886586 systemd[1]: Started sshd@2-147.28.180.193:22-139.178.89.65:51562.service. May 17 01:08:59.886881 systemd[1]: sshd@1-147.28.180.193:22-139.178.89.65:51558.service: Deactivated successfully. May 17 01:08:59.887477 systemd[1]: session-4.scope: Deactivated successfully. May 17 01:08:59.887486 systemd-logind[1703]: Session 4 logged out. Waiting for processes to exit. May 17 01:08:59.887942 systemd-logind[1703]: Removed session 4. May 17 01:08:59.915681 sshd[1805]: Accepted publickey for core from 139.178.89.65 port 51562 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:08:59.916633 sshd[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:08:59.920184 systemd-logind[1703]: New session 5 of user core. May 17 01:08:59.920885 systemd[1]: Started session-5.scope. May 17 01:08:59.988554 sshd[1805]: pam_unix(sshd:session): session closed for user core May 17 01:08:59.994401 systemd[1]: sshd@2-147.28.180.193:22-139.178.89.65:51562.service: Deactivated successfully. May 17 01:08:59.996932 systemd-logind[1703]: Session 5 logged out. Waiting for processes to exit. May 17 01:08:59.996960 systemd[1]: session-5.scope: Deactivated successfully. May 17 01:08:59.999505 systemd-logind[1703]: Removed session 5. 
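Both coreos-metadata fetchers fail their first attempt with a DNS lookup error and simply retry; attempt #2, and the "Fetch successful" lines further down, succeed once the bond and resolver are fully up. The endpoint is the one logged above and can be queried manually for debugging; jq is present on this host per the jq[] entries:

    # Re-run the metadata fetch that coreos-metadata performs (endpoint taken from the log).
    curl -sf https://metadata.packet.net/metadata | jq 'keys'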
May 17 01:09:00.401658 coreos-metadata[1609]: May 17 01:09:00.401 INFO Fetch successful May 17 01:09:00.486248 unknown[1609]: wrote ssh authorized keys file for user: core May 17 01:09:00.499872 update-ssh-keys[1814]: Updated "/home/core/.ssh/authorized_keys" May 17 01:09:00.500153 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 17 01:09:00.857659 coreos-metadata[1612]: May 17 01:09:00.857 INFO Fetch successful May 17 01:09:00.939712 systemd[1]: Finished coreos-metadata.service. May 17 01:09:00.940588 systemd[1]: Started packet-phone-home.service. May 17 01:09:00.940709 systemd[1]: Reached target multi-user.target. May 17 01:09:00.941452 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 01:09:00.945457 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 01:09:00.945562 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 01:09:00.945735 systemd[1]: Startup finished in 26.048s (kernel) + 15.921s (userspace) = 41.969s. May 17 01:09:00.945994 curl[1822]: % Total % Received % Xferd Average Speed Time Time Time Current May 17 01:09:00.946114 curl[1822]: Dload Upload Total Spent Left Speed May 17 01:09:01.570019 curl[1822]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 May 17 01:09:01.572431 systemd[1]: packet-phone-home.service: Deactivated successfully. May 17 01:09:05.082120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 01:09:05.082708 systemd[1]: Stopped kubelet.service. May 17 01:09:05.085931 systemd[1]: Starting kubelet.service... May 17 01:09:05.316313 systemd[1]: Started kubelet.service. May 17 01:09:05.340113 kubelet[1834]: E0517 01:09:05.339984 1834 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 01:09:05.341909 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 01:09:05.341990 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 01:09:09.995367 systemd[1]: Started sshd@3-147.28.180.193:22-139.178.89.65:52456.service. May 17 01:09:10.024124 sshd[1853]: Accepted publickey for core from 139.178.89.65 port 52456 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:09:10.025013 sshd[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:09:10.028036 systemd-logind[1703]: New session 6 of user core. May 17 01:09:10.028676 systemd[1]: Started session-6.scope. May 17 01:09:10.083851 sshd[1853]: pam_unix(sshd:session): session closed for user core May 17 01:09:10.085281 systemd[1]: Started sshd@4-147.28.180.193:22-139.178.89.65:52470.service. May 17 01:09:10.085580 systemd[1]: sshd@3-147.28.180.193:22-139.178.89.65:52456.service: Deactivated successfully. May 17 01:09:10.086035 systemd-logind[1703]: Session 6 logged out. Waiting for processes to exit. May 17 01:09:10.086080 systemd[1]: session-6.scope: Deactivated successfully. May 17 01:09:10.086588 systemd-logind[1703]: Removed session 6. 
May 17 01:09:10.114101 sshd[1859]: Accepted publickey for core from 139.178.89.65 port 52470 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:09:10.115099 sshd[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:09:10.118200 systemd-logind[1703]: New session 7 of user core. May 17 01:09:10.118987 systemd[1]: Started session-7.scope. May 17 01:09:10.175088 sshd[1859]: pam_unix(sshd:session): session closed for user core May 17 01:09:10.180067 systemd[1]: Started sshd@5-147.28.180.193:22-139.178.89.65:52472.service. May 17 01:09:10.180396 systemd[1]: sshd@4-147.28.180.193:22-139.178.89.65:52470.service: Deactivated successfully. May 17 01:09:10.180831 systemd-logind[1703]: Session 7 logged out. Waiting for processes to exit. May 17 01:09:10.180871 systemd[1]: session-7.scope: Deactivated successfully. May 17 01:09:10.181398 systemd-logind[1703]: Removed session 7. May 17 01:09:10.208699 sshd[1866]: Accepted publickey for core from 139.178.89.65 port 52472 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:09:10.209675 sshd[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:09:10.212966 systemd-logind[1703]: New session 8 of user core. May 17 01:09:10.213668 systemd[1]: Started session-8.scope. May 17 01:09:10.280858 sshd[1866]: pam_unix(sshd:session): session closed for user core May 17 01:09:10.287194 systemd[1]: Started sshd@6-147.28.180.193:22-139.178.89.65:52474.service. May 17 01:09:10.288839 systemd[1]: sshd@5-147.28.180.193:22-139.178.89.65:52472.service: Deactivated successfully. May 17 01:09:10.291417 systemd-logind[1703]: Session 8 logged out. Waiting for processes to exit. May 17 01:09:10.291523 systemd[1]: session-8.scope: Deactivated successfully. May 17 01:09:10.294190 systemd-logind[1703]: Removed session 8. May 17 01:09:10.354642 sshd[1873]: Accepted publickey for core from 139.178.89.65 port 52474 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:09:10.357891 sshd[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:09:10.368433 systemd-logind[1703]: New session 9 of user core. May 17 01:09:10.370729 systemd[1]: Started session-9.scope. May 17 01:09:10.478658 sudo[1878]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 01:09:10.479376 sudo[1878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 01:09:10.504674 systemd[1]: Starting docker.service... 
May 17 01:09:10.523206 env[1893]: time="2025-05-17T01:09:10.523144626Z" level=info msg="Starting up" May 17 01:09:10.523878 env[1893]: time="2025-05-17T01:09:10.523831193Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 01:09:10.523878 env[1893]: time="2025-05-17T01:09:10.523842291Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 01:09:10.523878 env[1893]: time="2025-05-17T01:09:10.523854873Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 01:09:10.523878 env[1893]: time="2025-05-17T01:09:10.523861616Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 01:09:10.524844 env[1893]: time="2025-05-17T01:09:10.524803001Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 01:09:10.524844 env[1893]: time="2025-05-17T01:09:10.524812066Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 01:09:10.524844 env[1893]: time="2025-05-17T01:09:10.524819484Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 01:09:10.524844 env[1893]: time="2025-05-17T01:09:10.524824694Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 01:09:10.695773 env[1893]: time="2025-05-17T01:09:10.695675781Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 17 01:09:10.695773 env[1893]: time="2025-05-17T01:09:10.695714389Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 17 01:09:10.696147 env[1893]: time="2025-05-17T01:09:10.695951215Z" level=info msg="Loading containers: start." May 17 01:09:10.839287 kernel: Initializing XFRM netlink socket May 17 01:09:10.877093 env[1893]: time="2025-05-17T01:09:10.877042644Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 17 01:09:10.877925 systemd-timesyncd[1585]: Network configuration changed, trying to establish connection. May 17 01:09:10.932405 systemd-networkd[1378]: docker0: Link UP May 17 01:09:10.947298 systemd-timesyncd[1585]: Contacted time server [2606:6680:8:1::d14e:69b8]:123 (2.flatcar.pool.ntp.org). May 17 01:09:10.947387 systemd-timesyncd[1585]: Initial clock synchronization to Sat 2025-05-17 01:09:10.964823 UTC. May 17 01:09:10.959597 env[1893]: time="2025-05-17T01:09:10.959522942Z" level=info msg="Loading containers: done." May 17 01:09:10.976486 env[1893]: time="2025-05-17T01:09:10.976442258Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 01:09:10.976553 env[1893]: time="2025-05-17T01:09:10.976521379Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 17 01:09:10.976576 env[1893]: time="2025-05-17T01:09:10.976570548Z" level=info msg="Daemon has completed initialization" May 17 01:09:10.977949 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2239007483-merged.mount: Deactivated successfully. May 17 01:09:10.983002 systemd[1]: Started docker.service. 
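dockerd comes up against containerd over /var/run/docker/libcontainerd/docker-containerd.sock, warns that this kernel lacks cgroup blkio weight support, and assigns docker0 the default 172.17.0.0/16, noting that --bip can override it. If that range collides with the host network, the usual fix is a daemon.json entry; the address below is purely illustrative:

    # Hypothetical /etc/docker/daemon.json overriding the docker0 bridge address;
    # "bip" is the config-file equivalent of the --bip option mentioned in the log.
    cat <<'EOF' >/etc/docker/daemon.json
    {
      "bip": "10.200.0.1/24"
    }
    EOF
    systemctl restart docker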
May 17 01:09:10.986095 env[1893]: time="2025-05-17T01:09:10.986045262Z" level=info msg="API listen on /run/docker.sock" May 17 01:09:11.999757 env[1661]: time="2025-05-17T01:09:11.999622326Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 01:09:12.638932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1798051098.mount: Deactivated successfully. May 17 01:09:13.692308 env[1661]: time="2025-05-17T01:09:13.692232044Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:13.693007 env[1661]: time="2025-05-17T01:09:13.692995457Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:13.693977 env[1661]: time="2025-05-17T01:09:13.693964552Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:13.695258 env[1661]: time="2025-05-17T01:09:13.695245492Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:13.695647 env[1661]: time="2025-05-17T01:09:13.695633457Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 01:09:13.696020 env[1661]: time="2025-05-17T01:09:13.696007188Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 01:09:15.027106 env[1661]: time="2025-05-17T01:09:15.027077139Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:15.028525 env[1661]: time="2025-05-17T01:09:15.028479626Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:15.029701 env[1661]: time="2025-05-17T01:09:15.029659457Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:15.047409 env[1661]: time="2025-05-17T01:09:15.047350123Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:15.047817 env[1661]: time="2025-05-17T01:09:15.047767893Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 01:09:15.048210 env[1661]: time="2025-05-17T01:09:15.048132036Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 01:09:15.581207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 01:09:15.581477 systemd[1]: Stopped kubelet.service. 
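The PullImage lines are containerd's CRI plugin fetching the Kubernetes control-plane images (kube-apiserver v1.31.9 here, with controller-manager, scheduler, proxy, coredns and pause following). The same pulls can be reproduced with the ctr client that ships with containerd 1.6.16, using the k8s.io namespace the CRI plugin stores images in:

    # Pull and list images in the CRI namespace used by the kubelet.
    ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.31.9
    ctr -n k8s.io images ls | grep kube-apiserver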
May 17 01:09:15.583018 systemd[1]: Starting kubelet.service... May 17 01:09:15.863880 systemd[1]: Started kubelet.service. May 17 01:09:15.885815 kubelet[2053]: E0517 01:09:15.885789 2053 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 01:09:15.886907 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 01:09:15.886995 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 01:09:16.252399 env[1661]: time="2025-05-17T01:09:16.252302564Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:16.252951 env[1661]: time="2025-05-17T01:09:16.252906091Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:16.254189 env[1661]: time="2025-05-17T01:09:16.254154714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:16.255114 env[1661]: time="2025-05-17T01:09:16.255073418Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:16.255599 env[1661]: time="2025-05-17T01:09:16.255546131Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 01:09:16.256052 env[1661]: time="2025-05-17T01:09:16.255989273Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 01:09:17.237141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount31451629.mount: Deactivated successfully. 
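The kubelet restart above (restart counter 2) fails because /var/lib/kubelet/config.yaml does not exist yet; kubeadm writes that file during init/join. As a rough sketch only (the real file carries many more fields), a minimal KubeletConfiguration consistent with the cgroupfs driver, static-pod path, and client CA seen later in this log would be:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt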
May 17 01:09:17.646658 env[1661]: time="2025-05-17T01:09:17.646606835Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:17.647234 env[1661]: time="2025-05-17T01:09:17.647197285Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:17.647926 env[1661]: time="2025-05-17T01:09:17.647894352Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:17.648447 env[1661]: time="2025-05-17T01:09:17.648409374Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:17.648768 env[1661]: time="2025-05-17T01:09:17.648734166Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 01:09:17.649188 env[1661]: time="2025-05-17T01:09:17.649148906Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 01:09:18.199798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2117588808.mount: Deactivated successfully. May 17 01:09:18.917816 env[1661]: time="2025-05-17T01:09:18.917752525Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:18.918534 env[1661]: time="2025-05-17T01:09:18.918500305Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:18.919616 env[1661]: time="2025-05-17T01:09:18.919572317Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:18.920677 env[1661]: time="2025-05-17T01:09:18.920634413Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:18.921647 env[1661]: time="2025-05-17T01:09:18.921604294Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 01:09:18.922078 env[1661]: time="2025-05-17T01:09:18.922006922Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 01:09:19.427944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount985481747.mount: Deactivated successfully. 
May 17 01:09:19.429158 env[1661]: time="2025-05-17T01:09:19.429115594Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:19.430207 env[1661]: time="2025-05-17T01:09:19.430166959Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:19.431026 env[1661]: time="2025-05-17T01:09:19.430987379Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:19.431856 env[1661]: time="2025-05-17T01:09:19.431799150Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:19.432246 env[1661]: time="2025-05-17T01:09:19.432212833Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 01:09:19.432665 env[1661]: time="2025-05-17T01:09:19.432627889Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 01:09:19.970545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount517664123.mount: Deactivated successfully. May 17 01:09:21.579256 env[1661]: time="2025-05-17T01:09:21.579206098Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:21.579963 env[1661]: time="2025-05-17T01:09:21.579904112Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:21.581119 env[1661]: time="2025-05-17T01:09:21.581066644Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:21.582313 env[1661]: time="2025-05-17T01:09:21.582259667Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:21.582758 env[1661]: time="2025-05-17T01:09:21.582718685Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 01:09:22.970652 systemd[1]: Stopped kubelet.service. May 17 01:09:22.971994 systemd[1]: Starting kubelet.service... May 17 01:09:22.988069 systemd[1]: Reloading. 
May 17 01:09:23.048526 /usr/lib/systemd/system-generators/torcx-generator[2146]: time="2025-05-17T01:09:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 01:09:23.048542 /usr/lib/systemd/system-generators/torcx-generator[2146]: time="2025-05-17T01:09:23Z" level=info msg="torcx already run" May 17 01:09:23.107705 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 01:09:23.107713 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 01:09:23.120604 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 01:09:23.196478 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 01:09:23.196537 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 01:09:23.196716 systemd[1]: Stopped kubelet.service. May 17 01:09:23.197800 systemd[1]: Starting kubelet.service... May 17 01:09:23.422654 systemd[1]: Started kubelet.service. May 17 01:09:23.455439 kubelet[2222]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 01:09:23.455439 kubelet[2222]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 01:09:23.455439 kubelet[2222]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
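The three kubelet deprecation warnings above ask for config-file settings instead of flags. Assuming illustrative values (this node's actual runtime endpoint is not shown in the log), the first and third flags map onto KubeletConfiguration fields like the following, while the pause/sandbox image is configured on the container runtime side (e.g. containerd's sandbox_image setting) rather than in the kubelet file:

    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/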
May 17 01:09:23.455722 kubelet[2222]: I0517 01:09:23.455478 2222 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 01:09:23.689736 kubelet[2222]: I0517 01:09:23.689664 2222 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 01:09:23.689736 kubelet[2222]: I0517 01:09:23.689678 2222 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 01:09:23.689831 kubelet[2222]: I0517 01:09:23.689821 2222 server.go:934] "Client rotation is on, will bootstrap in background" May 17 01:09:23.714794 kubelet[2222]: E0517 01:09:23.714749 2222 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.180.193:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.180.193:6443: connect: connection refused" logger="UnhandledError" May 17 01:09:23.716271 kubelet[2222]: I0517 01:09:23.716215 2222 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 01:09:23.721989 kubelet[2222]: E0517 01:09:23.721971 2222 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 01:09:23.722025 kubelet[2222]: I0517 01:09:23.721992 2222 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 01:09:23.742836 kubelet[2222]: I0517 01:09:23.742798 2222 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 01:09:23.743641 kubelet[2222]: I0517 01:09:23.743602 2222 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 01:09:23.743733 kubelet[2222]: I0517 01:09:23.743680 2222 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 01:09:23.743849 kubelet[2222]: I0517 01:09:23.743696 2222 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-b3aec2dc90","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 01:09:23.743849 kubelet[2222]: I0517 01:09:23.743824 2222 topology_manager.go:138] "Creating topology manager with none policy" May 17 01:09:23.743849 kubelet[2222]: I0517 01:09:23.743832 2222 container_manager_linux.go:300] "Creating device plugin manager" May 17 01:09:23.743984 kubelet[2222]: I0517 01:09:23.743897 2222 state_mem.go:36] "Initialized new in-memory state store" May 17 01:09:23.748616 kubelet[2222]: I0517 01:09:23.748576 2222 kubelet.go:408] "Attempting to sync node with API server" May 17 01:09:23.748616 kubelet[2222]: I0517 01:09:23.748592 2222 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 01:09:23.748616 kubelet[2222]: I0517 01:09:23.748614 2222 kubelet.go:314] "Adding apiserver pod source" May 17 01:09:23.748706 kubelet[2222]: I0517 01:09:23.748627 2222 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 01:09:23.749388 kubelet[2222]: W0517 01:09:23.749331 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.180.193:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.180.193:6443: connect: connection refused May 17 01:09:23.749388 kubelet[2222]: E0517 01:09:23.749367 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://147.28.180.193:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.180.193:6443: connect: connection refused" logger="UnhandledError" May 17 01:09:23.751516 kubelet[2222]: I0517 01:09:23.751479 2222 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 01:09:23.751763 kubelet[2222]: I0517 01:09:23.751753 2222 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 01:09:23.753516 kubelet[2222]: W0517 01:09:23.753455 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.180.193:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-b3aec2dc90&limit=500&resourceVersion=0": dial tcp 147.28.180.193:6443: connect: connection refused May 17 01:09:23.753516 kubelet[2222]: E0517 01:09:23.753490 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.180.193:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-b3aec2dc90&limit=500&resourceVersion=0\": dial tcp 147.28.180.193:6443: connect: connection refused" logger="UnhandledError" May 17 01:09:23.755514 kubelet[2222]: W0517 01:09:23.755475 2222 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 01:09:23.757697 kubelet[2222]: I0517 01:09:23.757657 2222 server.go:1274] "Started kubelet" May 17 01:09:23.757794 kubelet[2222]: I0517 01:09:23.757756 2222 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 01:09:23.757850 kubelet[2222]: I0517 01:09:23.757787 2222 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 01:09:23.757973 kubelet[2222]: I0517 01:09:23.757961 2222 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 01:09:23.772578 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 17 01:09:23.772706 kubelet[2222]: I0517 01:09:23.772677 2222 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 01:09:23.772773 kubelet[2222]: I0517 01:09:23.772737 2222 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 01:09:23.772773 kubelet[2222]: I0517 01:09:23.772755 2222 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 01:09:23.772832 kubelet[2222]: E0517 01:09:23.772778 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:23.772832 kubelet[2222]: I0517 01:09:23.772804 2222 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 01:09:23.772890 kubelet[2222]: I0517 01:09:23.772864 2222 reconciler.go:26] "Reconciler: start to sync state" May 17 01:09:23.773215 kubelet[2222]: E0517 01:09:23.773180 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.193:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-b3aec2dc90?timeout=10s\": dial tcp 147.28.180.193:6443: connect: connection refused" interval="200ms" May 17 01:09:23.773269 kubelet[2222]: I0517 01:09:23.773258 2222 factory.go:221] Registration of the systemd container factory successfully May 17 01:09:23.774292 kubelet[2222]: W0517 01:09:23.774266 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.180.193:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.193:6443: connect: connection refused May 17 01:09:23.774328 kubelet[2222]: E0517 01:09:23.774298 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.180.193:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.180.193:6443: connect: connection refused" logger="UnhandledError" May 17 01:09:23.774352 kubelet[2222]: E0517 01:09:23.774329 2222 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 01:09:23.774407 kubelet[2222]: I0517 01:09:23.774392 2222 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 01:09:23.774487 kubelet[2222]: I0517 01:09:23.774478 2222 server.go:449] "Adding debug handlers to kubelet server" May 17 01:09:23.774936 kubelet[2222]: I0517 01:09:23.774927 2222 factory.go:221] Registration of the containerd container factory successfully May 17 01:09:23.778853 kubelet[2222]: E0517 01:09:23.774352 2222 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.180.193:6443/api/v1/namespaces/default/events\": dial tcp 147.28.180.193:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-b3aec2dc90.18402b35659a8d01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-b3aec2dc90,UID:ci-3510.3.7-n-b3aec2dc90,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-b3aec2dc90,},FirstTimestamp:2025-05-17 01:09:23.757640961 +0000 UTC m=+0.331674538,LastTimestamp:2025-05-17 01:09:23.757640961 +0000 UTC m=+0.331674538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-b3aec2dc90,}" May 17 01:09:23.782068 kubelet[2222]: I0517 01:09:23.782048 2222 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 01:09:23.782591 kubelet[2222]: I0517 01:09:23.782577 2222 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 01:09:23.782591 kubelet[2222]: I0517 01:09:23.782590 2222 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 01:09:23.782661 kubelet[2222]: I0517 01:09:23.782599 2222 kubelet.go:2321] "Starting kubelet main sync loop" May 17 01:09:23.782661 kubelet[2222]: E0517 01:09:23.782629 2222 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 01:09:23.787220 kubelet[2222]: W0517 01:09:23.787164 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.180.193:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.193:6443: connect: connection refused May 17 01:09:23.787220 kubelet[2222]: E0517 01:09:23.787197 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.180.193:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.180.193:6443: connect: connection refused" logger="UnhandledError" May 17 01:09:23.873217 kubelet[2222]: E0517 01:09:23.873102 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:23.883838 kubelet[2222]: E0517 01:09:23.883756 2222 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 01:09:23.933544 kubelet[2222]: I0517 01:09:23.933489 2222 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 01:09:23.933544 kubelet[2222]: I0517 01:09:23.933529 2222 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 01:09:23.933877 kubelet[2222]: I0517 01:09:23.933576 2222 state_mem.go:36] "Initialized new in-memory state store" May 17 01:09:23.935618 kubelet[2222]: I0517 01:09:23.935543 2222 policy_none.go:49] "None policy: Start" May 17 01:09:23.936884 kubelet[2222]: I0517 01:09:23.936832 2222 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 01:09:23.937106 kubelet[2222]: I0517 01:09:23.936898 2222 state_mem.go:35] "Initializing new in-memory state store" May 17 01:09:23.947284 kubelet[2222]: I0517 01:09:23.947089 2222 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 01:09:23.947562 kubelet[2222]: I0517 01:09:23.947493 2222 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 01:09:23.947736 kubelet[2222]: I0517 01:09:23.947536 2222 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 01:09:23.947998 kubelet[2222]: I0517 01:09:23.947948 2222 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 01:09:23.949949 kubelet[2222]: E0517 01:09:23.949898 2222 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:23.974825 kubelet[2222]: E0517 01:09:23.974719 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.193:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-b3aec2dc90?timeout=10s\": dial tcp 147.28.180.193:6443: connect: connection refused" interval="400ms" May 17 01:09:24.052201 kubelet[2222]: I0517 01:09:24.052134 2222 
kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-b3aec2dc90" May 17 01:09:24.052908 kubelet[2222]: E0517 01:09:24.052854 2222 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.180.193:6443/api/v1/nodes\": dial tcp 147.28.180.193:6443: connect: connection refused" node="ci-3510.3.7-n-b3aec2dc90" May 17 01:09:24.174987 kubelet[2222]: I0517 01:09:24.174871 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9bc582d9745a887bfbfb35d0ae75ef84-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-b3aec2dc90\" (UID: \"9bc582d9745a887bfbfb35d0ae75ef84\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:24.174987 kubelet[2222]: I0517 01:09:24.174970 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9bc582d9745a887bfbfb35d0ae75ef84-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-b3aec2dc90\" (UID: \"9bc582d9745a887bfbfb35d0ae75ef84\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:24.175417 kubelet[2222]: I0517 01:09:24.175107 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9bc582d9745a887bfbfb35d0ae75ef84-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-b3aec2dc90\" (UID: \"9bc582d9745a887bfbfb35d0ae75ef84\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:24.175417 kubelet[2222]: I0517 01:09:24.175205 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2faf743c53adcb7084986d4b6643960d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-b3aec2dc90\" (UID: \"2faf743c53adcb7084986d4b6643960d\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:24.175417 kubelet[2222]: I0517 01:09:24.175281 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2db19a57d13dbd04094c69f121ae3db-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-b3aec2dc90\" (UID: \"c2db19a57d13dbd04094c69f121ae3db\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:24.175417 kubelet[2222]: I0517 01:09:24.175337 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9bc582d9745a887bfbfb35d0ae75ef84-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-b3aec2dc90\" (UID: \"9bc582d9745a887bfbfb35d0ae75ef84\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:24.175417 kubelet[2222]: I0517 01:09:24.175388 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9bc582d9745a887bfbfb35d0ae75ef84-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-b3aec2dc90\" (UID: \"9bc582d9745a887bfbfb35d0ae75ef84\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:24.175922 kubelet[2222]: I0517 01:09:24.175433 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/c2db19a57d13dbd04094c69f121ae3db-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-b3aec2dc90\" (UID: \"c2db19a57d13dbd04094c69f121ae3db\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:24.175922 kubelet[2222]: I0517 01:09:24.175482 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2db19a57d13dbd04094c69f121ae3db-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-b3aec2dc90\" (UID: \"c2db19a57d13dbd04094c69f121ae3db\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:24.257642 kubelet[2222]: I0517 01:09:24.257435 2222 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-b3aec2dc90" May 17 01:09:24.258297 kubelet[2222]: E0517 01:09:24.258200 2222 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.180.193:6443/api/v1/nodes\": dial tcp 147.28.180.193:6443: connect: connection refused" node="ci-3510.3.7-n-b3aec2dc90" May 17 01:09:24.375939 kubelet[2222]: E0517 01:09:24.375812 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.193:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-b3aec2dc90?timeout=10s\": dial tcp 147.28.180.193:6443: connect: connection refused" interval="800ms" May 17 01:09:24.400158 env[1661]: time="2025-05-17T01:09:24.400006161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-b3aec2dc90,Uid:9bc582d9745a887bfbfb35d0ae75ef84,Namespace:kube-system,Attempt:0,}" May 17 01:09:24.403141 env[1661]: time="2025-05-17T01:09:24.403059982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-b3aec2dc90,Uid:2faf743c53adcb7084986d4b6643960d,Namespace:kube-system,Attempt:0,}" May 17 01:09:24.407466 env[1661]: time="2025-05-17T01:09:24.407382346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-b3aec2dc90,Uid:c2db19a57d13dbd04094c69f121ae3db,Namespace:kube-system,Attempt:0,}" May 17 01:09:24.663422 kubelet[2222]: I0517 01:09:24.663315 2222 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-b3aec2dc90" May 17 01:09:24.664427 kubelet[2222]: E0517 01:09:24.664072 2222 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.180.193:6443/api/v1/nodes\": dial tcp 147.28.180.193:6443: connect: connection refused" node="ci-3510.3.7-n-b3aec2dc90" May 17 01:09:24.757946 kubelet[2222]: W0517 01:09:24.756930 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.180.193:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-b3aec2dc90&limit=500&resourceVersion=0": dial tcp 147.28.180.193:6443: connect: connection refused May 17 01:09:24.758313 kubelet[2222]: E0517 01:09:24.757945 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.180.193:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-b3aec2dc90&limit=500&resourceVersion=0\": dial tcp 147.28.180.193:6443: connect: connection refused" logger="UnhandledError" May 17 01:09:24.882773 kubelet[2222]: W0517 01:09:24.882628 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://147.28.180.193:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.193:6443: connect: connection refused May 17 01:09:24.883030 kubelet[2222]: E0517 01:09:24.882796 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.180.193:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.180.193:6443: connect: connection refused" logger="UnhandledError" May 17 01:09:24.901762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4222888062.mount: Deactivated successfully. May 17 01:09:24.902580 env[1661]: time="2025-05-17T01:09:24.902561762Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:24.903593 env[1661]: time="2025-05-17T01:09:24.903577509Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:24.904141 env[1661]: time="2025-05-17T01:09:24.904126258Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:24.904871 env[1661]: time="2025-05-17T01:09:24.904858476Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:24.905687 env[1661]: time="2025-05-17T01:09:24.905664789Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:24.906524 env[1661]: time="2025-05-17T01:09:24.906510825Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:24.906896 env[1661]: time="2025-05-17T01:09:24.906886257Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:24.907994 env[1661]: time="2025-05-17T01:09:24.907952300Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:24.909352 env[1661]: time="2025-05-17T01:09:24.909275607Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:24.909712 env[1661]: time="2025-05-17T01:09:24.909673094Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:24.910161 env[1661]: time="2025-05-17T01:09:24.910118328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:24.911671 env[1661]: 
time="2025-05-17T01:09:24.911631067Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:24.919607 env[1661]: time="2025-05-17T01:09:24.919531203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:09:24.919607 env[1661]: time="2025-05-17T01:09:24.919558845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:09:24.919607 env[1661]: time="2025-05-17T01:09:24.919572364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:09:24.919734 env[1661]: time="2025-05-17T01:09:24.919665640Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b44355fed3924a1deabcde57f79e59b58db4891920ae49ec3d810c8ac47a843 pid=2273 runtime=io.containerd.runc.v2 May 17 01:09:24.921062 env[1661]: time="2025-05-17T01:09:24.921027455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:09:24.921062 env[1661]: time="2025-05-17T01:09:24.921050842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:09:24.921062 env[1661]: time="2025-05-17T01:09:24.921059110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:09:24.921177 env[1661]: time="2025-05-17T01:09:24.921136236Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd62f573c143739bbd30c3bb8968370d93253ab9471f3634550f06dd41ae01c5 pid=2296 runtime=io.containerd.runc.v2 May 17 01:09:24.921436 env[1661]: time="2025-05-17T01:09:24.921403575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:09:24.921486 env[1661]: time="2025-05-17T01:09:24.921427770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:09:24.921486 env[1661]: time="2025-05-17T01:09:24.921442908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:09:24.921588 env[1661]: time="2025-05-17T01:09:24.921557232Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aed9b7bfa992581959831a47722f039f5c951de76538dcba9a924514090334d7 pid=2300 runtime=io.containerd.runc.v2 May 17 01:09:24.951190 env[1661]: time="2025-05-17T01:09:24.951161169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-b3aec2dc90,Uid:2faf743c53adcb7084986d4b6643960d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd62f573c143739bbd30c3bb8968370d93253ab9471f3634550f06dd41ae01c5\"" May 17 01:09:24.951336 env[1661]: time="2025-05-17T01:09:24.951319778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-b3aec2dc90,Uid:9bc582d9745a887bfbfb35d0ae75ef84,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b44355fed3924a1deabcde57f79e59b58db4891920ae49ec3d810c8ac47a843\"" May 17 01:09:24.952066 env[1661]: time="2025-05-17T01:09:24.952047698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-b3aec2dc90,Uid:c2db19a57d13dbd04094c69f121ae3db,Namespace:kube-system,Attempt:0,} returns sandbox id \"aed9b7bfa992581959831a47722f039f5c951de76538dcba9a924514090334d7\"" May 17 01:09:24.952754 env[1661]: time="2025-05-17T01:09:24.952740204Z" level=info msg="CreateContainer within sandbox \"fd62f573c143739bbd30c3bb8968370d93253ab9471f3634550f06dd41ae01c5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 01:09:24.952832 env[1661]: time="2025-05-17T01:09:24.952820133Z" level=info msg="CreateContainer within sandbox \"2b44355fed3924a1deabcde57f79e59b58db4891920ae49ec3d810c8ac47a843\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 01:09:24.952902 env[1661]: time="2025-05-17T01:09:24.952889363Z" level=info msg="CreateContainer within sandbox \"aed9b7bfa992581959831a47722f039f5c951de76538dcba9a924514090334d7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 01:09:24.959367 env[1661]: time="2025-05-17T01:09:24.959338822Z" level=info msg="CreateContainer within sandbox \"aed9b7bfa992581959831a47722f039f5c951de76538dcba9a924514090334d7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"da3f82ecf612fa3991b6e4e5c1d8a7c4034428018266b7f3938aec23707d43f6\"" May 17 01:09:24.959685 env[1661]: time="2025-05-17T01:09:24.959669921Z" level=info msg="StartContainer for \"da3f82ecf612fa3991b6e4e5c1d8a7c4034428018266b7f3938aec23707d43f6\"" May 17 01:09:24.960431 env[1661]: time="2025-05-17T01:09:24.960415421Z" level=info msg="CreateContainer within sandbox \"2b44355fed3924a1deabcde57f79e59b58db4891920ae49ec3d810c8ac47a843\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4078153423730ca94c6d4c2f06203df7827f838e500cdfccd49b0500dcbd7e8a\"" May 17 01:09:24.960706 env[1661]: time="2025-05-17T01:09:24.960692204Z" level=info msg="StartContainer for \"4078153423730ca94c6d4c2f06203df7827f838e500cdfccd49b0500dcbd7e8a\"" May 17 01:09:24.961356 env[1661]: time="2025-05-17T01:09:24.961342950Z" level=info msg="CreateContainer within sandbox \"fd62f573c143739bbd30c3bb8968370d93253ab9471f3634550f06dd41ae01c5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"914c2bcb834eaa6ee4ae6c987b8d117c80edef1df11cd0a96bcd515d10365dfb\"" May 17 01:09:24.961560 env[1661]: 
time="2025-05-17T01:09:24.961545059Z" level=info msg="StartContainer for \"914c2bcb834eaa6ee4ae6c987b8d117c80edef1df11cd0a96bcd515d10365dfb\"" May 17 01:09:24.993754 env[1661]: time="2025-05-17T01:09:24.993713153Z" level=info msg="StartContainer for \"da3f82ecf612fa3991b6e4e5c1d8a7c4034428018266b7f3938aec23707d43f6\" returns successfully" May 17 01:09:24.993852 env[1661]: time="2025-05-17T01:09:24.993805897Z" level=info msg="StartContainer for \"4078153423730ca94c6d4c2f06203df7827f838e500cdfccd49b0500dcbd7e8a\" returns successfully" May 17 01:09:24.994330 env[1661]: time="2025-05-17T01:09:24.994317250Z" level=info msg="StartContainer for \"914c2bcb834eaa6ee4ae6c987b8d117c80edef1df11cd0a96bcd515d10365dfb\" returns successfully" May 17 01:09:25.465478 kubelet[2222]: I0517 01:09:25.465439 2222 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-b3aec2dc90" May 17 01:09:25.892237 kubelet[2222]: E0517 01:09:25.892207 2222 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-n-b3aec2dc90\" not found" node="ci-3510.3.7-n-b3aec2dc90" May 17 01:09:26.002962 kubelet[2222]: I0517 01:09:26.002871 2222 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-b3aec2dc90" May 17 01:09:26.003176 kubelet[2222]: E0517 01:09:26.002971 2222 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.7-n-b3aec2dc90\": node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:26.016966 kubelet[2222]: E0517 01:09:26.016887 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:26.117472 kubelet[2222]: E0517 01:09:26.117400 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:26.218350 kubelet[2222]: E0517 01:09:26.218089 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:26.318595 kubelet[2222]: E0517 01:09:26.318458 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:26.419327 kubelet[2222]: E0517 01:09:26.419196 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:26.519940 kubelet[2222]: E0517 01:09:26.519711 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:26.620211 kubelet[2222]: E0517 01:09:26.620102 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:26.721076 kubelet[2222]: E0517 01:09:26.720962 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:26.821657 kubelet[2222]: E0517 01:09:26.821442 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:26.922535 kubelet[2222]: E0517 01:09:26.922458 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:27.022941 kubelet[2222]: E0517 01:09:27.022853 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:27.123879 kubelet[2222]: E0517 01:09:27.123778 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:27.224006 kubelet[2222]: E0517 01:09:27.223921 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:27.324678 kubelet[2222]: E0517 01:09:27.324569 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:27.425913 kubelet[2222]: E0517 01:09:27.425722 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:27.526576 kubelet[2222]: E0517 01:09:27.526467 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:27.627517 kubelet[2222]: E0517 01:09:27.627409 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:27.728833 kubelet[2222]: E0517 01:09:27.728624 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:28.262180 kubelet[2222]: W0517 01:09:28.262126 2222 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:09:28.390703 systemd[1]: Reloading. May 17 01:09:28.423875 /usr/lib/systemd/system-generators/torcx-generator[2551]: time="2025-05-17T01:09:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 01:09:28.423893 /usr/lib/systemd/system-generators/torcx-generator[2551]: time="2025-05-17T01:09:28Z" level=info msg="torcx already run" May 17 01:09:28.487998 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 01:09:28.488009 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 01:09:28.502159 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 01:09:28.560403 systemd[1]: Stopping kubelet.service... May 17 01:09:28.579681 systemd[1]: kubelet.service: Deactivated successfully. May 17 01:09:28.579841 systemd[1]: Stopped kubelet.service. May 17 01:09:28.580773 systemd[1]: Starting kubelet.service... May 17 01:09:28.808507 systemd[1]: Started kubelet.service. May 17 01:09:28.828224 kubelet[2625]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 01:09:28.828224 kubelet[2625]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 17 01:09:28.828224 kubelet[2625]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 01:09:28.828589 kubelet[2625]: I0517 01:09:28.828233 2625 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 01:09:28.831773 kubelet[2625]: I0517 01:09:28.831726 2625 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 01:09:28.831773 kubelet[2625]: I0517 01:09:28.831736 2625 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 01:09:28.831896 kubelet[2625]: I0517 01:09:28.831860 2625 server.go:934] "Client rotation is on, will bootstrap in background" May 17 01:09:28.832590 kubelet[2625]: I0517 01:09:28.832555 2625 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 01:09:28.833630 kubelet[2625]: I0517 01:09:28.833594 2625 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 01:09:28.835561 kubelet[2625]: E0517 01:09:28.835517 2625 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 01:09:28.835561 kubelet[2625]: I0517 01:09:28.835531 2625 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 01:09:28.852732 kubelet[2625]: I0517 01:09:28.852690 2625 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 01:09:28.852972 kubelet[2625]: I0517 01:09:28.852936 2625 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 01:09:28.853055 kubelet[2625]: I0517 01:09:28.853004 2625 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 01:09:28.853124 kubelet[2625]: I0517 01:09:28.853022 2625 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-b3aec2dc90","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 01:09:28.853193 kubelet[2625]: I0517 01:09:28.853131 2625 topology_manager.go:138] "Creating topology manager with none policy" May 17 01:09:28.853193 kubelet[2625]: I0517 01:09:28.853137 2625 container_manager_linux.go:300] "Creating device plugin manager" May 17 01:09:28.853193 kubelet[2625]: I0517 01:09:28.853155 2625 state_mem.go:36] "Initialized new in-memory state store" May 17 01:09:28.853267 kubelet[2625]: I0517 01:09:28.853203 2625 kubelet.go:408] "Attempting to sync node with API server" May 17 01:09:28.853267 kubelet[2625]: I0517 01:09:28.853211 2625 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 01:09:28.853267 kubelet[2625]: I0517 01:09:28.853233 2625 kubelet.go:314] "Adding apiserver pod source" May 17 01:09:28.853267 kubelet[2625]: I0517 01:09:28.853241 2625 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 01:09:28.853646 kubelet[2625]: I0517 01:09:28.853633 2625 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 01:09:28.853944 kubelet[2625]: I0517 01:09:28.853908 2625 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 01:09:28.854185 kubelet[2625]: I0517 01:09:28.854176 2625 server.go:1274] "Started kubelet" May 17 01:09:28.854280 kubelet[2625]: I0517 01:09:28.854254 2625 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 May 17 01:09:28.854316 kubelet[2625]: I0517 01:09:28.854256 2625 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 01:09:28.854432 kubelet[2625]: I0517 01:09:28.854418 2625 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 01:09:28.855406 kubelet[2625]: I0517 01:09:28.855394 2625 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 01:09:28.855657 kubelet[2625]: I0517 01:09:28.855637 2625 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 01:09:28.855732 kubelet[2625]: E0517 01:09:28.855716 2625 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b3aec2dc90\" not found" May 17 01:09:28.855796 kubelet[2625]: I0517 01:09:28.855782 2625 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 01:09:28.855899 kubelet[2625]: I0517 01:09:28.855883 2625 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 01:09:28.856130 kubelet[2625]: E0517 01:09:28.856115 2625 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 01:09:28.856199 kubelet[2625]: I0517 01:09:28.856171 2625 reconciler.go:26] "Reconciler: start to sync state" May 17 01:09:28.856398 kubelet[2625]: I0517 01:09:28.856386 2625 factory.go:221] Registration of the systemd container factory successfully May 17 01:09:28.856676 kubelet[2625]: I0517 01:09:28.856661 2625 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 01:09:28.857312 kubelet[2625]: I0517 01:09:28.857300 2625 server.go:449] "Adding debug handlers to kubelet server" May 17 01:09:28.857377 kubelet[2625]: I0517 01:09:28.857361 2625 factory.go:221] Registration of the containerd container factory successfully May 17 01:09:28.861538 kubelet[2625]: I0517 01:09:28.861509 2625 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 01:09:28.862168 kubelet[2625]: I0517 01:09:28.862155 2625 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 01:09:28.862168 kubelet[2625]: I0517 01:09:28.862167 2625 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 01:09:28.862292 kubelet[2625]: I0517 01:09:28.862179 2625 kubelet.go:2321] "Starting kubelet main sync loop" May 17 01:09:28.862292 kubelet[2625]: E0517 01:09:28.862207 2625 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 01:09:28.881052 kubelet[2625]: I0517 01:09:28.881034 2625 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 01:09:28.881052 kubelet[2625]: I0517 01:09:28.881046 2625 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 01:09:28.881052 kubelet[2625]: I0517 01:09:28.881058 2625 state_mem.go:36] "Initialized new in-memory state store" May 17 01:09:28.881186 kubelet[2625]: I0517 01:09:28.881157 2625 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 01:09:28.881186 kubelet[2625]: I0517 01:09:28.881165 2625 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 01:09:28.881186 kubelet[2625]: I0517 01:09:28.881179 2625 policy_none.go:49] "None policy: Start" May 17 01:09:28.881462 kubelet[2625]: I0517 01:09:28.881452 2625 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 01:09:28.881462 kubelet[2625]: I0517 01:09:28.881464 2625 state_mem.go:35] "Initializing new in-memory state store" May 17 01:09:28.881553 kubelet[2625]: I0517 01:09:28.881545 2625 state_mem.go:75] "Updated machine memory state" May 17 01:09:28.882273 kubelet[2625]: I0517 01:09:28.882263 2625 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 01:09:28.882364 kubelet[2625]: I0517 01:09:28.882358 2625 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 01:09:28.882391 kubelet[2625]: I0517 01:09:28.882366 2625 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 01:09:28.882506 kubelet[2625]: I0517 01:09:28.882492 2625 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 01:09:28.971948 kubelet[2625]: W0517 01:09:28.971845 2625 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:09:28.972364 kubelet[2625]: W0517 01:09:28.972265 2625 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:09:28.972565 kubelet[2625]: W0517 01:09:28.972390 2625 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:09:28.972565 kubelet[2625]: E0517 01:09:28.972445 2625 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.7-n-b3aec2dc90\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:28.989940 kubelet[2625]: I0517 01:09:28.989869 2625 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-b3aec2dc90" May 17 01:09:29.001423 kubelet[2625]: I0517 01:09:29.001321 2625 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.7-n-b3aec2dc90" May 17 01:09:29.001628 kubelet[2625]: I0517 01:09:29.001517 2625 kubelet_node_status.go:75] "Successfully registered node" 
node="ci-3510.3.7-n-b3aec2dc90" May 17 01:09:29.157280 kubelet[2625]: I0517 01:09:29.157123 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2db19a57d13dbd04094c69f121ae3db-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-b3aec2dc90\" (UID: \"c2db19a57d13dbd04094c69f121ae3db\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:29.157576 kubelet[2625]: I0517 01:09:29.157340 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9bc582d9745a887bfbfb35d0ae75ef84-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-b3aec2dc90\" (UID: \"9bc582d9745a887bfbfb35d0ae75ef84\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:29.157576 kubelet[2625]: I0517 01:09:29.157446 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9bc582d9745a887bfbfb35d0ae75ef84-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-b3aec2dc90\" (UID: \"9bc582d9745a887bfbfb35d0ae75ef84\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:29.157576 kubelet[2625]: I0517 01:09:29.157527 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2faf743c53adcb7084986d4b6643960d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-b3aec2dc90\" (UID: \"2faf743c53adcb7084986d4b6643960d\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:29.157949 kubelet[2625]: I0517 01:09:29.157591 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2db19a57d13dbd04094c69f121ae3db-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-b3aec2dc90\" (UID: \"c2db19a57d13dbd04094c69f121ae3db\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:29.157949 kubelet[2625]: I0517 01:09:29.157680 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2db19a57d13dbd04094c69f121ae3db-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-b3aec2dc90\" (UID: \"c2db19a57d13dbd04094c69f121ae3db\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:29.157949 kubelet[2625]: I0517 01:09:29.157758 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9bc582d9745a887bfbfb35d0ae75ef84-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-b3aec2dc90\" (UID: \"9bc582d9745a887bfbfb35d0ae75ef84\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:29.157949 kubelet[2625]: I0517 01:09:29.157833 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9bc582d9745a887bfbfb35d0ae75ef84-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-b3aec2dc90\" (UID: \"9bc582d9745a887bfbfb35d0ae75ef84\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:29.157949 kubelet[2625]: I0517 01:09:29.157900 2625 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9bc582d9745a887bfbfb35d0ae75ef84-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-b3aec2dc90\" (UID: \"9bc582d9745a887bfbfb35d0ae75ef84\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:29.388599 sudo[2670]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 01:09:29.388725 sudo[2670]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 17 01:09:29.714517 sudo[2670]: pam_unix(sudo:session): session closed for user root May 17 01:09:29.854406 kubelet[2625]: I0517 01:09:29.854363 2625 apiserver.go:52] "Watching apiserver" May 17 01:09:29.856819 kubelet[2625]: I0517 01:09:29.856779 2625 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 01:09:29.870051 kubelet[2625]: W0517 01:09:29.870001 2625 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:09:29.870121 kubelet[2625]: E0517 01:09:29.870052 2625 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.7-n-b3aec2dc90\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:29.870190 kubelet[2625]: W0517 01:09:29.870180 2625 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:09:29.870217 kubelet[2625]: W0517 01:09:29.870208 2625 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:09:29.870252 kubelet[2625]: E0517 01:09:29.870209 2625 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-n-b3aec2dc90\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:29.870288 kubelet[2625]: E0517 01:09:29.870251 2625 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.7-n-b3aec2dc90\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b3aec2dc90" May 17 01:09:29.877477 kubelet[2625]: I0517 01:09:29.877383 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-n-b3aec2dc90" podStartSLOduration=1.877346129 podStartE2EDuration="1.877346129s" podCreationTimestamp="2025-05-17 01:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:09:29.877302836 +0000 UTC m=+1.066268156" watchObservedRunningTime="2025-05-17 01:09:29.877346129 +0000 UTC m=+1.066311452" May 17 01:09:29.882625 kubelet[2625]: I0517 01:09:29.882605 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b3aec2dc90" podStartSLOduration=1.882598877 podStartE2EDuration="1.882598877s" podCreationTimestamp="2025-05-17 01:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:09:29.882458215 +0000 UTC m=+1.071423537" watchObservedRunningTime="2025-05-17 01:09:29.882598877 +0000 UTC m=+1.071564195" May 17 
01:09:29.887957 kubelet[2625]: I0517 01:09:29.887883 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-n-b3aec2dc90" podStartSLOduration=1.8878743679999999 podStartE2EDuration="1.887874368s" podCreationTimestamp="2025-05-17 01:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:09:29.887782298 +0000 UTC m=+1.076747623" watchObservedRunningTime="2025-05-17 01:09:29.887874368 +0000 UTC m=+1.076839689" May 17 01:09:31.092529 sudo[1878]: pam_unix(sudo:session): session closed for user root May 17 01:09:31.095601 sshd[1873]: pam_unix(sshd:session): session closed for user core May 17 01:09:31.101561 systemd[1]: sshd@6-147.28.180.193:22-139.178.89.65:52474.service: Deactivated successfully. May 17 01:09:31.104205 systemd-logind[1703]: Session 9 logged out. Waiting for processes to exit. May 17 01:09:31.104352 systemd[1]: session-9.scope: Deactivated successfully. May 17 01:09:31.106947 systemd-logind[1703]: Removed session 9. May 17 01:09:34.690820 kubelet[2625]: I0517 01:09:34.690712 2625 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 01:09:34.692008 env[1661]: time="2025-05-17T01:09:34.691519848Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 01:09:34.692742 kubelet[2625]: I0517 01:09:34.692022 2625 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 01:09:35.704727 kubelet[2625]: I0517 01:09:35.704669 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-cni-path\") pod \"cilium-2qj4j\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " pod="kube-system/cilium-2qj4j" May 17 01:09:35.704727 kubelet[2625]: I0517 01:09:35.704702 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-host-proc-sys-net\") pod \"cilium-2qj4j\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " pod="kube-system/cilium-2qj4j" May 17 01:09:35.704727 kubelet[2625]: I0517 01:09:35.704719 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-cilium-run\") pod \"cilium-2qj4j\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " pod="kube-system/cilium-2qj4j" May 17 01:09:35.705142 kubelet[2625]: I0517 01:09:35.704738 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-xtables-lock\") pod \"cilium-2qj4j\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " pod="kube-system/cilium-2qj4j" May 17 01:09:35.705142 kubelet[2625]: I0517 01:09:35.704754 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a342a11-54dc-4553-a8cb-dfc59516b04f-xtables-lock\") pod \"kube-proxy-rxgd4\" (UID: \"0a342a11-54dc-4553-a8cb-dfc59516b04f\") " pod="kube-system/kube-proxy-rxgd4" May 17 01:09:35.705142 kubelet[2625]: I0517 01:09:35.704777 2625 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/417e8741-47a7-46aa-af0c-11be2cbdafbc-hubble-tls\") pod \"cilium-2qj4j\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " pod="kube-system/cilium-2qj4j" May 17 01:09:35.705142 kubelet[2625]: I0517 01:09:35.704799 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0a342a11-54dc-4553-a8cb-dfc59516b04f-kube-proxy\") pod \"kube-proxy-rxgd4\" (UID: \"0a342a11-54dc-4553-a8cb-dfc59516b04f\") " pod="kube-system/kube-proxy-rxgd4" May 17 01:09:35.705142 kubelet[2625]: I0517 01:09:35.704814 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-etc-cni-netd\") pod \"cilium-2qj4j\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " pod="kube-system/cilium-2qj4j" May 17 01:09:35.705142 kubelet[2625]: I0517 01:09:35.704826 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-lib-modules\") pod \"cilium-2qj4j\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " pod="kube-system/cilium-2qj4j" May 17 01:09:35.705344 kubelet[2625]: I0517 01:09:35.704844 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-host-proc-sys-kernel\") pod \"cilium-2qj4j\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " pod="kube-system/cilium-2qj4j" May 17 01:09:35.705344 kubelet[2625]: I0517 01:09:35.704867 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-bpf-maps\") pod \"cilium-2qj4j\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " pod="kube-system/cilium-2qj4j" May 17 01:09:35.705344 kubelet[2625]: I0517 01:09:35.704893 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-hostproc\") pod \"cilium-2qj4j\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " pod="kube-system/cilium-2qj4j" May 17 01:09:35.705344 kubelet[2625]: I0517 01:09:35.704912 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/417e8741-47a7-46aa-af0c-11be2cbdafbc-clustermesh-secrets\") pod \"cilium-2qj4j\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " pod="kube-system/cilium-2qj4j" May 17 01:09:35.705344 kubelet[2625]: I0517 01:09:35.704927 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p245w\" (UniqueName: \"kubernetes.io/projected/417e8741-47a7-46aa-af0c-11be2cbdafbc-kube-api-access-p245w\") pod \"cilium-2qj4j\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " pod="kube-system/cilium-2qj4j" May 17 01:09:35.705344 kubelet[2625]: I0517 01:09:35.704942 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/0a342a11-54dc-4553-a8cb-dfc59516b04f-lib-modules\") pod \"kube-proxy-rxgd4\" (UID: \"0a342a11-54dc-4553-a8cb-dfc59516b04f\") " pod="kube-system/kube-proxy-rxgd4" May 17 01:09:35.705520 kubelet[2625]: I0517 01:09:35.704966 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsb8c\" (UniqueName: \"kubernetes.io/projected/0a342a11-54dc-4553-a8cb-dfc59516b04f-kube-api-access-wsb8c\") pod \"kube-proxy-rxgd4\" (UID: \"0a342a11-54dc-4553-a8cb-dfc59516b04f\") " pod="kube-system/kube-proxy-rxgd4" May 17 01:09:35.705520 kubelet[2625]: I0517 01:09:35.704993 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-cilium-cgroup\") pod \"cilium-2qj4j\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " pod="kube-system/cilium-2qj4j" May 17 01:09:35.705520 kubelet[2625]: I0517 01:09:35.705008 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/417e8741-47a7-46aa-af0c-11be2cbdafbc-cilium-config-path\") pod \"cilium-2qj4j\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " pod="kube-system/cilium-2qj4j" May 17 01:09:35.806172 kubelet[2625]: I0517 01:09:35.806039 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6798eaf2-a216-4e10-a30b-39a2829df313-cilium-config-path\") pod \"cilium-operator-5d85765b45-dhpmz\" (UID: \"6798eaf2-a216-4e10-a30b-39a2829df313\") " pod="kube-system/cilium-operator-5d85765b45-dhpmz" May 17 01:09:35.806573 kubelet[2625]: I0517 01:09:35.806502 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5lxl\" (UniqueName: \"kubernetes.io/projected/6798eaf2-a216-4e10-a30b-39a2829df313-kube-api-access-d5lxl\") pod \"cilium-operator-5d85765b45-dhpmz\" (UID: \"6798eaf2-a216-4e10-a30b-39a2829df313\") " pod="kube-system/cilium-operator-5d85765b45-dhpmz" May 17 01:09:35.806906 kubelet[2625]: I0517 01:09:35.806784 2625 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 01:09:35.958915 env[1661]: time="2025-05-17T01:09:35.958644133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rxgd4,Uid:0a342a11-54dc-4553-a8cb-dfc59516b04f,Namespace:kube-system,Attempt:0,}" May 17 01:09:35.962536 env[1661]: time="2025-05-17T01:09:35.962424047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2qj4j,Uid:417e8741-47a7-46aa-af0c-11be2cbdafbc,Namespace:kube-system,Attempt:0,}" May 17 01:09:35.984609 env[1661]: time="2025-05-17T01:09:35.984370881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:09:35.984609 env[1661]: time="2025-05-17T01:09:35.984489266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:09:35.984609 env[1661]: time="2025-05-17T01:09:35.984542663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:09:35.985133 env[1661]: time="2025-05-17T01:09:35.985017861Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d4151ca51485290e4a607394ac677bc2767e3be005f4ddb2f9c330939723d9d pid=2781 runtime=io.containerd.runc.v2 May 17 01:09:35.987887 env[1661]: time="2025-05-17T01:09:35.987745122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:09:35.987887 env[1661]: time="2025-05-17T01:09:35.987853658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:09:35.988265 env[1661]: time="2025-05-17T01:09:35.987919664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:09:35.988471 env[1661]: time="2025-05-17T01:09:35.988374574Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e pid=2789 runtime=io.containerd.runc.v2 May 17 01:09:36.062356 env[1661]: time="2025-05-17T01:09:36.062191497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rxgd4,Uid:0a342a11-54dc-4553-a8cb-dfc59516b04f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d4151ca51485290e4a607394ac677bc2767e3be005f4ddb2f9c330939723d9d\"" May 17 01:09:36.063398 env[1661]: time="2025-05-17T01:09:36.063338139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2qj4j,Uid:417e8741-47a7-46aa-af0c-11be2cbdafbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\"" May 17 01:09:36.065786 env[1661]: time="2025-05-17T01:09:36.065695694Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 01:09:36.066679 env[1661]: time="2025-05-17T01:09:36.066606248Z" level=info msg="CreateContainer within sandbox \"7d4151ca51485290e4a607394ac677bc2767e3be005f4ddb2f9c330939723d9d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 01:09:36.077019 env[1661]: time="2025-05-17T01:09:36.076916742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dhpmz,Uid:6798eaf2-a216-4e10-a30b-39a2829df313,Namespace:kube-system,Attempt:0,}" May 17 01:09:36.079165 env[1661]: time="2025-05-17T01:09:36.079099911Z" level=info msg="CreateContainer within sandbox \"7d4151ca51485290e4a607394ac677bc2767e3be005f4ddb2f9c330939723d9d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dcca42ffbc08fe9bc43f470ccfa01834344135d02b0ae0358a66d6032332e703\"" May 17 01:09:36.079922 env[1661]: time="2025-05-17T01:09:36.079858143Z" level=info msg="StartContainer for \"dcca42ffbc08fe9bc43f470ccfa01834344135d02b0ae0358a66d6032332e703\"" May 17 01:09:36.098107 env[1661]: time="2025-05-17T01:09:36.097958692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:09:36.098107 env[1661]: time="2025-05-17T01:09:36.098030816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:09:36.098107 env[1661]: time="2025-05-17T01:09:36.098059077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:09:36.098565 env[1661]: time="2025-05-17T01:09:36.098432188Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b pid=2864 runtime=io.containerd.runc.v2 May 17 01:09:36.138139 env[1661]: time="2025-05-17T01:09:36.138100105Z" level=info msg="StartContainer for \"dcca42ffbc08fe9bc43f470ccfa01834344135d02b0ae0358a66d6032332e703\" returns successfully" May 17 01:09:36.150114 env[1661]: time="2025-05-17T01:09:36.150086766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dhpmz,Uid:6798eaf2-a216-4e10-a30b-39a2829df313,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b\"" May 17 01:09:36.910209 kubelet[2625]: I0517 01:09:36.910051 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rxgd4" podStartSLOduration=1.910007666 podStartE2EDuration="1.910007666s" podCreationTimestamp="2025-05-17 01:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:09:36.909597645 +0000 UTC m=+8.098563035" watchObservedRunningTime="2025-05-17 01:09:36.910007666 +0000 UTC m=+8.098973037" May 17 01:09:37.983366 update_engine[1652]: I0517 01:09:37.983305 1652 update_attempter.cc:509] Updating boot flags... May 17 01:09:40.395824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3662496519.mount: Deactivated successfully. 
May 17 01:09:42.092860 env[1661]: time="2025-05-17T01:09:42.092808372Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:42.093828 env[1661]: time="2025-05-17T01:09:42.093784781Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:42.095174 env[1661]: time="2025-05-17T01:09:42.095157987Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:42.095635 env[1661]: time="2025-05-17T01:09:42.095617480Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 17 01:09:42.096630 env[1661]: time="2025-05-17T01:09:42.096575758Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 01:09:42.097058 env[1661]: time="2025-05-17T01:09:42.097028157Z" level=info msg="CreateContainer within sandbox \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 01:09:42.101125 env[1661]: time="2025-05-17T01:09:42.101086730Z" level=info msg="CreateContainer within sandbox \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134\"" May 17 01:09:42.101369 env[1661]: time="2025-05-17T01:09:42.101356205Z" level=info msg="StartContainer for \"f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134\"" May 17 01:09:42.102853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201751906.mount: Deactivated successfully. May 17 01:09:42.131094 env[1661]: time="2025-05-17T01:09:42.131022607Z" level=info msg="StartContainer for \"f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134\" returns successfully" May 17 01:09:43.105511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134-rootfs.mount: Deactivated successfully. May 17 01:09:44.786921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3628589796.mount: Deactivated successfully. 
May 17 01:09:44.787969 env[1661]: time="2025-05-17T01:09:44.787895279Z" level=info msg="shim disconnected" id=f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134 May 17 01:09:44.788152 env[1661]: time="2025-05-17T01:09:44.787970959Z" level=warning msg="cleaning up after shim disconnected" id=f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134 namespace=k8s.io May 17 01:09:44.788152 env[1661]: time="2025-05-17T01:09:44.787991317Z" level=info msg="cleaning up dead shim" May 17 01:09:44.792205 env[1661]: time="2025-05-17T01:09:44.792185292Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:09:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3129 runtime=io.containerd.runc.v2\n" May 17 01:09:44.907521 env[1661]: time="2025-05-17T01:09:44.907494384Z" level=info msg="CreateContainer within sandbox \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 01:09:44.929045 env[1661]: time="2025-05-17T01:09:44.929017211Z" level=info msg="CreateContainer within sandbox \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c\"" May 17 01:09:44.929427 env[1661]: time="2025-05-17T01:09:44.929413544Z" level=info msg="StartContainer for \"c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c\"" May 17 01:09:44.930223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1093620790.mount: Deactivated successfully. May 17 01:09:44.950156 env[1661]: time="2025-05-17T01:09:44.950127759Z" level=info msg="StartContainer for \"c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c\" returns successfully" May 17 01:09:44.956378 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 01:09:44.956531 systemd[1]: Stopped systemd-sysctl.service. May 17 01:09:44.956637 systemd[1]: Stopping systemd-sysctl.service... May 17 01:09:44.957516 systemd[1]: Starting systemd-sysctl.service... May 17 01:09:44.961462 systemd[1]: Finished systemd-sysctl.service. 
May 17 01:09:45.028028 env[1661]: time="2025-05-17T01:09:45.027979846Z" level=info msg="shim disconnected" id=c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c May 17 01:09:45.028220 env[1661]: time="2025-05-17T01:09:45.028031634Z" level=warning msg="cleaning up after shim disconnected" id=c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c namespace=k8s.io May 17 01:09:45.028220 env[1661]: time="2025-05-17T01:09:45.028050291Z" level=info msg="cleaning up dead shim" May 17 01:09:45.035293 env[1661]: time="2025-05-17T01:09:45.035257171Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:09:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3194 runtime=io.containerd.runc.v2\n" May 17 01:09:45.243526 env[1661]: time="2025-05-17T01:09:45.243502888Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:45.244091 env[1661]: time="2025-05-17T01:09:45.244079255Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:45.244774 env[1661]: time="2025-05-17T01:09:45.244763523Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:09:45.245388 env[1661]: time="2025-05-17T01:09:45.245372820Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 17 01:09:45.246746 env[1661]: time="2025-05-17T01:09:45.246732718Z" level=info msg="CreateContainer within sandbox \"7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 01:09:45.250661 env[1661]: time="2025-05-17T01:09:45.250647032Z" level=info msg="CreateContainer within sandbox \"7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641\"" May 17 01:09:45.251014 env[1661]: time="2025-05-17T01:09:45.251001383Z" level=info msg="StartContainer for \"a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641\"" May 17 01:09:45.271951 env[1661]: time="2025-05-17T01:09:45.271898845Z" level=info msg="StartContainer for \"a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641\" returns successfully" May 17 01:09:45.781914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c-rootfs.mount: Deactivated successfully. 
May 17 01:09:45.921654 env[1661]: time="2025-05-17T01:09:45.921504831Z" level=info msg="CreateContainer within sandbox \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 01:09:45.940307 kubelet[2625]: I0517 01:09:45.940172 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-dhpmz" podStartSLOduration=1.8449322700000002 podStartE2EDuration="10.940130161s" podCreationTimestamp="2025-05-17 01:09:35 +0000 UTC" firstStartedPulling="2025-05-17 01:09:36.150663567 +0000 UTC m=+7.339628884" lastFinishedPulling="2025-05-17 01:09:45.245861455 +0000 UTC m=+16.434826775" observedRunningTime="2025-05-17 01:09:45.939576336 +0000 UTC m=+17.128541730" watchObservedRunningTime="2025-05-17 01:09:45.940130161 +0000 UTC m=+17.129095544" May 17 01:09:45.943325 env[1661]: time="2025-05-17T01:09:45.943252376Z" level=info msg="CreateContainer within sandbox \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4\"" May 17 01:09:45.944047 env[1661]: time="2025-05-17T01:09:45.944005017Z" level=info msg="StartContainer for \"d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4\"" May 17 01:09:45.980216 env[1661]: time="2025-05-17T01:09:45.980185656Z" level=info msg="StartContainer for \"d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4\" returns successfully" May 17 01:09:46.085869 env[1661]: time="2025-05-17T01:09:46.085743765Z" level=info msg="shim disconnected" id=d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4 May 17 01:09:46.086300 env[1661]: time="2025-05-17T01:09:46.085878027Z" level=warning msg="cleaning up after shim disconnected" id=d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4 namespace=k8s.io May 17 01:09:46.086300 env[1661]: time="2025-05-17T01:09:46.085912163Z" level=info msg="cleaning up dead shim" May 17 01:09:46.103359 env[1661]: time="2025-05-17T01:09:46.103262348Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:09:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3298 runtime=io.containerd.runc.v2\n" May 17 01:09:46.781037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4-rootfs.mount: Deactivated successfully. May 17 01:09:46.930505 env[1661]: time="2025-05-17T01:09:46.930381643Z" level=info msg="CreateContainer within sandbox \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 01:09:46.947467 env[1661]: time="2025-05-17T01:09:46.947440508Z" level=info msg="CreateContainer within sandbox \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e\"" May 17 01:09:46.947800 env[1661]: time="2025-05-17T01:09:46.947783382Z" level=info msg="StartContainer for \"c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e\"" May 17 01:09:46.949156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1353685981.mount: Deactivated successfully. 
May 17 01:09:46.967726 env[1661]: time="2025-05-17T01:09:46.967700424Z" level=info msg="StartContainer for \"c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e\" returns successfully" May 17 01:09:46.975818 env[1661]: time="2025-05-17T01:09:46.975759068Z" level=info msg="shim disconnected" id=c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e May 17 01:09:46.975818 env[1661]: time="2025-05-17T01:09:46.975785540Z" level=warning msg="cleaning up after shim disconnected" id=c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e namespace=k8s.io May 17 01:09:46.975818 env[1661]: time="2025-05-17T01:09:46.975791559Z" level=info msg="cleaning up dead shim" May 17 01:09:46.979192 env[1661]: time="2025-05-17T01:09:46.979173940Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:09:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3352 runtime=io.containerd.runc.v2\n" May 17 01:09:47.785384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e-rootfs.mount: Deactivated successfully. May 17 01:09:47.938293 env[1661]: time="2025-05-17T01:09:47.938179158Z" level=info msg="CreateContainer within sandbox \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 01:09:47.954681 env[1661]: time="2025-05-17T01:09:47.954523624Z" level=info msg="CreateContainer within sandbox \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c\"" May 17 01:09:47.955648 env[1661]: time="2025-05-17T01:09:47.955552484Z" level=info msg="StartContainer for \"bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c\"" May 17 01:09:48.006903 env[1661]: time="2025-05-17T01:09:48.006839055Z" level=info msg="StartContainer for \"bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c\" returns successfully" May 17 01:09:48.093289 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
May 17 01:09:48.132975 kubelet[2625]: I0517 01:09:48.132957 2625 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 01:09:48.195336 kubelet[2625]: I0517 01:09:48.195312 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/096efc0c-f94c-4270-b525-d85ff9c7451e-config-volume\") pod \"coredns-7c65d6cfc9-bq8fg\" (UID: \"096efc0c-f94c-4270-b525-d85ff9c7451e\") " pod="kube-system/coredns-7c65d6cfc9-bq8fg" May 17 01:09:48.195426 kubelet[2625]: I0517 01:09:48.195346 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmsz2\" (UniqueName: \"kubernetes.io/projected/096efc0c-f94c-4270-b525-d85ff9c7451e-kube-api-access-vmsz2\") pod \"coredns-7c65d6cfc9-bq8fg\" (UID: \"096efc0c-f94c-4270-b525-d85ff9c7451e\") " pod="kube-system/coredns-7c65d6cfc9-bq8fg" May 17 01:09:48.195426 kubelet[2625]: I0517 01:09:48.195371 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecb7793b-5648-4bc5-8210-3f6cb7f32729-config-volume\") pod \"coredns-7c65d6cfc9-6b8cc\" (UID: \"ecb7793b-5648-4bc5-8210-3f6cb7f32729\") " pod="kube-system/coredns-7c65d6cfc9-6b8cc" May 17 01:09:48.195426 kubelet[2625]: I0517 01:09:48.195394 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjpsp\" (UniqueName: \"kubernetes.io/projected/ecb7793b-5648-4bc5-8210-3f6cb7f32729-kube-api-access-xjpsp\") pod \"coredns-7c65d6cfc9-6b8cc\" (UID: \"ecb7793b-5648-4bc5-8210-3f6cb7f32729\") " pod="kube-system/coredns-7c65d6cfc9-6b8cc" May 17 01:09:48.246314 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
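The kernel lines "Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on ..." indicate that the host currently permits unprivileged bpf(); the knob governing this is the kernel.unprivileged_bpf_disabled sysctl. A minimal sketch, assuming only the standard /proc/sys interface, that reports the current setting:

```go
// Sketch: report whether unprivileged bpf() is allowed, which is what the
// "Unprivileged eBPF is enabled" kernel warning above refers to.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
	if err != nil {
		fmt.Println("cannot read sysctl:", err)
		return
	}
	switch strings.TrimSpace(string(raw)) {
	case "0":
		fmt.Println("unprivileged bpf() is allowed (consistent with the warning above)")
	case "1":
		fmt.Println("unprivileged bpf() is disabled and locked until reboot")
	case "2":
		fmt.Println("unprivileged bpf() is disabled, but an admin may change it again")
	}
}
```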
May 17 01:09:48.449748 env[1661]: time="2025-05-17T01:09:48.449501291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6b8cc,Uid:ecb7793b-5648-4bc5-8210-3f6cb7f32729,Namespace:kube-system,Attempt:0,}" May 17 01:09:48.449748 env[1661]: time="2025-05-17T01:09:48.449519504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bq8fg,Uid:096efc0c-f94c-4270-b525-d85ff9c7451e,Namespace:kube-system,Attempt:0,}" May 17 01:09:48.949470 kubelet[2625]: I0517 01:09:48.949431 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2qj4j" podStartSLOduration=7.917909472 podStartE2EDuration="13.949418483s" podCreationTimestamp="2025-05-17 01:09:35 +0000 UTC" firstStartedPulling="2025-05-17 01:09:36.064882044 +0000 UTC m=+7.253847430" lastFinishedPulling="2025-05-17 01:09:42.096391124 +0000 UTC m=+13.285356441" observedRunningTime="2025-05-17 01:09:48.949055943 +0000 UTC m=+20.138021263" watchObservedRunningTime="2025-05-17 01:09:48.949418483 +0000 UTC m=+20.138383801" May 17 01:09:49.841295 systemd-networkd[1378]: cilium_host: Link UP May 17 01:09:49.841387 systemd-networkd[1378]: cilium_net: Link UP May 17 01:09:49.848530 systemd-networkd[1378]: cilium_net: Gained carrier May 17 01:09:49.855713 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 17 01:09:49.855787 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 17 01:09:49.855788 systemd-networkd[1378]: cilium_host: Gained carrier May 17 01:09:49.903894 systemd-networkd[1378]: cilium_vxlan: Link UP May 17 01:09:49.903897 systemd-networkd[1378]: cilium_vxlan: Gained carrier May 17 01:09:50.040300 kernel: NET: Registered PF_ALG protocol family May 17 01:09:50.207497 systemd-networkd[1378]: cilium_host: Gained IPv6LL May 17 01:09:50.471087 systemd-networkd[1378]: lxc_health: Link UP May 17 01:09:50.492249 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 01:09:50.492308 systemd-networkd[1378]: lxc_health: Gained carrier May 17 01:09:50.845362 systemd-networkd[1378]: cilium_net: Gained IPv6LL May 17 01:09:50.999626 systemd-networkd[1378]: lxc106d26386f2f: Link UP May 17 01:09:51.035441 kernel: eth0: renamed from tmpd7189 May 17 01:09:51.050510 systemd-networkd[1378]: lxc147d227cc06e: Link UP May 17 01:09:51.055239 kernel: eth0: renamed from tmpde676 May 17 01:09:51.074713 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 01:09:51.074755 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc106d26386f2f: link becomes ready May 17 01:09:51.075113 systemd-networkd[1378]: lxc106d26386f2f: Gained carrier May 17 01:09:51.075240 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 01:09:51.089233 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc147d227cc06e: link becomes ready May 17 01:09:51.089319 systemd-networkd[1378]: lxc147d227cc06e: Gained carrier May 17 01:09:51.677371 systemd-networkd[1378]: cilium_vxlan: Gained IPv6LL May 17 01:09:52.381292 systemd-networkd[1378]: lxc_health: Gained IPv6LL May 17 01:09:52.701340 systemd-networkd[1378]: lxc106d26386f2f: Gained IPv6LL May 17 01:09:52.701534 systemd-networkd[1378]: lxc147d227cc06e: Gained IPv6LL May 17 01:09:53.321839 env[1661]: time="2025-05-17T01:09:53.321778074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:09:53.321839 env[1661]: time="2025-05-17T01:09:53.321803613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:09:53.321839 env[1661]: time="2025-05-17T01:09:53.321811209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:09:53.322102 env[1661]: time="2025-05-17T01:09:53.321923704Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d7189119655e4b753372ba09b2c65f9f3c71f1e4cf39c1498537d0fd4e1b04b4 pid=4030 runtime=io.containerd.runc.v2 May 17 01:09:53.322102 env[1661]: time="2025-05-17T01:09:53.322039474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:09:53.322102 env[1661]: time="2025-05-17T01:09:53.322056972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:09:53.322102 env[1661]: time="2025-05-17T01:09:53.322063885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:09:53.322183 env[1661]: time="2025-05-17T01:09:53.322122450Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de676081f3d7f89a4ab3d99c4ffb28e656c0350836b4b9b33ac94fa0a0bdd263 pid=4037 runtime=io.containerd.runc.v2 May 17 01:09:53.350464 env[1661]: time="2025-05-17T01:09:53.350434934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6b8cc,Uid:ecb7793b-5648-4bc5-8210-3f6cb7f32729,Namespace:kube-system,Attempt:0,} returns sandbox id \"de676081f3d7f89a4ab3d99c4ffb28e656c0350836b4b9b33ac94fa0a0bdd263\"" May 17 01:09:53.350655 env[1661]: time="2025-05-17T01:09:53.350638947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bq8fg,Uid:096efc0c-f94c-4270-b525-d85ff9c7451e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7189119655e4b753372ba09b2c65f9f3c71f1e4cf39c1498537d0fd4e1b04b4\"" May 17 01:09:53.351603 env[1661]: time="2025-05-17T01:09:53.351588773Z" level=info msg="CreateContainer within sandbox \"de676081f3d7f89a4ab3d99c4ffb28e656c0350836b4b9b33ac94fa0a0bdd263\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 01:09:53.351603 env[1661]: time="2025-05-17T01:09:53.351590259Z" level=info msg="CreateContainer within sandbox \"d7189119655e4b753372ba09b2c65f9f3c71f1e4cf39c1498537d0fd4e1b04b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 01:09:53.356293 env[1661]: time="2025-05-17T01:09:53.356224362Z" level=info msg="CreateContainer within sandbox \"de676081f3d7f89a4ab3d99c4ffb28e656c0350836b4b9b33ac94fa0a0bdd263\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"39c765b1c6a6b1f3a67a3f0dbb8a16b670a2ea3a2f03a9b0d357d77636f2cc85\"" May 17 01:09:53.356431 env[1661]: time="2025-05-17T01:09:53.356415857Z" level=info msg="StartContainer for \"39c765b1c6a6b1f3a67a3f0dbb8a16b670a2ea3a2f03a9b0d357d77636f2cc85\"" May 17 01:09:53.357360 env[1661]: time="2025-05-17T01:09:53.357343355Z" level=info msg="CreateContainer within sandbox \"d7189119655e4b753372ba09b2c65f9f3c71f1e4cf39c1498537d0fd4e1b04b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"e2e02a496ba56384f681092bc43729b9bf067b567397845db2ac3283b3b13d4e\"" May 17 01:09:53.357586 env[1661]: time="2025-05-17T01:09:53.357573464Z" level=info msg="StartContainer for \"e2e02a496ba56384f681092bc43729b9bf067b567397845db2ac3283b3b13d4e\"" May 17 01:09:53.359736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1397623430.mount: Deactivated successfully. May 17 01:09:53.380224 env[1661]: time="2025-05-17T01:09:53.380199310Z" level=info msg="StartContainer for \"e2e02a496ba56384f681092bc43729b9bf067b567397845db2ac3283b3b13d4e\" returns successfully" May 17 01:09:53.380336 env[1661]: time="2025-05-17T01:09:53.380321852Z" level=info msg="StartContainer for \"39c765b1c6a6b1f3a67a3f0dbb8a16b670a2ea3a2f03a9b0d357d77636f2cc85\" returns successfully" May 17 01:09:53.962893 kubelet[2625]: I0517 01:09:53.962832 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bq8fg" podStartSLOduration=18.962821216000002 podStartE2EDuration="18.962821216s" podCreationTimestamp="2025-05-17 01:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:09:53.962604591 +0000 UTC m=+25.151569913" watchObservedRunningTime="2025-05-17 01:09:53.962821216 +0000 UTC m=+25.151786534" May 17 01:09:53.968919 kubelet[2625]: I0517 01:09:53.968891 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6b8cc" podStartSLOduration=18.968878837 podStartE2EDuration="18.968878837s" podCreationTimestamp="2025-05-17 01:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:09:53.968590427 +0000 UTC m=+25.157555758" watchObservedRunningTime="2025-05-17 01:09:53.968878837 +0000 UTC m=+25.157844154" May 17 01:09:57.002986 kubelet[2625]: I0517 01:09:57.002862 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:15:31.428369 systemd[1]: Started sshd@7-147.28.180.193:22-139.178.89.65:59448.service. May 17 01:15:31.456934 sshd[4237]: Accepted publickey for core from 139.178.89.65 port 59448 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:15:31.457922 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:15:31.461317 systemd-logind[1703]: New session 10 of user core. May 17 01:15:31.462064 systemd[1]: Started session-10.scope. May 17 01:15:31.611019 sshd[4237]: pam_unix(sshd:session): session closed for user core May 17 01:15:31.612583 systemd[1]: sshd@7-147.28.180.193:22-139.178.89.65:59448.service: Deactivated successfully. May 17 01:15:31.613232 systemd-logind[1703]: Session 10 logged out. Waiting for processes to exit. May 17 01:15:31.613287 systemd[1]: session-10.scope: Deactivated successfully. May 17 01:15:31.613868 systemd-logind[1703]: Removed session 10. May 17 01:15:36.618460 systemd[1]: Started sshd@8-147.28.180.193:22-139.178.89.65:55060.service. May 17 01:15:36.647009 sshd[4272]: Accepted publickey for core from 139.178.89.65 port 55060 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:15:36.647983 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:15:36.651136 systemd-logind[1703]: New session 11 of user core. May 17 01:15:36.651798 systemd[1]: Started session-11.scope. 
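The pod_startup_latency_tracker entries reconcile with simple arithmetic: podStartE2EDuration spans pod creation to the observed running time, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling); for pods that pulled nothing (pull timestamps of 0001-01-01), such as the two coredns pods above and kube-proxy earlier, the two values are equal. Using the monotonic m=+ offsets printed in the log, the cilium-2qj4j numbers check out exactly:

```go
// Reproduce podStartSLOduration for cilium-2qj4j from the values logged above.
// The m=+ monotonic offsets are used for the image-pull window.
package main

import "fmt"

func main() {
	const (
		e2e                 = 13.949418483 // podStartE2EDuration, seconds
		firstStartedPulling = 7.253847430  // m=+ offset, seconds
		lastFinishedPulling = 13.285356441 // m=+ offset, seconds
	)
	pull := lastFinishedPulling - firstStartedPulling // image-pull window: 6.031509011 s
	slo := e2e - pull
	fmt.Printf("podStartSLOduration = %.9f s\n", slo) // prints 7.917909472, matching the log
}
```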
May 17 01:15:36.742572 sshd[4272]: pam_unix(sshd:session): session closed for user core May 17 01:15:36.743941 systemd[1]: sshd@8-147.28.180.193:22-139.178.89.65:55060.service: Deactivated successfully. May 17 01:15:36.744586 systemd-logind[1703]: Session 11 logged out. Waiting for processes to exit. May 17 01:15:36.744594 systemd[1]: session-11.scope: Deactivated successfully. May 17 01:15:36.745074 systemd-logind[1703]: Removed session 11. May 17 01:15:41.749169 systemd[1]: Started sshd@9-147.28.180.193:22-139.178.89.65:55068.service. May 17 01:15:41.778233 sshd[4300]: Accepted publickey for core from 139.178.89.65 port 55068 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:15:41.779117 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:15:41.782159 systemd-logind[1703]: New session 12 of user core. May 17 01:15:41.782839 systemd[1]: Started session-12.scope. May 17 01:15:41.873386 sshd[4300]: pam_unix(sshd:session): session closed for user core May 17 01:15:41.874880 systemd[1]: sshd@9-147.28.180.193:22-139.178.89.65:55068.service: Deactivated successfully. May 17 01:15:41.875542 systemd[1]: session-12.scope: Deactivated successfully. May 17 01:15:41.875588 systemd-logind[1703]: Session 12 logged out. Waiting for processes to exit. May 17 01:15:41.876107 systemd-logind[1703]: Removed session 12. May 17 01:15:46.880854 systemd[1]: Started sshd@10-147.28.180.193:22-139.178.89.65:59092.service. May 17 01:15:46.909670 sshd[4327]: Accepted publickey for core from 139.178.89.65 port 59092 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:15:46.910600 sshd[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:15:46.914134 systemd-logind[1703]: New session 13 of user core. May 17 01:15:46.914835 systemd[1]: Started session-13.scope. May 17 01:15:47.000469 sshd[4327]: pam_unix(sshd:session): session closed for user core May 17 01:15:47.002058 systemd[1]: Started sshd@11-147.28.180.193:22-139.178.89.65:59106.service. May 17 01:15:47.002394 systemd[1]: sshd@10-147.28.180.193:22-139.178.89.65:59092.service: Deactivated successfully. May 17 01:15:47.002914 systemd-logind[1703]: Session 13 logged out. Waiting for processes to exit. May 17 01:15:47.002982 systemd[1]: session-13.scope: Deactivated successfully. May 17 01:15:47.003387 systemd-logind[1703]: Removed session 13. May 17 01:15:47.030569 sshd[4351]: Accepted publickey for core from 139.178.89.65 port 59106 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:15:47.031342 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:15:47.033919 systemd-logind[1703]: New session 14 of user core. May 17 01:15:47.034385 systemd[1]: Started session-14.scope. May 17 01:15:47.134613 sshd[4351]: pam_unix(sshd:session): session closed for user core May 17 01:15:47.136355 systemd[1]: Started sshd@12-147.28.180.193:22-139.178.89.65:59114.service. May 17 01:15:47.136775 systemd[1]: sshd@11-147.28.180.193:22-139.178.89.65:59106.service: Deactivated successfully. May 17 01:15:47.137487 systemd[1]: session-14.scope: Deactivated successfully. May 17 01:15:47.137500 systemd-logind[1703]: Session 14 logged out. Waiting for processes to exit. May 17 01:15:47.138104 systemd-logind[1703]: Removed session 14. 
May 17 01:15:47.165287 sshd[4378]: Accepted publickey for core from 139.178.89.65 port 59114 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:15:47.166182 sshd[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:15:47.168991 systemd-logind[1703]: New session 15 of user core. May 17 01:15:47.169775 systemd[1]: Started session-15.scope. May 17 01:15:47.298347 sshd[4378]: pam_unix(sshd:session): session closed for user core May 17 01:15:47.299914 systemd[1]: sshd@12-147.28.180.193:22-139.178.89.65:59114.service: Deactivated successfully. May 17 01:15:47.300580 systemd[1]: session-15.scope: Deactivated successfully. May 17 01:15:47.300621 systemd-logind[1703]: Session 15 logged out. Waiting for processes to exit. May 17 01:15:47.301139 systemd-logind[1703]: Removed session 15. May 17 01:15:52.305054 systemd[1]: Started sshd@13-147.28.180.193:22-139.178.89.65:59120.service. May 17 01:15:52.333950 sshd[4406]: Accepted publickey for core from 139.178.89.65 port 59120 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:15:52.334789 sshd[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:15:52.338095 systemd-logind[1703]: New session 16 of user core. May 17 01:15:52.338699 systemd[1]: Started session-16.scope. May 17 01:15:52.425606 sshd[4406]: pam_unix(sshd:session): session closed for user core May 17 01:15:52.427013 systemd[1]: sshd@13-147.28.180.193:22-139.178.89.65:59120.service: Deactivated successfully. May 17 01:15:52.427685 systemd[1]: session-16.scope: Deactivated successfully. May 17 01:15:52.427747 systemd-logind[1703]: Session 16 logged out. Waiting for processes to exit. May 17 01:15:52.428229 systemd-logind[1703]: Removed session 16. May 17 01:15:57.431943 systemd[1]: Started sshd@14-147.28.180.193:22-139.178.89.65:60768.service. May 17 01:15:57.460564 sshd[4434]: Accepted publickey for core from 139.178.89.65 port 60768 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:15:57.461449 sshd[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:15:57.464899 systemd-logind[1703]: New session 17 of user core. May 17 01:15:57.465539 systemd[1]: Started session-17.scope. May 17 01:15:57.557341 sshd[4434]: pam_unix(sshd:session): session closed for user core May 17 01:15:57.559149 systemd[1]: Started sshd@15-147.28.180.193:22-139.178.89.65:60782.service. May 17 01:15:57.559567 systemd[1]: sshd@14-147.28.180.193:22-139.178.89.65:60768.service: Deactivated successfully. May 17 01:15:57.560157 systemd-logind[1703]: Session 17 logged out. Waiting for processes to exit. May 17 01:15:57.560204 systemd[1]: session-17.scope: Deactivated successfully. May 17 01:15:57.560789 systemd-logind[1703]: Removed session 17. May 17 01:15:57.588687 sshd[4458]: Accepted publickey for core from 139.178.89.65 port 60782 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:15:57.589537 sshd[4458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:15:57.592532 systemd-logind[1703]: New session 18 of user core. May 17 01:15:57.593178 systemd[1]: Started session-18.scope. May 17 01:15:57.732164 sshd[4458]: pam_unix(sshd:session): session closed for user core May 17 01:15:57.733758 systemd[1]: Started sshd@16-147.28.180.193:22-139.178.89.65:60794.service. May 17 01:15:57.734109 systemd[1]: sshd@15-147.28.180.193:22-139.178.89.65:60782.service: Deactivated successfully. 
May 17 01:15:57.734587 systemd-logind[1703]: Session 18 logged out. Waiting for processes to exit. May 17 01:15:57.734640 systemd[1]: session-18.scope: Deactivated successfully. May 17 01:15:57.735059 systemd-logind[1703]: Removed session 18. May 17 01:15:57.761695 sshd[4482]: Accepted publickey for core from 139.178.89.65 port 60794 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:15:57.762459 sshd[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:15:57.764971 systemd-logind[1703]: New session 19 of user core. May 17 01:15:57.765600 systemd[1]: Started session-19.scope. May 17 01:15:59.022770 sshd[4482]: pam_unix(sshd:session): session closed for user core May 17 01:15:59.024705 systemd[1]: Started sshd@17-147.28.180.193:22-139.178.89.65:60798.service. May 17 01:15:59.025043 systemd[1]: sshd@16-147.28.180.193:22-139.178.89.65:60794.service: Deactivated successfully. May 17 01:15:59.025767 systemd-logind[1703]: Session 19 logged out. Waiting for processes to exit. May 17 01:15:59.025789 systemd[1]: session-19.scope: Deactivated successfully. May 17 01:15:59.026275 systemd-logind[1703]: Removed session 19. May 17 01:15:59.056567 sshd[4514]: Accepted publickey for core from 139.178.89.65 port 60798 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:15:59.060767 sshd[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:15:59.071791 systemd-logind[1703]: New session 20 of user core. May 17 01:15:59.074199 systemd[1]: Started session-20.scope. May 17 01:15:59.315125 sshd[4514]: pam_unix(sshd:session): session closed for user core May 17 01:15:59.322079 systemd[1]: Started sshd@18-147.28.180.193:22-139.178.89.65:60802.service. May 17 01:15:59.323958 systemd[1]: sshd@17-147.28.180.193:22-139.178.89.65:60798.service: Deactivated successfully. May 17 01:15:59.326474 systemd-logind[1703]: Session 20 logged out. Waiting for processes to exit. May 17 01:15:59.326622 systemd[1]: session-20.scope: Deactivated successfully. May 17 01:15:59.329350 systemd-logind[1703]: Removed session 20. May 17 01:15:59.379360 sshd[4541]: Accepted publickey for core from 139.178.89.65 port 60802 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:15:59.381173 sshd[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:15:59.387131 systemd-logind[1703]: New session 21 of user core. May 17 01:15:59.388581 systemd[1]: Started session-21.scope. May 17 01:15:59.522161 sshd[4541]: pam_unix(sshd:session): session closed for user core May 17 01:15:59.523684 systemd[1]: sshd@18-147.28.180.193:22-139.178.89.65:60802.service: Deactivated successfully. May 17 01:15:59.524224 systemd-logind[1703]: Session 21 logged out. Waiting for processes to exit. May 17 01:15:59.524237 systemd[1]: session-21.scope: Deactivated successfully. May 17 01:15:59.524860 systemd-logind[1703]: Removed session 21. May 17 01:16:04.527061 systemd[1]: Started sshd@19-147.28.180.193:22-139.178.89.65:60806.service. May 17 01:16:04.558722 sshd[4571]: Accepted publickey for core from 139.178.89.65 port 60806 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:16:04.559583 sshd[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:16:04.562599 systemd-logind[1703]: New session 22 of user core. May 17 01:16:04.563298 systemd[1]: Started session-22.scope. 
May 17 01:16:04.647164 sshd[4571]: pam_unix(sshd:session): session closed for user core May 17 01:16:04.648785 systemd[1]: sshd@19-147.28.180.193:22-139.178.89.65:60806.service: Deactivated successfully. May 17 01:16:04.649464 systemd[1]: session-22.scope: Deactivated successfully. May 17 01:16:04.649506 systemd-logind[1703]: Session 22 logged out. Waiting for processes to exit. May 17 01:16:04.650082 systemd-logind[1703]: Removed session 22. May 17 01:16:09.650755 systemd[1]: Started sshd@20-147.28.180.193:22-139.178.89.65:38014.service. May 17 01:16:09.681667 sshd[4600]: Accepted publickey for core from 139.178.89.65 port 38014 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:16:09.682486 sshd[4600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:16:09.685516 systemd-logind[1703]: New session 23 of user core. May 17 01:16:09.686177 systemd[1]: Started session-23.scope. May 17 01:16:09.770421 sshd[4600]: pam_unix(sshd:session): session closed for user core May 17 01:16:09.771968 systemd[1]: sshd@20-147.28.180.193:22-139.178.89.65:38014.service: Deactivated successfully. May 17 01:16:09.772608 systemd[1]: session-23.scope: Deactivated successfully. May 17 01:16:09.772702 systemd-logind[1703]: Session 23 logged out. Waiting for processes to exit. May 17 01:16:09.773202 systemd-logind[1703]: Removed session 23. May 17 01:16:14.776746 systemd[1]: Started sshd@21-147.28.180.193:22-139.178.89.65:38028.service. May 17 01:16:14.805532 sshd[4624]: Accepted publickey for core from 139.178.89.65 port 38028 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:16:14.806464 sshd[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:16:14.809901 systemd-logind[1703]: New session 24 of user core. May 17 01:16:14.810559 systemd[1]: Started session-24.scope. May 17 01:16:14.895391 sshd[4624]: pam_unix(sshd:session): session closed for user core May 17 01:16:14.897037 systemd[1]: Started sshd@22-147.28.180.193:22-139.178.89.65:38040.service. May 17 01:16:14.897379 systemd[1]: sshd@21-147.28.180.193:22-139.178.89.65:38028.service: Deactivated successfully. May 17 01:16:14.897912 systemd-logind[1703]: Session 24 logged out. Waiting for processes to exit. May 17 01:16:14.897945 systemd[1]: session-24.scope: Deactivated successfully. May 17 01:16:14.898548 systemd-logind[1703]: Removed session 24. May 17 01:16:14.925200 sshd[4646]: Accepted publickey for core from 139.178.89.65 port 38040 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:16:14.925917 sshd[4646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:16:14.928263 systemd-logind[1703]: New session 25 of user core. May 17 01:16:14.928733 systemd[1]: Started session-25.scope. 
May 17 01:16:16.271591 env[1661]: time="2025-05-17T01:16:16.271485123Z" level=info msg="StopContainer for \"a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641\" with timeout 30 (s)" May 17 01:16:16.272984 env[1661]: time="2025-05-17T01:16:16.272223080Z" level=info msg="Stop container \"a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641\" with signal terminated" May 17 01:16:16.314279 env[1661]: time="2025-05-17T01:16:16.314185660Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 01:16:16.316888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641-rootfs.mount: Deactivated successfully. May 17 01:16:16.319434 env[1661]: time="2025-05-17T01:16:16.319401377Z" level=info msg="StopContainer for \"bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c\" with timeout 2 (s)" May 17 01:16:16.319603 env[1661]: time="2025-05-17T01:16:16.319578755Z" level=info msg="Stop container \"bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c\" with signal terminated" May 17 01:16:16.323793 env[1661]: time="2025-05-17T01:16:16.323753627Z" level=info msg="shim disconnected" id=a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641 May 17 01:16:16.323912 env[1661]: time="2025-05-17T01:16:16.323796213Z" level=warning msg="cleaning up after shim disconnected" id=a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641 namespace=k8s.io May 17 01:16:16.323912 env[1661]: time="2025-05-17T01:16:16.323807718Z" level=info msg="cleaning up dead shim" May 17 01:16:16.324432 systemd-networkd[1378]: lxc_health: Link DOWN May 17 01:16:16.324437 systemd-networkd[1378]: lxc_health: Lost carrier May 17 01:16:16.329911 env[1661]: time="2025-05-17T01:16:16.329855226Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:16:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4714 runtime=io.containerd.runc.v2\n" May 17 01:16:16.331030 env[1661]: time="2025-05-17T01:16:16.330980718Z" level=info msg="StopContainer for \"a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641\" returns successfully" May 17 01:16:16.331538 env[1661]: time="2025-05-17T01:16:16.331509992Z" level=info msg="StopPodSandbox for \"7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b\"" May 17 01:16:16.331616 env[1661]: time="2025-05-17T01:16:16.331570969Z" level=info msg="Container to stop \"a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 01:16:16.334107 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b-shm.mount: Deactivated successfully. 
May 17 01:16:16.351889 env[1661]: time="2025-05-17T01:16:16.351837794Z" level=info msg="shim disconnected" id=7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b May 17 01:16:16.352044 env[1661]: time="2025-05-17T01:16:16.351890592Z" level=warning msg="cleaning up after shim disconnected" id=7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b namespace=k8s.io May 17 01:16:16.352044 env[1661]: time="2025-05-17T01:16:16.351902297Z" level=info msg="cleaning up dead shim" May 17 01:16:16.353465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b-rootfs.mount: Deactivated successfully. May 17 01:16:16.358006 env[1661]: time="2025-05-17T01:16:16.357951123Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:16:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4748 runtime=io.containerd.runc.v2\n" May 17 01:16:16.358311 env[1661]: time="2025-05-17T01:16:16.358261598Z" level=info msg="TearDown network for sandbox \"7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b\" successfully" May 17 01:16:16.358311 env[1661]: time="2025-05-17T01:16:16.358283450Z" level=info msg="StopPodSandbox for \"7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b\" returns successfully" May 17 01:16:16.420921 env[1661]: time="2025-05-17T01:16:16.420842059Z" level=info msg="shim disconnected" id=bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c May 17 01:16:16.420921 env[1661]: time="2025-05-17T01:16:16.420919652Z" level=warning msg="cleaning up after shim disconnected" id=bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c namespace=k8s.io May 17 01:16:16.421304 env[1661]: time="2025-05-17T01:16:16.420941374Z" level=info msg="cleaning up dead shim" May 17 01:16:16.422018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c-rootfs.mount: Deactivated successfully. 
May 17 01:16:16.433267 env[1661]: time="2025-05-17T01:16:16.433179663Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:16:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4777 runtime=io.containerd.runc.v2\n" May 17 01:16:16.434998 env[1661]: time="2025-05-17T01:16:16.434900995Z" level=info msg="StopContainer for \"bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c\" returns successfully" May 17 01:16:16.435736 env[1661]: time="2025-05-17T01:16:16.435636613Z" level=info msg="StopPodSandbox for \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\"" May 17 01:16:16.435909 env[1661]: time="2025-05-17T01:16:16.435749127Z" level=info msg="Container to stop \"c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 01:16:16.435909 env[1661]: time="2025-05-17T01:16:16.435780638Z" level=info msg="Container to stop \"bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 01:16:16.435909 env[1661]: time="2025-05-17T01:16:16.435804413Z" level=info msg="Container to stop \"f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 01:16:16.435909 env[1661]: time="2025-05-17T01:16:16.435830317Z" level=info msg="Container to stop \"d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 01:16:16.435909 env[1661]: time="2025-05-17T01:16:16.435851286Z" level=info msg="Container to stop \"c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 01:16:16.461130 kubelet[2625]: I0517 01:16:16.461050 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5lxl\" (UniqueName: \"kubernetes.io/projected/6798eaf2-a216-4e10-a30b-39a2829df313-kube-api-access-d5lxl\") pod \"6798eaf2-a216-4e10-a30b-39a2829df313\" (UID: \"6798eaf2-a216-4e10-a30b-39a2829df313\") " May 17 01:16:16.461130 kubelet[2625]: I0517 01:16:16.461129 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6798eaf2-a216-4e10-a30b-39a2829df313-cilium-config-path\") pod \"6798eaf2-a216-4e10-a30b-39a2829df313\" (UID: \"6798eaf2-a216-4e10-a30b-39a2829df313\") " May 17 01:16:16.464992 kubelet[2625]: I0517 01:16:16.464912 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6798eaf2-a216-4e10-a30b-39a2829df313-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6798eaf2-a216-4e10-a30b-39a2829df313" (UID: "6798eaf2-a216-4e10-a30b-39a2829df313"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 01:16:16.466075 kubelet[2625]: I0517 01:16:16.465996 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6798eaf2-a216-4e10-a30b-39a2829df313-kube-api-access-d5lxl" (OuterVolumeSpecName: "kube-api-access-d5lxl") pod "6798eaf2-a216-4e10-a30b-39a2829df313" (UID: "6798eaf2-a216-4e10-a30b-39a2829df313"). InnerVolumeSpecName "kube-api-access-d5lxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 01:16:16.470466 env[1661]: time="2025-05-17T01:16:16.470367323Z" level=info msg="shim disconnected" id=d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e May 17 01:16:16.470743 env[1661]: time="2025-05-17T01:16:16.470467158Z" level=warning msg="cleaning up after shim disconnected" id=d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e namespace=k8s.io May 17 01:16:16.470743 env[1661]: time="2025-05-17T01:16:16.470497926Z" level=info msg="cleaning up dead shim" May 17 01:16:16.482594 env[1661]: time="2025-05-17T01:16:16.482502567Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:16:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4810 runtime=io.containerd.runc.v2\n" May 17 01:16:16.483085 env[1661]: time="2025-05-17T01:16:16.483010467Z" level=info msg="TearDown network for sandbox \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\" successfully" May 17 01:16:16.483085 env[1661]: time="2025-05-17T01:16:16.483054662Z" level=info msg="StopPodSandbox for \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\" returns successfully" May 17 01:16:16.562207 kubelet[2625]: I0517 01:16:16.561941 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-cilium-cgroup\") pod \"417e8741-47a7-46aa-af0c-11be2cbdafbc\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " May 17 01:16:16.562207 kubelet[2625]: I0517 01:16:16.562031 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-cni-path\") pod \"417e8741-47a7-46aa-af0c-11be2cbdafbc\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " May 17 01:16:16.562207 kubelet[2625]: I0517 01:16:16.562098 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p245w\" (UniqueName: \"kubernetes.io/projected/417e8741-47a7-46aa-af0c-11be2cbdafbc-kube-api-access-p245w\") pod \"417e8741-47a7-46aa-af0c-11be2cbdafbc\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " May 17 01:16:16.562207 kubelet[2625]: I0517 01:16:16.562106 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "417e8741-47a7-46aa-af0c-11be2cbdafbc" (UID: "417e8741-47a7-46aa-af0c-11be2cbdafbc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:16.562207 kubelet[2625]: I0517 01:16:16.562148 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-xtables-lock\") pod \"417e8741-47a7-46aa-af0c-11be2cbdafbc\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " May 17 01:16:16.563401 kubelet[2625]: I0517 01:16:16.562209 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "417e8741-47a7-46aa-af0c-11be2cbdafbc" (UID: "417e8741-47a7-46aa-af0c-11be2cbdafbc"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:16.563401 kubelet[2625]: I0517 01:16:16.562265 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-cni-path" (OuterVolumeSpecName: "cni-path") pod "417e8741-47a7-46aa-af0c-11be2cbdafbc" (UID: "417e8741-47a7-46aa-af0c-11be2cbdafbc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:16.563401 kubelet[2625]: I0517 01:16:16.562325 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-bpf-maps\") pod \"417e8741-47a7-46aa-af0c-11be2cbdafbc\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " May 17 01:16:16.563401 kubelet[2625]: I0517 01:16:16.562397 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/417e8741-47a7-46aa-af0c-11be2cbdafbc-clustermesh-secrets\") pod \"417e8741-47a7-46aa-af0c-11be2cbdafbc\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " May 17 01:16:16.563401 kubelet[2625]: I0517 01:16:16.562425 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "417e8741-47a7-46aa-af0c-11be2cbdafbc" (UID: "417e8741-47a7-46aa-af0c-11be2cbdafbc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:16.564014 kubelet[2625]: I0517 01:16:16.562446 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-host-proc-sys-net\") pod \"417e8741-47a7-46aa-af0c-11be2cbdafbc\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " May 17 01:16:16.564014 kubelet[2625]: I0517 01:16:16.562492 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-lib-modules\") pod \"417e8741-47a7-46aa-af0c-11be2cbdafbc\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " May 17 01:16:16.564014 kubelet[2625]: I0517 01:16:16.562539 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-hostproc\") pod \"417e8741-47a7-46aa-af0c-11be2cbdafbc\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " May 17 01:16:16.564014 kubelet[2625]: I0517 01:16:16.562562 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "417e8741-47a7-46aa-af0c-11be2cbdafbc" (UID: "417e8741-47a7-46aa-af0c-11be2cbdafbc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:16.564014 kubelet[2625]: I0517 01:16:16.562577 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "417e8741-47a7-46aa-af0c-11be2cbdafbc" (UID: "417e8741-47a7-46aa-af0c-11be2cbdafbc"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:16.564014 kubelet[2625]: I0517 01:16:16.562606 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-cilium-run\") pod \"417e8741-47a7-46aa-af0c-11be2cbdafbc\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " May 17 01:16:16.564697 kubelet[2625]: I0517 01:16:16.562662 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-hostproc" (OuterVolumeSpecName: "hostproc") pod "417e8741-47a7-46aa-af0c-11be2cbdafbc" (UID: "417e8741-47a7-46aa-af0c-11be2cbdafbc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:16.564697 kubelet[2625]: I0517 01:16:16.562663 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "417e8741-47a7-46aa-af0c-11be2cbdafbc" (UID: "417e8741-47a7-46aa-af0c-11be2cbdafbc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:16.564697 kubelet[2625]: I0517 01:16:16.562694 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-etc-cni-netd\") pod \"417e8741-47a7-46aa-af0c-11be2cbdafbc\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " May 17 01:16:16.564697 kubelet[2625]: I0517 01:16:16.562742 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-host-proc-sys-kernel\") pod \"417e8741-47a7-46aa-af0c-11be2cbdafbc\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " May 17 01:16:16.564697 kubelet[2625]: I0517 01:16:16.562798 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/417e8741-47a7-46aa-af0c-11be2cbdafbc-hubble-tls\") pod \"417e8741-47a7-46aa-af0c-11be2cbdafbc\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " May 17 01:16:16.565217 kubelet[2625]: I0517 01:16:16.562825 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "417e8741-47a7-46aa-af0c-11be2cbdafbc" (UID: "417e8741-47a7-46aa-af0c-11be2cbdafbc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:16.565217 kubelet[2625]: I0517 01:16:16.562835 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "417e8741-47a7-46aa-af0c-11be2cbdafbc" (UID: "417e8741-47a7-46aa-af0c-11be2cbdafbc"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:16.565217 kubelet[2625]: I0517 01:16:16.562850 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/417e8741-47a7-46aa-af0c-11be2cbdafbc-cilium-config-path\") pod \"417e8741-47a7-46aa-af0c-11be2cbdafbc\" (UID: \"417e8741-47a7-46aa-af0c-11be2cbdafbc\") " May 17 01:16:16.565217 kubelet[2625]: I0517 01:16:16.563006 2625 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-cni-path\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:16.565217 kubelet[2625]: I0517 01:16:16.563047 2625 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-cilium-cgroup\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:16.565217 kubelet[2625]: I0517 01:16:16.563078 2625 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-xtables-lock\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:16.565876 kubelet[2625]: I0517 01:16:16.563103 2625 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-bpf-maps\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:16.565876 kubelet[2625]: I0517 01:16:16.563131 2625 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-host-proc-sys-net\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:16.565876 kubelet[2625]: I0517 01:16:16.563159 2625 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-lib-modules\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:16.565876 kubelet[2625]: I0517 01:16:16.563201 2625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5lxl\" (UniqueName: \"kubernetes.io/projected/6798eaf2-a216-4e10-a30b-39a2829df313-kube-api-access-d5lxl\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:16.565876 kubelet[2625]: I0517 01:16:16.563292 2625 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-hostproc\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:16.565876 kubelet[2625]: I0517 01:16:16.563327 2625 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-cilium-run\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:16.565876 kubelet[2625]: I0517 01:16:16.563351 2625 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-etc-cni-netd\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:16.565876 kubelet[2625]: I0517 01:16:16.563376 2625 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/417e8741-47a7-46aa-af0c-11be2cbdafbc-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 
01:16:16.566716 kubelet[2625]: I0517 01:16:16.563403 2625 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6798eaf2-a216-4e10-a30b-39a2829df313-cilium-config-path\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:16.568312 kubelet[2625]: I0517 01:16:16.568213 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/417e8741-47a7-46aa-af0c-11be2cbdafbc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "417e8741-47a7-46aa-af0c-11be2cbdafbc" (UID: "417e8741-47a7-46aa-af0c-11be2cbdafbc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 01:16:16.569288 kubelet[2625]: I0517 01:16:16.569198 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/417e8741-47a7-46aa-af0c-11be2cbdafbc-kube-api-access-p245w" (OuterVolumeSpecName: "kube-api-access-p245w") pod "417e8741-47a7-46aa-af0c-11be2cbdafbc" (UID: "417e8741-47a7-46aa-af0c-11be2cbdafbc"). InnerVolumeSpecName "kube-api-access-p245w". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 01:16:16.569529 kubelet[2625]: I0517 01:16:16.569309 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/417e8741-47a7-46aa-af0c-11be2cbdafbc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "417e8741-47a7-46aa-af0c-11be2cbdafbc" (UID: "417e8741-47a7-46aa-af0c-11be2cbdafbc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 01:16:16.569529 kubelet[2625]: I0517 01:16:16.569487 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/417e8741-47a7-46aa-af0c-11be2cbdafbc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "417e8741-47a7-46aa-af0c-11be2cbdafbc" (UID: "417e8741-47a7-46aa-af0c-11be2cbdafbc"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 01:16:16.664550 kubelet[2625]: I0517 01:16:16.664412 2625 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/417e8741-47a7-46aa-af0c-11be2cbdafbc-hubble-tls\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:16.664550 kubelet[2625]: I0517 01:16:16.664496 2625 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/417e8741-47a7-46aa-af0c-11be2cbdafbc-cilium-config-path\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:16.664550 kubelet[2625]: I0517 01:16:16.664530 2625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p245w\" (UniqueName: \"kubernetes.io/projected/417e8741-47a7-46aa-af0c-11be2cbdafbc-kube-api-access-p245w\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:16.664550 kubelet[2625]: I0517 01:16:16.664560 2625 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/417e8741-47a7-46aa-af0c-11be2cbdafbc-clustermesh-secrets\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:17.067871 kubelet[2625]: I0517 01:16:17.067756 2625 scope.go:117] "RemoveContainer" containerID="a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641" May 17 01:16:17.070837 env[1661]: time="2025-05-17T01:16:17.070752480Z" level=info msg="RemoveContainer for \"a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641\"" May 17 01:16:17.076433 env[1661]: time="2025-05-17T01:16:17.076300676Z" level=info msg="RemoveContainer for \"a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641\" returns successfully" May 17 01:16:17.076969 kubelet[2625]: I0517 01:16:17.076901 2625 scope.go:117] "RemoveContainer" containerID="a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641" May 17 01:16:17.077807 env[1661]: time="2025-05-17T01:16:17.077525000Z" level=error msg="ContainerStatus for \"a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641\": not found" May 17 01:16:17.078076 kubelet[2625]: E0517 01:16:17.078014 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641\": not found" containerID="a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641" May 17 01:16:17.078369 kubelet[2625]: I0517 01:16:17.078107 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641"} err="failed to get container status \"a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4e8b69c7d10139f2ae4c1f0207dd91c013eeb7ed629ecba9c94650bf25d0641\": not found" May 17 01:16:17.078369 kubelet[2625]: I0517 01:16:17.078343 2625 scope.go:117] "RemoveContainer" containerID="bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c" May 17 01:16:17.081130 env[1661]: time="2025-05-17T01:16:17.081022429Z" level=info msg="RemoveContainer for \"bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c\"" May 17 01:16:17.085914 
env[1661]: time="2025-05-17T01:16:17.085800440Z" level=info msg="RemoveContainer for \"bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c\" returns successfully" May 17 01:16:17.086323 kubelet[2625]: I0517 01:16:17.086269 2625 scope.go:117] "RemoveContainer" containerID="c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e" May 17 01:16:17.088787 env[1661]: time="2025-05-17T01:16:17.088703106Z" level=info msg="RemoveContainer for \"c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e\"" May 17 01:16:17.093183 env[1661]: time="2025-05-17T01:16:17.093077901Z" level=info msg="RemoveContainer for \"c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e\" returns successfully" May 17 01:16:17.093485 kubelet[2625]: I0517 01:16:17.093426 2625 scope.go:117] "RemoveContainer" containerID="d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4" May 17 01:16:17.096456 env[1661]: time="2025-05-17T01:16:17.096327861Z" level=info msg="RemoveContainer for \"d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4\"" May 17 01:16:17.101214 env[1661]: time="2025-05-17T01:16:17.101123882Z" level=info msg="RemoveContainer for \"d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4\" returns successfully" May 17 01:16:17.101666 kubelet[2625]: I0517 01:16:17.101546 2625 scope.go:117] "RemoveContainer" containerID="c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c" May 17 01:16:17.104436 env[1661]: time="2025-05-17T01:16:17.104354159Z" level=info msg="RemoveContainer for \"c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c\"" May 17 01:16:17.110079 env[1661]: time="2025-05-17T01:16:17.109970319Z" level=info msg="RemoveContainer for \"c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c\" returns successfully" May 17 01:16:17.112172 kubelet[2625]: I0517 01:16:17.112088 2625 scope.go:117] "RemoveContainer" containerID="f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134" May 17 01:16:17.115042 env[1661]: time="2025-05-17T01:16:17.114902355Z" level=info msg="RemoveContainer for \"f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134\"" May 17 01:16:17.119140 env[1661]: time="2025-05-17T01:16:17.119061449Z" level=info msg="RemoveContainer for \"f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134\" returns successfully" May 17 01:16:17.119500 kubelet[2625]: I0517 01:16:17.119452 2625 scope.go:117] "RemoveContainer" containerID="bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c" May 17 01:16:17.120189 env[1661]: time="2025-05-17T01:16:17.120028653Z" level=error msg="ContainerStatus for \"bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c\": not found" May 17 01:16:17.120538 kubelet[2625]: E0517 01:16:17.120449 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c\": not found" containerID="bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c" May 17 01:16:17.120538 kubelet[2625]: I0517 01:16:17.120520 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c"} err="failed to get container status 
\"bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc450e0366e06ac916774390b5118841163b926f8a74662846282a946b396c2c\": not found" May 17 01:16:17.120893 kubelet[2625]: I0517 01:16:17.120574 2625 scope.go:117] "RemoveContainer" containerID="c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e" May 17 01:16:17.121287 env[1661]: time="2025-05-17T01:16:17.121105880Z" level=error msg="ContainerStatus for \"c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e\": not found" May 17 01:16:17.121620 kubelet[2625]: E0517 01:16:17.121511 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e\": not found" containerID="c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e" May 17 01:16:17.121620 kubelet[2625]: I0517 01:16:17.121585 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e"} err="failed to get container status \"c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8e4f2bba822f03e522c9b98dec53e346c3d81b75fbb0a6d5ee0b42df137d05e\": not found" May 17 01:16:17.121967 kubelet[2625]: I0517 01:16:17.121637 2625 scope.go:117] "RemoveContainer" containerID="d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4" May 17 01:16:17.122363 env[1661]: time="2025-05-17T01:16:17.122123016Z" level=error msg="ContainerStatus for \"d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4\": not found" May 17 01:16:17.122659 kubelet[2625]: E0517 01:16:17.122608 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4\": not found" containerID="d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4" May 17 01:16:17.122800 kubelet[2625]: I0517 01:16:17.122677 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4"} err="failed to get container status \"d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4\": rpc error: code = NotFound desc = an error occurred when try to find container \"d07d27908098efdb4e23761ff9818533409c39c584af9190ac6593a917a4bdb4\": not found" May 17 01:16:17.122800 kubelet[2625]: I0517 01:16:17.122725 2625 scope.go:117] "RemoveContainer" containerID="c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c" May 17 01:16:17.123346 env[1661]: time="2025-05-17T01:16:17.123164264Z" level=error msg="ContainerStatus for \"c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c\": not found" May 17 
01:16:17.123595 kubelet[2625]: E0517 01:16:17.123539 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c\": not found" containerID="c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c" May 17 01:16:17.123765 kubelet[2625]: I0517 01:16:17.123611 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c"} err="failed to get container status \"c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"c59d5f7963b6a062a4e78c444ce77d9556947c7210ca4f36409239eebc926a2c\": not found" May 17 01:16:17.123765 kubelet[2625]: I0517 01:16:17.123664 2625 scope.go:117] "RemoveContainer" containerID="f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134" May 17 01:16:17.124279 env[1661]: time="2025-05-17T01:16:17.124122240Z" level=error msg="ContainerStatus for \"f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134\": not found" May 17 01:16:17.124528 kubelet[2625]: E0517 01:16:17.124481 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134\": not found" containerID="f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134" May 17 01:16:17.124667 kubelet[2625]: I0517 01:16:17.124543 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134"} err="failed to get container status \"f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9cc507d6f72111387f283a4a2dd317d89979145b690015cc1b7df5630b71134\": not found" May 17 01:16:17.295806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e-rootfs.mount: Deactivated successfully. May 17 01:16:17.296162 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e-shm.mount: Deactivated successfully. May 17 01:16:17.296488 systemd[1]: var-lib-kubelet-pods-6798eaf2\x2da216\x2d4e10\x2da30b\x2d39a2829df313-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd5lxl.mount: Deactivated successfully. May 17 01:16:17.296781 systemd[1]: var-lib-kubelet-pods-417e8741\x2d47a7\x2d46aa\x2daf0c\x2d11be2cbdafbc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp245w.mount: Deactivated successfully. May 17 01:16:17.297045 systemd[1]: var-lib-kubelet-pods-417e8741\x2d47a7\x2d46aa\x2daf0c\x2d11be2cbdafbc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 01:16:17.297353 systemd[1]: var-lib-kubelet-pods-417e8741\x2d47a7\x2d46aa\x2daf0c\x2d11be2cbdafbc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
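The containerd entries earlier in this trace ("StopContainer for ... with timeout 30 (s)", "Stop container ... with signal terminated", followed by the shim-disconnect and rootfs/shm unmount messages) record the CRI plugin stopping the Cilium operator and agent containers with SIGTERM and a grace period before tearing down their sandboxes. As a rough sketch only, not the CRI plugin's actual implementation, the same stop-with-timeout pattern can be expressed against the containerd Go client as below; containerd 1.x import paths and the "k8s.io" namespace (which does appear in the log) are assumed, and the package and function names are illustrative.

    // Illustrative sketch of stop-with-timeout using the containerd Go client.
    // Not the CRI plugin's code; package and function names are made up here.
    package containerstop

    import (
        "context"
        "syscall"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    // stopTask sends SIGTERM to the container's task, waits up to timeout for it
    // to exit, escalates to SIGKILL if it does not, and then deletes the task.
    func stopTask(ctx context.Context, client *containerd.Client, id string, timeout time.Duration) error {
        // CRI-managed containers live in the k8s.io namespace, as seen in the log.
        ctx = namespaces.WithNamespace(ctx, "k8s.io")

        container, err := client.LoadContainer(ctx, id)
        if err != nil {
            return err
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            return err
        }
        exited, err := task.Wait(ctx) // subscribe to the exit event before signalling
        if err != nil {
            return err
        }
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            return err
        }
        select {
        case <-exited:
            // Graceful exit within the grace period.
        case <-time.After(timeout):
            // Grace period expired: escalate to SIGKILL and wait for the exit event.
            if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
                return err
            }
            <-exited
        }
        _, err = task.Delete(ctx)
        return err
    }

In this trace the containers exited before the 30-second grace period, so only the graceful TERM path shows up in the log; the SIGKILL branch above is the escalation a runtime would take if a container ignored SIGTERM.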
May 17 01:16:18.209714 sshd[4646]: pam_unix(sshd:session): session closed for user core May 17 01:16:18.211106 systemd[1]: sshd@22-147.28.180.193:22-139.178.89.65:38040.service: Deactivated successfully. May 17 01:16:18.211785 systemd-logind[1703]: Session 25 logged out. Waiting for processes to exit. May 17 01:16:18.212581 systemd[1]: Started sshd@23-147.28.180.193:22-139.178.89.65:39746.service. May 17 01:16:18.213001 systemd[1]: session-25.scope: Deactivated successfully. May 17 01:16:18.213441 systemd-logind[1703]: Removed session 25. May 17 01:16:18.241349 sshd[4827]: Accepted publickey for core from 139.178.89.65 port 39746 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:16:18.242395 sshd[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:16:18.246176 systemd-logind[1703]: New session 26 of user core. May 17 01:16:18.246960 systemd[1]: Started session-26.scope. May 17 01:16:18.785370 sshd[4827]: pam_unix(sshd:session): session closed for user core May 17 01:16:18.787299 systemd[1]: Started sshd@24-147.28.180.193:22-139.178.89.65:39756.service. May 17 01:16:18.787755 systemd[1]: sshd@23-147.28.180.193:22-139.178.89.65:39746.service: Deactivated successfully. May 17 01:16:18.788354 systemd-logind[1703]: Session 26 logged out. Waiting for processes to exit. May 17 01:16:18.788426 systemd[1]: session-26.scope: Deactivated successfully. May 17 01:16:18.788892 systemd-logind[1703]: Removed session 26. May 17 01:16:18.799843 kubelet[2625]: E0517 01:16:18.799811 2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="417e8741-47a7-46aa-af0c-11be2cbdafbc" containerName="mount-cgroup" May 17 01:16:18.799843 kubelet[2625]: E0517 01:16:18.799835 2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="417e8741-47a7-46aa-af0c-11be2cbdafbc" containerName="mount-bpf-fs" May 17 01:16:18.799843 kubelet[2625]: E0517 01:16:18.799842 2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="417e8741-47a7-46aa-af0c-11be2cbdafbc" containerName="clean-cilium-state" May 17 01:16:18.799843 kubelet[2625]: E0517 01:16:18.799849 2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="417e8741-47a7-46aa-af0c-11be2cbdafbc" containerName="cilium-agent" May 17 01:16:18.800289 kubelet[2625]: E0517 01:16:18.799855 2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="417e8741-47a7-46aa-af0c-11be2cbdafbc" containerName="apply-sysctl-overwrites" May 17 01:16:18.800289 kubelet[2625]: E0517 01:16:18.799861 2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6798eaf2-a216-4e10-a30b-39a2829df313" containerName="cilium-operator" May 17 01:16:18.800289 kubelet[2625]: I0517 01:16:18.799885 2625 memory_manager.go:354] "RemoveStaleState removing state" podUID="6798eaf2-a216-4e10-a30b-39a2829df313" containerName="cilium-operator" May 17 01:16:18.800289 kubelet[2625]: I0517 01:16:18.799891 2625 memory_manager.go:354] "RemoveStaleState removing state" podUID="417e8741-47a7-46aa-af0c-11be2cbdafbc" containerName="cilium-agent" May 17 01:16:18.818965 sshd[4851]: Accepted publickey for core from 139.178.89.65 port 39756 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:16:18.819791 sshd[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:16:18.822262 systemd-logind[1703]: New session 27 of user core. May 17 01:16:18.822989 systemd[1]: Started session-27.scope. 
May 17 01:16:18.868168 kubelet[2625]: I0517 01:16:18.868046 2625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="417e8741-47a7-46aa-af0c-11be2cbdafbc" path="/var/lib/kubelet/pods/417e8741-47a7-46aa-af0c-11be2cbdafbc/volumes" May 17 01:16:18.870167 kubelet[2625]: I0517 01:16:18.870076 2625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6798eaf2-a216-4e10-a30b-39a2829df313" path="/var/lib/kubelet/pods/6798eaf2-a216-4e10-a30b-39a2829df313/volumes" May 17 01:16:18.881612 kubelet[2625]: I0517 01:16:18.881510 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-etc-cni-netd\") pod \"cilium-qcs49\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " pod="kube-system/cilium-qcs49" May 17 01:16:18.881846 kubelet[2625]: I0517 01:16:18.881631 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-bpf-maps\") pod \"cilium-qcs49\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " pod="kube-system/cilium-qcs49" May 17 01:16:18.881973 kubelet[2625]: I0517 01:16:18.881811 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cilium-cgroup\") pod \"cilium-qcs49\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " pod="kube-system/cilium-qcs49" May 17 01:16:18.882109 kubelet[2625]: I0517 01:16:18.881954 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-xtables-lock\") pod \"cilium-qcs49\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " pod="kube-system/cilium-qcs49" May 17 01:16:18.882109 kubelet[2625]: I0517 01:16:18.882051 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-host-proc-sys-net\") pod \"cilium-qcs49\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " pod="kube-system/cilium-qcs49" May 17 01:16:18.882379 kubelet[2625]: I0517 01:16:18.882151 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cilium-ipsec-secrets\") pod \"cilium-qcs49\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " pod="kube-system/cilium-qcs49" May 17 01:16:18.882379 kubelet[2625]: I0517 01:16:18.882264 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-hubble-tls\") pod \"cilium-qcs49\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " pod="kube-system/cilium-qcs49" May 17 01:16:18.882379 kubelet[2625]: I0517 01:16:18.882355 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cni-path\") pod \"cilium-qcs49\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " pod="kube-system/cilium-qcs49" May 17 01:16:18.882706 kubelet[2625]: I0517 01:16:18.882443 2625 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-clustermesh-secrets\") pod \"cilium-qcs49\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " pod="kube-system/cilium-qcs49" May 17 01:16:18.882706 kubelet[2625]: I0517 01:16:18.882578 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cilium-run\") pod \"cilium-qcs49\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " pod="kube-system/cilium-qcs49" May 17 01:16:18.882706 kubelet[2625]: I0517 01:16:18.882666 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-hostproc\") pod \"cilium-qcs49\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " pod="kube-system/cilium-qcs49" May 17 01:16:18.883000 kubelet[2625]: I0517 01:16:18.882752 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-lib-modules\") pod \"cilium-qcs49\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " pod="kube-system/cilium-qcs49" May 17 01:16:18.883000 kubelet[2625]: I0517 01:16:18.882854 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cilium-config-path\") pod \"cilium-qcs49\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " pod="kube-system/cilium-qcs49" May 17 01:16:18.883000 kubelet[2625]: I0517 01:16:18.882948 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-host-proc-sys-kernel\") pod \"cilium-qcs49\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " pod="kube-system/cilium-qcs49" May 17 01:16:18.883357 kubelet[2625]: I0517 01:16:18.883031 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb9bt\" (UniqueName: \"kubernetes.io/projected/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-kube-api-access-vb9bt\") pod \"cilium-qcs49\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " pod="kube-system/cilium-qcs49" May 17 01:16:18.980291 sshd[4851]: pam_unix(sshd:session): session closed for user core May 17 01:16:18.982424 systemd[1]: Started sshd@25-147.28.180.193:22-139.178.89.65:39762.service. May 17 01:16:18.982885 systemd[1]: sshd@24-147.28.180.193:22-139.178.89.65:39756.service: Deactivated successfully. May 17 01:16:18.983473 systemd-logind[1703]: Session 27 logged out. Waiting for processes to exit. May 17 01:16:18.983535 systemd[1]: session-27.scope: Deactivated successfully. May 17 01:16:18.984215 systemd-logind[1703]: Removed session 27. 
May 17 01:16:18.988514 kubelet[2625]: E0517 01:16:18.988487 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-vb9bt], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-qcs49" podUID="1fdf16aa-164c-4fab-8a9d-f7b5f597a218" May 17 01:16:19.002451 kubelet[2625]: E0517 01:16:19.002426 2625 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 01:16:19.011385 sshd[4877]: Accepted publickey for core from 139.178.89.65 port 39762 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:16:19.012221 sshd[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:16:19.014735 systemd-logind[1703]: New session 28 of user core. May 17 01:16:19.015258 systemd[1]: Started session-28.scope. May 17 01:16:19.185289 kubelet[2625]: I0517 01:16:19.185144 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cilium-ipsec-secrets\") pod \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " May 17 01:16:19.185289 kubelet[2625]: I0517 01:16:19.185262 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cilium-cgroup\") pod \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " May 17 01:16:19.185813 kubelet[2625]: I0517 01:16:19.185321 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cilium-run\") pod \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " May 17 01:16:19.185813 kubelet[2625]: I0517 01:16:19.185371 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-hostproc\") pod \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " May 17 01:16:19.185813 kubelet[2625]: I0517 01:16:19.185418 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cni-path\") pod \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " May 17 01:16:19.185813 kubelet[2625]: I0517 01:16:19.185413 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1fdf16aa-164c-4fab-8a9d-f7b5f597a218" (UID: "1fdf16aa-164c-4fab-8a9d-f7b5f597a218"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:19.185813 kubelet[2625]: I0517 01:16:19.185464 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-etc-cni-netd\") pod \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " May 17 01:16:19.185813 kubelet[2625]: I0517 01:16:19.185459 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1fdf16aa-164c-4fab-8a9d-f7b5f597a218" (UID: "1fdf16aa-164c-4fab-8a9d-f7b5f597a218"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:19.186615 kubelet[2625]: I0517 01:16:19.185489 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-hostproc" (OuterVolumeSpecName: "hostproc") pod "1fdf16aa-164c-4fab-8a9d-f7b5f597a218" (UID: "1fdf16aa-164c-4fab-8a9d-f7b5f597a218"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:19.186615 kubelet[2625]: I0517 01:16:19.185544 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb9bt\" (UniqueName: \"kubernetes.io/projected/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-kube-api-access-vb9bt\") pod \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " May 17 01:16:19.186615 kubelet[2625]: I0517 01:16:19.185563 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cni-path" (OuterVolumeSpecName: "cni-path") pod "1fdf16aa-164c-4fab-8a9d-f7b5f597a218" (UID: "1fdf16aa-164c-4fab-8a9d-f7b5f597a218"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:19.186615 kubelet[2625]: I0517 01:16:19.185617 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1fdf16aa-164c-4fab-8a9d-f7b5f597a218" (UID: "1fdf16aa-164c-4fab-8a9d-f7b5f597a218"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:19.186615 kubelet[2625]: I0517 01:16:19.185635 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-bpf-maps\") pod \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " May 17 01:16:19.187131 kubelet[2625]: I0517 01:16:19.185709 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1fdf16aa-164c-4fab-8a9d-f7b5f597a218" (UID: "1fdf16aa-164c-4fab-8a9d-f7b5f597a218"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:19.187131 kubelet[2625]: I0517 01:16:19.185773 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-host-proc-sys-net\") pod \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " May 17 01:16:19.187131 kubelet[2625]: I0517 01:16:19.185841 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1fdf16aa-164c-4fab-8a9d-f7b5f597a218" (UID: "1fdf16aa-164c-4fab-8a9d-f7b5f597a218"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:19.187131 kubelet[2625]: I0517 01:16:19.185883 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-clustermesh-secrets\") pod \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " May 17 01:16:19.187131 kubelet[2625]: I0517 01:16:19.185959 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cilium-config-path\") pod \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " May 17 01:16:19.187696 kubelet[2625]: I0517 01:16:19.186059 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-hubble-tls\") pod \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " May 17 01:16:19.187696 kubelet[2625]: I0517 01:16:19.186143 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-lib-modules\") pod \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " May 17 01:16:19.187696 kubelet[2625]: I0517 01:16:19.186259 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-xtables-lock\") pod \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " May 17 01:16:19.187696 kubelet[2625]: I0517 01:16:19.186265 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1fdf16aa-164c-4fab-8a9d-f7b5f597a218" (UID: "1fdf16aa-164c-4fab-8a9d-f7b5f597a218"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:19.187696 kubelet[2625]: I0517 01:16:19.186329 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1fdf16aa-164c-4fab-8a9d-f7b5f597a218" (UID: "1fdf16aa-164c-4fab-8a9d-f7b5f597a218"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:19.187696 kubelet[2625]: I0517 01:16:19.186361 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-host-proc-sys-kernel\") pod \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\" (UID: \"1fdf16aa-164c-4fab-8a9d-f7b5f597a218\") " May 17 01:16:19.188421 kubelet[2625]: I0517 01:16:19.186485 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1fdf16aa-164c-4fab-8a9d-f7b5f597a218" (UID: "1fdf16aa-164c-4fab-8a9d-f7b5f597a218"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 01:16:19.188421 kubelet[2625]: I0517 01:16:19.186515 2625 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cilium-cgroup\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:19.188421 kubelet[2625]: I0517 01:16:19.186665 2625 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cilium-run\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:19.188421 kubelet[2625]: I0517 01:16:19.186733 2625 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-hostproc\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:19.188421 kubelet[2625]: I0517 01:16:19.186782 2625 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cni-path\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:19.188421 kubelet[2625]: I0517 01:16:19.186834 2625 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-etc-cni-netd\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:19.188421 kubelet[2625]: I0517 01:16:19.186882 2625 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-bpf-maps\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:19.189119 kubelet[2625]: I0517 01:16:19.186938 2625 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-host-proc-sys-net\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:19.189119 kubelet[2625]: I0517 01:16:19.186994 2625 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-lib-modules\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:19.189119 kubelet[2625]: I0517 01:16:19.187050 2625 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-xtables-lock\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:19.190808 kubelet[2625]: I0517 01:16:19.190706 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1fdf16aa-164c-4fab-8a9d-f7b5f597a218" (UID: "1fdf16aa-164c-4fab-8a9d-f7b5f597a218"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 01:16:19.192333 kubelet[2625]: I0517 01:16:19.192225 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1fdf16aa-164c-4fab-8a9d-f7b5f597a218" (UID: "1fdf16aa-164c-4fab-8a9d-f7b5f597a218"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 01:16:19.192721 kubelet[2625]: I0517 01:16:19.192608 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-kube-api-access-vb9bt" (OuterVolumeSpecName: "kube-api-access-vb9bt") pod "1fdf16aa-164c-4fab-8a9d-f7b5f597a218" (UID: "1fdf16aa-164c-4fab-8a9d-f7b5f597a218"). InnerVolumeSpecName "kube-api-access-vb9bt". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 01:16:19.192928 kubelet[2625]: I0517 01:16:19.192873 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1fdf16aa-164c-4fab-8a9d-f7b5f597a218" (UID: "1fdf16aa-164c-4fab-8a9d-f7b5f597a218"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 01:16:19.193477 kubelet[2625]: I0517 01:16:19.193364 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1fdf16aa-164c-4fab-8a9d-f7b5f597a218" (UID: "1fdf16aa-164c-4fab-8a9d-f7b5f597a218"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 01:16:19.198098 systemd[1]: var-lib-kubelet-pods-1fdf16aa\x2d164c\x2d4fab\x2d8a9d\x2df7b5f597a218-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvb9bt.mount: Deactivated successfully. May 17 01:16:19.198512 systemd[1]: var-lib-kubelet-pods-1fdf16aa\x2d164c\x2d4fab\x2d8a9d\x2df7b5f597a218-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 01:16:19.198792 systemd[1]: var-lib-kubelet-pods-1fdf16aa\x2d164c\x2d4fab\x2d8a9d\x2df7b5f597a218-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 17 01:16:19.199059 systemd[1]: var-lib-kubelet-pods-1fdf16aa\x2d164c\x2d4fab\x2d8a9d\x2df7b5f597a218-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 17 01:16:19.288142 kubelet[2625]: I0517 01:16:19.288021 2625 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-hubble-tls\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:19.288142 kubelet[2625]: I0517 01:16:19.288100 2625 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:19.288142 kubelet[2625]: I0517 01:16:19.288134 2625 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cilium-ipsec-secrets\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:19.288142 kubelet[2625]: I0517 01:16:19.288163 2625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb9bt\" (UniqueName: \"kubernetes.io/projected/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-kube-api-access-vb9bt\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:19.288789 kubelet[2625]: I0517 01:16:19.288194 2625 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-cilium-config-path\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:19.288789 kubelet[2625]: I0517 01:16:19.288221 2625 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1fdf16aa-164c-4fab-8a9d-f7b5f597a218-clustermesh-secrets\") on node \"ci-3510.3.7-n-b3aec2dc90\" DevicePath \"\"" May 17 01:16:20.195593 kubelet[2625]: I0517 01:16:20.195449 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7005efed-5293-4728-8852-35b34698dacc-cilium-config-path\") pod \"cilium-7sffj\" (UID: \"7005efed-5293-4728-8852-35b34698dacc\") " pod="kube-system/cilium-7sffj" May 17 01:16:20.195593 kubelet[2625]: I0517 01:16:20.195559 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7005efed-5293-4728-8852-35b34698dacc-bpf-maps\") pod \"cilium-7sffj\" (UID: \"7005efed-5293-4728-8852-35b34698dacc\") " pod="kube-system/cilium-7sffj" May 17 01:16:20.196690 kubelet[2625]: I0517 01:16:20.195709 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7005efed-5293-4728-8852-35b34698dacc-host-proc-sys-kernel\") pod \"cilium-7sffj\" (UID: \"7005efed-5293-4728-8852-35b34698dacc\") " pod="kube-system/cilium-7sffj" May 17 01:16:20.196690 kubelet[2625]: I0517 01:16:20.195811 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7005efed-5293-4728-8852-35b34698dacc-cilium-run\") pod \"cilium-7sffj\" (UID: \"7005efed-5293-4728-8852-35b34698dacc\") " pod="kube-system/cilium-7sffj" May 17 01:16:20.196690 kubelet[2625]: I0517 01:16:20.195864 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7005efed-5293-4728-8852-35b34698dacc-cilium-cgroup\") pod \"cilium-7sffj\" 
(UID: \"7005efed-5293-4728-8852-35b34698dacc\") " pod="kube-system/cilium-7sffj" May 17 01:16:20.196690 kubelet[2625]: I0517 01:16:20.195917 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7005efed-5293-4728-8852-35b34698dacc-host-proc-sys-net\") pod \"cilium-7sffj\" (UID: \"7005efed-5293-4728-8852-35b34698dacc\") " pod="kube-system/cilium-7sffj" May 17 01:16:20.196690 kubelet[2625]: I0517 01:16:20.195969 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkgl5\" (UniqueName: \"kubernetes.io/projected/7005efed-5293-4728-8852-35b34698dacc-kube-api-access-fkgl5\") pod \"cilium-7sffj\" (UID: \"7005efed-5293-4728-8852-35b34698dacc\") " pod="kube-system/cilium-7sffj" May 17 01:16:20.197255 kubelet[2625]: I0517 01:16:20.196018 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7005efed-5293-4728-8852-35b34698dacc-lib-modules\") pod \"cilium-7sffj\" (UID: \"7005efed-5293-4728-8852-35b34698dacc\") " pod="kube-system/cilium-7sffj" May 17 01:16:20.197255 kubelet[2625]: I0517 01:16:20.196063 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7005efed-5293-4728-8852-35b34698dacc-cni-path\") pod \"cilium-7sffj\" (UID: \"7005efed-5293-4728-8852-35b34698dacc\") " pod="kube-system/cilium-7sffj" May 17 01:16:20.197255 kubelet[2625]: I0517 01:16:20.196117 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7005efed-5293-4728-8852-35b34698dacc-etc-cni-netd\") pod \"cilium-7sffj\" (UID: \"7005efed-5293-4728-8852-35b34698dacc\") " pod="kube-system/cilium-7sffj" May 17 01:16:20.197255 kubelet[2625]: I0517 01:16:20.196165 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7005efed-5293-4728-8852-35b34698dacc-xtables-lock\") pod \"cilium-7sffj\" (UID: \"7005efed-5293-4728-8852-35b34698dacc\") " pod="kube-system/cilium-7sffj" May 17 01:16:20.197255 kubelet[2625]: I0517 01:16:20.196212 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7005efed-5293-4728-8852-35b34698dacc-clustermesh-secrets\") pod \"cilium-7sffj\" (UID: \"7005efed-5293-4728-8852-35b34698dacc\") " pod="kube-system/cilium-7sffj" May 17 01:16:20.197255 kubelet[2625]: I0517 01:16:20.196281 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7005efed-5293-4728-8852-35b34698dacc-cilium-ipsec-secrets\") pod \"cilium-7sffj\" (UID: \"7005efed-5293-4728-8852-35b34698dacc\") " pod="kube-system/cilium-7sffj" May 17 01:16:20.197878 kubelet[2625]: I0517 01:16:20.196330 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7005efed-5293-4728-8852-35b34698dacc-hostproc\") pod \"cilium-7sffj\" (UID: \"7005efed-5293-4728-8852-35b34698dacc\") " pod="kube-system/cilium-7sffj" May 17 01:16:20.197878 kubelet[2625]: I0517 01:16:20.196377 2625 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7005efed-5293-4728-8852-35b34698dacc-hubble-tls\") pod \"cilium-7sffj\" (UID: \"7005efed-5293-4728-8852-35b34698dacc\") " pod="kube-system/cilium-7sffj" May 17 01:16:20.415274 env[1661]: time="2025-05-17T01:16:20.415144513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7sffj,Uid:7005efed-5293-4728-8852-35b34698dacc,Namespace:kube-system,Attempt:0,}" May 17 01:16:20.429507 env[1661]: time="2025-05-17T01:16:20.429446798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:16:20.429507 env[1661]: time="2025-05-17T01:16:20.429498544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:16:20.429697 env[1661]: time="2025-05-17T01:16:20.429510167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:16:20.429697 env[1661]: time="2025-05-17T01:16:20.429656400Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53a7c448b5a86b931b863b89fd2144717ae35d531648a43ed639244d1b49071a pid=4919 runtime=io.containerd.runc.v2 May 17 01:16:20.447344 env[1661]: time="2025-05-17T01:16:20.447278580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7sffj,Uid:7005efed-5293-4728-8852-35b34698dacc,Namespace:kube-system,Attempt:0,} returns sandbox id \"53a7c448b5a86b931b863b89fd2144717ae35d531648a43ed639244d1b49071a\"" May 17 01:16:20.448586 env[1661]: time="2025-05-17T01:16:20.448570094Z" level=info msg="CreateContainer within sandbox \"53a7c448b5a86b931b863b89fd2144717ae35d531648a43ed639244d1b49071a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 01:16:20.452954 env[1661]: time="2025-05-17T01:16:20.452933853Z" level=info msg="CreateContainer within sandbox \"53a7c448b5a86b931b863b89fd2144717ae35d531648a43ed639244d1b49071a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"56cfb908d6941878e7980de68c52ce1d34c68adc466302ae1f77e14d23d0cbbe\"" May 17 01:16:20.453153 env[1661]: time="2025-05-17T01:16:20.453139636Z" level=info msg="StartContainer for \"56cfb908d6941878e7980de68c52ce1d34c68adc466302ae1f77e14d23d0cbbe\"" May 17 01:16:20.473108 env[1661]: time="2025-05-17T01:16:20.473082714Z" level=info msg="StartContainer for \"56cfb908d6941878e7980de68c52ce1d34c68adc466302ae1f77e14d23d0cbbe\" returns successfully" May 17 01:16:20.490684 env[1661]: time="2025-05-17T01:16:20.490623222Z" level=info msg="shim disconnected" id=56cfb908d6941878e7980de68c52ce1d34c68adc466302ae1f77e14d23d0cbbe May 17 01:16:20.490684 env[1661]: time="2025-05-17T01:16:20.490651056Z" level=warning msg="cleaning up after shim disconnected" id=56cfb908d6941878e7980de68c52ce1d34c68adc466302ae1f77e14d23d0cbbe namespace=k8s.io May 17 01:16:20.490684 env[1661]: time="2025-05-17T01:16:20.490657044Z" level=info msg="cleaning up dead shim" May 17 01:16:20.494129 env[1661]: time="2025-05-17T01:16:20.494108852Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:16:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5000 runtime=io.containerd.runc.v2\n" May 17 01:16:20.868059 kubelet[2625]: I0517 01:16:20.867990 2625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1fdf16aa-164c-4fab-8a9d-f7b5f597a218" path="/var/lib/kubelet/pods/1fdf16aa-164c-4fab-8a9d-f7b5f597a218/volumes" May 17 01:16:21.098837 env[1661]: time="2025-05-17T01:16:21.098623366Z" level=info msg="CreateContainer within sandbox \"53a7c448b5a86b931b863b89fd2144717ae35d531648a43ed639244d1b49071a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 01:16:21.108295 env[1661]: time="2025-05-17T01:16:21.108222058Z" level=info msg="CreateContainer within sandbox \"53a7c448b5a86b931b863b89fd2144717ae35d531648a43ed639244d1b49071a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cd2e03662afe99d02d5b0c27ac6b399516385c08211fb3974995875f5bd319a1\"" May 17 01:16:21.108579 env[1661]: time="2025-05-17T01:16:21.108522310Z" level=info msg="StartContainer for \"cd2e03662afe99d02d5b0c27ac6b399516385c08211fb3974995875f5bd319a1\"" May 17 01:16:21.129711 env[1661]: time="2025-05-17T01:16:21.129628836Z" level=info msg="StartContainer for \"cd2e03662afe99d02d5b0c27ac6b399516385c08211fb3974995875f5bd319a1\" returns successfully" May 17 01:16:21.142122 env[1661]: time="2025-05-17T01:16:21.142094313Z" level=info msg="shim disconnected" id=cd2e03662afe99d02d5b0c27ac6b399516385c08211fb3974995875f5bd319a1 May 17 01:16:21.142122 env[1661]: time="2025-05-17T01:16:21.142123028Z" level=warning msg="cleaning up after shim disconnected" id=cd2e03662afe99d02d5b0c27ac6b399516385c08211fb3974995875f5bd319a1 namespace=k8s.io May 17 01:16:21.142257 env[1661]: time="2025-05-17T01:16:21.142129051Z" level=info msg="cleaning up dead shim" May 17 01:16:21.145571 env[1661]: time="2025-05-17T01:16:21.145549501Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:16:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5062 runtime=io.containerd.runc.v2\n" May 17 01:16:22.106662 env[1661]: time="2025-05-17T01:16:22.106524043Z" level=info msg="CreateContainer within sandbox \"53a7c448b5a86b931b863b89fd2144717ae35d531648a43ed639244d1b49071a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 01:16:22.118143 env[1661]: time="2025-05-17T01:16:22.118122030Z" level=info msg="CreateContainer within sandbox \"53a7c448b5a86b931b863b89fd2144717ae35d531648a43ed639244d1b49071a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a3216dabbe8c02a5688a9aef0d39eee4b90c7ea7df745a269e54a0cc2362bbf6\"" May 17 01:16:22.118431 env[1661]: time="2025-05-17T01:16:22.118415928Z" level=info msg="StartContainer for \"a3216dabbe8c02a5688a9aef0d39eee4b90c7ea7df745a269e54a0cc2362bbf6\"" May 17 01:16:22.119850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount425798849.mount: Deactivated successfully. 
May 17 01:16:22.142673 env[1661]: time="2025-05-17T01:16:22.142618389Z" level=info msg="StartContainer for \"a3216dabbe8c02a5688a9aef0d39eee4b90c7ea7df745a269e54a0cc2362bbf6\" returns successfully" May 17 01:16:22.154061 env[1661]: time="2025-05-17T01:16:22.154035453Z" level=info msg="shim disconnected" id=a3216dabbe8c02a5688a9aef0d39eee4b90c7ea7df745a269e54a0cc2362bbf6 May 17 01:16:22.154061 env[1661]: time="2025-05-17T01:16:22.154059651Z" level=warning msg="cleaning up after shim disconnected" id=a3216dabbe8c02a5688a9aef0d39eee4b90c7ea7df745a269e54a0cc2362bbf6 namespace=k8s.io May 17 01:16:22.154195 env[1661]: time="2025-05-17T01:16:22.154066486Z" level=info msg="cleaning up dead shim" May 17 01:16:22.158049 env[1661]: time="2025-05-17T01:16:22.158009145Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:16:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5118 runtime=io.containerd.runc.v2\n" May 17 01:16:22.309679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3216dabbe8c02a5688a9aef0d39eee4b90c7ea7df745a269e54a0cc2362bbf6-rootfs.mount: Deactivated successfully. May 17 01:16:23.114460 env[1661]: time="2025-05-17T01:16:23.114435742Z" level=info msg="CreateContainer within sandbox \"53a7c448b5a86b931b863b89fd2144717ae35d531648a43ed639244d1b49071a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 01:16:23.122215 env[1661]: time="2025-05-17T01:16:23.122190767Z" level=info msg="CreateContainer within sandbox \"53a7c448b5a86b931b863b89fd2144717ae35d531648a43ed639244d1b49071a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fce6f784c8ed86929b747894591bdc6ce0620e20a760102d1471dd4d0140e17a\"" May 17 01:16:23.122503 env[1661]: time="2025-05-17T01:16:23.122484644Z" level=info msg="StartContainer for \"fce6f784c8ed86929b747894591bdc6ce0620e20a760102d1471dd4d0140e17a\"" May 17 01:16:23.124128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3818031506.mount: Deactivated successfully. May 17 01:16:23.145170 env[1661]: time="2025-05-17T01:16:23.145142377Z" level=info msg="StartContainer for \"fce6f784c8ed86929b747894591bdc6ce0620e20a760102d1471dd4d0140e17a\" returns successfully" May 17 01:16:23.155863 env[1661]: time="2025-05-17T01:16:23.155828419Z" level=info msg="shim disconnected" id=fce6f784c8ed86929b747894591bdc6ce0620e20a760102d1471dd4d0140e17a May 17 01:16:23.156010 env[1661]: time="2025-05-17T01:16:23.155866653Z" level=warning msg="cleaning up after shim disconnected" id=fce6f784c8ed86929b747894591bdc6ce0620e20a760102d1471dd4d0140e17a namespace=k8s.io May 17 01:16:23.156010 env[1661]: time="2025-05-17T01:16:23.155879325Z" level=info msg="cleaning up dead shim" May 17 01:16:23.160255 env[1661]: time="2025-05-17T01:16:23.160223789Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:16:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5171 runtime=io.containerd.runc.v2\n" May 17 01:16:23.308792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fce6f784c8ed86929b747894591bdc6ce0620e20a760102d1471dd4d0140e17a-rootfs.mount: Deactivated successfully. 
May 17 01:16:24.004313 kubelet[2625]: E0517 01:16:24.004172 2625 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 01:16:24.123884 env[1661]: time="2025-05-17T01:16:24.123740919Z" level=info msg="CreateContainer within sandbox \"53a7c448b5a86b931b863b89fd2144717ae35d531648a43ed639244d1b49071a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 01:16:24.137476 env[1661]: time="2025-05-17T01:16:24.137350469Z" level=info msg="CreateContainer within sandbox \"53a7c448b5a86b931b863b89fd2144717ae35d531648a43ed639244d1b49071a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bc2f60e1a9553c9f0a6c1a0a16bc98d0160b1809454c1048c5dd998d4ca8c79e\"" May 17 01:16:24.138115 env[1661]: time="2025-05-17T01:16:24.138045286Z" level=info msg="StartContainer for \"bc2f60e1a9553c9f0a6c1a0a16bc98d0160b1809454c1048c5dd998d4ca8c79e\"" May 17 01:16:24.140369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1276712109.mount: Deactivated successfully. May 17 01:16:24.160093 env[1661]: time="2025-05-17T01:16:24.160046001Z" level=info msg="StartContainer for \"bc2f60e1a9553c9f0a6c1a0a16bc98d0160b1809454c1048c5dd998d4ca8c79e\" returns successfully" May 17 01:16:24.311242 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 17 01:16:25.148684 kubelet[2625]: I0517 01:16:25.148629 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7sffj" podStartSLOduration=5.148617152 podStartE2EDuration="5.148617152s" podCreationTimestamp="2025-05-17 01:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:16:25.148230771 +0000 UTC m=+416.337196092" watchObservedRunningTime="2025-05-17 01:16:25.148617152 +0000 UTC m=+416.337582471" May 17 01:16:27.102662 kubelet[2625]: I0517 01:16:27.102570 2625 setters.go:600] "Node became not ready" node="ci-3510.3.7-n-b3aec2dc90" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T01:16:27Z","lastTransitionTime":"2025-05-17T01:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 17 01:16:27.579292 systemd-networkd[1378]: lxc_health: Link UP May 17 01:16:27.605248 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 01:16:27.605250 systemd-networkd[1378]: lxc_health: Gained carrier May 17 01:16:28.867648 env[1661]: time="2025-05-17T01:16:28.867595035Z" level=info msg="StopPodSandbox for \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\"" May 17 01:16:28.867897 env[1661]: time="2025-05-17T01:16:28.867648660Z" level=info msg="TearDown network for sandbox \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\" successfully" May 17 01:16:28.867897 env[1661]: time="2025-05-17T01:16:28.867671673Z" level=info msg="StopPodSandbox for \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\" returns successfully" May 17 01:16:28.867897 env[1661]: time="2025-05-17T01:16:28.867812065Z" level=info msg="RemovePodSandbox for \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\"" May 17 01:16:28.867897 env[1661]: time="2025-05-17T01:16:28.867826248Z" level=info msg="Forcibly stopping sandbox 
\"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\"" May 17 01:16:28.867897 env[1661]: time="2025-05-17T01:16:28.867878568Z" level=info msg="TearDown network for sandbox \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\" successfully" May 17 01:16:28.869171 env[1661]: time="2025-05-17T01:16:28.869132147Z" level=info msg="RemovePodSandbox \"d8911e682dedfdfa2d669ed404336ce05e99eed72c4f931202407420266b322e\" returns successfully" May 17 01:16:28.869397 env[1661]: time="2025-05-17T01:16:28.869355903Z" level=info msg="StopPodSandbox for \"7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b\"" May 17 01:16:28.869434 env[1661]: time="2025-05-17T01:16:28.869392655Z" level=info msg="TearDown network for sandbox \"7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b\" successfully" May 17 01:16:28.869434 env[1661]: time="2025-05-17T01:16:28.869410098Z" level=info msg="StopPodSandbox for \"7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b\" returns successfully" May 17 01:16:28.869594 env[1661]: time="2025-05-17T01:16:28.869559672Z" level=info msg="RemovePodSandbox for \"7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b\"" May 17 01:16:28.869594 env[1661]: time="2025-05-17T01:16:28.869572265Z" level=info msg="Forcibly stopping sandbox \"7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b\"" May 17 01:16:28.869649 env[1661]: time="2025-05-17T01:16:28.869610400Z" level=info msg="TearDown network for sandbox \"7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b\" successfully" May 17 01:16:28.870680 env[1661]: time="2025-05-17T01:16:28.870668991Z" level=info msg="RemovePodSandbox \"7b0130506090a969b9a63cab1477c4791cb2a410dda9ca31a1dc6ed08079b13b\" returns successfully" May 17 01:16:29.245330 systemd-networkd[1378]: lxc_health: Gained IPv6LL May 17 01:16:33.667657 sshd[4877]: pam_unix(sshd:session): session closed for user core May 17 01:16:33.674017 systemd[1]: sshd@25-147.28.180.193:22-139.178.89.65:39762.service: Deactivated successfully. May 17 01:16:33.676754 systemd[1]: session-28.scope: Deactivated successfully. May 17 01:16:33.676759 systemd-logind[1703]: Session 28 logged out. Waiting for processes to exit. May 17 01:16:33.678697 systemd-logind[1703]: Removed session 28.