May 17 01:28:01.552819 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025 May 17 01:28:01.552832 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 01:28:01.552840 kernel: BIOS-provided physical RAM map: May 17 01:28:01.552844 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable May 17 01:28:01.552848 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved May 17 01:28:01.552851 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved May 17 01:28:01.552856 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable May 17 01:28:01.552860 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved May 17 01:28:01.552864 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b18fff] usable May 17 01:28:01.552868 kernel: BIOS-e820: [mem 0x0000000081b19000-0x0000000081b19fff] ACPI NVS May 17 01:28:01.552872 kernel: BIOS-e820: [mem 0x0000000081b1a000-0x0000000081b1afff] reserved May 17 01:28:01.552876 kernel: BIOS-e820: [mem 0x0000000081b1b000-0x000000008afc4fff] usable May 17 01:28:01.552880 kernel: BIOS-e820: [mem 0x000000008afc5000-0x000000008c0a9fff] reserved May 17 01:28:01.552884 kernel: BIOS-e820: [mem 0x000000008c0aa000-0x000000008c232fff] usable May 17 01:28:01.552889 kernel: BIOS-e820: [mem 0x000000008c233000-0x000000008c664fff] ACPI NVS May 17 01:28:01.552894 kernel: BIOS-e820: [mem 0x000000008c665000-0x000000008eefefff] reserved May 17 01:28:01.552898 kernel: BIOS-e820: [mem 
0x000000008eeff000-0x000000008eefffff] usable May 17 01:28:01.552903 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved May 17 01:28:01.552907 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 17 01:28:01.552911 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved May 17 01:28:01.552915 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved May 17 01:28:01.552919 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved May 17 01:28:01.552924 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved May 17 01:28:01.552928 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable May 17 01:28:01.552932 kernel: NX (Execute Disable) protection: active May 17 01:28:01.552936 kernel: SMBIOS 3.2.1 present. May 17 01:28:01.552942 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 May 17 01:28:01.552947 kernel: tsc: Detected 3400.000 MHz processor May 17 01:28:01.552952 kernel: tsc: Detected 3399.906 MHz TSC May 17 01:28:01.552957 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 01:28:01.552962 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 01:28:01.552967 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 May 17 01:28:01.552972 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 01:28:01.552977 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 May 17 01:28:01.552982 kernel: Using GB pages for direct mapping May 17 01:28:01.552987 kernel: ACPI: Early table checksum verification disabled May 17 01:28:01.552993 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) May 17 01:28:01.552997 kernel: ACPI: XSDT 0x000000008C5460C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) May 17 01:28:01.553002 kernel: ACPI: FACP 0x000000008C582670 000114 (v06 01072009 AMI 00010013) May 17 01:28:01.553006 kernel: ACPI: DSDT 0x000000008C546268 03C404 (v02 SUPERM SMCI--MB 
01072009 INTL 20160527) May 17 01:28:01.553012 kernel: ACPI: FACS 0x000000008C664F80 000040 May 17 01:28:01.553017 kernel: ACPI: APIC 0x000000008C582788 00012C (v04 01072009 AMI 00010013) May 17 01:28:01.553022 kernel: ACPI: FPDT 0x000000008C5828B8 000044 (v01 01072009 AMI 00010013) May 17 01:28:01.553027 kernel: ACPI: FIDT 0x000000008C582900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) May 17 01:28:01.553032 kernel: ACPI: MCFG 0x000000008C5829A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) May 17 01:28:01.553037 kernel: ACPI: SPMI 0x000000008C5829E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) May 17 01:28:01.553041 kernel: ACPI: SSDT 0x000000008C582A28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) May 17 01:28:01.553046 kernel: ACPI: SSDT 0x000000008C584548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) May 17 01:28:01.553051 kernel: ACPI: SSDT 0x000000008C587710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) May 17 01:28:01.553056 kernel: ACPI: HPET 0x000000008C589A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) May 17 01:28:01.553061 kernel: ACPI: SSDT 0x000000008C589A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) May 17 01:28:01.553066 kernel: ACPI: SSDT 0x000000008C58AA28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) May 17 01:28:01.553070 kernel: ACPI: UEFI 0x000000008C58B320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) May 17 01:28:01.553075 kernel: ACPI: LPIT 0x000000008C58B368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) May 17 01:28:01.553080 kernel: ACPI: SSDT 0x000000008C58B400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) May 17 01:28:01.553085 kernel: ACPI: SSDT 0x000000008C58DBE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) May 17 01:28:01.553089 kernel: ACPI: DBGP 0x000000008C58F0C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) May 17 01:28:01.553094 kernel: ACPI: DBG2 0x000000008C58F100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) May 17 01:28:01.553099 kernel: ACPI: SSDT 
0x000000008C58F158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) May 17 01:28:01.553104 kernel: ACPI: DMAR 0x000000008C590CC0 000070 (v01 INTEL EDK2 00000002 01000013) May 17 01:28:01.553109 kernel: ACPI: SSDT 0x000000008C590D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) May 17 01:28:01.553114 kernel: ACPI: TPM2 0x000000008C590E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) May 17 01:28:01.553118 kernel: ACPI: SSDT 0x000000008C590EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) May 17 01:28:01.553123 kernel: ACPI: WSMT 0x000000008C591C40 000028 (v01 SUPERM 01072009 AMI 00010013) May 17 01:28:01.553128 kernel: ACPI: EINJ 0x000000008C591C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) May 17 01:28:01.553132 kernel: ACPI: ERST 0x000000008C591D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) May 17 01:28:01.553137 kernel: ACPI: BERT 0x000000008C591FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) May 17 01:28:01.553143 kernel: ACPI: HEST 0x000000008C591FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) May 17 01:28:01.553147 kernel: ACPI: SSDT 0x000000008C592278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) May 17 01:28:01.553152 kernel: ACPI: Reserving FACP table memory at [mem 0x8c582670-0x8c582783] May 17 01:28:01.553157 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c546268-0x8c58266b] May 17 01:28:01.553162 kernel: ACPI: Reserving FACS table memory at [mem 0x8c664f80-0x8c664fbf] May 17 01:28:01.553166 kernel: ACPI: Reserving APIC table memory at [mem 0x8c582788-0x8c5828b3] May 17 01:28:01.553171 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c5828b8-0x8c5828fb] May 17 01:28:01.553176 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c582900-0x8c58299b] May 17 01:28:01.553181 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c5829a0-0x8c5829db] May 17 01:28:01.553186 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c5829e0-0x8c582a20] May 17 01:28:01.553190 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c582a28-0x8c584543] May 17 01:28:01.553195 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c584548-0x8c58770d] May 17 01:28:01.553200 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c587710-0x8c589a3a] May 17 01:28:01.553204 kernel: ACPI: Reserving HPET table memory at [mem 0x8c589a40-0x8c589a77] May 17 01:28:01.553209 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c589a78-0x8c58aa25] May 17 01:28:01.553214 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58b31b] May 17 01:28:01.553218 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c58b320-0x8c58b361] May 17 01:28:01.553224 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c58b368-0x8c58b3fb] May 17 01:28:01.553228 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58b400-0x8c58dbdd] May 17 01:28:01.553233 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58dbe0-0x8c58f0c1] May 17 01:28:01.553238 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c58f0c8-0x8c58f0fb] May 17 01:28:01.553242 kernel: ACPI: Reserving DBG2 
table memory at [mem 0x8c58f100-0x8c58f153] May 17 01:28:01.553247 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f158-0x8c590cbe] May 17 01:28:01.553252 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c590cc0-0x8c590d2f] May 17 01:28:01.553256 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590d30-0x8c590e73] May 17 01:28:01.553261 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c590e78-0x8c590eab] May 17 01:28:01.553267 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590eb0-0x8c591c3e] May 17 01:28:01.553271 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c591c40-0x8c591c67] May 17 01:28:01.553276 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c591c68-0x8c591d97] May 17 01:28:01.553281 kernel: ACPI: Reserving ERST table memory at [mem 0x8c591d98-0x8c591fc7] May 17 01:28:01.553285 kernel: ACPI: Reserving BERT table memory at [mem 0x8c591fc8-0x8c591ff7] May 17 01:28:01.553290 kernel: ACPI: Reserving HEST table memory at [mem 0x8c591ff8-0x8c592273] May 17 01:28:01.553316 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592278-0x8c5923d9] May 17 01:28:01.553321 kernel: No NUMA configuration found May 17 01:28:01.553326 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] May 17 01:28:01.553332 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] May 17 01:28:01.553353 kernel: Zone ranges: May 17 01:28:01.553358 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 01:28:01.553363 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 17 01:28:01.553367 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] May 17 01:28:01.553372 kernel: Movable zone start for each node May 17 01:28:01.553377 kernel: Early memory node ranges May 17 01:28:01.553381 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] May 17 01:28:01.553386 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] May 17 01:28:01.553391 kernel: node 0: [mem 0x0000000040400000-0x0000000081b18fff] May 17 
01:28:01.553396 kernel: node 0: [mem 0x0000000081b1b000-0x000000008afc4fff] May 17 01:28:01.553401 kernel: node 0: [mem 0x000000008c0aa000-0x000000008c232fff] May 17 01:28:01.553406 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] May 17 01:28:01.553410 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] May 17 01:28:01.553415 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] May 17 01:28:01.553420 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 01:28:01.553428 kernel: On node 0, zone DMA: 103 pages in unavailable ranges May 17 01:28:01.553433 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges May 17 01:28:01.553438 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges May 17 01:28:01.553443 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges May 17 01:28:01.553449 kernel: On node 0, zone DMA32: 11468 pages in unavailable ranges May 17 01:28:01.553454 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges May 17 01:28:01.553459 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges May 17 01:28:01.553464 kernel: ACPI: PM-Timer IO Port: 0x1808 May 17 01:28:01.553470 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 17 01:28:01.553475 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 17 01:28:01.553479 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 17 01:28:01.553485 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 17 01:28:01.553490 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 17 01:28:01.553495 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) May 17 01:28:01.553500 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 17 01:28:01.553505 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 17 01:28:01.553510 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 17 01:28:01.553515 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge 
lint[0x1]) May 17 01:28:01.553520 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 17 01:28:01.553525 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 17 01:28:01.553531 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 17 01:28:01.553536 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 17 01:28:01.553541 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 17 01:28:01.553546 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 17 01:28:01.553551 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 May 17 01:28:01.553556 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 17 01:28:01.553561 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 01:28:01.553566 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 01:28:01.553571 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 01:28:01.553576 kernel: TSC deadline timer available May 17 01:28:01.553581 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs May 17 01:28:01.553586 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices May 17 01:28:01.553591 kernel: Booting paravirtualized kernel on bare hardware May 17 01:28:01.553597 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 01:28:01.553602 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 May 17 01:28:01.553607 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 May 17 01:28:01.553612 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 May 17 01:28:01.553617 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 May 17 01:28:01.553622 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 8232407 May 17 01:28:01.553627 kernel: Policy zone: Normal May 17 01:28:01.553633 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 01:28:01.553638 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 01:28:01.553643 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) May 17 01:28:01.553648 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) May 17 01:28:01.553653 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 01:28:01.553658 kernel: Memory: 32722572K/33452948K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 730116K reserved, 0K cma-reserved) May 17 01:28:01.553664 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 May 17 01:28:01.553670 kernel: ftrace: allocating 34585 entries in 136 pages May 17 01:28:01.553675 kernel: ftrace: allocated 136 pages with 2 groups May 17 01:28:01.553680 kernel: rcu: Hierarchical RCU implementation. May 17 01:28:01.553685 kernel: rcu: RCU event tracing is enabled. May 17 01:28:01.553690 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. May 17 01:28:01.553695 kernel: Rude variant of Tasks RCU enabled. May 17 01:28:01.553700 kernel: Tracing variant of Tasks RCU enabled. May 17 01:28:01.553706 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 17 01:28:01.553711 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 May 17 01:28:01.553716 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 May 17 01:28:01.553721 kernel: random: crng init done May 17 01:28:01.553726 kernel: Console: colour dummy device 80x25 May 17 01:28:01.553731 kernel: printk: console [tty0] enabled May 17 01:28:01.553736 kernel: printk: console [ttyS1] enabled May 17 01:28:01.553741 kernel: ACPI: Core revision 20210730 May 17 01:28:01.553746 kernel: hpet: HPET dysfunctional in PC10. Force disabled. May 17 01:28:01.553751 kernel: APIC: Switch to symmetric I/O mode setup May 17 01:28:01.553757 kernel: DMAR: Host address width 39 May 17 01:28:01.553762 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 May 17 01:28:01.553767 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da May 17 01:28:01.553772 kernel: DMAR: RMRR base: 0x0000008cf10000 end: 0x0000008d159fff May 17 01:28:01.553777 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 May 17 01:28:01.553782 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 May 17 01:28:01.553787 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. May 17 01:28:01.553792 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode May 17 01:28:01.553797 kernel: x2apic enabled May 17 01:28:01.553803 kernel: Switched APIC routing to cluster x2apic. May 17 01:28:01.553808 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns May 17 01:28:01.553813 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
6799.81 BogoMIPS (lpj=3399906) May 17 01:28:01.553818 kernel: CPU0: Thermal monitoring enabled (TM1) May 17 01:28:01.553823 kernel: process: using mwait in idle threads May 17 01:28:01.553828 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 17 01:28:01.553833 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 17 01:28:01.553838 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 01:28:01.553843 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! May 17 01:28:01.553849 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 17 01:28:01.553854 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 17 01:28:01.553859 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 17 01:28:01.553864 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 17 01:28:01.553869 kernel: RETBleed: Mitigation: Enhanced IBRS May 17 01:28:01.553874 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 01:28:01.553879 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 17 01:28:01.553883 kernel: TAA: Mitigation: TSX disabled May 17 01:28:01.553888 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers May 17 01:28:01.553893 kernel: SRBDS: Mitigation: Microcode May 17 01:28:01.553898 kernel: GDS: Vulnerable: No microcode May 17 01:28:01.553904 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 01:28:01.553909 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 01:28:01.553914 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 01:28:01.553919 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' May 17 01:28:01.553924 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' May 17 
01:28:01.553929 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 01:28:01.553934 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 May 17 01:28:01.553939 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 May 17 01:28:01.553944 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. May 17 01:28:01.553949 kernel: Freeing SMP alternatives memory: 32K May 17 01:28:01.553953 kernel: pid_max: default: 32768 minimum: 301 May 17 01:28:01.553959 kernel: LSM: Security Framework initializing May 17 01:28:01.553964 kernel: SELinux: Initializing. May 17 01:28:01.553969 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 01:28:01.553974 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 01:28:01.553979 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 May 17 01:28:01.553984 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 17 01:28:01.553989 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. May 17 01:28:01.553994 kernel: ... version: 4 May 17 01:28:01.553999 kernel: ... bit width: 48 May 17 01:28:01.554004 kernel: ... generic registers: 4 May 17 01:28:01.554009 kernel: ... value mask: 0000ffffffffffff May 17 01:28:01.554015 kernel: ... max period: 00007fffffffffff May 17 01:28:01.554020 kernel: ... fixed-purpose events: 3 May 17 01:28:01.554025 kernel: ... event mask: 000000070000000f May 17 01:28:01.554030 kernel: signal: max sigframe size: 2032 May 17 01:28:01.554035 kernel: rcu: Hierarchical SRCU implementation. May 17 01:28:01.554040 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. May 17 01:28:01.554045 kernel: smp: Bringing up secondary CPUs ... 
May 17 01:28:01.554050 kernel: x86: Booting SMP configuration: May 17 01:28:01.554055 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 May 17 01:28:01.554061 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. May 17 01:28:01.554066 kernel: #9 #10 #11 #12 #13 #14 #15 May 17 01:28:01.554071 kernel: smp: Brought up 1 node, 16 CPUs May 17 01:28:01.554076 kernel: smpboot: Max logical packages: 1 May 17 01:28:01.554081 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) May 17 01:28:01.554086 kernel: devtmpfs: initialized May 17 01:28:01.554091 kernel: x86/mm: Memory block size: 128MB May 17 01:28:01.554096 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b19000-0x81b19fff] (4096 bytes) May 17 01:28:01.554101 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c233000-0x8c664fff] (4399104 bytes) May 17 01:28:01.554107 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 01:28:01.554112 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) May 17 01:28:01.554117 kernel: pinctrl core: initialized pinctrl subsystem May 17 01:28:01.554122 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 01:28:01.554127 kernel: audit: initializing netlink subsys (disabled) May 17 01:28:01.554132 kernel: audit: type=2000 audit(1747445276.041:1): state=initialized audit_enabled=0 res=1 May 17 01:28:01.554137 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 01:28:01.554142 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 01:28:01.554148 kernel: cpuidle: using governor menu May 17 01:28:01.554153 kernel: ACPI: bus type PCI registered May 17 01:28:01.554158 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 01:28:01.554163 kernel: dca service started, version 1.12.1 May 17 
01:28:01.554168 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) May 17 01:28:01.554173 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 May 17 01:28:01.554178 kernel: PCI: Using configuration type 1 for base access May 17 01:28:01.554183 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' May 17 01:28:01.554188 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 17 01:28:01.554193 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 17 01:28:01.554199 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 17 01:28:01.554204 kernel: ACPI: Added _OSI(Module Device) May 17 01:28:01.554209 kernel: ACPI: Added _OSI(Processor Device) May 17 01:28:01.554214 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 01:28:01.554219 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 01:28:01.554224 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 17 01:28:01.554229 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 17 01:28:01.554233 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 17 01:28:01.554239 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded May 17 01:28:01.554244 kernel: ACPI: Dynamic OEM Table Load: May 17 01:28:01.554249 kernel: ACPI: SSDT 0xFFFF93E94021BB00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) May 17 01:28:01.554254 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked May 17 01:28:01.554259 kernel: ACPI: Dynamic OEM Table Load: May 17 01:28:01.554264 kernel: ACPI: SSDT 0xFFFF93E941AE5000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) May 17 01:28:01.554269 kernel: ACPI: Dynamic OEM Table Load: May 17 01:28:01.554274 kernel: ACPI: SSDT 0xFFFF93E941A5E800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) May 17 01:28:01.554279 kernel: ACPI: Dynamic OEM Table Load: May 17 01:28:01.554284 kernel: ACPI: SSDT 0xFFFF93E941B4F800 0005FC (v02 PmRef ApIst 
00003000 INTL 20160527) May 17 01:28:01.554290 kernel: ACPI: Dynamic OEM Table Load: May 17 01:28:01.554297 kernel: ACPI: SSDT 0xFFFF93E94014D000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) May 17 01:28:01.554302 kernel: ACPI: Dynamic OEM Table Load: May 17 01:28:01.554307 kernel: ACPI: SSDT 0xFFFF93E941AE0C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) May 17 01:28:01.554312 kernel: ACPI: Interpreter enabled May 17 01:28:01.554317 kernel: ACPI: PM: (supports S0 S5) May 17 01:28:01.554322 kernel: ACPI: Using IOAPIC for interrupt routing May 17 01:28:01.554327 kernel: HEST: Enabling Firmware First mode for corrected errors. May 17 01:28:01.554332 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. May 17 01:28:01.554337 kernel: HEST: Table parsing has been initialized. May 17 01:28:01.554342 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. May 17 01:28:01.554347 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 01:28:01.554352 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F May 17 01:28:01.554357 kernel: ACPI: PM: Power Resource [USBC] May 17 01:28:01.554362 kernel: ACPI: PM: Power Resource [V0PR] May 17 01:28:01.554367 kernel: ACPI: PM: Power Resource [V1PR] May 17 01:28:01.554372 kernel: ACPI: PM: Power Resource [V2PR] May 17 01:28:01.554377 kernel: ACPI: PM: Power Resource [WRST] May 17 01:28:01.554383 kernel: ACPI: PM: Power Resource [FN00] May 17 01:28:01.554388 kernel: ACPI: PM: Power Resource [FN01] May 17 01:28:01.554393 kernel: ACPI: PM: Power Resource [FN02] May 17 01:28:01.554398 kernel: ACPI: PM: Power Resource [FN03] May 17 01:28:01.554403 kernel: ACPI: PM: Power Resource [FN04] May 17 01:28:01.554407 kernel: ACPI: PM: Power Resource [PIN] May 17 01:28:01.554412 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) May 17 01:28:01.554480 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] 
May 17 01:28:01.554528 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] May 17 01:28:01.554573 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] May 17 01:28:01.554580 kernel: PCI host bridge to bus 0000:00 May 17 01:28:01.554630 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 01:28:01.554670 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 01:28:01.554708 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 01:28:01.554747 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] May 17 01:28:01.554788 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] May 17 01:28:01.554826 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] May 17 01:28:01.554878 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 May 17 01:28:01.554927 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 May 17 01:28:01.554973 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold May 17 01:28:01.555021 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 May 17 01:28:01.555068 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x95520000-0x95520fff 64bit] May 17 01:28:01.555116 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 May 17 01:28:01.555161 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] May 17 01:28:01.555209 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 May 17 01:28:01.555254 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] May 17 01:28:01.555302 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold May 17 01:28:01.555351 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 May 17 01:28:01.555397 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] May 17 01:28:01.555440 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551e000-0x9551efff 64bit] May 17 01:28:01.555489 kernel: pci 
0000:00:14.5: [8086:a375] type 00 class 0x080501 May 17 01:28:01.555532 kernel: pci 0000:00:14.5: reg 0x10: [mem 0x9551d000-0x9551dfff 64bit] May 17 01:28:01.555580 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 May 17 01:28:01.555623 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] May 17 01:28:01.555674 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 May 17 01:28:01.555717 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] May 17 01:28:01.555764 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 May 17 01:28:01.555807 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] May 17 01:28:01.555849 kernel: pci 0000:00:16.0: PME# supported from D3hot May 17 01:28:01.555898 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 May 17 01:28:01.555944 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] May 17 01:28:01.555987 kernel: pci 0000:00:16.1: PME# supported from D3hot May 17 01:28:01.556033 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 May 17 01:28:01.556076 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] May 17 01:28:01.556119 kernel: pci 0000:00:16.4: PME# supported from D3hot May 17 01:28:01.556165 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 May 17 01:28:01.556210 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] May 17 01:28:01.556260 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] May 17 01:28:01.556338 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] May 17 01:28:01.556395 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] May 17 01:28:01.556438 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] May 17 01:28:01.556480 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] May 17 01:28:01.556522 kernel: pci 0000:00:17.0: PME# supported from D3hot May 17 01:28:01.556569 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 May 17 
01:28:01.556614 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold May 17 01:28:01.556661 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 May 17 01:28:01.556704 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold May 17 01:28:01.556752 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 May 17 01:28:01.556795 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold May 17 01:28:01.556844 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 May 17 01:28:01.556890 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold May 17 01:28:01.556939 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 May 17 01:28:01.556984 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold May 17 01:28:01.557031 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 May 17 01:28:01.557077 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] May 17 01:28:01.557123 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 May 17 01:28:01.557170 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 May 17 01:28:01.557213 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] May 17 01:28:01.557255 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] May 17 01:28:01.557327 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 May 17 01:28:01.557392 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] May 17 01:28:01.557442 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 May 17 01:28:01.557487 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] May 17 01:28:01.557532 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] May 17 01:28:01.557576 kernel: pci 0000:01:00.0: PME# supported from D3cold May 17 01:28:01.557621 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] May 17 01:28:01.557665 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] 
(contains BAR0 for 8 VFs) May 17 01:28:01.557718 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 May 17 01:28:01.557765 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] May 17 01:28:01.557811 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] May 17 01:28:01.557855 kernel: pci 0000:01:00.1: PME# supported from D3cold May 17 01:28:01.557903 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] May 17 01:28:01.557947 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) May 17 01:28:01.557990 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 01:28:01.558036 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] May 17 01:28:01.558078 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] May 17 01:28:01.558121 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] May 17 01:28:01.558170 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect May 17 01:28:01.558271 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 May 17 01:28:01.558319 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] May 17 01:28:01.558382 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] May 17 01:28:01.558428 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] May 17 01:28:01.558472 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 17 01:28:01.558515 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] May 17 01:28:01.558558 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] May 17 01:28:01.558601 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] May 17 01:28:01.558648 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect May 17 01:28:01.558694 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 May 17 01:28:01.558740 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] May 17 01:28:01.558784 kernel: pci 
0000:04:00.0: reg 0x18: [io 0x4000-0x401f] May 17 01:28:01.558827 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] May 17 01:28:01.558872 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold May 17 01:28:01.558915 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] May 17 01:28:01.558959 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] May 17 01:28:01.559003 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] May 17 01:28:01.559046 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] May 17 01:28:01.559097 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 May 17 01:28:01.559143 kernel: pci 0000:06:00.0: enabling Extended Tags May 17 01:28:01.559188 kernel: pci 0000:06:00.0: supports D1 D2 May 17 01:28:01.559232 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 01:28:01.559276 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] May 17 01:28:01.559357 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] May 17 01:28:01.559401 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] May 17 01:28:01.559448 kernel: pci_bus 0000:07: extended config space not accessible May 17 01:28:01.559501 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 May 17 01:28:01.559548 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] May 17 01:28:01.559595 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] May 17 01:28:01.559642 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] May 17 01:28:01.559688 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 01:28:01.559734 kernel: pci 0000:07:00.0: supports D1 D2 May 17 01:28:01.559781 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 01:28:01.559828 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] May 17 01:28:01.559873 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] May 17 01:28:01.559917 kernel: pci 0000:06:00.0: bridge window [mem 
0x94000000-0x950fffff] May 17 01:28:01.559925 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 May 17 01:28:01.559931 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 May 17 01:28:01.559936 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 May 17 01:28:01.559941 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 May 17 01:28:01.559946 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 May 17 01:28:01.559953 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 May 17 01:28:01.559958 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 May 17 01:28:01.559963 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 May 17 01:28:01.559969 kernel: iommu: Default domain type: Translated May 17 01:28:01.559974 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 01:28:01.560019 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device May 17 01:28:01.560065 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 01:28:01.560111 kernel: pci 0000:07:00.0: vgaarb: bridge control possible May 17 01:28:01.560120 kernel: vgaarb: loaded May 17 01:28:01.560126 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 01:28:01.560131 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 01:28:01.560137 kernel: PTP clock support registered May 17 01:28:01.560142 kernel: PCI: Using ACPI for IRQ routing May 17 01:28:01.560147 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 01:28:01.560152 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] May 17 01:28:01.560157 kernel: e820: reserve RAM buffer [mem 0x81b19000-0x83ffffff] May 17 01:28:01.560164 kernel: e820: reserve RAM buffer [mem 0x8afc5000-0x8bffffff] May 17 01:28:01.560169 kernel: e820: reserve RAM buffer [mem 0x8c233000-0x8fffffff] May 17 01:28:01.560175 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] May 17 01:28:01.560180 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] May 17 01:28:01.560185 kernel: clocksource: Switched to clocksource tsc-early May 17 01:28:01.560190 kernel: VFS: Disk quotas dquot_6.6.0 May 17 01:28:01.560196 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 01:28:01.560201 kernel: pnp: PnP ACPI init May 17 01:28:01.560247 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved May 17 01:28:01.560290 kernel: pnp 00:02: [dma 0 disabled] May 17 01:28:01.560376 kernel: pnp 00:03: [dma 0 disabled] May 17 01:28:01.560417 kernel: system 00:04: [io 0x0680-0x069f] has been reserved May 17 01:28:01.560458 kernel: system 00:04: [io 0x164e-0x164f] has been reserved May 17 01:28:01.560499 kernel: system 00:05: [io 0x1854-0x1857] has been reserved May 17 01:28:01.560544 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved May 17 01:28:01.560585 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved May 17 01:28:01.560623 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved May 17 01:28:01.560664 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved May 17 01:28:01.560702 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved May 17 01:28:01.560741 kernel: system 00:06: [mem 
0xfed90000-0xfed93fff] could not be reserved May 17 01:28:01.560780 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved May 17 01:28:01.560818 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved May 17 01:28:01.560861 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved May 17 01:28:01.560902 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved May 17 01:28:01.560941 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved May 17 01:28:01.560979 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved May 17 01:28:01.561018 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved May 17 01:28:01.561057 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved May 17 01:28:01.561096 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved May 17 01:28:01.561140 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved May 17 01:28:01.561148 kernel: pnp: PnP ACPI: found 10 devices May 17 01:28:01.561154 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 01:28:01.561159 kernel: NET: Registered PF_INET protocol family May 17 01:28:01.561164 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 01:28:01.561170 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) May 17 01:28:01.561175 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 01:28:01.561181 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 01:28:01.561186 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 17 01:28:01.561193 kernel: TCP: Hash tables configured (established 262144 bind 65536) May 17 01:28:01.561198 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) May 17 01:28:01.561203 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 
bytes, linear) May 17 01:28:01.561209 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 01:28:01.561214 kernel: NET: Registered PF_XDP protocol family May 17 01:28:01.561257 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] May 17 01:28:01.561326 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] May 17 01:28:01.561371 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] May 17 01:28:01.561420 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] May 17 01:28:01.561466 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] May 17 01:28:01.561513 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] May 17 01:28:01.561559 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] May 17 01:28:01.561603 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 01:28:01.561647 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] May 17 01:28:01.561694 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] May 17 01:28:01.561738 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] May 17 01:28:01.561782 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] May 17 01:28:01.561827 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] May 17 01:28:01.561870 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] May 17 01:28:01.561914 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] May 17 01:28:01.561957 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] May 17 01:28:01.562004 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] May 17 01:28:01.562047 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] May 17 01:28:01.562092 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] May 17 01:28:01.562138 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] May 17 01:28:01.562183 kernel: pci 0000:06:00.0: bridge window [mem 
0x94000000-0x950fffff] May 17 01:28:01.562227 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] May 17 01:28:01.562270 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] May 17 01:28:01.562317 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] May 17 01:28:01.562357 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc May 17 01:28:01.562399 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 01:28:01.562437 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 01:28:01.562475 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 01:28:01.562514 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] May 17 01:28:01.562552 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] May 17 01:28:01.562597 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] May 17 01:28:01.562638 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] May 17 01:28:01.562685 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] May 17 01:28:01.562726 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] May 17 01:28:01.562771 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] May 17 01:28:01.562812 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] May 17 01:28:01.562859 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] May 17 01:28:01.562900 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] May 17 01:28:01.562945 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] May 17 01:28:01.562988 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] May 17 01:28:01.562996 kernel: PCI: CLS 64 bytes, default 64 May 17 01:28:01.563002 kernel: DMAR: No ATSR found May 17 01:28:01.563007 kernel: DMAR: No SATC found May 17 01:28:01.563013 kernel: DMAR: dmar0: Using Queued invalidation May 17 01:28:01.563055 kernel: pci 0000:00:00.0: Adding to iommu group 0 May 17 
01:28:01.563101 kernel: pci 0000:00:01.0: Adding to iommu group 1 May 17 01:28:01.563146 kernel: pci 0000:00:08.0: Adding to iommu group 2 May 17 01:28:01.563189 kernel: pci 0000:00:12.0: Adding to iommu group 3 May 17 01:28:01.563233 kernel: pci 0000:00:14.0: Adding to iommu group 4 May 17 01:28:01.563276 kernel: pci 0000:00:14.2: Adding to iommu group 4 May 17 01:28:01.563323 kernel: pci 0000:00:14.5: Adding to iommu group 4 May 17 01:28:01.563365 kernel: pci 0000:00:15.0: Adding to iommu group 5 May 17 01:28:01.563409 kernel: pci 0000:00:15.1: Adding to iommu group 5 May 17 01:28:01.563452 kernel: pci 0000:00:16.0: Adding to iommu group 6 May 17 01:28:01.563496 kernel: pci 0000:00:16.1: Adding to iommu group 6 May 17 01:28:01.563541 kernel: pci 0000:00:16.4: Adding to iommu group 6 May 17 01:28:01.563584 kernel: pci 0000:00:17.0: Adding to iommu group 7 May 17 01:28:01.563628 kernel: pci 0000:00:1b.0: Adding to iommu group 8 May 17 01:28:01.563672 kernel: pci 0000:00:1b.4: Adding to iommu group 9 May 17 01:28:01.563715 kernel: pci 0000:00:1b.5: Adding to iommu group 10 May 17 01:28:01.563758 kernel: pci 0000:00:1c.0: Adding to iommu group 11 May 17 01:28:01.563801 kernel: pci 0000:00:1c.3: Adding to iommu group 12 May 17 01:28:01.563845 kernel: pci 0000:00:1e.0: Adding to iommu group 13 May 17 01:28:01.563889 kernel: pci 0000:00:1f.0: Adding to iommu group 14 May 17 01:28:01.563933 kernel: pci 0000:00:1f.4: Adding to iommu group 14 May 17 01:28:01.563975 kernel: pci 0000:00:1f.5: Adding to iommu group 14 May 17 01:28:01.564020 kernel: pci 0000:01:00.0: Adding to iommu group 1 May 17 01:28:01.564065 kernel: pci 0000:01:00.1: Adding to iommu group 1 May 17 01:28:01.564111 kernel: pci 0000:03:00.0: Adding to iommu group 15 May 17 01:28:01.564155 kernel: pci 0000:04:00.0: Adding to iommu group 16 May 17 01:28:01.564201 kernel: pci 0000:06:00.0: Adding to iommu group 17 May 17 01:28:01.564250 kernel: pci 0000:07:00.0: Adding to iommu group 17 May 17 01:28:01.564258 
kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O May 17 01:28:01.564264 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 17 01:28:01.564269 kernel: software IO TLB: mapped [mem 0x0000000086fc5000-0x000000008afc5000] (64MB) May 17 01:28:01.564275 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer May 17 01:28:01.564280 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules May 17 01:28:01.564286 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules May 17 01:28:01.564291 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules May 17 01:28:01.564340 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) May 17 01:28:01.564350 kernel: Initialise system trusted keyrings May 17 01:28:01.564355 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 May 17 01:28:01.564361 kernel: Key type asymmetric registered May 17 01:28:01.564366 kernel: Asymmetric key parser 'x509' registered May 17 01:28:01.564372 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 17 01:28:01.564377 kernel: io scheduler mq-deadline registered May 17 01:28:01.564382 kernel: io scheduler kyber registered May 17 01:28:01.564388 kernel: io scheduler bfq registered May 17 01:28:01.564432 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 May 17 01:28:01.564477 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 May 17 01:28:01.564521 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 May 17 01:28:01.564583 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 May 17 01:28:01.564627 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 May 17 01:28:01.564670 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 May 17 01:28:01.564718 kernel: thermal LNXTHERM:00: registered as thermal_zone0 May 17 01:28:01.564728 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) May 17 01:28:01.564733 kernel: ERST: Error Record Serialization Table 
(ERST) support is initialized. May 17 01:28:01.564739 kernel: pstore: Registered erst as persistent store backend May 17 01:28:01.564744 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 01:28:01.564750 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 01:28:01.564755 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 01:28:01.564760 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A May 17 01:28:01.564765 kernel: hpet_acpi_add: no address or irqs in _CRS May 17 01:28:01.564808 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) May 17 01:28:01.564818 kernel: i8042: PNP: No PS/2 controller found. May 17 01:28:01.564858 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 May 17 01:28:01.564897 kernel: rtc_cmos rtc_cmos: registered as rtc0 May 17 01:28:01.564937 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-05-17T01:28:00 UTC (1747445280) May 17 01:28:01.564977 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram May 17 01:28:01.564985 kernel: intel_pstate: Intel P-state driver initializing May 17 01:28:01.564990 kernel: intel_pstate: Disabling energy efficiency optimization May 17 01:28:01.564996 kernel: intel_pstate: HWP enabled May 17 01:28:01.565002 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 May 17 01:28:01.565008 kernel: vesafb: scrolling: redraw May 17 01:28:01.565013 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 May 17 01:28:01.565018 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000023c4f187, using 768k, total 768k May 17 01:28:01.565024 kernel: Console: switching to colour frame buffer device 128x48 May 17 01:28:01.565029 kernel: fb0: VESA VGA frame buffer device May 17 01:28:01.565034 kernel: NET: Registered PF_INET6 protocol family May 17 01:28:01.565040 kernel: Segment Routing with IPv6 May 17 01:28:01.565045 kernel: In-situ OAM (IOAM) with IPv6 May 17 01:28:01.565051 kernel: NET: 
Registered PF_PACKET protocol family May 17 01:28:01.565056 kernel: Key type dns_resolver registered May 17 01:28:01.565061 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 May 17 01:28:01.565067 kernel: microcode: Microcode Update Driver: v2.2. May 17 01:28:01.565072 kernel: IPI shorthand broadcast: enabled May 17 01:28:01.565077 kernel: sched_clock: Marking stable (1686011808, 1339971909)->(4470645119, -1444661402) May 17 01:28:01.565082 kernel: registered taskstats version 1 May 17 01:28:01.565088 kernel: Loading compiled-in X.509 certificates May 17 01:28:01.565093 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c' May 17 01:28:01.565099 kernel: Key type .fscrypt registered May 17 01:28:01.565104 kernel: Key type fscrypt-provisioning registered May 17 01:28:01.565110 kernel: pstore: Using crash dump compression: deflate May 17 01:28:01.565115 kernel: ima: Allocated hash algorithm: sha1 May 17 01:28:01.565120 kernel: ima: No architecture policies found May 17 01:28:01.565125 kernel: clk: Disabling unused clocks May 17 01:28:01.565131 kernel: Freeing unused kernel image (initmem) memory: 47472K May 17 01:28:01.565136 kernel: Write protecting the kernel read-only data: 28672k May 17 01:28:01.565141 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 17 01:28:01.565147 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 17 01:28:01.565152 kernel: Run /init as init process May 17 01:28:01.565158 kernel: with arguments: May 17 01:28:01.565163 kernel: /init May 17 01:28:01.565168 kernel: with environment: May 17 01:28:01.565173 kernel: HOME=/ May 17 01:28:01.565178 kernel: TERM=linux May 17 01:28:01.565183 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 01:28:01.565190 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD 
+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 01:28:01.565197 systemd[1]: Detected architecture x86-64. May 17 01:28:01.565203 systemd[1]: Running in initrd. May 17 01:28:01.565208 systemd[1]: No hostname configured, using default hostname. May 17 01:28:01.565213 systemd[1]: Hostname set to . May 17 01:28:01.565219 systemd[1]: Initializing machine ID from random generator. May 17 01:28:01.565224 systemd[1]: Queued start job for default target initrd.target. May 17 01:28:01.565230 systemd[1]: Started systemd-ask-password-console.path. May 17 01:28:01.565236 systemd[1]: Reached target cryptsetup.target. May 17 01:28:01.565241 systemd[1]: Reached target paths.target. May 17 01:28:01.565247 systemd[1]: Reached target slices.target. May 17 01:28:01.565252 systemd[1]: Reached target swap.target. May 17 01:28:01.565257 systemd[1]: Reached target timers.target. May 17 01:28:01.565262 systemd[1]: Listening on iscsid.socket. May 17 01:28:01.565268 systemd[1]: Listening on iscsiuio.socket. May 17 01:28:01.565274 systemd[1]: Listening on systemd-journald-audit.socket. May 17 01:28:01.565280 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 01:28:01.565285 systemd[1]: Listening on systemd-journald.socket. May 17 01:28:01.565291 systemd[1]: Listening on systemd-networkd.socket. May 17 01:28:01.565318 systemd[1]: Listening on systemd-udevd-control.socket. May 17 01:28:01.565323 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz May 17 01:28:01.565329 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 01:28:01.565334 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns May 17 01:28:01.565340 kernel: clocksource: Switched to clocksource tsc May 17 01:28:01.565366 systemd[1]: Reached target sockets.target. 
May 17 01:28:01.565371 systemd[1]: Starting kmod-static-nodes.service... May 17 01:28:01.565376 systemd[1]: Finished network-cleanup.service. May 17 01:28:01.565382 systemd[1]: Starting systemd-fsck-usr.service... May 17 01:28:01.565387 systemd[1]: Starting systemd-journald.service... May 17 01:28:01.565393 systemd[1]: Starting systemd-modules-load.service... May 17 01:28:01.565400 systemd-journald[267]: Journal started May 17 01:28:01.565427 systemd-journald[267]: Runtime Journal (/run/log/journal/5aefb808579446a4aa748913947b6b36) is 8.0M, max 640.0M, 632.0M free. May 17 01:28:01.566496 systemd-modules-load[268]: Inserted module 'overlay' May 17 01:28:01.571000 audit: BPF prog-id=6 op=LOAD May 17 01:28:01.590345 kernel: audit: type=1334 audit(1747445281.571:2): prog-id=6 op=LOAD May 17 01:28:01.590361 systemd[1]: Starting systemd-resolved.service... May 17 01:28:01.639345 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 01:28:01.639361 systemd[1]: Starting systemd-vconsole-setup.service... May 17 01:28:01.672333 kernel: Bridge firewalling registered May 17 01:28:01.672349 systemd[1]: Started systemd-journald.service. May 17 01:28:01.686505 systemd-modules-load[268]: Inserted module 'br_netfilter' May 17 01:28:01.734238 kernel: audit: type=1130 audit(1747445281.693:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:01.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:28:01.689010 systemd-resolved[269]: Positive Trust Anchors: May 17 01:28:01.790826 kernel: SCSI subsystem initialized May 17 01:28:01.790846 kernel: audit: type=1130 audit(1747445281.745:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:01.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:01.689016 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 01:28:01.891098 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 01:28:01.891124 kernel: audit: type=1130 audit(1747445281.815:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:01.891141 kernel: device-mapper: uevent: version 1.0.3 May 17 01:28:01.891164 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 17 01:28:01.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:28:01.689036 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 01:28:01.982520 kernel: audit: type=1130 audit(1747445281.916:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:01.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:01.690604 systemd-resolved[269]: Defaulting to hostname 'linux'. May 17 01:28:01.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:01.694534 systemd[1]: Started systemd-resolved.service. May 17 01:28:02.083383 kernel: audit: type=1130 audit(1747445281.981:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:02.083403 kernel: audit: type=1130 audit(1747445282.036:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:28:02.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:01.746464 systemd[1]: Finished kmod-static-nodes.service. May 17 01:28:01.816464 systemd[1]: Finished systemd-fsck-usr.service. May 17 01:28:01.914461 systemd-modules-load[268]: Inserted module 'dm_multipath' May 17 01:28:01.917609 systemd[1]: Finished systemd-modules-load.service. May 17 01:28:01.982667 systemd[1]: Finished systemd-vconsole-setup.service. May 17 01:28:02.037583 systemd[1]: Reached target nss-lookup.target. May 17 01:28:02.091841 systemd[1]: Starting dracut-cmdline-ask.service... May 17 01:28:02.112824 systemd[1]: Starting systemd-sysctl.service... May 17 01:28:02.113125 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 01:28:02.115923 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 01:28:02.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:02.116599 systemd[1]: Finished systemd-sysctl.service. May 17 01:28:02.165399 kernel: audit: type=1130 audit(1747445282.114:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:02.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:02.178637 systemd[1]: Finished dracut-cmdline-ask.service. 
May 17 01:28:02.241425 kernel: audit: type=1130 audit(1747445282.177:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:02.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:02.227541 systemd[1]: Starting dracut-cmdline.service... May 17 01:28:02.256416 dracut-cmdline[293]: dracut-dracut-053 May 17 01:28:02.256416 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA May 17 01:28:02.256416 dracut-cmdline[293]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 01:28:02.322381 kernel: Loading iSCSI transport class v2.0-870. May 17 01:28:02.322394 kernel: iscsi: registered transport (tcp) May 17 01:28:02.374786 kernel: iscsi: registered transport (qla4xxx) May 17 01:28:02.374802 kernel: QLogic iSCSI HBA Driver May 17 01:28:02.391285 systemd[1]: Finished dracut-cmdline.service. May 17 01:28:02.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:02.391909 systemd[1]: Starting dracut-pre-udev.service... 
May 17 01:28:02.447367 kernel: raid6: avx2x4 gen() 48930 MB/s May 17 01:28:02.482330 kernel: raid6: avx2x4 xor() 22106 MB/s May 17 01:28:02.517329 kernel: raid6: avx2x2 gen() 53717 MB/s May 17 01:28:02.552371 kernel: raid6: avx2x2 xor() 32073 MB/s May 17 01:28:02.587329 kernel: raid6: avx2x1 gen() 45219 MB/s May 17 01:28:02.622332 kernel: raid6: avx2x1 xor() 27877 MB/s May 17 01:28:02.657366 kernel: raid6: sse2x4 gen() 21326 MB/s May 17 01:28:02.691329 kernel: raid6: sse2x4 xor() 11982 MB/s May 17 01:28:02.725329 kernel: raid6: sse2x2 gen() 21695 MB/s May 17 01:28:02.759329 kernel: raid6: sse2x2 xor() 13423 MB/s May 17 01:28:02.793366 kernel: raid6: sse2x1 gen() 18276 MB/s May 17 01:28:02.845216 kernel: raid6: sse2x1 xor() 8997 MB/s May 17 01:28:02.845232 kernel: raid6: using algorithm avx2x2 gen() 53717 MB/s May 17 01:28:02.845239 kernel: raid6: .... xor() 32073 MB/s, rmw enabled May 17 01:28:02.863425 kernel: raid6: using avx2x2 recovery algorithm May 17 01:28:02.909343 kernel: xor: automatically using best checksumming function avx May 17 01:28:02.988334 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 17 01:28:02.993360 systemd[1]: Finished dracut-pre-udev.service. May 17 01:28:03.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:03.000000 audit: BPF prog-id=7 op=LOAD May 17 01:28:03.000000 audit: BPF prog-id=8 op=LOAD May 17 01:28:03.002195 systemd[1]: Starting systemd-udevd.service... May 17 01:28:03.010343 systemd-udevd[475]: Using default interface naming scheme 'v252'. May 17 01:28:03.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:03.016553 systemd[1]: Started systemd-udevd.service. 
May 17 01:28:03.057413 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation May 17 01:28:03.033956 systemd[1]: Starting dracut-pre-trigger.service... May 17 01:28:03.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:03.058534 systemd[1]: Finished dracut-pre-trigger.service. May 17 01:28:03.066079 systemd[1]: Starting systemd-udev-trigger.service... May 17 01:28:03.116023 systemd[1]: Finished systemd-udev-trigger.service. May 17 01:28:03.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:03.143303 kernel: cryptd: max_cpu_qlen set to 1000 May 17 01:28:03.162305 kernel: ACPI: bus type USB registered May 17 01:28:03.177304 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 17 01:28:03.177339 kernel: usbcore: registered new interface driver usbfs May 17 01:28:03.177349 kernel: sdhci: Secure Digital Host Controller Interface driver May 17 01:28:03.177358 kernel: sdhci: Copyright(c) Pierre Ossman May 17 01:28:03.199770 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. May 17 01:28:03.199807 kernel: usbcore: registered new interface driver hub May 17 01:28:03.286647 kernel: igb 0000:03:00.0: added PHC on eth0 May 17 01:28:03.373435 kernel: usbcore: registered new device driver usb May 17 01:28:03.373447 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 17 01:28:03.373509 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d2:5e May 17 01:28:03.373562 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 May 17 01:28:03.373614 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) May 17 01:28:03.373665 kernel: AVX2 version of gcm_enc/dec engaged. 
May 17 01:28:03.373673 kernel: libata version 3.00 loaded. May 17 01:28:03.403659 kernel: AES CTR mode by8 optimization enabled May 17 01:28:03.437383 kernel: mlx5_core 0000:01:00.0: firmware version: 14.28.2006 May 17 01:28:04.224589 kernel: igb 0000:04:00.0: added PHC on eth1 May 17 01:28:04.224655 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) May 17 01:28:04.224710 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection May 17 01:28:04.224761 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller May 17 01:28:04.224813 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d2:5f May 17 01:28:04.224863 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 May 17 01:28:04.224913 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 May 17 01:28:04.224964 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 May 17 01:28:04.225011 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) May 17 01:28:04.225067 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller May 17 01:28:04.225115 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 May 17 01:28:04.225163 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed May 17 01:28:04.225209 kernel: ahci 0000:00:17.0: version 3.0 May 17 01:28:04.225258 kernel: hub 1-0:1.0: USB hub found May 17 01:28:04.225370 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:04.282817 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode May 17 01:28:04.282878 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst May 17 01:28:04.282928 kernel: igb 0000:03:00.0 eno1: renamed from eth0 May 17 01:28:04.282983 kernel: hub 1-0:1.0: 16 ports detected May 17 01:28:04.283039 kernel: scsi host0: ahci May 17 01:28:04.283099 kernel: scsi host1: ahci May 17 01:28:04.283154 kernel: scsi host2: ahci May 17 01:28:04.283206 kernel: scsi host3: ahci May 17 01:28:04.283257 kernel: scsi host4: ahci May 17 01:28:04.283334 kernel: scsi host5: ahci May 17 01:28:04.283404 kernel: scsi host6: ahci May 17 01:28:04.283454 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 138 May 17 01:28:04.283463 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 138 May 17 01:28:04.283469 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 138 May 17 01:28:04.283476 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 138 May 17 01:28:04.283482 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 138 May 17 01:28:04.283489 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 138 May 17 01:28:04.283496 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 138 May 17 01:28:04.283502 kernel: sdhci-pci 0000:00:14.5: SDHCI controller 
found [8086:a375] (rev 10) May 17 01:28:04.283550 kernel: hub 2-0:1.0: USB hub found May 17 01:28:04.283608 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:04.283656 kernel: hub 2-0:1.0: 10 ports detected May 17 01:28:04.283709 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) May 17 01:28:04.283758 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:04.283804 kernel: igb 0000:04:00.0 eno2: renamed from eth1 May 17 01:28:04.283853 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd May 17 01:28:04.283914 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:04.283962 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 17 01:28:04.283971 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) May 17 01:28:04.284021 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 01:28:04.284028 kernel: hub 1-14:1.0: USB hub found May 17 01:28:04.284086 kernel: ata7: SATA link down (SStatus 0 SControl 300) May 17 01:28:04.284093 kernel: hub 1-14:1.0: 4 ports detected May 17 01:28:04.284146 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) May 17 01:28:04.284154 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:04.284200 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) May 17 01:28:04.284208 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 May 17 01:28:04.284215 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 01:28:04.284221 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 May 17 01:28:04.284271 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 May 17 01:28:04.284278 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:04.284333 kernel: mlx5_core 
0000:01:00.1: firmware version: 14.28.2006 May 17 01:28:04.742217 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) May 17 01:28:04.742285 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 01:28:04.742300 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:04.796313 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA May 17 01:28:04.796324 kernel: ata1.00: Features: NCQ-prio May 17 01:28:04.796331 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA May 17 01:28:04.796338 kernel: ata2.00: Features: NCQ-prio May 17 01:28:04.796344 kernel: ata1.00: configured for UDMA/133 May 17 01:28:04.796351 kernel: ata2.00: configured for UDMA/133 May 17 01:28:04.796357 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 May 17 01:28:04.796432 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd May 17 01:28:04.796539 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 May 17 01:28:04.992331 kernel: ata1.00: Enabling discard_zeroes_data May 17 01:28:04.992341 kernel: ata2.00: Enabling discard_zeroes_data May 17 01:28:04.992348 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) May 17 01:28:04.992412 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) May 17 01:28:04.992470 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) May 17 01:28:04.992524 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 17 01:28:04.992580 kernel: port_module: 9 callbacks suppressed May 17 01:28:04.992587 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged May 17 01:28:04.992637 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) May 17 01:28:04.992688 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks May 17 01:28:04.992742 kernel: sd 0:0:0:0: [sda] Write 
Protect is off May 17 01:28:04.992796 kernel: sd 1:0:0:0: [sdb] Write Protect is off May 17 01:28:04.992850 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 May 17 01:28:04.992904 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:05.010350 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 May 17 01:28:05.010418 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 01:28:05.010481 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 01:28:05.010556 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 01:28:05.010569 kernel: ata1.00: Enabling discard_zeroes_data May 17 01:28:05.010576 kernel: ata1.00: Enabling discard_zeroes_data May 17 01:28:05.010585 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 01:28:05.010657 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:05.010712 kernel: ata2.00: Enabling discard_zeroes_data May 17 01:28:05.010720 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 May 17 01:28:05.010774 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 01:28:05.010782 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:05.010832 kernel: GPT:9289727 != 937703087 May 17 01:28:05.010839 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:05.010888 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 01:28:05.010897 kernel: GPT:9289727 != 937703087 May 17 01:28:05.010904 kernel: GPT: Use GNU Parted to correct GPT errors. 
May 17 01:28:05.010910 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 May 17 01:28:05.010916 kernel: ata2.00: Enabling discard_zeroes_data May 17 01:28:05.010923 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk May 17 01:28:05.010979 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:05.028300 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 May 17 01:28:05.037330 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:05.113714 kernel: usbcore: registered new interface driver usbhid May 17 01:28:05.113728 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (526) May 17 01:28:05.113736 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:05.113796 kernel: usbhid: USB HID core driver May 17 01:28:05.113804 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 May 17 01:28:05.040452 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 17 01:28:05.173465 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 May 17 01:28:05.173413 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 17 01:28:05.196509 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
May 17 01:28:05.325424 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 May 17 01:28:05.325541 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:05.345833 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 May 17 01:28:05.345842 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 May 17 01:28:05.345915 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:05.292235 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 01:28:05.369396 kernel: ata2.00: Enabling discard_zeroes_data May 17 01:28:05.345774 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 17 01:28:05.414386 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 May 17 01:28:05.414399 kernel: ata2.00: Enabling discard_zeroes_data May 17 01:28:05.357984 systemd[1]: Starting disk-uuid.service... May 17 01:28:05.431386 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 May 17 01:28:05.431396 disk-uuid[692]: Primary Header is updated. May 17 01:28:05.431396 disk-uuid[692]: Secondary Entries is updated. May 17 01:28:05.431396 disk-uuid[692]: Secondary Header is updated. May 17 01:28:05.487357 kernel: ata2.00: Enabling discard_zeroes_data May 17 01:28:05.487368 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 May 17 01:28:06.458516 kernel: ata2.00: Enabling discard_zeroes_data May 17 01:28:06.478260 disk-uuid[693]: The operation has completed successfully. May 17 01:28:06.486506 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 May 17 01:28:06.517782 systemd[1]: disk-uuid.service: Deactivated successfully. 
May 17 01:28:06.618052 kernel: audit: type=1130 audit(1747445286.524:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:06.618069 kernel: audit: type=1131 audit(1747445286.524:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:06.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:06.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:06.517825 systemd[1]: Finished disk-uuid.service. May 17 01:28:06.648405 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 17 01:28:06.526004 systemd[1]: Starting verity-setup.service... May 17 01:28:06.679252 systemd[1]: Found device dev-mapper-usr.device. May 17 01:28:06.688324 systemd[1]: Mounting sysusr-usr.mount... May 17 01:28:06.702503 systemd[1]: Finished verity-setup.service. May 17 01:28:06.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:06.765301 kernel: audit: type=1130 audit(1747445286.716:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:06.794346 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 17 01:28:06.794311 systemd[1]: Mounted sysusr-usr.mount. 
May 17 01:28:06.801587 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 17 01:28:06.801982 systemd[1]: Starting ignition-setup.service... May 17 01:28:06.895151 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm May 17 01:28:06.895166 kernel: BTRFS info (device sdb6): using free space tree May 17 01:28:06.895177 kernel: BTRFS info (device sdb6): has skinny extents May 17 01:28:06.895184 kernel: BTRFS info (device sdb6): enabling ssd optimizations May 17 01:28:06.832737 systemd[1]: Starting parse-ip-for-networkd.service... May 17 01:28:06.903692 systemd[1]: Finished ignition-setup.service. May 17 01:28:06.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:06.921641 systemd[1]: Finished parse-ip-for-networkd.service. May 17 01:28:07.028209 kernel: audit: type=1130 audit(1747445286.920:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:07.028235 kernel: audit: type=1130 audit(1747445286.977:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:06.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:06.978954 systemd[1]: Starting ignition-fetch-offline.service... May 17 01:28:07.059779 kernel: audit: type=1334 audit(1747445287.035:24): prog-id=9 op=LOAD May 17 01:28:07.035000 audit: BPF prog-id=9 op=LOAD May 17 01:28:07.037230 systemd[1]: Starting systemd-networkd.service... 
May 17 01:28:07.074409 systemd-networkd[879]: lo: Link UP May 17 01:28:07.135479 kernel: audit: type=1130 audit(1747445287.083:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:07.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:07.074411 systemd-networkd[879]: lo: Gained carrier May 17 01:28:07.104994 ignition[868]: Ignition 2.14.0 May 17 01:28:07.074747 systemd-networkd[879]: Enumeration completed May 17 01:28:07.104998 ignition[868]: Stage: fetch-offline May 17 01:28:07.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:07.074821 systemd[1]: Started systemd-networkd.service. May 17 01:28:07.313360 kernel: audit: type=1130 audit(1747445287.176:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:07.313379 kernel: audit: type=1130 audit(1747445287.237:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:07.313387 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up May 17 01:28:07.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:28:07.105026 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 01:28:07.075620 systemd-networkd[879]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 01:28:07.364412 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready May 17 01:28:07.105039 ignition[868]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 May 17 01:28:07.084389 systemd[1]: Reached target network.target. May 17 01:28:07.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:07.113884 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:28:07.118062 unknown[868]: fetched base config from "system" May 17 01:28:07.395528 iscsid[900]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 17 01:28:07.395528 iscsid[900]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 17 01:28:07.395528 iscsid[900]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 17 01:28:07.395528 iscsid[900]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 17 01:28:07.395528 iscsid[900]: If using hardware iscsi like qla4xxx this message can be ignored. 
May 17 01:28:07.395528 iscsid[900]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 17 01:28:07.395528 iscsid[900]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 17 01:28:07.557510 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up May 17 01:28:07.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:07.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:07.113949 ignition[868]: parsed url from cmdline: "" May 17 01:28:07.118067 unknown[868]: fetched user config from "system" May 17 01:28:07.113951 ignition[868]: no config URL provided May 17 01:28:07.143998 systemd[1]: Starting iscsiuio.service... May 17 01:28:07.113953 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" May 17 01:28:07.158607 systemd[1]: Started iscsiuio.service. May 17 01:28:07.113974 ignition[868]: parsing config with SHA512: 24b031ad820e81d4d3c78c2e6db6c028c3c5b4d370bc6302fef770a9ce4ae8504113d4a4c11243a2ea49e5e182ffe1c082d1eb0782a4d66a143710afacc561ba May 17 01:28:07.177671 systemd[1]: Finished ignition-fetch-offline.service. May 17 01:28:07.118367 ignition[868]: fetch-offline: fetch-offline passed May 17 01:28:07.238639 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 01:28:07.118370 ignition[868]: POST message to Packet Timeline May 17 01:28:07.239140 systemd[1]: Starting ignition-kargs.service... 
May 17 01:28:07.118375 ignition[868]: POST Status error: resource requires networking May 17 01:28:07.314110 systemd-networkd[879]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 01:28:07.118434 ignition[868]: Ignition finished successfully May 17 01:28:07.327860 systemd[1]: Starting iscsid.service... May 17 01:28:07.317579 ignition[889]: Ignition 2.14.0 May 17 01:28:07.352523 systemd[1]: Started iscsid.service. May 17 01:28:07.317583 ignition[889]: Stage: kargs May 17 01:28:07.371822 systemd[1]: Starting dracut-initqueue.service... May 17 01:28:07.317643 ignition[889]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 01:28:07.385511 systemd[1]: Finished dracut-initqueue.service. May 17 01:28:07.317652 ignition[889]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 May 17 01:28:07.408492 systemd[1]: Reached target remote-fs-pre.target. May 17 01:28:07.320695 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:28:07.420575 systemd[1]: Reached target remote-cryptsetup.target. May 17 01:28:07.321246 ignition[889]: kargs: kargs passed May 17 01:28:07.463548 systemd[1]: Reached target remote-fs.target. May 17 01:28:07.321250 ignition[889]: POST message to Packet Timeline May 17 01:28:07.484998 systemd[1]: Starting dracut-pre-mount.service... May 17 01:28:07.321259 ignition[889]: GET https://metadata.packet.net/metadata: attempt #1 May 17 01:28:07.520556 systemd[1]: Finished dracut-pre-mount.service. May 17 01:28:07.324681 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34067->[::1]:53: read: connection refused May 17 01:28:07.527552 systemd-networkd[879]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 17 01:28:07.525034 ignition[889]: GET https://metadata.packet.net/metadata: attempt #2 May 17 01:28:07.555891 systemd-networkd[879]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 01:28:07.525426 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40704->[::1]:53: read: connection refused May 17 01:28:07.585020 systemd-networkd[879]: enp1s0f1np1: Link UP May 17 01:28:07.585193 systemd-networkd[879]: enp1s0f1np1: Gained carrier May 17 01:28:07.598679 systemd-networkd[879]: enp1s0f0np0: Link UP May 17 01:28:07.598924 systemd-networkd[879]: eno2: Link UP May 17 01:28:07.599146 systemd-networkd[879]: eno1: Link UP May 17 01:28:07.926111 ignition[889]: GET https://metadata.packet.net/metadata: attempt #3 May 17 01:28:07.927172 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39678->[::1]:53: read: connection refused May 17 01:28:08.368033 systemd-networkd[879]: enp1s0f0np0: Gained carrier May 17 01:28:08.376552 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready May 17 01:28:08.397525 systemd-networkd[879]: enp1s0f0np0: DHCPv4 address 145.40.90.133/31, gateway 145.40.90.132 acquired from 145.40.83.140 May 17 01:28:08.727699 ignition[889]: GET https://metadata.packet.net/metadata: attempt #4 May 17 01:28:08.729166 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39734->[::1]:53: read: connection refused May 17 01:28:08.856904 systemd-networkd[879]: enp1s0f1np1: Gained IPv6LL May 17 01:28:10.328910 systemd-networkd[879]: enp1s0f0np0: Gained IPv6LL May 17 01:28:10.330467 ignition[889]: GET https://metadata.packet.net/metadata: attempt #5 May 17 01:28:10.331743 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on 
[::1]:53: read udp [::1]:46701->[::1]:53: read: connection refused May 17 01:28:13.532276 ignition[889]: GET https://metadata.packet.net/metadata: attempt #6 May 17 01:28:14.563518 ignition[889]: GET result: OK May 17 01:28:14.886379 ignition[889]: Ignition finished successfully May 17 01:28:14.888980 systemd[1]: Finished ignition-kargs.service. May 17 01:28:14.978150 kernel: kauditd_printk_skb: 3 callbacks suppressed May 17 01:28:14.978166 kernel: audit: type=1130 audit(1747445294.900:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:14.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:14.910524 ignition[919]: Ignition 2.14.0 May 17 01:28:14.903656 systemd[1]: Starting ignition-disks.service... May 17 01:28:14.910528 ignition[919]: Stage: disks May 17 01:28:14.910585 ignition[919]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 01:28:14.910594 ignition[919]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 May 17 01:28:14.911998 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:28:14.913319 ignition[919]: disks: disks passed May 17 01:28:14.913322 ignition[919]: POST message to Packet Timeline May 17 01:28:14.913333 ignition[919]: GET https://metadata.packet.net/metadata: attempt #1 May 17 01:28:15.777029 ignition[919]: GET result: OK May 17 01:28:16.105484 ignition[919]: Ignition finished successfully May 17 01:28:16.106838 systemd[1]: Finished ignition-disks.service. 
May 17 01:28:16.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:16.120777 systemd[1]: Reached target initrd-root-device.target.
May 17 01:28:16.184579 kernel: audit: type=1130 audit(1747445296.119:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:16.184545 systemd[1]: Reached target local-fs-pre.target.
May 17 01:28:16.198527 systemd[1]: Reached target local-fs.target.
May 17 01:28:16.198561 systemd[1]: Reached target sysinit.target.
May 17 01:28:16.222519 systemd[1]: Reached target basic.target.
May 17 01:28:16.237260 systemd[1]: Starting systemd-fsck-root.service...
May 17 01:28:16.257249 systemd-fsck[935]: ROOT: clean, 619/553520 files, 56023/553472 blocks
May 17 01:28:16.269720 systemd[1]: Finished systemd-fsck-root.service.
May 17 01:28:16.360729 kernel: audit: type=1130 audit(1747445296.277:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:16.360744 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 17 01:28:16.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:16.284269 systemd[1]: Mounting sysroot.mount...
May 17 01:28:16.368961 systemd[1]: Mounted sysroot.mount.
May 17 01:28:16.383568 systemd[1]: Reached target initrd-root-fs.target.
May 17 01:28:16.405299 systemd[1]: Mounting sysroot-usr.mount...
May 17 01:28:16.413171 systemd[1]: Starting flatcar-metadata-hostname.service...
May 17 01:28:16.434001 systemd[1]: Starting flatcar-static-network.service...
May 17 01:28:16.434096 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 01:28:16.434129 systemd[1]: Reached target ignition-diskful.target.
May 17 01:28:16.458487 systemd[1]: Mounted sysroot-usr.mount.
May 17 01:28:16.481582 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 01:28:16.558414 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (946)
May 17 01:28:16.558433 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
May 17 01:28:16.494205 systemd[1]: Starting initrd-setup-root.service...
May 17 01:28:16.628312 kernel: BTRFS info (device sdb6): using free space tree
May 17 01:28:16.628328 kernel: BTRFS info (device sdb6): has skinny extents
May 17 01:28:16.628336 kernel: BTRFS info (device sdb6): enabling ssd optimizations
May 17 01:28:16.628347 initrd-setup-root[953]: cut: /sysroot/etc/passwd: No such file or directory
May 17 01:28:16.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:16.690432 coreos-metadata[943]: May 17 01:28:16.571 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 01:28:16.713550 kernel: audit: type=1130 audit(1747445296.636:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:16.564739 systemd[1]: Finished initrd-setup-root.service.
May 17 01:28:16.713614 coreos-metadata[942]: May 17 01:28:16.571 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 01:28:16.740393 initrd-setup-root[961]: cut: /sysroot/etc/group: No such file or directory
May 17 01:28:16.638592 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 01:28:16.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:16.795463 initrd-setup-root[969]: cut: /sysroot/etc/shadow: No such file or directory
May 17 01:28:16.829511 kernel: audit: type=1130 audit(1747445296.765:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:16.699916 systemd[1]: Starting ignition-mount.service...
May 17 01:28:16.836507 initrd-setup-root[977]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 01:28:16.727865 systemd[1]: Starting sysroot-boot.service...
May 17 01:28:16.853511 ignition[1018]: INFO : Ignition 2.14.0
May 17 01:28:16.853511 ignition[1018]: INFO : Stage: mount
May 17 01:28:16.853511 ignition[1018]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 01:28:16.853511 ignition[1018]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
May 17 01:28:16.853511 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:28:16.853511 ignition[1018]: INFO : mount: mount passed
May 17 01:28:16.853511 ignition[1018]: INFO : POST message to Packet Timeline
May 17 01:28:16.853511 ignition[1018]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 01:28:16.749039 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
May 17 01:28:16.749079 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
May 17 01:28:16.749690 systemd[1]: Finished sysroot-boot.service.
May 17 01:28:17.563502 coreos-metadata[943]: May 17 01:28:17.563 INFO Fetch successful
May 17 01:28:17.644107 systemd[1]: flatcar-static-network.service: Deactivated successfully.
May 17 01:28:17.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:17.644164 systemd[1]: Finished flatcar-static-network.service.
May 17 01:28:17.767005 kernel: audit: type=1130 audit(1747445297.652:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:17.767020 kernel: audit: type=1131 audit(1747445297.652:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:17.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:17.743099 systemd[1]: Finished flatcar-metadata-hostname.service.
May 17 01:28:17.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:17.811408 coreos-metadata[942]: May 17 01:28:17.709 INFO Fetch successful
May 17 01:28:17.811408 coreos-metadata[942]: May 17 01:28:17.742 INFO wrote hostname ci-3510.3.7-n-2b1b6103b5 to /sysroot/etc/hostname
May 17 01:28:17.861562 kernel: audit: type=1130 audit(1747445297.781:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:17.861589 ignition[1018]: INFO : GET result: OK
May 17 01:28:18.075284 ignition[1018]: INFO : Ignition finished successfully
May 17 01:28:18.076117 systemd[1]: Finished ignition-mount.service.
May 17 01:28:18.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:18.092468 systemd[1]: Starting ignition-files.service...
May 17 01:28:18.169397 kernel: audit: type=1130 audit(1747445298.090:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:18.164240 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 01:28:18.227882 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1031)
May 17 01:28:18.227898 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
May 17 01:28:18.227905 kernel: BTRFS info (device sdb6): using free space tree
May 17 01:28:18.251043 kernel: BTRFS info (device sdb6): has skinny extents
May 17 01:28:18.300310 kernel: BTRFS info (device sdb6): enabling ssd optimizations
May 17 01:28:18.301572 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 01:28:18.318457 ignition[1050]: INFO : Ignition 2.14.0
May 17 01:28:18.318457 ignition[1050]: INFO : Stage: files
May 17 01:28:18.318457 ignition[1050]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 01:28:18.318457 ignition[1050]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
May 17 01:28:18.318457 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:28:18.318457 ignition[1050]: DEBUG : files: compiled without relabeling support, skipping
May 17 01:28:18.321512 unknown[1050]: wrote ssh authorized keys file for user: core
May 17 01:28:18.394396 ignition[1050]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 01:28:18.394396 ignition[1050]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 01:28:18.394396 ignition[1050]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 01:28:18.394396 ignition[1050]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 01:28:18.394396 ignition[1050]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 01:28:18.394396 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 01:28:18.394396 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 17 01:28:18.394396 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 01:28:18.515394 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 01:28:18.533534 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 01:28:18.533534 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 17 01:28:19.803786 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 01:28:19.856120 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 01:28:19.856120 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 17 01:28:19.886413 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 17 01:28:19.886413 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 01:28:19.886413 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 01:28:19.886413 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 01:28:19.886413 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 01:28:19.886413 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 01:28:19.886413 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 01:28:19.886413 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 01:28:19.886413 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 01:28:19.886413 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 01:28:19.886413 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 01:28:19.886413 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
May 17 01:28:19.886413 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
May 17 01:28:19.886413 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2670666153"
May 17 01:28:19.886413 ignition[1050]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2670666153": device or resource busy
May 17 01:28:19.864071 systemd[1]: mnt-oem2670666153.mount: Deactivated successfully.
May 17 01:28:20.160587 ignition[1050]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2670666153", trying btrfs: device or resource busy
May 17 01:28:20.160587 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2670666153"
May 17 01:28:20.160587 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2670666153"
May 17 01:28:20.160587 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem2670666153"
May 17 01:28:20.160587 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem2670666153"
May 17 01:28:20.160587 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
May 17 01:28:20.160587 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 01:28:20.160587 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 17 01:28:20.458710 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
May 17 01:28:20.622633 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 01:28:20.622633 ignition[1050]: INFO : files: op(10): [started] processing unit "packet-phone-home.service"
May 17 01:28:20.622633 ignition[1050]: INFO : files: op(10): [finished] processing unit "packet-phone-home.service"
May 17 01:28:20.622633 ignition[1050]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service"
May 17 01:28:20.622633 ignition[1050]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service"
May 17 01:28:20.692620 ignition[1050]: INFO : files: op(12): [started] processing unit "prepare-helm.service"
May 17 01:28:20.692620 ignition[1050]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 01:28:20.692620 ignition[1050]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 01:28:20.692620 ignition[1050]: INFO : files: op(12): [finished] processing unit "prepare-helm.service"
May 17 01:28:20.692620 ignition[1050]: INFO : files: op(14): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 17 01:28:20.692620 ignition[1050]: INFO : files: op(14): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 17 01:28:20.692620 ignition[1050]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
May 17 01:28:20.692620 ignition[1050]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
May 17 01:28:20.692620 ignition[1050]: INFO : files: op(16): [started] setting preset to enabled for "packet-phone-home.service"
May 17 01:28:20.692620 ignition[1050]: INFO : files: op(16): [finished] setting preset to enabled for "packet-phone-home.service"
May 17 01:28:20.692620 ignition[1050]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 01:28:20.692620 ignition[1050]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 01:28:20.692620 ignition[1050]: INFO : files: files passed
May 17 01:28:20.692620 ignition[1050]: INFO : POST message to Packet Timeline
May 17 01:28:20.692620 ignition[1050]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 01:28:21.992832 ignition[1050]: INFO : GET result: OK
May 17 01:28:22.304547 ignition[1050]: INFO : Ignition finished successfully
May 17 01:28:22.306638 systemd[1]: Finished ignition-files.service.
May 17 01:28:22.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:22.327482 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 17 01:28:22.398574 kernel: audit: type=1130 audit(1747445302.320:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:22.388560 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 17 01:28:22.422544 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 01:28:22.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:22.388944 systemd[1]: Starting ignition-quench.service...
May 17 01:28:22.613656 kernel: audit: type=1130 audit(1747445302.431:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:22.613672 kernel: audit: type=1130 audit(1747445302.499:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:22.613680 kernel: audit: type=1131 audit(1747445302.499:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:22.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:22.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:22.405772 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 17 01:28:22.432774 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 01:28:22.432839 systemd[1]: Finished ignition-quench.service.
May 17 01:28:22.768565 kernel: audit: type=1130 audit(1747445302.653:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:22.768580 kernel: audit: type=1131 audit(1747445302.653:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:22.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:22.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:22.500572 systemd[1]: Reached target ignition-complete.target.
May 17 01:28:22.622966 systemd[1]: Starting initrd-parse-etc.service...
May 17 01:28:22.643267 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 01:28:22.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:22.643338 systemd[1]: Finished initrd-parse-etc.service.
May 17 01:28:22.888403 kernel: audit: type=1130 audit(1747445302.816:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:22.674733 systemd[1]: Reached target initrd-fs.target.
May 17 01:28:22.777543 systemd[1]: Reached target initrd.target.
May 17 01:28:22.777602 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 17 01:28:22.777963 systemd[1]: Starting dracut-pre-pivot.service...
May 17 01:28:22.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:22.799673 systemd[1]: Finished dracut-pre-pivot.service.
May 17 01:28:23.019548 kernel: audit: type=1131 audit(1747445302.944:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:22.817898 systemd[1]: Starting initrd-cleanup.service...
May 17 01:28:22.884279 systemd[1]: Stopped target nss-lookup.target.
May 17 01:28:22.897565 systemd[1]: Stopped target remote-cryptsetup.target.
May 17 01:28:22.913559 systemd[1]: Stopped target timers.target.
May 17 01:28:22.920584 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 01:28:22.920652 systemd[1]: Stopped dracut-pre-pivot.service.
May 17 01:28:22.945736 systemd[1]: Stopped target initrd.target.
May 17 01:28:23.011628 systemd[1]: Stopped target basic.target.
May 17 01:28:23.026566 systemd[1]: Stopped target ignition-complete.target.
May 17 01:28:23.042699 systemd[1]: Stopped target ignition-diskful.target.
May 17 01:28:23.058586 systemd[1]: Stopped target initrd-root-device.target.
May 17 01:28:23.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:23.076645 systemd[1]: Stopped target remote-fs.target.
May 17 01:28:23.275513 kernel: audit: type=1131 audit(1747445303.189:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:23.091765 systemd[1]: Stopped target remote-fs-pre.target.
May 17 01:28:23.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:23.110920 systemd[1]: Stopped target sysinit.target.
May 17 01:28:23.351519 kernel: audit: type=1131 audit(1747445303.274:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:23.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:23.126908 systemd[1]: Stopped target local-fs.target.
May 17 01:28:23.141879 systemd[1]: Stopped target local-fs-pre.target.
May 17 01:28:23.156891 systemd[1]: Stopped target swap.target.
May 17 01:28:23.172792 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 01:28:23.173155 systemd[1]: Stopped dracut-pre-mount.service.
May 17 01:28:23.191119 systemd[1]: Stopped target cryptsetup.target.
May 17 01:28:23.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:23.268523 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 01:28:23.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:23.268596 systemd[1]: Stopped dracut-initqueue.service.
May 17 01:28:23.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:23.275634 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 01:28:23.493549 ignition[1099]: INFO : Ignition 2.14.0
May 17 01:28:23.493549 ignition[1099]: INFO : Stage: umount
May 17 01:28:23.493549 ignition[1099]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 01:28:23.493549 ignition[1099]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
May 17 01:28:23.493549 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:28:23.493549 ignition[1099]: INFO : umount: umount passed
May 17 01:28:23.493549 ignition[1099]: INFO : POST message to Packet Timeline
May 17 01:28:23.493549 ignition[1099]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 01:28:23.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:23.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:23.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:23.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:23.275692 systemd[1]: Stopped ignition-fetch-offline.service.
May 17 01:28:23.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:23.344653 systemd[1]: Stopped target paths.target.
May 17 01:28:23.358606 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 01:28:23.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:23.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:28:23.362533 systemd[1]: Stopped systemd-ask-password-console.path.
May 17 01:28:23.374621 systemd[1]: Stopped target slices.target.
May 17 01:28:23.388534 systemd[1]: Stopped target sockets.target.
May 17 01:28:23.404638 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 01:28:23.404726 systemd[1]: Closed iscsid.socket.
May 17 01:28:23.419753 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 01:28:23.419951 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 17 01:28:23.437000 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 01:28:23.437385 systemd[1]: Stopped ignition-files.service.
May 17 01:28:23.452995 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 01:28:23.453388 systemd[1]: Stopped flatcar-metadata-hostname.service.
May 17 01:28:23.471063 systemd[1]: Stopping ignition-mount.service...
May 17 01:28:23.483564 systemd[1]: Stopping iscsiuio.service...
May 17 01:28:23.500466 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 01:28:23.500564 systemd[1]: Stopped kmod-static-nodes.service.
May 17 01:28:23.508185 systemd[1]: Stopping sysroot-boot.service...
May 17 01:28:23.536542 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 01:28:23.536796 systemd[1]: Stopped systemd-udev-trigger.service.
May 17 01:28:23.566043 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 01:28:23.566450 systemd[1]: Stopped dracut-pre-trigger.service.
May 17 01:28:23.594361 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 01:28:23.596199 systemd[1]: iscsiuio.service: Deactivated successfully.
May 17 01:28:23.596542 systemd[1]: Stopped iscsiuio.service.
May 17 01:28:23.603008 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 01:28:23.603227 systemd[1]: Stopped sysroot-boot.service.
May 17 01:28:23.624932 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 01:28:23.625107 systemd[1]: Closed iscsiuio.socket.
May 17 01:28:23.639201 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 01:28:23.639445 systemd[1]: Finished initrd-cleanup.service.
May 17 01:28:24.431276 ignition[1099]: INFO : GET result: OK
May 17 01:28:24.778463 ignition[1099]: INFO : Ignition finished successfully
May 17 01:28:24.781147 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 01:28:24.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:24.781407 systemd[1]: Stopped ignition-mount.service. May 17 01:28:24.796814 systemd[1]: Stopped target network.target. May 17 01:28:24.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:24.812560 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 01:28:24.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:24.812723 systemd[1]: Stopped ignition-disks.service. May 17 01:28:24.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:24.827699 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 01:28:24.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:24.827843 systemd[1]: Stopped ignition-kargs.service. May 17 01:28:24.842716 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 01:28:24.842866 systemd[1]: Stopped ignition-setup.service. May 17 01:28:24.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:24.858721 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
May 17 01:28:24.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:24.938000 audit: BPF prog-id=6 op=UNLOAD May 17 01:28:24.858867 systemd[1]: Stopped initrd-setup-root.service. May 17 01:28:24.874004 systemd[1]: Stopping systemd-networkd.service... May 17 01:28:24.881496 systemd-networkd[879]: enp1s0f1np1: DHCPv6 lease lost May 17 01:28:24.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:24.888820 systemd[1]: Stopping systemd-resolved.service... May 17 01:28:25.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:24.890538 systemd-networkd[879]: enp1s0f0np0: DHCPv6 lease lost May 17 01:28:25.009000 audit: BPF prog-id=9 op=UNLOAD May 17 01:28:25.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:24.904156 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 01:28:24.904453 systemd[1]: Stopped systemd-resolved.service. May 17 01:28:24.921927 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 01:28:25.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:24.922238 systemd[1]: Stopped systemd-networkd.service. May 17 01:28:24.937972 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
May 17 01:28:24.938059 systemd[1]: Closed systemd-networkd.socket. May 17 01:28:25.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:24.956978 systemd[1]: Stopping network-cleanup.service... May 17 01:28:25.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:24.969546 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 01:28:25.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:24.969698 systemd[1]: Stopped parse-ip-for-networkd.service. May 17 01:28:24.986658 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 01:28:25.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:24.986781 systemd[1]: Stopped systemd-sysctl.service. May 17 01:28:25.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:25.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:25.002996 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 01:28:25.003155 systemd[1]: Stopped systemd-modules-load.service. 
May 17 01:28:25.018889 systemd[1]: Stopping systemd-udevd.service... May 17 01:28:25.038306 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 01:28:25.039746 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 01:28:25.039806 systemd[1]: Stopped systemd-udevd.service. May 17 01:28:25.056823 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 01:28:25.056853 systemd[1]: Closed systemd-udevd-control.socket. May 17 01:28:25.072483 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 01:28:25.072512 systemd[1]: Closed systemd-udevd-kernel.socket. May 17 01:28:25.088488 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 01:28:25.088561 systemd[1]: Stopped dracut-pre-udev.service. May 17 01:28:25.104713 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 01:28:25.104845 systemd[1]: Stopped dracut-cmdline.service. May 17 01:28:25.119504 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 01:28:25.119527 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 01:28:25.135044 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 01:28:25.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:25.152378 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 01:28:25.152433 systemd[1]: Stopped systemd-vconsole-setup.service. May 17 01:28:25.168990 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 01:28:25.169116 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 01:28:25.322181 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 01:28:25.322430 systemd[1]: Stopped network-cleanup.service. May 17 01:28:25.431315 systemd-journald[267]: Received SIGTERM from PID 1 (n/a). 
May 17 01:28:25.431342 iscsid[900]: iscsid shutting down. May 17 01:28:25.335831 systemd[1]: Reached target initrd-switch-root.target. May 17 01:28:25.354042 systemd[1]: Starting initrd-switch-root.service... May 17 01:28:25.383844 systemd[1]: Switching root. May 17 01:28:25.431481 systemd-journald[267]: Journal stopped May 17 01:28:29.404370 kernel: SELinux: Class mctp_socket not defined in policy. May 17 01:28:29.404383 kernel: SELinux: Class anon_inode not defined in policy. May 17 01:28:29.404392 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 01:28:29.404397 kernel: SELinux: policy capability network_peer_controls=1 May 17 01:28:29.404402 kernel: SELinux: policy capability open_perms=1 May 17 01:28:29.404407 kernel: SELinux: policy capability extended_socket_class=1 May 17 01:28:29.404413 kernel: SELinux: policy capability always_check_network=0 May 17 01:28:29.404419 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 01:28:29.404424 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 01:28:29.404430 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 01:28:29.404435 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 01:28:29.404441 systemd[1]: Successfully loaded SELinux policy in 297.525ms. May 17 01:28:29.404448 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.865ms. May 17 01:28:29.404455 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 01:28:29.404463 systemd[1]: Detected architecture x86-64. May 17 01:28:29.404469 systemd[1]: Detected first boot. May 17 01:28:29.404474 systemd[1]: Hostname set to . 
May 17 01:28:29.404481 systemd[1]: Initializing machine ID from random generator. May 17 01:28:29.404487 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 17 01:28:29.404493 systemd[1]: Populated /etc with preset unit settings. May 17 01:28:29.404498 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 01:28:29.404506 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 01:28:29.404512 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 01:28:29.404518 kernel: kauditd_printk_skb: 48 callbacks suppressed May 17 01:28:29.404524 kernel: audit: type=1334 audit(1747445307.676:91): prog-id=12 op=LOAD May 17 01:28:29.404530 kernel: audit: type=1334 audit(1747445307.676:92): prog-id=3 op=UNLOAD May 17 01:28:29.404535 kernel: audit: type=1334 audit(1747445307.721:93): prog-id=13 op=LOAD May 17 01:28:29.404540 kernel: audit: type=1334 audit(1747445307.765:94): prog-id=14 op=LOAD May 17 01:28:29.404547 systemd[1]: iscsid.service: Deactivated successfully. May 17 01:28:29.404552 kernel: audit: type=1334 audit(1747445307.765:95): prog-id=4 op=UNLOAD May 17 01:28:29.404558 kernel: audit: type=1334 audit(1747445307.765:96): prog-id=5 op=UNLOAD May 17 01:28:29.404564 systemd[1]: Stopped iscsid.service. May 17 01:28:29.404570 kernel: audit: type=1131 audit(1747445307.766:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:28:29.404575 kernel: audit: type=1131 audit(1747445307.927:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.404581 kernel: audit: type=1334 audit(1747445307.977:99): prog-id=12 op=UNLOAD May 17 01:28:29.404586 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 01:28:29.404594 systemd[1]: Stopped initrd-switch-root.service. May 17 01:28:29.404600 kernel: audit: type=1130 audit(1747445308.046:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.404605 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 01:28:29.404611 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 01:28:29.404619 systemd[1]: Created slice system-addon\x2drun.slice. May 17 01:28:29.404625 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. May 17 01:28:29.404632 systemd[1]: Created slice system-getty.slice. May 17 01:28:29.404638 systemd[1]: Created slice system-modprobe.slice. May 17 01:28:29.404645 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 01:28:29.404651 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 01:28:29.404657 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 01:28:29.404663 systemd[1]: Created slice user.slice. May 17 01:28:29.404669 systemd[1]: Started systemd-ask-password-console.path. May 17 01:28:29.404675 systemd[1]: Started systemd-ask-password-wall.path. May 17 01:28:29.404681 systemd[1]: Set up automount boot.automount. May 17 01:28:29.404688 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 01:28:29.404695 systemd[1]: Stopped target initrd-switch-root.target. 
May 17 01:28:29.404701 systemd[1]: Stopped target initrd-fs.target. May 17 01:28:29.404707 systemd[1]: Stopped target initrd-root-fs.target. May 17 01:28:29.404713 systemd[1]: Reached target integritysetup.target. May 17 01:28:29.404719 systemd[1]: Reached target remote-cryptsetup.target. May 17 01:28:29.404725 systemd[1]: Reached target remote-fs.target. May 17 01:28:29.404732 systemd[1]: Reached target slices.target. May 17 01:28:29.404738 systemd[1]: Reached target swap.target. May 17 01:28:29.404745 systemd[1]: Reached target torcx.target. May 17 01:28:29.404751 systemd[1]: Reached target veritysetup.target. May 17 01:28:29.404757 systemd[1]: Listening on systemd-coredump.socket. May 17 01:28:29.404764 systemd[1]: Listening on systemd-initctl.socket. May 17 01:28:29.404771 systemd[1]: Listening on systemd-networkd.socket. May 17 01:28:29.404778 systemd[1]: Listening on systemd-udevd-control.socket. May 17 01:28:29.404784 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 01:28:29.404790 systemd[1]: Listening on systemd-userdbd.socket. May 17 01:28:29.404796 systemd[1]: Mounting dev-hugepages.mount... May 17 01:28:29.404803 systemd[1]: Mounting dev-mqueue.mount... May 17 01:28:29.404809 systemd[1]: Mounting media.mount... May 17 01:28:29.404816 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:28:29.404822 systemd[1]: Mounting sys-kernel-debug.mount... May 17 01:28:29.404829 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 01:28:29.404836 systemd[1]: Mounting tmp.mount... May 17 01:28:29.404842 systemd[1]: Starting flatcar-tmpfiles.service... May 17 01:28:29.404848 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 01:28:29.404855 systemd[1]: Starting kmod-static-nodes.service... May 17 01:28:29.404861 systemd[1]: Starting modprobe@configfs.service... May 17 01:28:29.404867 systemd[1]: Starting modprobe@dm_mod.service... 
May 17 01:28:29.404873 systemd[1]: Starting modprobe@drm.service... May 17 01:28:29.404879 systemd[1]: Starting modprobe@efi_pstore.service... May 17 01:28:29.404887 systemd[1]: Starting modprobe@fuse.service... May 17 01:28:29.404893 kernel: fuse: init (API version 7.34) May 17 01:28:29.404899 systemd[1]: Starting modprobe@loop.service... May 17 01:28:29.404905 kernel: loop: module loaded May 17 01:28:29.404911 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 01:28:29.404917 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 01:28:29.404924 systemd[1]: Stopped systemd-fsck-root.service. May 17 01:28:29.404930 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 01:28:29.404936 systemd[1]: Stopped systemd-fsck-usr.service. May 17 01:28:29.404943 systemd[1]: Stopped systemd-journald.service. May 17 01:28:29.404950 systemd[1]: systemd-journald.service: Consumed 1.143s CPU time. May 17 01:28:29.404956 systemd[1]: Starting systemd-journald.service... May 17 01:28:29.404963 systemd[1]: Starting systemd-modules-load.service... May 17 01:28:29.404971 systemd-journald[1251]: Journal started May 17 01:28:29.404996 systemd-journald[1251]: Runtime Journal (/run/log/journal/5d2cc95817984887b756255e50eb416f) is 8.0M, max 640.0M, 632.0M free. 
May 17 01:28:25.780000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 01:28:26.051000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 01:28:26.053000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 01:28:26.053000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 01:28:26.053000 audit: BPF prog-id=10 op=LOAD May 17 01:28:26.053000 audit: BPF prog-id=10 op=UNLOAD May 17 01:28:26.053000 audit: BPF prog-id=11 op=LOAD May 17 01:28:26.053000 audit: BPF prog-id=11 op=UNLOAD May 17 01:28:26.119000 audit[1139]: AVC avc: denied { associate } for pid=1139 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 01:28:26.119000 audit[1139]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001d9892 a1=c00015adf8 a2=c0001630c0 a3=32 items=0 ppid=1122 pid=1139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:28:26.119000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 01:28:26.145000 audit[1139]: AVC 
avc: denied { associate } for pid=1139 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 17 01:28:26.145000 audit[1139]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001d9969 a2=1ed a3=0 items=2 ppid=1122 pid=1139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:28:26.145000 audit: CWD cwd="/" May 17 01:28:26.145000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:26.145000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:26.145000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 01:28:27.676000 audit: BPF prog-id=12 op=LOAD May 17 01:28:27.676000 audit: BPF prog-id=3 op=UNLOAD May 17 01:28:27.721000 audit: BPF prog-id=13 op=LOAD May 17 01:28:27.765000 audit: BPF prog-id=14 op=LOAD May 17 01:28:27.765000 audit: BPF prog-id=4 op=UNLOAD May 17 01:28:27.765000 audit: BPF prog-id=5 op=UNLOAD May 17 01:28:27.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:28:27.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:27.977000 audit: BPF prog-id=12 op=UNLOAD May 17 01:28:28.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:28.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:28:29.376000 audit: BPF prog-id=15 op=LOAD May 17 01:28:29.376000 audit: BPF prog-id=16 op=LOAD May 17 01:28:29.377000 audit: BPF prog-id=17 op=LOAD May 17 01:28:29.377000 audit: BPF prog-id=13 op=UNLOAD May 17 01:28:29.377000 audit: BPF prog-id=14 op=UNLOAD May 17 01:28:29.400000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 01:28:29.400000 audit[1251]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffe31b1e860 a2=4000 a3=7ffe31b1e8fc items=0 ppid=1 pid=1251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:28:29.400000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 01:28:26.118372 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 01:28:27.675762 systemd[1]: Queued start job for default target multi-user.target. May 17 01:28:26.118881 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 01:28:27.675769 systemd[1]: Unnecessary job was removed for dev-sdb6.device. May 17 01:28:26.118896 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 01:28:27.767581 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 17 01:28:26.118918 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:26Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 17 01:28:27.767717 systemd[1]: systemd-journald.service: Consumed 1.143s CPU time. May 17 01:28:26.118925 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:26Z" level=debug msg="skipped missing lower profile" missing profile=oem May 17 01:28:26.118944 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:26Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 17 01:28:26.118953 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:26Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 17 01:28:26.119087 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:26Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 17 01:28:26.119116 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 01:28:26.119126 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 01:28:26.120528 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 17 01:28:26.120552 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 17 
01:28:26.120565 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 17 01:28:26.120589 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 17 01:28:26.120603 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 17 01:28:26.120613 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 17 01:28:27.322966 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:27Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 01:28:27.323110 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:27Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 01:28:27.323170 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:27Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 01:28:27.323267 /usr/lib/systemd/system-generators/torcx-generator[1139]: 
time="2025-05-17T01:28:27Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 01:28:27.323300 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:27Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 17 01:28:27.323335 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2025-05-17T01:28:27Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 17 01:28:29.434518 systemd[1]: Starting systemd-network-generator.service... May 17 01:28:29.456358 systemd[1]: Starting systemd-remount-fs.service... May 17 01:28:29.478343 systemd[1]: Starting systemd-udev-trigger.service... May 17 01:28:29.511000 systemd[1]: verity-setup.service: Deactivated successfully. May 17 01:28:29.511040 systemd[1]: Stopped verity-setup.service. May 17 01:28:29.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.545332 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:28:29.560492 systemd[1]: Started systemd-journald.service. May 17 01:28:29.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:28:29.568841 systemd[1]: Mounted dev-hugepages.mount. May 17 01:28:29.576571 systemd[1]: Mounted dev-mqueue.mount. May 17 01:28:29.583556 systemd[1]: Mounted media.mount. May 17 01:28:29.590554 systemd[1]: Mounted sys-kernel-debug.mount. May 17 01:28:29.599540 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 01:28:29.608533 systemd[1]: Mounted tmp.mount. May 17 01:28:29.615604 systemd[1]: Finished flatcar-tmpfiles.service. May 17 01:28:29.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.623630 systemd[1]: Finished kmod-static-nodes.service. May 17 01:28:29.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.631648 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 01:28:29.631754 systemd[1]: Finished modprobe@configfs.service. May 17 01:28:29.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.640772 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:28:29.640907 systemd[1]: Finished modprobe@dm_mod.service. 
May 17 01:28:29.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.649871 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 01:28:29.650049 systemd[1]: Finished modprobe@drm.service. May 17 01:28:29.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.659095 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:28:29.659396 systemd[1]: Finished modprobe@efi_pstore.service. May 17 01:28:29.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.668135 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 01:28:29.668473 systemd[1]: Finished modprobe@fuse.service. 
May 17 01:28:29.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.677122 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:28:29.677607 systemd[1]: Finished modprobe@loop.service. May 17 01:28:29.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.686148 systemd[1]: Finished systemd-modules-load.service. May 17 01:28:29.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.695186 systemd[1]: Finished systemd-network-generator.service. May 17 01:28:29.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.704104 systemd[1]: Finished systemd-remount-fs.service. 
May 17 01:28:29.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.713102 systemd[1]: Finished systemd-udev-trigger.service. May 17 01:28:29.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.722631 systemd[1]: Reached target network-pre.target. May 17 01:28:29.734226 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 01:28:29.745090 systemd[1]: Mounting sys-kernel-config.mount... May 17 01:28:29.752586 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 01:28:29.755899 systemd[1]: Starting systemd-hwdb-update.service... May 17 01:28:29.763089 systemd[1]: Starting systemd-journal-flush.service... May 17 01:28:29.766614 systemd-journald[1251]: Time spent on flushing to /var/log/journal/5d2cc95817984887b756255e50eb416f is 15.390ms for 1605 entries. May 17 01:28:29.766614 systemd-journald[1251]: System Journal (/var/log/journal/5d2cc95817984887b756255e50eb416f) is 8.0M, max 195.6M, 187.6M free. May 17 01:28:29.810088 systemd-journald[1251]: Received client request to flush runtime journal. May 17 01:28:29.779423 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 01:28:29.779925 systemd[1]: Starting systemd-random-seed.service... May 17 01:28:29.790425 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 01:28:29.790933 systemd[1]: Starting systemd-sysctl.service... May 17 01:28:29.797918 systemd[1]: Starting systemd-sysusers.service... 
May 17 01:28:29.804916 systemd[1]: Starting systemd-udev-settle.service... May 17 01:28:29.812592 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 01:28:29.820452 systemd[1]: Mounted sys-kernel-config.mount. May 17 01:28:29.828510 systemd[1]: Finished systemd-journal-flush.service. May 17 01:28:29.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.836540 systemd[1]: Finished systemd-random-seed.service. May 17 01:28:29.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.844527 systemd[1]: Finished systemd-sysctl.service. May 17 01:28:29.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.852516 systemd[1]: Finished systemd-sysusers.service. May 17 01:28:29.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:29.861490 systemd[1]: Reached target first-boot-complete.target. May 17 01:28:29.869637 udevadm[1267]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 01:28:30.062063 systemd[1]: Finished systemd-hwdb-update.service. May 17 01:28:30.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 17 01:28:30.070000 audit: BPF prog-id=18 op=LOAD May 17 01:28:30.070000 audit: BPF prog-id=19 op=LOAD May 17 01:28:30.070000 audit: BPF prog-id=7 op=UNLOAD May 17 01:28:30.070000 audit: BPF prog-id=8 op=UNLOAD May 17 01:28:30.072654 systemd[1]: Starting systemd-udevd.service... May 17 01:28:30.085043 systemd-udevd[1268]: Using default interface naming scheme 'v252'. May 17 01:28:30.102953 systemd[1]: Started systemd-udevd.service. May 17 01:28:30.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:30.114354 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. May 17 01:28:30.113000 audit: BPF prog-id=20 op=LOAD May 17 01:28:30.115708 systemd[1]: Starting systemd-networkd.service... May 17 01:28:30.134000 audit: BPF prog-id=21 op=LOAD May 17 01:28:30.148896 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 May 17 01:28:30.148954 kernel: ACPI: button: Sleep Button [SLPB] May 17 01:28:30.148972 kernel: mousedev: PS/2 mouse device common for all mice May 17 01:28:30.148991 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:30.245078 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 17 01:28:30.245097 kernel: ACPI: button: Power Button [PWRF] May 17 01:28:30.245111 kernel: IPMI message handler: version 39.2 May 17 01:28:30.245121 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:30.297161 kernel: ipmi device interface May 17 01:28:30.297190 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:30.297316 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:30.162000 audit: BPF prog-id=22 op=LOAD May 17 01:28:30.195000 audit: 
BPF prog-id=23 op=LOAD May 17 01:28:30.145000 audit[1336]: AVC avc: denied { confidentiality } for pid=1336 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 01:28:30.145000 audit[1336]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ff652e36c0 a1=4d9cc a2=7fb2588cabc5 a3=5 items=42 ppid=1268 pid=1336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:28:30.145000 audit: CWD cwd="/" May 17 01:28:30.145000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=1 name=(null) inode=19600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=2 name=(null) inode=19600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=3 name=(null) inode=19601 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=4 name=(null) inode=19600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=5 name=(null) inode=19602 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=6 
name=(null) inode=19600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=7 name=(null) inode=19603 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=8 name=(null) inode=19603 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=9 name=(null) inode=19604 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=10 name=(null) inode=19603 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=11 name=(null) inode=19605 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=12 name=(null) inode=19603 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=13 name=(null) inode=19606 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=14 name=(null) inode=19603 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=15 name=(null) inode=19607 dev=00:0b mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=16 name=(null) inode=19603 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=17 name=(null) inode=19608 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=18 name=(null) inode=19600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.298321 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface May 17 01:28:30.330942 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface May 17 01:28:30.145000 audit: PATH item=19 name=(null) inode=19609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=20 name=(null) inode=19609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=21 name=(null) inode=19610 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=22 name=(null) inode=19609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=23 name=(null) inode=19611 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=24 name=(null) inode=19609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=25 name=(null) inode=19612 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=26 name=(null) inode=19609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=27 name=(null) inode=19613 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=28 name=(null) inode=19609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=29 name=(null) inode=19614 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=30 name=(null) inode=19600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=31 name=(null) inode=19615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=32 name=(null) inode=19615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=33 name=(null) inode=19616 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=34 name=(null) inode=19615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=35 name=(null) inode=19617 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=36 name=(null) inode=19615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=37 name=(null) inode=19618 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=38 name=(null) inode=19615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=39 name=(null) inode=19619 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=40 name=(null) inode=19615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 audit: PATH item=41 name=(null) inode=19620 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:28:30.145000 
audit: PROCTITLE proctitle="(udev-worker)" May 17 01:28:30.197162 systemd[1]: Starting systemd-userdbd.service... May 17 01:28:30.227589 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 01:28:30.347546 systemd[1]: Started systemd-userdbd.service. May 17 01:28:30.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:30.374310 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set May 17 01:28:30.422130 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt May 17 01:28:30.422244 kernel: ipmi_si: IPMI System Interface driver May 17 01:28:30.422260 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) May 17 01:28:30.422366 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS May 17 01:28:30.486409 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:30.504146 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 May 17 01:28:30.504165 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine May 17 01:28:30.504177 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:30.504248 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI May 17 01:28:30.593000 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 May 17 01:28:30.593087 kernel: iTCO_vendor_support: vendor-support=0 May 17 01:28:30.593101 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI May 17 01:28:30.593160 kernel: ipmi_si: Adding ACPI-specified kcs state machine May 17 01:28:30.593175 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:30.611608 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 May 17 01:28:30.652305 kernel: iTCO_wdt 
iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) May 17 01:28:30.704284 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. May 17 01:28:30.704371 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:30.782917 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) May 17 01:28:30.783008 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:30.783105 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) May 17 01:28:30.783177 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:30.784912 systemd-networkd[1323]: bond0: netdev ready May 17 01:28:30.787162 systemd-networkd[1323]: lo: Link UP May 17 01:28:30.787165 systemd-networkd[1323]: lo: Gained carrier May 17 01:28:30.787655 systemd-networkd[1323]: Enumeration completed May 17 01:28:30.787710 systemd[1]: Started systemd-networkd.service. May 17 01:28:30.787943 systemd-networkd[1323]: bond0: Configuring with /etc/systemd/network/05-bond0.network. May 17 01:28:30.793714 systemd-networkd[1323]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:97:f8:2d.network. May 17 01:28:30.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:28:30.830940 kernel: intel_rapl_common: Found RAPL domain package May 17 01:28:30.830979 kernel: intel_rapl_common: Found RAPL domain core May 17 01:28:30.830994 kernel: intel_rapl_common: Found RAPL domain dram May 17 01:28:30.848301 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:30.867867 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized May 17 01:28:30.909301 kernel: ipmi_ssif: IPMI SSIF Interface driver May 17 01:28:30.911563 systemd[1]: Finished systemd-udev-settle.service. May 17 01:28:30.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:30.919997 systemd[1]: Starting lvm2-activation-early.service... May 17 01:28:30.935370 lvm[1373]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 01:28:30.970698 systemd[1]: Finished lvm2-activation-early.service. May 17 01:28:30.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:30.979376 systemd[1]: Reached target cryptsetup.target. May 17 01:28:30.988907 systemd[1]: Starting lvm2-activation.service... May 17 01:28:30.991031 lvm[1374]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 01:28:31.022709 systemd[1]: Finished lvm2-activation.service. May 17 01:28:31.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:31.031375 systemd[1]: Reached target local-fs-pre.target. 
May 17 01:28:31.040332 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 01:28:31.040348 systemd[1]: Reached target local-fs.target. May 17 01:28:31.049331 systemd[1]: Reached target machines.target. May 17 01:28:31.058954 systemd[1]: Starting ldconfig.service... May 17 01:28:31.065911 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 01:28:31.065933 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 01:28:31.066460 systemd[1]: Starting systemd-boot-update.service... May 17 01:28:31.074760 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 01:28:31.085891 systemd[1]: Starting systemd-machine-id-commit.service... May 17 01:28:31.086588 systemd[1]: Starting systemd-sysext.service... May 17 01:28:31.086863 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1376 (bootctl) May 17 01:28:31.087497 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 01:28:31.097302 systemd[1]: Unmounting usr-share-oem.mount... May 17 01:28:31.107455 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 01:28:31.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:31.107678 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 01:28:31.107788 systemd[1]: Unmounted usr-share-oem.mount. 
May 17 01:28:31.142327 kernel: loop0: detected capacity change from 0 to 224512 May 17 01:28:31.284376 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up May 17 01:28:31.308301 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link May 17 01:28:31.310194 systemd-networkd[1323]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:97:f8:2c.network. May 17 01:28:31.333303 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 17 01:28:31.379850 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 01:28:31.380327 systemd[1]: Finished systemd-machine-id-commit.service. May 17 01:28:31.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:31.407342 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 01:28:31.419344 systemd-fsck[1385]: fsck.fat 4.2 (2021-01-31) May 17 01:28:31.419344 systemd-fsck[1385]: /dev/sdb1: 790 files, 120726/258078 clusters May 17 01:28:31.420106 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 01:28:31.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:31.430179 systemd[1]: Mounting boot.mount... May 17 01:28:31.436160 systemd[1]: Mounted boot.mount. May 17 01:28:31.455338 kernel: loop1: detected capacity change from 0 to 224512 May 17 01:28:31.455383 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 17 01:28:31.460435 systemd[1]: Finished systemd-boot-update.service. May 17 01:28:31.470914 (sd-sysext)[1390]: Using extensions 'kubernetes'. 
May 17 01:28:31.471093 (sd-sysext)[1390]: Merged extensions into '/usr'. May 17 01:28:31.475340 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up May 17 01:28:31.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:31.516348 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link May 17 01:28:31.516380 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready May 17 01:28:31.517328 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:28:31.518026 systemd[1]: Mounting usr-share-oem.mount... May 17 01:28:31.534981 systemd-networkd[1323]: bond0: Link UP May 17 01:28:31.535175 systemd-networkd[1323]: enp1s0f1np1: Link UP May 17 01:28:31.535312 systemd-networkd[1323]: enp1s0f1np1: Gained carrier May 17 01:28:31.536365 systemd-networkd[1323]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:97:f8:2c.network. May 17 01:28:31.540498 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 01:28:31.541154 systemd[1]: Starting modprobe@dm_mod.service... May 17 01:28:31.553862 systemd[1]: Starting modprobe@efi_pstore.service... May 17 01:28:31.565299 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.579914 systemd[1]: Starting modprobe@loop.service... May 17 01:28:31.586300 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.600413 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 01:28:31.600481 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 17 01:28:31.600544 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:28:31.602114 systemd[1]: Mounted usr-share-oem.mount. May 17 01:28:31.607300 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.616123 ldconfig[1375]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 01:28:31.621573 systemd[1]: Finished ldconfig.service. May 17 01:28:31.627337 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:31.640575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:28:31.640640 systemd[1]: Finished modprobe@dm_mod.service. May 17 01:28:31.648346 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:31.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:31.662563 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:28:31.662625 systemd[1]: Finished modprobe@efi_pstore.service. 
May 17 01:28:31.668327 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:31.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:31.683564 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:28:31.683627 systemd[1]: Finished modprobe@loop.service. May 17 01:28:31.687300 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:31.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:31.702620 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 01:28:31.702681 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 01:28:31.703179 systemd[1]: Finished systemd-sysext.service. May 17 01:28:31.706299 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:28:31.722446 systemd[1]: Starting ensure-sysext.service... May 17 01:28:31.725337 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.739879 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 01:28:31.744344 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.752551 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 01:28:31.755258 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 01:28:31.759193 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 01:28:31.760540 systemd[1]: Reloading. May 17 01:28:31.764347 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.780926 /usr/lib/systemd/system-generators/torcx-generator[1417]: time="2025-05-17T01:28:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 01:28:31.780942 /usr/lib/systemd/system-generators/torcx-generator[1417]: time="2025-05-17T01:28:31Z" level=info msg="torcx already run" May 17 01:28:31.782301 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.801350 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.820308 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.839303 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.839587 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
May 17 01:28:31.839595 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 01:28:31.850488 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 01:28:31.857353 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.874352 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.875148 systemd-networkd[1323]: enp1s0f0np0: Link UP May 17 01:28:31.875341 systemd-networkd[1323]: bond0: Gained carrier May 17 01:28:31.875432 systemd-networkd[1323]: enp1s0f0np0: Gained carrier May 17 01:28:31.907182 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms May 17 01:28:31.907211 kernel: bond0: (slave enp1s0f1np1): link status definitely down, disabling slave May 17 01:28:31.907230 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 17 01:28:31.923000 audit: BPF prog-id=24 op=LOAD May 17 01:28:31.923000 audit: BPF prog-id=20 op=UNLOAD May 17 01:28:31.923000 audit: BPF prog-id=25 op=LOAD May 17 01:28:31.923000 audit: BPF prog-id=15 op=UNLOAD May 17 01:28:31.924000 audit: BPF prog-id=26 op=LOAD May 17 01:28:31.924000 audit: BPF prog-id=27 op=LOAD May 17 01:28:31.924000 audit: BPF prog-id=16 op=UNLOAD May 17 01:28:31.924000 audit: BPF prog-id=17 op=UNLOAD May 17 01:28:31.954920 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex May 17 01:28:31.954943 kernel: bond0: active interface up! 
May 17 01:28:31.953000 audit: BPF prog-id=28 op=LOAD May 17 01:28:31.953000 audit: BPF prog-id=29 op=LOAD May 17 01:28:31.953000 audit: BPF prog-id=18 op=UNLOAD May 17 01:28:31.953000 audit: BPF prog-id=19 op=UNLOAD May 17 01:28:31.954000 audit: BPF prog-id=30 op=LOAD May 17 01:28:31.954000 audit: BPF prog-id=21 op=UNLOAD May 17 01:28:31.954000 audit: BPF prog-id=31 op=LOAD May 17 01:28:31.954000 audit: BPF prog-id=32 op=LOAD May 17 01:28:31.954000 audit: BPF prog-id=22 op=UNLOAD May 17 01:28:31.954000 audit: BPF prog-id=23 op=UNLOAD May 17 01:28:31.956598 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 01:28:31.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:28:31.965536 systemd-networkd[1323]: enp1s0f1np1: Link DOWN May 17 01:28:31.965538 systemd-networkd[1323]: enp1s0f1np1: Lost carrier May 17 01:28:31.967181 systemd[1]: Starting audit-rules.service... May 17 01:28:31.974923 systemd[1]: Starting clean-ca-certificates.service... May 17 01:28:31.984024 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 01:28:31.983000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 01:28:31.983000 audit[1494]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff6561d510 a2=420 a3=0 items=0 ppid=1478 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:28:31.983000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 01:28:31.984893 augenrules[1494]: No rules May 17 01:28:31.994327 systemd[1]: Starting systemd-resolved.service... 
May 17 01:28:32.003354 systemd[1]: Starting systemd-timesyncd.service... May 17 01:28:32.011863 systemd[1]: Starting systemd-update-utmp.service... May 17 01:28:32.019648 systemd[1]: Finished audit-rules.service. May 17 01:28:32.027453 systemd[1]: Finished clean-ca-certificates.service. May 17 01:28:32.036445 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 01:28:32.050499 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 01:28:32.051203 systemd[1]: Starting modprobe@dm_mod.service... May 17 01:28:32.058981 systemd[1]: Starting modprobe@efi_pstore.service... May 17 01:28:32.066911 systemd[1]: Starting modprobe@loop.service... May 17 01:28:32.073406 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 01:28:32.073527 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 01:28:32.076521 systemd[1]: Starting systemd-update-done.service... May 17 01:28:32.081125 systemd-resolved[1500]: Positive Trust Anchors: May 17 01:28:32.081132 systemd-resolved[1500]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 01:28:32.081151 systemd-resolved[1500]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 01:28:32.083397 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 01:28:32.084338 systemd[1]: Started systemd-timesyncd.service. May 17 01:28:32.085063 systemd-resolved[1500]: Using system hostname 'ci-3510.3.7-n-2b1b6103b5'. May 17 01:28:32.092819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:28:32.092891 systemd[1]: Finished modprobe@dm_mod.service. May 17 01:28:32.106670 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:28:32.106737 systemd[1]: Finished modprobe@efi_pstore.service. May 17 01:28:32.109300 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up May 17 01:28:32.111774 systemd-networkd[1323]: enp1s0f1np1: Link UP May 17 01:28:32.113553 systemd-networkd[1323]: enp1s0f1np1: Gained carrier May 17 01:28:32.118589 systemd[1]: Started systemd-resolved.service. May 17 01:28:32.126666 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:28:32.126730 systemd[1]: Finished modprobe@loop.service. May 17 01:28:32.134683 systemd[1]: Finished systemd-update-done.service. May 17 01:28:32.143509 systemd[1]: Finished systemd-update-utmp.service. May 17 01:28:32.151911 systemd[1]: Reached target network.target. May 17 01:28:32.160438 systemd[1]: Reached target nss-lookup.target. 
May 17 01:28:32.176345 kernel: bond0: (slave enp1s0f1np1): link status up, enabling it in 200 ms May 17 01:28:32.176369 kernel: bond0: (slave enp1s0f1np1): invalid new link 3 on slave May 17 01:28:32.198433 systemd[1]: Reached target time-set.target. May 17 01:28:32.206413 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:28:32.206558 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 01:28:32.207241 systemd[1]: Starting modprobe@dm_mod.service... May 17 01:28:32.214923 systemd[1]: Starting modprobe@efi_pstore.service... May 17 01:28:32.221867 systemd[1]: Starting modprobe@loop.service... May 17 01:28:32.228390 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 01:28:32.228478 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 01:28:32.228579 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 01:28:32.228645 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:28:32.229223 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:28:32.229323 systemd[1]: Finished modprobe@dm_mod.service. May 17 01:28:32.237559 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:28:32.237619 systemd[1]: Finished modprobe@efi_pstore.service. May 17 01:28:32.245543 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:28:32.245601 systemd[1]: Finished modprobe@loop.service. May 17 01:28:32.254609 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 17 01:28:32.254759 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 01:28:32.255292 systemd[1]: Starting modprobe@dm_mod.service... May 17 01:28:32.262850 systemd[1]: Starting modprobe@drm.service... May 17 01:28:32.269866 systemd[1]: Starting modprobe@efi_pstore.service... May 17 01:28:32.276862 systemd[1]: Starting modprobe@loop.service... May 17 01:28:32.283419 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 01:28:32.283505 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 01:28:32.284106 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 01:28:32.292374 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 01:28:32.292436 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:28:32.293083 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:28:32.293165 systemd[1]: Finished modprobe@dm_mod.service. May 17 01:28:32.301551 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 01:28:32.301610 systemd[1]: Finished modprobe@drm.service. May 17 01:28:32.309548 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:28:32.309606 systemd[1]: Finished modprobe@efi_pstore.service. May 17 01:28:32.317545 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:28:32.317604 systemd[1]: Finished modprobe@loop.service. May 17 01:28:32.325622 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 01:28:32.325691 systemd[1]: Reached target sysinit.target. 
May 17 01:28:32.333416 systemd[1]: Started motdgen.path. May 17 01:28:32.340516 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 01:28:32.350459 systemd[1]: Started logrotate.timer. May 17 01:28:32.357406 systemd[1]: Started mdadm.timer. May 17 01:28:32.364369 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 01:28:32.372364 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 01:28:32.372385 systemd[1]: Reached target paths.target. May 17 01:28:32.379372 systemd[1]: Reached target timers.target. May 17 01:28:32.394626 systemd[1]: Listening on dbus.socket. May 17 01:28:32.403300 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex May 17 01:28:32.409823 systemd[1]: Starting docker.socket... May 17 01:28:32.417811 systemd[1]: Listening on sshd.socket. May 17 01:28:32.424432 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 01:28:32.424457 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 01:28:32.424752 systemd[1]: Finished ensure-sysext.service. May 17 01:28:32.433503 systemd[1]: Listening on docker.socket. May 17 01:28:32.440813 systemd[1]: Reached target sockets.target. May 17 01:28:32.449380 systemd[1]: Reached target basic.target. May 17 01:28:32.456402 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 01:28:32.456415 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 01:28:32.456873 systemd[1]: Starting containerd.service... May 17 01:28:32.463833 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 17 01:28:32.472918 systemd[1]: Starting coreos-metadata.service... 
May 17 01:28:32.480104 systemd[1]: Starting dbus.service... May 17 01:28:32.486070 systemd[1]: Starting enable-oem-cloudinit.service... May 17 01:28:32.490597 jq[1527]: false May 17 01:28:32.492763 coreos-metadata[1520]: May 17 01:28:32.492 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 01:28:32.493272 systemd[1]: Starting extend-filesystems.service... May 17 01:28:32.498471 dbus-daemon[1526]: [system] SELinux support is enabled May 17 01:28:32.500417 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 01:28:32.501126 extend-filesystems[1529]: Found loop1 May 17 01:28:32.509445 extend-filesystems[1529]: Found sda May 17 01:28:32.509445 extend-filesystems[1529]: Found sdb May 17 01:28:32.509445 extend-filesystems[1529]: Found sdb1 May 17 01:28:32.509445 extend-filesystems[1529]: Found sdb2 May 17 01:28:32.509445 extend-filesystems[1529]: Found sdb3 May 17 01:28:32.509445 extend-filesystems[1529]: Found usr May 17 01:28:32.509445 extend-filesystems[1529]: Found sdb4 May 17 01:28:32.509445 extend-filesystems[1529]: Found sdb6 May 17 01:28:32.509445 extend-filesystems[1529]: Found sdb7 May 17 01:28:32.509445 extend-filesystems[1529]: Found sdb9 May 17 01:28:32.509445 extend-filesystems[1529]: Checking size of /dev/sdb9 May 17 01:28:32.509445 extend-filesystems[1529]: Resized partition /dev/sdb9 May 17 01:28:32.640400 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks May 17 01:28:32.501464 systemd[1]: Starting motdgen.service... May 17 01:28:32.640483 coreos-metadata[1523]: May 17 01:28:32.503 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 01:28:32.640617 extend-filesystems[1544]: resize2fs 1.46.5 (30-Dec-2021) May 17 01:28:32.524097 systemd[1]: Starting prepare-helm.service... May 17 01:28:32.533031 systemd[1]: Starting ssh-key-proc-cmdline.service... 
May 17 01:28:32.557007 systemd[1]: Starting sshd-keygen.service... May 17 01:28:32.575671 systemd[1]: Starting systemd-logind.service... May 17 01:28:32.595383 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 01:28:32.595932 systemd[1]: Starting tcsd.service... May 17 01:28:32.598763 systemd-logind[1555]: Watching system buttons on /dev/input/event3 (Power Button) May 17 01:28:32.656941 jq[1558]: true May 17 01:28:32.598773 systemd-logind[1555]: Watching system buttons on /dev/input/event2 (Sleep Button) May 17 01:28:32.598782 systemd-logind[1555]: Watching system buttons on /dev/input/event0 (HID 0557:2419) May 17 01:28:32.598888 systemd-logind[1555]: New seat seat0. May 17 01:28:32.613687 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 01:28:32.614078 systemd[1]: Starting update-engine.service... May 17 01:28:32.632921 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 01:28:32.648628 systemd[1]: Started dbus.service. May 17 01:28:32.664474 update_engine[1557]: I0517 01:28:32.664023 1557 main.cc:92] Flatcar Update Engine starting May 17 01:28:32.665132 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 01:28:32.665229 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 01:28:32.665408 systemd[1]: motdgen.service: Deactivated successfully. May 17 01:28:32.665497 systemd[1]: Finished motdgen.service. May 17 01:28:32.667132 update_engine[1557]: I0517 01:28:32.667094 1557 update_check_scheduler.cc:74] Next update check in 5m32s May 17 01:28:32.672765 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 01:28:32.672853 systemd[1]: Finished ssh-key-proc-cmdline.service. 
May 17 01:28:32.683963 jq[1562]: true May 17 01:28:32.684895 dbus-daemon[1526]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 01:28:32.685619 tar[1560]: linux-amd64/LICENSE May 17 01:28:32.685752 tar[1560]: linux-amd64/helm May 17 01:28:32.690582 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. May 17 01:28:32.690706 systemd[1]: Condition check resulted in tcsd.service being skipped. May 17 01:28:32.691983 systemd[1]: Started update-engine.service. May 17 01:28:32.693101 env[1563]: time="2025-05-17T01:28:32.693077102Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 01:28:32.701613 env[1563]: time="2025-05-17T01:28:32.701594329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 01:28:32.702504 env[1563]: time="2025-05-17T01:28:32.702489993Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 01:28:32.703178 env[1563]: time="2025-05-17T01:28:32.703158618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 01:28:32.703221 env[1563]: time="2025-05-17T01:28:32.703177068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 01:28:32.703329 env[1563]: time="2025-05-17T01:28:32.703315534Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 01:28:32.703368 env[1563]: time="2025-05-17T01:28:32.703327996Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 01:28:32.703368 env[1563]: time="2025-05-17T01:28:32.703339601Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 01:28:32.703368 env[1563]: time="2025-05-17T01:28:32.703349908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 01:28:32.705073 env[1563]: time="2025-05-17T01:28:32.705058640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 01:28:32.705218 systemd[1]: Started systemd-logind.service. May 17 01:28:32.705283 env[1563]: time="2025-05-17T01:28:32.705232414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 01:28:32.705350 env[1563]: time="2025-05-17T01:28:32.705336530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 01:28:32.707079 env[1563]: time="2025-05-17T01:28:32.705349251Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 17 01:28:32.707117 env[1563]: time="2025-05-17T01:28:32.707087673Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 01:28:32.707117 env[1563]: time="2025-05-17T01:28:32.707105398Z" level=info msg="metadata content store policy set" policy=shared May 17 01:28:32.715001 systemd[1]: Started locksmithd.service. May 17 01:28:32.715397 bash[1591]: Updated "/home/core/.ssh/authorized_keys" May 17 01:28:32.717541 env[1563]: time="2025-05-17T01:28:32.717528416Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 01:28:32.717609 env[1563]: time="2025-05-17T01:28:32.717599466Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 01:28:32.717637 env[1563]: time="2025-05-17T01:28:32.717616835Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 01:28:32.717787 env[1563]: time="2025-05-17T01:28:32.717766953Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 01:28:32.717828 env[1563]: time="2025-05-17T01:28:32.717799027Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 01:28:32.717828 env[1563]: time="2025-05-17T01:28:32.717820380Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 01:28:32.717862 env[1563]: time="2025-05-17T01:28:32.717837440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 01:28:32.717884 env[1563]: time="2025-05-17T01:28:32.717854614Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 May 17 01:28:32.717912 env[1563]: time="2025-05-17T01:28:32.717893656Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 01:28:32.717990 env[1563]: time="2025-05-17T01:28:32.717978899Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 01:28:32.718011 env[1563]: time="2025-05-17T01:28:32.717994812Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 01:28:32.718011 env[1563]: time="2025-05-17T01:28:32.718003574Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 01:28:32.718066 env[1563]: time="2025-05-17T01:28:32.718058632Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 01:28:32.718116 env[1563]: time="2025-05-17T01:28:32.718108233Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 01:28:32.718256 env[1563]: time="2025-05-17T01:28:32.718245916Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 01:28:32.718291 env[1563]: time="2025-05-17T01:28:32.718264645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 01:28:32.718291 env[1563]: time="2025-05-17T01:28:32.718276659Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 01:28:32.718351 env[1563]: time="2025-05-17T01:28:32.718316558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 01:28:32.718351 env[1563]: time="2025-05-17T01:28:32.718328567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 May 17 01:28:32.718351 env[1563]: time="2025-05-17T01:28:32.718337359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 01:28:32.718351 env[1563]: time="2025-05-17T01:28:32.718347095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 01:28:32.718436 env[1563]: time="2025-05-17T01:28:32.718359134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 01:28:32.718436 env[1563]: time="2025-05-17T01:28:32.718366814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 01:28:32.718436 env[1563]: time="2025-05-17T01:28:32.718373295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 01:28:32.718436 env[1563]: time="2025-05-17T01:28:32.718382838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 01:28:32.718436 env[1563]: time="2025-05-17T01:28:32.718391922Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 01:28:32.718526 env[1563]: time="2025-05-17T01:28:32.718484708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 01:28:32.718526 env[1563]: time="2025-05-17T01:28:32.718496087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 01:28:32.718526 env[1563]: time="2025-05-17T01:28:32.718504331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 01:28:32.718526 env[1563]: time="2025-05-17T01:28:32.718511024Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 May 17 01:28:32.718526 env[1563]: time="2025-05-17T01:28:32.718519456Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 01:28:32.718626 env[1563]: time="2025-05-17T01:28:32.718528208Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 01:28:32.718626 env[1563]: time="2025-05-17T01:28:32.718542436Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 01:28:32.718626 env[1563]: time="2025-05-17T01:28:32.718563384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 01:28:32.718714 env[1563]: time="2025-05-17T01:28:32.718689159Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} 
ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 01:28:32.718714 env[1563]: time="2025-05-17T01:28:32.718722213Z" level=info msg="Connect containerd service" May 17 01:28:32.720640 env[1563]: time="2025-05-17T01:28:32.718746632Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 01:28:32.720640 env[1563]: time="2025-05-17T01:28:32.719042107Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 01:28:32.720640 env[1563]: time="2025-05-17T01:28:32.719126093Z" level=info msg="Start subscribing containerd event" May 17 01:28:32.720640 env[1563]: time="2025-05-17T01:28:32.719155335Z" level=info msg="Start recovering state" May 17 01:28:32.720640 env[1563]: time="2025-05-17T01:28:32.719165620Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 17 01:28:32.720640 env[1563]: time="2025-05-17T01:28:32.719188696Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 01:28:32.720640 env[1563]: time="2025-05-17T01:28:32.719190893Z" level=info msg="Start event monitor" May 17 01:28:32.720640 env[1563]: time="2025-05-17T01:28:32.719200733Z" level=info msg="Start snapshots syncer" May 17 01:28:32.720640 env[1563]: time="2025-05-17T01:28:32.719207506Z" level=info msg="Start cni network conf syncer for default" May 17 01:28:32.720640 env[1563]: time="2025-05-17T01:28:32.719214699Z" level=info msg="containerd successfully booted in 0.026468s" May 17 01:28:32.720640 env[1563]: time="2025-05-17T01:28:32.719214736Z" level=info msg="Start streaming server" May 17 01:28:32.721563 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 01:28:32.721696 systemd[1]: Reached target system-config.target. May 17 01:28:32.729410 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 01:28:32.729522 systemd[1]: Reached target user-config.target. May 17 01:28:32.739941 systemd[1]: Started containerd.service. May 17 01:28:32.746631 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 01:28:32.771311 locksmithd[1597]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 01:28:32.906908 sshd_keygen[1554]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 01:28:32.918672 systemd[1]: Finished sshd-keygen.service. May 17 01:28:32.921399 systemd-networkd[1323]: bond0: Gained IPv6LL May 17 01:28:32.926317 systemd[1]: Starting issuegen.service... May 17 01:28:32.934552 systemd[1]: issuegen.service: Deactivated successfully. May 17 01:28:32.934643 systemd[1]: Finished issuegen.service. 
May 17 01:28:32.943231 systemd[1]: Starting systemd-user-sessions.service... May 17 01:28:32.951655 systemd[1]: Finished systemd-user-sessions.service. May 17 01:28:32.961305 systemd[1]: Started getty@tty1.service. May 17 01:28:32.963127 tar[1560]: linux-amd64/README.md May 17 01:28:32.969085 systemd[1]: Started serial-getty@ttyS1.service. May 17 01:28:32.977438 systemd[1]: Reached target getty.target. May 17 01:28:32.986623 systemd[1]: Finished prepare-helm.service. May 17 01:28:33.006299 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 May 17 01:28:33.035892 extend-filesystems[1544]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required May 17 01:28:33.035892 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 56 May 17 01:28:33.035892 extend-filesystems[1544]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. May 17 01:28:33.074334 extend-filesystems[1529]: Resized filesystem in /dev/sdb9 May 17 01:28:33.036344 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 01:28:33.036425 systemd[1]: Finished extend-filesystems.service. May 17 01:28:33.561520 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 01:28:33.571569 systemd[1]: Reached target network-online.target. May 17 01:28:33.580285 systemd[1]: Starting kubelet.service... May 17 01:28:34.188370 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 May 17 01:28:34.280338 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) May 17 01:28:34.419400 systemd[1]: Started kubelet.service. 
May 17 01:28:34.983304 kubelet[1634]: E0517 01:28:34.983247 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 01:28:34.984432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 01:28:34.984517 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 01:28:37.709780 coreos-metadata[1523]: May 17 01:28:37.709 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution May 17 01:28:37.710538 coreos-metadata[1520]: May 17 01:28:37.709 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution May 17 01:28:38.137265 login[1621]: pam_lastlog(login:session): file /var/log/lastlog is locked/write May 17 01:28:38.143928 login[1622]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 01:28:38.173723 systemd-logind[1555]: New session 2 of user core. May 17 01:28:38.176278 systemd[1]: Created slice user-500.slice. May 17 01:28:38.179222 systemd[1]: Starting user-runtime-dir@500.service... May 17 01:28:38.190249 systemd[1]: Finished user-runtime-dir@500.service. May 17 01:28:38.190975 systemd[1]: Starting user@500.service... May 17 01:28:38.193061 (systemd)[1649]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 01:28:38.261822 systemd[1649]: Queued start job for default target default.target. May 17 01:28:38.262054 systemd[1649]: Reached target paths.target. 
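The kubelet failure recorded above (and repeated on each restart below) is the usual pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until cluster bootstrap writes it, so the unit exits 1 and systemd schedules a restart. A small sketch for spotting such failures in a captured log — the helper name and sample lines here are ours, not part of the log's tooling:

```python
import re

# Scan journald-style lines for systemd unit failure results such as
#   "kubelet.service: Failed with result 'exit-code'."
FAIL_RE = re.compile(r"(\S+\.service): Failed with result '([^']+)'")

def failed_units(lines):
    """Return (unit, result) pairs for every failure recorded in the log."""
    return [m.groups() for line in lines for m in [FAIL_RE.search(line)] if m]

log = [
    "May 17 01:28:34.984517 systemd[1]: kubelet.service: Failed with result 'exit-code'.",
    "May 17 01:28:45.499679 systemd[1]: kubelet.service: Failed with result 'exit-code'.",
]
print(failed_units(log))  # [('kubelet.service', 'exit-code'), ('kubelet.service', 'exit-code')]
```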
May 17 01:28:38.262065 systemd[1649]: Reached target sockets.target. May 17 01:28:38.262073 systemd[1649]: Reached target timers.target. May 17 01:28:38.262081 systemd[1649]: Reached target basic.target. May 17 01:28:38.262100 systemd[1649]: Reached target default.target. May 17 01:28:38.262115 systemd[1649]: Startup finished in 65ms. May 17 01:28:38.262159 systemd[1]: Started user@500.service. May 17 01:28:38.262705 systemd[1]: Started session-2.scope. May 17 01:28:38.710322 coreos-metadata[1523]: May 17 01:28:38.710 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 17 01:28:38.711087 coreos-metadata[1520]: May 17 01:28:38.710 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 17 01:28:38.714980 coreos-metadata[1523]: May 17 01:28:38.714 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution May 17 01:28:38.716306 coreos-metadata[1520]: May 17 01:28:38.716 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution May 17 01:28:39.138273 login[1621]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 01:28:39.149627 systemd-logind[1555]: New session 1 of user core. May 17 01:28:39.152083 systemd[1]: Started session-1.scope. May 17 01:28:39.695348 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 May 17 01:28:39.695515 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 May 17 01:28:39.740597 systemd[1]: Created slice system-sshd.slice. May 17 01:28:39.741676 systemd[1]: Started sshd@0-145.40.90.133:22-139.178.89.65:34828.service. 
May 17 01:28:39.788250 sshd[1670]: Accepted publickey for core from 139.178.89.65 port 34828 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:28:39.789151 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:28:39.791921 systemd-logind[1555]: New session 3 of user core. May 17 01:28:39.792905 systemd[1]: Started session-3.scope. May 17 01:28:39.845089 systemd[1]: Started sshd@1-145.40.90.133:22-139.178.89.65:34836.service. May 17 01:28:39.872010 sshd[1675]: Accepted publickey for core from 139.178.89.65 port 34836 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:28:39.872765 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:28:39.875058 systemd-logind[1555]: New session 4 of user core. May 17 01:28:39.875773 systemd[1]: Started session-4.scope. May 17 01:28:39.925604 sshd[1675]: pam_unix(sshd:session): session closed for user core May 17 01:28:39.928062 systemd[1]: sshd@1-145.40.90.133:22-139.178.89.65:34836.service: Deactivated successfully. May 17 01:28:39.928633 systemd[1]: session-4.scope: Deactivated successfully. May 17 01:28:39.929145 systemd-logind[1555]: Session 4 logged out. Waiting for processes to exit. May 17 01:28:39.929991 systemd[1]: Started sshd@2-145.40.90.133:22-139.178.89.65:34852.service. May 17 01:28:39.930713 systemd-logind[1555]: Removed session 4. May 17 01:28:39.959533 sshd[1681]: Accepted publickey for core from 139.178.89.65 port 34852 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:28:39.960457 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:28:39.963494 systemd-logind[1555]: New session 5 of user core. May 17 01:28:39.964514 systemd[1]: Started session-5.scope. 
May 17 01:28:40.019031 sshd[1681]: pam_unix(sshd:session): session closed for user core May 17 01:28:40.020195 systemd[1]: sshd@2-145.40.90.133:22-139.178.89.65:34852.service: Deactivated successfully. May 17 01:28:40.020621 systemd[1]: session-5.scope: Deactivated successfully. May 17 01:28:40.020975 systemd-logind[1555]: Session 5 logged out. Waiting for processes to exit. May 17 01:28:40.021388 systemd-logind[1555]: Removed session 5. May 17 01:28:40.715112 coreos-metadata[1523]: May 17 01:28:40.714 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 17 01:28:40.716566 coreos-metadata[1520]: May 17 01:28:40.716 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 17 01:28:41.675523 coreos-metadata[1520]: May 17 01:28:41.675 INFO Fetch successful May 17 01:28:41.726158 coreos-metadata[1523]: May 17 01:28:41.726 INFO Fetch successful May 17 01:28:41.726154 unknown[1520]: wrote ssh authorized keys file for user: core May 17 01:28:41.745759 update-ssh-keys[1687]: Updated "/home/core/.ssh/authorized_keys" May 17 01:28:41.746035 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 17 01:28:41.760738 systemd[1]: Finished coreos-metadata.service. May 17 01:28:41.761536 systemd[1]: Started packet-phone-home.service. May 17 01:28:41.761664 systemd[1]: Reached target multi-user.target. May 17 01:28:41.762353 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 01:28:41.766862 curl[1690]: % Total % Received % Xferd Average Speed Time Time Time Current May 17 01:28:41.767039 curl[1690]: Dload Upload Total Spent Left Speed May 17 01:28:41.766918 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 01:28:41.767003 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 01:28:41.767115 systemd[1]: Startup finished in 1.866s (kernel) + 24.635s (initrd) + 16.293s (userspace) = 42.795s. 
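The "Startup finished" line above breaks boot time into kernel, initrd, and userspace phases. A hedged sketch of pulling those figures out of the line and cross-checking them against the reported total (systemd rounds each phase independently, so the parts can differ from the total by a millisecond):

```python
import re

# Parse systemd's "Startup finished" summary line from the log above.
line = ("Startup finished in 1.866s (kernel) + 24.635s (initrd) "
        "+ 16.293s (userspace) = 42.795s.")

# Each phase looks like "<seconds>s (<name>)"; the total follows "= ".
phases = {name: float(secs)
          for secs, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
total = float(re.search(r"= ([\d.]+)s", line).group(1))

print(phases)  # {'kernel': 1.866, 'initrd': 24.635, 'userspace': 16.293}
# Allow for independent rounding of the printed phases vs. the printed total.
assert abs(sum(phases.values()) - total) < 0.002
```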
May 17 01:28:42.229895 curl[1690]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 May 17 01:28:42.231910 systemd[1]: packet-phone-home.service: Deactivated successfully. May 17 01:28:45.207375 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 01:28:45.207907 systemd[1]: Stopped kubelet.service. May 17 01:28:45.209394 systemd[1]: Starting kubelet.service... May 17 01:28:45.472341 systemd[1]: Started kubelet.service. May 17 01:28:45.497346 kubelet[1696]: E0517 01:28:45.497304 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 01:28:45.499593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 01:28:45.499679 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 01:28:50.028214 systemd[1]: Started sshd@3-145.40.90.133:22-139.178.89.65:55290.service. May 17 01:28:50.056239 sshd[1713]: Accepted publickey for core from 139.178.89.65 port 55290 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:28:50.057125 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:28:50.060121 systemd-logind[1555]: New session 6 of user core. May 17 01:28:50.061115 systemd[1]: Started session-6.scope. May 17 01:28:50.116835 sshd[1713]: pam_unix(sshd:session): session closed for user core May 17 01:28:50.118684 systemd[1]: sshd@3-145.40.90.133:22-139.178.89.65:55290.service: Deactivated successfully. May 17 01:28:50.119038 systemd[1]: session-6.scope: Deactivated successfully. May 17 01:28:50.119411 systemd-logind[1555]: Session 6 logged out. 
Waiting for processes to exit. May 17 01:28:50.119970 systemd[1]: Started sshd@4-145.40.90.133:22-139.178.89.65:55302.service. May 17 01:28:50.120458 systemd-logind[1555]: Removed session 6. May 17 01:28:50.147720 sshd[1719]: Accepted publickey for core from 139.178.89.65 port 55302 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:28:50.148562 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:28:50.151386 systemd-logind[1555]: New session 7 of user core. May 17 01:28:50.152327 systemd[1]: Started session-7.scope. May 17 01:28:50.204432 sshd[1719]: pam_unix(sshd:session): session closed for user core May 17 01:28:50.206408 systemd[1]: sshd@4-145.40.90.133:22-139.178.89.65:55302.service: Deactivated successfully. May 17 01:28:50.206788 systemd[1]: session-7.scope: Deactivated successfully. May 17 01:28:50.207140 systemd-logind[1555]: Session 7 logged out. Waiting for processes to exit. May 17 01:28:50.207736 systemd[1]: Started sshd@5-145.40.90.133:22-139.178.89.65:55316.service. May 17 01:28:50.208233 systemd-logind[1555]: Removed session 7. May 17 01:28:50.235870 sshd[1725]: Accepted publickey for core from 139.178.89.65 port 55316 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:28:50.236847 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:28:50.240194 systemd-logind[1555]: New session 8 of user core. May 17 01:28:50.241361 systemd[1]: Started session-8.scope. May 17 01:28:50.297867 sshd[1725]: pam_unix(sshd:session): session closed for user core May 17 01:28:50.299656 systemd[1]: sshd@5-145.40.90.133:22-139.178.89.65:55316.service: Deactivated successfully. May 17 01:28:50.300006 systemd[1]: session-8.scope: Deactivated successfully. May 17 01:28:50.300361 systemd-logind[1555]: Session 8 logged out. Waiting for processes to exit. May 17 01:28:50.300932 systemd[1]: Started sshd@6-145.40.90.133:22-139.178.89.65:55326.service. 
May 17 01:28:50.301424 systemd-logind[1555]: Removed session 8. May 17 01:28:50.329315 sshd[1731]: Accepted publickey for core from 139.178.89.65 port 55326 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:28:50.330291 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:28:50.333730 systemd-logind[1555]: New session 9 of user core. May 17 01:28:50.334907 systemd[1]: Started session-9.scope. May 17 01:28:50.419005 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 01:28:50.419722 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 01:28:50.470232 systemd[1]: Starting docker.service... May 17 01:28:50.494960 env[1749]: time="2025-05-17T01:28:50.494897789Z" level=info msg="Starting up" May 17 01:28:50.495614 env[1749]: time="2025-05-17T01:28:50.495571900Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 01:28:50.495614 env[1749]: time="2025-05-17T01:28:50.495582842Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 01:28:50.495614 env[1749]: time="2025-05-17T01:28:50.495596640Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 01:28:50.495614 env[1749]: time="2025-05-17T01:28:50.495604531Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 01:28:50.496613 env[1749]: time="2025-05-17T01:28:50.496566707Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 01:28:50.496613 env[1749]: time="2025-05-17T01:28:50.496581027Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 01:28:50.496613 env[1749]: time="2025-05-17T01:28:50.496592389Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 01:28:50.496613 
env[1749]: time="2025-05-17T01:28:50.496599344Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 01:28:50.511310 env[1749]: time="2025-05-17T01:28:50.511265079Z" level=info msg="Loading containers: start." May 17 01:28:50.696362 kernel: Initializing XFRM netlink socket May 17 01:28:50.740346 env[1749]: time="2025-05-17T01:28:50.740292683Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 17 01:28:50.741254 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. May 17 01:28:50.805721 systemd-networkd[1323]: docker0: Link UP May 17 01:28:50.810495 systemd-timesyncd[1501]: Contacted time server [2607:f298:5:101d:f816:3eff:fefd:8817]:123 (2.flatcar.pool.ntp.org). May 17 01:28:50.810566 systemd-timesyncd[1501]: Initial clock synchronization to Sat 2025-05-17 01:28:50.622025 UTC. May 17 01:28:50.832743 env[1749]: time="2025-05-17T01:28:50.832627938Z" level=info msg="Loading containers: done." May 17 01:28:50.853509 env[1749]: time="2025-05-17T01:28:50.853400199Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 01:28:50.853855 env[1749]: time="2025-05-17T01:28:50.853796934Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 17 01:28:50.854114 env[1749]: time="2025-05-17T01:28:50.854034009Z" level=info msg="Daemon has completed initialization" May 17 01:28:50.861989 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1837687781-merged.mount: Deactivated successfully. May 17 01:28:50.879156 systemd[1]: Started docker.service. 
May 17 01:28:50.894697 env[1749]: time="2025-05-17T01:28:50.894566632Z" level=info msg="API listen on /run/docker.sock" May 17 01:28:51.866006 env[1563]: time="2025-05-17T01:28:51.865863782Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 17 01:28:52.635069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123266620.mount: Deactivated successfully. May 17 01:28:53.698063 env[1563]: time="2025-05-17T01:28:53.698013793Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:53.698686 env[1563]: time="2025-05-17T01:28:53.698626515Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:53.699996 env[1563]: time="2025-05-17T01:28:53.699888994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:53.701487 env[1563]: time="2025-05-17T01:28:53.701473472Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:53.702034 env[1563]: time="2025-05-17T01:28:53.702019527Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 17 01:28:53.702506 env[1563]: time="2025-05-17T01:28:53.702474650Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 17 01:28:54.981562 env[1563]: time="2025-05-17T01:28:54.981512230Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:54.982158 env[1563]: time="2025-05-17T01:28:54.982117147Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:54.983193 env[1563]: time="2025-05-17T01:28:54.983149619Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:54.984165 env[1563]: time="2025-05-17T01:28:54.984126551Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:54.984600 env[1563]: time="2025-05-17T01:28:54.984548519Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 17 01:28:54.985006 env[1563]: time="2025-05-17T01:28:54.984972020Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 17 01:28:55.706057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 01:28:55.706306 systemd[1]: Stopped kubelet.service. May 17 01:28:55.707632 systemd[1]: Starting kubelet.service... May 17 01:28:55.946286 systemd[1]: Started kubelet.service. 
May 17 01:28:55.968566 kubelet[1905]: E0517 01:28:55.968440 1905 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 01:28:55.969501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 01:28:55.969567 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 01:28:56.099046 env[1563]: time="2025-05-17T01:28:56.098987694Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:56.099574 env[1563]: time="2025-05-17T01:28:56.099534328Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:56.100752 env[1563]: time="2025-05-17T01:28:56.100710431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:56.101492 env[1563]: time="2025-05-17T01:28:56.101446163Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:56.102324 env[1563]: time="2025-05-17T01:28:56.102291667Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 17 01:28:56.102595 env[1563]: time="2025-05-17T01:28:56.102539951Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.32.5\"" May 17 01:28:56.973907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3043034113.mount: Deactivated successfully. May 17 01:28:57.372044 env[1563]: time="2025-05-17T01:28:57.371971782Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:57.372611 env[1563]: time="2025-05-17T01:28:57.372586085Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:57.373161 env[1563]: time="2025-05-17T01:28:57.373151534Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:57.373812 env[1563]: time="2025-05-17T01:28:57.373764698Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:57.374424 env[1563]: time="2025-05-17T01:28:57.374380437Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 17 01:28:57.374792 env[1563]: time="2025-05-17T01:28:57.374763859Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 01:28:57.865232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2756161404.mount: Deactivated successfully. 
May 17 01:28:58.639607 env[1563]: time="2025-05-17T01:28:58.639538090Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:58.641128 env[1563]: time="2025-05-17T01:28:58.641092519Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:58.643991 env[1563]: time="2025-05-17T01:28:58.643938713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:58.646554 env[1563]: time="2025-05-17T01:28:58.646485175Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:58.647755 env[1563]: time="2025-05-17T01:28:58.647692161Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 01:28:58.648213 env[1563]: time="2025-05-17T01:28:58.648158033Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 01:28:59.117606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197136826.mount: Deactivated successfully. 
May 17 01:28:59.118846 env[1563]: time="2025-05-17T01:28:59.118801607Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:59.119363 env[1563]: time="2025-05-17T01:28:59.119335740Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:59.120228 env[1563]: time="2025-05-17T01:28:59.120175904Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:59.121072 env[1563]: time="2025-05-17T01:28:59.121022564Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:28:59.121749 env[1563]: time="2025-05-17T01:28:59.121714528Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 01:28:59.122043 env[1563]: time="2025-05-17T01:28:59.121989630Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 17 01:28:59.705267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1892381597.mount: Deactivated successfully. 
May 17 01:29:01.322670 env[1563]: time="2025-05-17T01:29:01.322644245Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:29:01.323324 env[1563]: time="2025-05-17T01:29:01.323311444Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:29:01.324501 env[1563]: time="2025-05-17T01:29:01.324484278Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:29:01.325668 env[1563]: time="2025-05-17T01:29:01.325639650Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:29:01.326194 env[1563]: time="2025-05-17T01:29:01.326180462Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 17 01:29:03.422891 systemd[1]: Stopped kubelet.service. May 17 01:29:03.424361 systemd[1]: Starting kubelet.service... May 17 01:29:03.440131 systemd[1]: Reloading. 
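The PullImage entries above show containerd's CRI plugin fetching the control-plane images (kube-apiserver through etcd), each pull bracketed by a request line and a "returns image reference" completion line. An illustrative sketch (function name and sample lines are ours) for listing the requested images from such a log, skipping the completion lines so each image appears once:

```python
import re

# In the raw log the image name sits inside escaped quotes:
#   msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
PULL_RE = re.compile(r'PullImage \\"([^"\\]+)\\"')

def pulled_images(lines):
    """Image references requested via PullImage, ignoring completion entries."""
    return [m.group(1) for line in lines
            if "returns image reference" not in line
            for m in PULL_RE.finditer(line)]

log = [
    'env[1563]: level=info msg="PullImage \\"registry.k8s.io/kube-apiserver:v1.32.5\\""',
    'env[1563]: level=info msg="PullImage \\"registry.k8s.io/kube-apiserver:v1.32.5\\"'
    ' returns image reference \\"sha256:495c5ce47cf7\\""',
]
print(pulled_images(log))  # ['registry.k8s.io/kube-apiserver:v1.32.5']
```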
May 17 01:29:03.474555 /usr/lib/systemd/system-generators/torcx-generator[1990]: time="2025-05-17T01:29:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 01:29:03.474571 /usr/lib/systemd/system-generators/torcx-generator[1990]: time="2025-05-17T01:29:03Z" level=info msg="torcx already run" May 17 01:29:03.530125 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 01:29:03.530135 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 01:29:03.542468 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 01:29:03.627108 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 01:29:03.627231 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 01:29:03.627610 systemd[1]: Stopped kubelet.service. May 17 01:29:03.630098 systemd[1]: Starting kubelet.service... May 17 01:29:03.870694 systemd[1]: Started kubelet.service. May 17 01:29:03.891465 kubelet[2055]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 01:29:03.891465 kubelet[2055]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 17 01:29:03.891465 kubelet[2055]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 01:29:03.891688 kubelet[2055]: I0517 01:29:03.891477 2055 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 01:29:04.135667 kubelet[2055]: I0517 01:29:04.135598 2055 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 17 01:29:04.135667 kubelet[2055]: I0517 01:29:04.135628 2055 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 01:29:04.135816 kubelet[2055]: I0517 01:29:04.135781 2055 server.go:954] "Client rotation is on, will bootstrap in background"
May 17 01:29:04.165291 kubelet[2055]: E0517 01:29:04.165256 2055 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://145.40.90.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 145.40.90.133:6443: connect: connection refused" logger="UnhandledError"
May 17 01:29:04.169553 kubelet[2055]: I0517 01:29:04.169543 2055 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 01:29:04.175013 kubelet[2055]: E0517 01:29:04.174995 2055 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 01:29:04.175050 kubelet[2055]: I0517 01:29:04.175015 2055 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 01:29:04.195090 kubelet[2055]: I0517 01:29:04.195044 2055 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 01:29:04.196210 kubelet[2055]: I0517 01:29:04.196159 2055 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 01:29:04.196365 kubelet[2055]: I0517 01:29:04.196186 2055 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-2b1b6103b5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 17 01:29:04.196365 kubelet[2055]: I0517 01:29:04.196355 2055 topology_manager.go:138] "Creating topology manager with none policy"
May 17 01:29:04.196365 kubelet[2055]: I0517 01:29:04.196364 2055 container_manager_linux.go:304] "Creating device plugin manager"
May 17 01:29:04.196536 kubelet[2055]: I0517 01:29:04.196461 2055 state_mem.go:36] "Initialized new in-memory state store"
May 17 01:29:04.200305 kubelet[2055]: I0517 01:29:04.200259 2055 kubelet.go:446] "Attempting to sync node with API server"
May 17 01:29:04.200305 kubelet[2055]: I0517 01:29:04.200278 2055 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 01:29:04.200395 kubelet[2055]: I0517 01:29:04.200348 2055 kubelet.go:352] "Adding apiserver pod source"
May 17 01:29:04.200395 kubelet[2055]: I0517 01:29:04.200359 2055 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 01:29:04.206803 kubelet[2055]: W0517 01:29:04.206755 2055 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://145.40.90.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-2b1b6103b5&limit=500&resourceVersion=0": dial tcp 145.40.90.133:6443: connect: connection refused
May 17 01:29:04.206892 kubelet[2055]: E0517 01:29:04.206844 2055 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://145.40.90.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-2b1b6103b5&limit=500&resourceVersion=0\": dial tcp 145.40.90.133:6443: connect: connection refused" logger="UnhandledError"
May 17 01:29:04.207268 kubelet[2055]: W0517 01:29:04.207239 2055 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://145.40.90.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 145.40.90.133:6443: connect: connection refused
May 17 01:29:04.207330 kubelet[2055]: E0517 01:29:04.207281 2055 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://145.40.90.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 145.40.90.133:6443: connect: connection refused" logger="UnhandledError"
May 17 01:29:04.211542 kubelet[2055]: I0517 01:29:04.211520 2055 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 17 01:29:04.212073 kubelet[2055]: I0517 01:29:04.212030 2055 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 17 01:29:04.212145 kubelet[2055]: W0517 01:29:04.212095 2055 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 17 01:29:04.227048 kubelet[2055]: I0517 01:29:04.226993 2055 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 17 01:29:04.227048 kubelet[2055]: I0517 01:29:04.227040 2055 server.go:1287] "Started kubelet"
May 17 01:29:04.227228 kubelet[2055]: I0517 01:29:04.227126 2055 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 17 01:29:04.228911 kubelet[2055]: I0517 01:29:04.228861 2055 server.go:479] "Adding debug handlers to kubelet server"
May 17 01:29:04.235779 kubelet[2055]: E0517 01:29:04.235731 2055 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 01:29:04.240056 kubelet[2055]: E0517 01:29:04.238707 2055 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://145.40.90.133:6443/api/v1/namespaces/default/events\": dial tcp 145.40.90.133:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-2b1b6103b5.18402c483f0fb3ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-2b1b6103b5,UID:ci-3510.3.7-n-2b1b6103b5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-2b1b6103b5,},FirstTimestamp:2025-05-17 01:29:04.227013614 +0000 UTC m=+0.353030682,LastTimestamp:2025-05-17 01:29:04.227013614 +0000 UTC m=+0.353030682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-2b1b6103b5,}"
May 17 01:29:04.240721 kubelet[2055]: I0517 01:29:04.240691 2055 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 01:29:04.240811 kubelet[2055]: I0517 01:29:04.240805 2055 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 01:29:04.247521 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
May 17 01:29:04.247557 kubelet[2055]: I0517 01:29:04.247509 2055 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 01:29:04.247614 kubelet[2055]: I0517 01:29:04.247560 2055 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 01:29:04.247651 kubelet[2055]: E0517 01:29:04.247638 2055 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found"
May 17 01:29:04.247688 kubelet[2055]: I0517 01:29:04.247674 2055 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 17 01:29:04.247715 kubelet[2055]: I0517 01:29:04.247708 2055 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 17 01:29:04.247746 kubelet[2055]: I0517 01:29:04.247727 2055 reconciler.go:26] "Reconciler: start to sync state"
May 17 01:29:04.247806 kubelet[2055]: E0517 01:29:04.247787 2055 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-2b1b6103b5?timeout=10s\": dial tcp 145.40.90.133:6443: connect: connection refused" interval="200ms"
May 17 01:29:04.247897 kubelet[2055]: W0517 01:29:04.247873 2055 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://145.40.90.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.133:6443: connect: connection refused
May 17 01:29:04.247931 kubelet[2055]: E0517 01:29:04.247907 2055 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://145.40.90.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 145.40.90.133:6443: connect: connection refused" logger="UnhandledError"
May 17 01:29:04.247931 kubelet[2055]: I0517 01:29:04.247927 2055 factory.go:221] Registration of the systemd container factory successfully
May 17 01:29:04.248031 kubelet[2055]: I0517 01:29:04.247984 2055 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 01:29:04.248472 kubelet[2055]: I0517 01:29:04.248464 2055 factory.go:221] Registration of the containerd container factory successfully
May 17 01:29:04.256367 kubelet[2055]: I0517 01:29:04.256305 2055 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 01:29:04.256823 kubelet[2055]: I0517 01:29:04.256808 2055 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 01:29:04.256823 kubelet[2055]: I0517 01:29:04.256825 2055 status_manager.go:227] "Starting to sync pod status with apiserver"
May 17 01:29:04.256883 kubelet[2055]: I0517 01:29:04.256836 2055 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 17 01:29:04.256883 kubelet[2055]: I0517 01:29:04.256841 2055 kubelet.go:2382] "Starting kubelet main sync loop"
May 17 01:29:04.256883 kubelet[2055]: E0517 01:29:04.256868 2055 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 01:29:04.257078 kubelet[2055]: W0517 01:29:04.257066 2055 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://145.40.90.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.133:6443: connect: connection refused
May 17 01:29:04.257129 kubelet[2055]: E0517 01:29:04.257085 2055 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://145.40.90.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 145.40.90.133:6443: connect: connection refused" logger="UnhandledError"
May 17 01:29:04.258506 kubelet[2055]: I0517 01:29:04.258498 2055 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 17 01:29:04.258506 kubelet[2055]: I0517 01:29:04.258505 2055 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 17 01:29:04.258579 kubelet[2055]: I0517 01:29:04.258513 2055 state_mem.go:36] "Initialized new in-memory state store"
May 17 01:29:04.259346 kubelet[2055]: I0517 01:29:04.259340 2055 policy_none.go:49] "None policy: Start"
May 17 01:29:04.259374 kubelet[2055]: I0517 01:29:04.259348 2055 memory_manager.go:186] "Starting memorymanager" policy="None"
May 17 01:29:04.259374 kubelet[2055]: I0517 01:29:04.259354 2055 state_mem.go:35] "Initializing new in-memory state store"
May 17 01:29:04.261545 systemd[1]: Created slice kubepods.slice.
May 17 01:29:04.263625 systemd[1]: Created slice kubepods-burstable.slice.
May 17 01:29:04.264973 systemd[1]: Created slice kubepods-besteffort.slice.
May 17 01:29:04.276846 kubelet[2055]: I0517 01:29:04.276836 2055 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 01:29:04.276945 kubelet[2055]: I0517 01:29:04.276938 2055 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 01:29:04.276979 kubelet[2055]: I0517 01:29:04.276950 2055 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 01:29:04.277036 kubelet[2055]: I0517 01:29:04.277028 2055 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 01:29:04.277364 kubelet[2055]: E0517 01:29:04.277356 2055 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 17 01:29:04.277402 kubelet[2055]: E0517 01:29:04.277375 2055 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-n-2b1b6103b5\" not found"
May 17 01:29:04.377768 systemd[1]: Created slice kubepods-burstable-pod540f59fcf717dc4f739ffcc5db6011cf.slice.
May 17 01:29:04.380275 kubelet[2055]: I0517 01:29:04.380231 2055 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.380996 kubelet[2055]: E0517 01:29:04.380943 2055 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.90.133:6443/api/v1/nodes\": dial tcp 145.40.90.133:6443: connect: connection refused" node="ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.395101 kubelet[2055]: E0517 01:29:04.394929 2055 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" node="ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.403104 systemd[1]: Created slice kubepods-burstable-pod3d16367638affcb66ad683f0ad004ac1.slice.
May 17 01:29:04.407362 kubelet[2055]: E0517 01:29:04.407276 2055 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" node="ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.411353 systemd[1]: Created slice kubepods-burstable-podff6f0d0e631be09d029d5ed219a03409.slice.
May 17 01:29:04.414985 kubelet[2055]: E0517 01:29:04.414910 2055 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" node="ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.449686 kubelet[2055]: E0517 01:29:04.449577 2055 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-2b1b6103b5?timeout=10s\": dial tcp 145.40.90.133:6443: connect: connection refused" interval="400ms"
May 17 01:29:04.549207 kubelet[2055]: I0517 01:29:04.549075 2055 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff6f0d0e631be09d029d5ed219a03409-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-2b1b6103b5\" (UID: \"ff6f0d0e631be09d029d5ed219a03409\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.549207 kubelet[2055]: I0517 01:29:04.549171 2055 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/540f59fcf717dc4f739ffcc5db6011cf-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-2b1b6103b5\" (UID: \"540f59fcf717dc4f739ffcc5db6011cf\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.549640 kubelet[2055]: I0517 01:29:04.549223 2055 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d16367638affcb66ad683f0ad004ac1-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-2b1b6103b5\" (UID: \"3d16367638affcb66ad683f0ad004ac1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.549640 kubelet[2055]: I0517 01:29:04.549280 2055 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d16367638affcb66ad683f0ad004ac1-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-2b1b6103b5\" (UID: \"3d16367638affcb66ad683f0ad004ac1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.549640 kubelet[2055]: I0517 01:29:04.549354 2055 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d16367638affcb66ad683f0ad004ac1-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-2b1b6103b5\" (UID: \"3d16367638affcb66ad683f0ad004ac1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.549640 kubelet[2055]: I0517 01:29:04.549418 2055 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d16367638affcb66ad683f0ad004ac1-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-2b1b6103b5\" (UID: \"3d16367638affcb66ad683f0ad004ac1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.549640 kubelet[2055]: I0517 01:29:04.549467 2055 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d16367638affcb66ad683f0ad004ac1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-2b1b6103b5\" (UID: \"3d16367638affcb66ad683f0ad004ac1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.550165 kubelet[2055]: I0517 01:29:04.549525 2055 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/540f59fcf717dc4f739ffcc5db6011cf-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-2b1b6103b5\" (UID: \"540f59fcf717dc4f739ffcc5db6011cf\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.550165 kubelet[2055]: I0517 01:29:04.549573 2055 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/540f59fcf717dc4f739ffcc5db6011cf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-2b1b6103b5\" (UID: \"540f59fcf717dc4f739ffcc5db6011cf\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.585191 kubelet[2055]: I0517 01:29:04.585098 2055 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.585960 kubelet[2055]: E0517 01:29:04.585860 2055 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.90.133:6443/api/v1/nodes\": dial tcp 145.40.90.133:6443: connect: connection refused" node="ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.697472 env[1563]: time="2025-05-17T01:29:04.697239493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-2b1b6103b5,Uid:540f59fcf717dc4f739ffcc5db6011cf,Namespace:kube-system,Attempt:0,}"
May 17 01:29:04.709588 env[1563]: time="2025-05-17T01:29:04.709521056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-2b1b6103b5,Uid:3d16367638affcb66ad683f0ad004ac1,Namespace:kube-system,Attempt:0,}"
May 17 01:29:04.717072 env[1563]: time="2025-05-17T01:29:04.716992576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-2b1b6103b5,Uid:ff6f0d0e631be09d029d5ed219a03409,Namespace:kube-system,Attempt:0,}"
May 17 01:29:04.851045 kubelet[2055]: E0517 01:29:04.850965 2055 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-2b1b6103b5?timeout=10s\": dial tcp 145.40.90.133:6443: connect: connection refused" interval="800ms"
May 17 01:29:04.990328 kubelet[2055]: I0517 01:29:04.990224 2055 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:04.991061 kubelet[2055]: E0517 01:29:04.990903 2055 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.90.133:6443/api/v1/nodes\": dial tcp 145.40.90.133:6443: connect: connection refused" node="ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:05.081967 kubelet[2055]: W0517 01:29:05.081811 2055 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://145.40.90.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-2b1b6103b5&limit=500&resourceVersion=0": dial tcp 145.40.90.133:6443: connect: connection refused
May 17 01:29:05.081967 kubelet[2055]: E0517 01:29:05.081959 2055 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://145.40.90.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-2b1b6103b5&limit=500&resourceVersion=0\": dial tcp 145.40.90.133:6443: connect: connection refused" logger="UnhandledError"
May 17 01:29:05.189703 kubelet[2055]: W0517 01:29:05.189594 2055 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://145.40.90.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.133:6443: connect: connection refused
May 17 01:29:05.189703 kubelet[2055]: E0517 01:29:05.189683 2055 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://145.40.90.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 145.40.90.133:6443: connect: connection refused" logger="UnhandledError"
May 17 01:29:05.238938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2862000399.mount: Deactivated successfully.
May 17 01:29:05.239923 env[1563]: time="2025-05-17T01:29:05.239904185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:05.240810 env[1563]: time="2025-05-17T01:29:05.240773601Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:05.241376 env[1563]: time="2025-05-17T01:29:05.241349568Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:05.242669 env[1563]: time="2025-05-17T01:29:05.242645938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:05.243576 env[1563]: time="2025-05-17T01:29:05.243544839Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:05.245014 env[1563]: time="2025-05-17T01:29:05.245001469Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:05.246796 env[1563]: time="2025-05-17T01:29:05.246781440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:05.248108 env[1563]: time="2025-05-17T01:29:05.248094724Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:05.248653 env[1563]: time="2025-05-17T01:29:05.248641822Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:05.249092 env[1563]: time="2025-05-17T01:29:05.249078221Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:05.250287 env[1563]: time="2025-05-17T01:29:05.250273049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:05.250803 env[1563]: time="2025-05-17T01:29:05.250759245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:05.254557 env[1563]: time="2025-05-17T01:29:05.254521583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:29:05.254557 env[1563]: time="2025-05-17T01:29:05.254544439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:29:05.254557 env[1563]: time="2025-05-17T01:29:05.254551387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:29:05.254715 env[1563]: time="2025-05-17T01:29:05.254628454Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e7684e8a3bbf13c3689deea5a4c0c784890fd29f86ad9f930afc479a2a185 pid=2105 runtime=io.containerd.runc.v2
May 17 01:29:05.256160 env[1563]: time="2025-05-17T01:29:05.256128843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:29:05.256160 env[1563]: time="2025-05-17T01:29:05.256149771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:29:05.256160 env[1563]: time="2025-05-17T01:29:05.256158024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:29:05.256299 env[1563]: time="2025-05-17T01:29:05.256246841Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a91b1c134b8076e5726043a9bc749eba675ad6c9acddf36b07c91e0c1c38f9d4 pid=2122 runtime=io.containerd.runc.v2
May 17 01:29:05.257241 env[1563]: time="2025-05-17T01:29:05.257215103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:29:05.257241 env[1563]: time="2025-05-17T01:29:05.257232160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:29:05.257319 env[1563]: time="2025-05-17T01:29:05.257240220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:29:05.257369 env[1563]: time="2025-05-17T01:29:05.257319727Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2866e935f158e7fe2bbafef5ea379ab27cc9fa468427827edf2414b0aa8931e pid=2133 runtime=io.containerd.runc.v2
May 17 01:29:05.260972 systemd[1]: Started cri-containerd-771e7684e8a3bbf13c3689deea5a4c0c784890fd29f86ad9f930afc479a2a185.scope.
May 17 01:29:05.262878 systemd[1]: Started cri-containerd-a2866e935f158e7fe2bbafef5ea379ab27cc9fa468427827edf2414b0aa8931e.scope.
May 17 01:29:05.263499 systemd[1]: Started cri-containerd-a91b1c134b8076e5726043a9bc749eba675ad6c9acddf36b07c91e0c1c38f9d4.scope.
May 17 01:29:05.284334 env[1563]: time="2025-05-17T01:29:05.284305518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-2b1b6103b5,Uid:540f59fcf717dc4f739ffcc5db6011cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"771e7684e8a3bbf13c3689deea5a4c0c784890fd29f86ad9f930afc479a2a185\""
May 17 01:29:05.285287 env[1563]: time="2025-05-17T01:29:05.285266136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-2b1b6103b5,Uid:ff6f0d0e631be09d029d5ed219a03409,Namespace:kube-system,Attempt:0,} returns sandbox id \"a91b1c134b8076e5726043a9bc749eba675ad6c9acddf36b07c91e0c1c38f9d4\""
May 17 01:29:05.285799 env[1563]: time="2025-05-17T01:29:05.285785361Z" level=info msg="CreateContainer within sandbox \"771e7684e8a3bbf13c3689deea5a4c0c784890fd29f86ad9f930afc479a2a185\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 17 01:29:05.286113 env[1563]: time="2025-05-17T01:29:05.286101277Z" level=info msg="CreateContainer within sandbox \"a91b1c134b8076e5726043a9bc749eba675ad6c9acddf36b07c91e0c1c38f9d4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 17 01:29:05.287060 env[1563]: time="2025-05-17T01:29:05.287042026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-2b1b6103b5,Uid:3d16367638affcb66ad683f0ad004ac1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2866e935f158e7fe2bbafef5ea379ab27cc9fa468427827edf2414b0aa8931e\""
May 17 01:29:05.287952 env[1563]: time="2025-05-17T01:29:05.287934829Z" level=info msg="CreateContainer within sandbox \"a2866e935f158e7fe2bbafef5ea379ab27cc9fa468427827edf2414b0aa8931e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 17 01:29:05.291985 env[1563]: time="2025-05-17T01:29:05.291943214Z" level=info msg="CreateContainer within sandbox \"771e7684e8a3bbf13c3689deea5a4c0c784890fd29f86ad9f930afc479a2a185\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5e90cce46e5b1dda2e45d8eba62b3a2cfdb824671e034da42532af6e8bebb8ec\""
May 17 01:29:05.292247 env[1563]: time="2025-05-17T01:29:05.292235378Z" level=info msg="StartContainer for \"5e90cce46e5b1dda2e45d8eba62b3a2cfdb824671e034da42532af6e8bebb8ec\""
May 17 01:29:05.294615 env[1563]: time="2025-05-17T01:29:05.294573081Z" level=info msg="CreateContainer within sandbox \"a91b1c134b8076e5726043a9bc749eba675ad6c9acddf36b07c91e0c1c38f9d4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f761708ac44e732600fe2ab26b92b2b49ff8b82609a7d8d066e1e616c0f26778\""
May 17 01:29:05.294868 env[1563]: time="2025-05-17T01:29:05.294839889Z" level=info msg="StartContainer for \"f761708ac44e732600fe2ab26b92b2b49ff8b82609a7d8d066e1e616c0f26778\""
May 17 01:29:05.295307 env[1563]: time="2025-05-17T01:29:05.295285823Z" level=info msg="CreateContainer within sandbox \"a2866e935f158e7fe2bbafef5ea379ab27cc9fa468427827edf2414b0aa8931e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"43e48771318d711fb157e82c48b4794d3e8b2e90a18e4f37ba1eab6b24f3664f\""
May 17 01:29:05.295558 env[1563]: time="2025-05-17T01:29:05.295542822Z" level=info msg="StartContainer for \"43e48771318d711fb157e82c48b4794d3e8b2e90a18e4f37ba1eab6b24f3664f\""
May 17 01:29:05.300662 systemd[1]: Started cri-containerd-5e90cce46e5b1dda2e45d8eba62b3a2cfdb824671e034da42532af6e8bebb8ec.scope.
May 17 01:29:05.303313 systemd[1]: Started cri-containerd-43e48771318d711fb157e82c48b4794d3e8b2e90a18e4f37ba1eab6b24f3664f.scope.
May 17 01:29:05.303862 systemd[1]: Started cri-containerd-f761708ac44e732600fe2ab26b92b2b49ff8b82609a7d8d066e1e616c0f26778.scope.
May 17 01:29:05.327592 env[1563]: time="2025-05-17T01:29:05.327565751Z" level=info msg="StartContainer for \"5e90cce46e5b1dda2e45d8eba62b3a2cfdb824671e034da42532af6e8bebb8ec\" returns successfully"
May 17 01:29:05.327710 env[1563]: time="2025-05-17T01:29:05.327690949Z" level=info msg="StartContainer for \"f761708ac44e732600fe2ab26b92b2b49ff8b82609a7d8d066e1e616c0f26778\" returns successfully"
May 17 01:29:05.329130 env[1563]: time="2025-05-17T01:29:05.329113566Z" level=info msg="StartContainer for \"43e48771318d711fb157e82c48b4794d3e8b2e90a18e4f37ba1eab6b24f3664f\" returns successfully"
May 17 01:29:05.793318 kubelet[2055]: I0517 01:29:05.793174 2055 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:05.793947 kubelet[2055]: E0517 01:29:05.793913 2055 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-n-2b1b6103b5\" not found" node="ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:05.918779 kubelet[2055]: I0517 01:29:05.918735 2055 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.7-n-2b1b6103b5"
May 17 01:29:05.918779 kubelet[2055]: E0517 01:29:05.918774 2055 kubelet_node_status.go:548] "Error updating node status, will retry" err="error
getting node \"ci-3510.3.7-n-2b1b6103b5\": node \"ci-3510.3.7-n-2b1b6103b5\" not found" May 17 01:29:05.924421 kubelet[2055]: E0517 01:29:05.924410 2055 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" May 17 01:29:06.024782 kubelet[2055]: E0517 01:29:06.024675 2055 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" May 17 01:29:06.126172 kubelet[2055]: E0517 01:29:06.125943 2055 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" May 17 01:29:06.226349 kubelet[2055]: E0517 01:29:06.226209 2055 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" May 17 01:29:06.269220 kubelet[2055]: E0517 01:29:06.269131 2055 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" node="ci-3510.3.7-n-2b1b6103b5" May 17 01:29:06.271774 kubelet[2055]: E0517 01:29:06.271691 2055 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" node="ci-3510.3.7-n-2b1b6103b5" May 17 01:29:06.274734 kubelet[2055]: E0517 01:29:06.274649 2055 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" node="ci-3510.3.7-n-2b1b6103b5" May 17 01:29:06.327377 kubelet[2055]: E0517 01:29:06.327248 2055 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" May 17 01:29:06.428650 kubelet[2055]: E0517 01:29:06.428416 2055 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" May 17 01:29:06.529012 kubelet[2055]: E0517 
01:29:06.528905 2055 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" May 17 01:29:06.629545 kubelet[2055]: E0517 01:29:06.629439 2055 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" May 17 01:29:06.730811 kubelet[2055]: E0517 01:29:06.730688 2055 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" May 17 01:29:06.831594 kubelet[2055]: E0517 01:29:06.831524 2055 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" May 17 01:29:06.932300 kubelet[2055]: E0517 01:29:06.932280 2055 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" May 17 01:29:07.033266 kubelet[2055]: E0517 01:29:07.033077 2055 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" May 17 01:29:07.134129 kubelet[2055]: E0517 01:29:07.134059 2055 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" May 17 01:29:07.235053 kubelet[2055]: E0517 01:29:07.234953 2055 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" May 17 01:29:07.276829 kubelet[2055]: E0517 01:29:07.276742 2055 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" node="ci-3510.3.7-n-2b1b6103b5" May 17 01:29:07.277100 kubelet[2055]: E0517 01:29:07.276883 2055 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" node="ci-3510.3.7-n-2b1b6103b5" May 17 01:29:07.348604 kubelet[2055]: I0517 
01:29:07.348409 2055 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:07.361007 kubelet[2055]: W0517 01:29:07.360956 2055 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:29:07.361311 kubelet[2055]: I0517 01:29:07.361191 2055 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:07.382666 kubelet[2055]: W0517 01:29:07.381883 2055 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:29:07.382666 kubelet[2055]: I0517 01:29:07.382611 2055 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:07.391086 kubelet[2055]: W0517 01:29:07.390982 2055 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:29:08.202532 kubelet[2055]: I0517 01:29:08.202424 2055 apiserver.go:52] "Watching apiserver" May 17 01:29:08.248670 kubelet[2055]: I0517 01:29:08.248570 2055 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 01:29:08.391900 systemd[1]: Reloading. 
May 17 01:29:08.450112 /usr/lib/systemd/system-generators/torcx-generator[2395]: time="2025-05-17T01:29:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 01:29:08.450131 /usr/lib/systemd/system-generators/torcx-generator[2395]: time="2025-05-17T01:29:08Z" level=info msg="torcx already run" May 17 01:29:08.514768 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 01:29:08.514781 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 01:29:08.526797 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 01:29:08.595881 systemd[1]: Stopping kubelet.service... May 17 01:29:08.616662 systemd[1]: kubelet.service: Deactivated successfully. May 17 01:29:08.616785 systemd[1]: Stopped kubelet.service. May 17 01:29:08.617646 systemd[1]: Starting kubelet.service... May 17 01:29:08.851119 systemd[1]: Started kubelet.service. May 17 01:29:08.890062 kubelet[2459]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 01:29:08.890062 kubelet[2459]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 17 01:29:08.890062 kubelet[2459]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 01:29:08.890385 kubelet[2459]: I0517 01:29:08.890090 2459 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 01:29:08.894860 kubelet[2459]: I0517 01:29:08.894822 2459 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 01:29:08.894860 kubelet[2459]: I0517 01:29:08.894835 2459 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 01:29:08.895221 kubelet[2459]: I0517 01:29:08.895181 2459 server.go:954] "Client rotation is on, will bootstrap in background" May 17 01:29:08.896656 kubelet[2459]: I0517 01:29:08.896614 2459 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 01:29:08.898268 kubelet[2459]: I0517 01:29:08.898227 2459 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 01:29:08.900370 kubelet[2459]: E0517 01:29:08.900324 2459 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 01:29:08.900370 kubelet[2459]: I0517 01:29:08.900342 2459 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 01:29:08.936491 kubelet[2459]: I0517 01:29:08.936402 2459 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 01:29:08.937017 kubelet[2459]: I0517 01:29:08.936913 2459 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 01:29:08.937562 kubelet[2459]: I0517 01:29:08.936987 2459 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-2b1b6103b5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 01:29:08.937562 kubelet[2459]: I0517 01:29:08.937546 2459 topology_manager.go:138] "Creating topology manager 
with none policy" May 17 01:29:08.937562 kubelet[2459]: I0517 01:29:08.937580 2459 container_manager_linux.go:304] "Creating device plugin manager" May 17 01:29:08.938191 kubelet[2459]: I0517 01:29:08.937694 2459 state_mem.go:36] "Initialized new in-memory state store" May 17 01:29:08.938344 kubelet[2459]: I0517 01:29:08.938204 2459 kubelet.go:446] "Attempting to sync node with API server" May 17 01:29:08.938344 kubelet[2459]: I0517 01:29:08.938260 2459 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 01:29:08.938344 kubelet[2459]: I0517 01:29:08.938320 2459 kubelet.go:352] "Adding apiserver pod source" May 17 01:29:08.938344 kubelet[2459]: I0517 01:29:08.938347 2459 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 01:29:08.942244 kubelet[2459]: I0517 01:29:08.942098 2459 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 01:29:08.944653 kubelet[2459]: I0517 01:29:08.944614 2459 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 01:29:08.945462 kubelet[2459]: I0517 01:29:08.945433 2459 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 01:29:08.945596 kubelet[2459]: I0517 01:29:08.945486 2459 server.go:1287] "Started kubelet" May 17 01:29:08.945742 kubelet[2459]: I0517 01:29:08.945608 2459 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 01:29:08.945881 kubelet[2459]: I0517 01:29:08.945725 2459 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 01:29:08.946264 kubelet[2459]: I0517 01:29:08.946220 2459 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 01:29:08.948443 kubelet[2459]: I0517 01:29:08.948408 2459 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 01:29:08.948615 kubelet[2459]: I0517 
01:29:08.948494 2459 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 01:29:08.948753 kubelet[2459]: I0517 01:29:08.948645 2459 server.go:479] "Adding debug handlers to kubelet server" May 17 01:29:08.948753 kubelet[2459]: E0517 01:29:08.948639 2459 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2b1b6103b5\" not found" May 17 01:29:08.949018 kubelet[2459]: I0517 01:29:08.948760 2459 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 01:29:08.949137 kubelet[2459]: I0517 01:29:08.949038 2459 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 01:29:08.949316 kubelet[2459]: I0517 01:29:08.949269 2459 reconciler.go:26] "Reconciler: start to sync state" May 17 01:29:08.949535 kubelet[2459]: E0517 01:29:08.949322 2459 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 01:29:08.949535 kubelet[2459]: I0517 01:29:08.949416 2459 factory.go:221] Registration of the systemd container factory successfully May 17 01:29:08.949771 kubelet[2459]: I0517 01:29:08.949723 2459 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 01:29:08.952444 kubelet[2459]: I0517 01:29:08.952406 2459 factory.go:221] Registration of the containerd container factory successfully May 17 01:29:08.961864 kubelet[2459]: I0517 01:29:08.961825 2459 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 01:29:08.962912 kubelet[2459]: I0517 01:29:08.962892 2459 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 01:29:08.962992 kubelet[2459]: I0517 01:29:08.962920 2459 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 01:29:08.962992 kubelet[2459]: I0517 01:29:08.962938 2459 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 01:29:08.962992 kubelet[2459]: I0517 01:29:08.962947 2459 kubelet.go:2382] "Starting kubelet main sync loop" May 17 01:29:08.963107 kubelet[2459]: E0517 01:29:08.963005 2459 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 01:29:08.977553 kubelet[2459]: I0517 01:29:08.977534 2459 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 01:29:08.977553 kubelet[2459]: I0517 01:29:08.977548 2459 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 01:29:08.977553 kubelet[2459]: I0517 01:29:08.977561 2459 state_mem.go:36] "Initialized new in-memory state store" May 17 01:29:08.977706 kubelet[2459]: I0517 01:29:08.977680 2459 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 01:29:08.977706 kubelet[2459]: I0517 01:29:08.977689 2459 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 01:29:08.977706 kubelet[2459]: I0517 01:29:08.977703 2459 policy_none.go:49] "None policy: Start" May 17 01:29:08.977814 kubelet[2459]: I0517 01:29:08.977710 2459 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 01:29:08.977814 kubelet[2459]: I0517 01:29:08.977717 2459 state_mem.go:35] "Initializing new in-memory state store" May 17 01:29:08.977814 kubelet[2459]: I0517 01:29:08.977795 2459 state_mem.go:75] "Updated machine memory state" May 17 01:29:08.980119 kubelet[2459]: I0517 01:29:08.980071 2459 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 01:29:08.980192 kubelet[2459]: I0517 
01:29:08.980183 2459 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 01:29:08.980222 kubelet[2459]: I0517 01:29:08.980195 2459 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 01:29:08.980341 kubelet[2459]: I0517 01:29:08.980326 2459 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 01:29:08.980707 kubelet[2459]: E0517 01:29:08.980694 2459 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 01:29:09.064168 kubelet[2459]: I0517 01:29:09.064111 2459 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.064495 kubelet[2459]: I0517 01:29:09.064218 2459 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.064495 kubelet[2459]: I0517 01:29:09.064269 2459 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.073463 kubelet[2459]: W0517 01:29:09.073408 2459 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:29:09.073750 kubelet[2459]: E0517 01:29:09.073584 2459 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.7-n-2b1b6103b5\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.074182 kubelet[2459]: W0517 01:29:09.074119 2459 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:29:09.074182 kubelet[2459]: W0517 01:29:09.074134 2459 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can 
result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:29:09.074477 kubelet[2459]: E0517 01:29:09.074238 2459 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.7-n-2b1b6103b5\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.074477 kubelet[2459]: E0517 01:29:09.074260 2459 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.7-n-2b1b6103b5\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.086765 kubelet[2459]: I0517 01:29:09.086705 2459 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.109835 kubelet[2459]: I0517 01:29:09.109688 2459 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.110056 kubelet[2459]: I0517 01:29:09.109850 2459 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.150710 kubelet[2459]: I0517 01:29:09.150610 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/540f59fcf717dc4f739ffcc5db6011cf-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-2b1b6103b5\" (UID: \"540f59fcf717dc4f739ffcc5db6011cf\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.150710 kubelet[2459]: I0517 01:29:09.150699 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/540f59fcf717dc4f739ffcc5db6011cf-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-2b1b6103b5\" (UID: \"540f59fcf717dc4f739ffcc5db6011cf\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.151152 kubelet[2459]: I0517 01:29:09.150756 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d16367638affcb66ad683f0ad004ac1-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-2b1b6103b5\" (UID: \"3d16367638affcb66ad683f0ad004ac1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.151152 kubelet[2459]: I0517 01:29:09.150813 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d16367638affcb66ad683f0ad004ac1-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-2b1b6103b5\" (UID: \"3d16367638affcb66ad683f0ad004ac1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.151152 kubelet[2459]: I0517 01:29:09.150860 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d16367638affcb66ad683f0ad004ac1-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-2b1b6103b5\" (UID: \"3d16367638affcb66ad683f0ad004ac1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.151152 kubelet[2459]: I0517 01:29:09.150909 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d16367638affcb66ad683f0ad004ac1-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-2b1b6103b5\" (UID: \"3d16367638affcb66ad683f0ad004ac1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.151152 kubelet[2459]: I0517 01:29:09.150983 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/540f59fcf717dc4f739ffcc5db6011cf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-2b1b6103b5\" (UID: \"540f59fcf717dc4f739ffcc5db6011cf\") " 
pod="kube-system/kube-apiserver-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.151793 kubelet[2459]: I0517 01:29:09.151039 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d16367638affcb66ad683f0ad004ac1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-2b1b6103b5\" (UID: \"3d16367638affcb66ad683f0ad004ac1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.151793 kubelet[2459]: I0517 01:29:09.151180 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff6f0d0e631be09d029d5ed219a03409-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-2b1b6103b5\" (UID: \"ff6f0d0e631be09d029d5ed219a03409\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.356734 sudo[2503]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 01:29:09.356865 sudo[2503]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 17 01:29:09.684818 sudo[2503]: pam_unix(sudo:session): session closed for user root May 17 01:29:09.939018 kubelet[2459]: I0517 01:29:09.938947 2459 apiserver.go:52] "Watching apiserver" May 17 01:29:09.949807 kubelet[2459]: I0517 01:29:09.949761 2459 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 01:29:09.968324 kubelet[2459]: I0517 01:29:09.968281 2459 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.968472 kubelet[2459]: I0517 01:29:09.968406 2459 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.972725 kubelet[2459]: W0517 01:29:09.972690 2459 warnings.go:70] metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:29:09.972725 kubelet[2459]: W0517 01:29:09.972690 2459 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:29:09.972794 kubelet[2459]: E0517 01:29:09.972737 2459 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.7-n-2b1b6103b5\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.972794 kubelet[2459]: E0517 01:29:09.972737 2459 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.7-n-2b1b6103b5\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-n-2b1b6103b5" May 17 01:29:09.985301 kubelet[2459]: I0517 01:29:09.985237 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-n-2b1b6103b5" podStartSLOduration=2.985215034 podStartE2EDuration="2.985215034s" podCreationTimestamp="2025-05-17 01:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:29:09.985128705 +0000 UTC m=+1.131306491" watchObservedRunningTime="2025-05-17 01:29:09.985215034 +0000 UTC m=+1.131392823" May 17 01:29:09.990149 kubelet[2459]: I0517 01:29:09.990121 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-n-2b1b6103b5" podStartSLOduration=2.990111222 podStartE2EDuration="2.990111222s" podCreationTimestamp="2025-05-17 01:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:29:09.989946206 +0000 UTC m=+1.136123993" watchObservedRunningTime="2025-05-17 01:29:09.990111222 +0000 UTC m=+1.136289005" May 17 01:29:09.995122 kubelet[2459]: 
I0517 01:29:09.995057 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2b1b6103b5" podStartSLOduration=2.995048734 podStartE2EDuration="2.995048734s" podCreationTimestamp="2025-05-17 01:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:29:09.994698688 +0000 UTC m=+1.140876474" watchObservedRunningTime="2025-05-17 01:29:09.995048734 +0000 UTC m=+1.141226516" May 17 01:29:11.162955 sudo[1734]: pam_unix(sudo:session): session closed for user root May 17 01:29:11.166033 sshd[1731]: pam_unix(sshd:session): session closed for user core May 17 01:29:11.171993 systemd[1]: sshd@6-145.40.90.133:22-139.178.89.65:55326.service: Deactivated successfully. May 17 01:29:11.173786 systemd[1]: session-9.scope: Deactivated successfully. May 17 01:29:11.174158 systemd[1]: session-9.scope: Consumed 3.862s CPU time. May 17 01:29:11.175561 systemd-logind[1555]: Session 9 logged out. Waiting for processes to exit. May 17 01:29:11.177928 systemd-logind[1555]: Removed session 9. May 17 01:29:12.664325 kubelet[2459]: I0517 01:29:12.664202 2459 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 01:29:12.665211 env[1563]: time="2025-05-17T01:29:12.664927922Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 01:29:12.666110 kubelet[2459]: I0517 01:29:12.665467 2459 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 01:29:13.643597 systemd[1]: Created slice kubepods-besteffort-podf7fa71fe_4275_42f6_80fc_6e13bcbe921b.slice. May 17 01:29:13.667967 systemd[1]: Created slice kubepods-burstable-podd935d46a_439e_4524_ba54_e7e3061e6e3a.slice. 
May 17 01:29:13.684590 kubelet[2459]: I0517 01:29:13.684532 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-host-proc-sys-kernel\") pod \"cilium-t8znq\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") " pod="kube-system/cilium-t8znq"
May 17 01:29:13.684590 kubelet[2459]: I0517 01:29:13.684564 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvmk8\" (UniqueName: \"kubernetes.io/projected/f7fa71fe-4275-42f6-80fc-6e13bcbe921b-kube-api-access-pvmk8\") pod \"kube-proxy-5jn9n\" (UID: \"f7fa71fe-4275-42f6-80fc-6e13bcbe921b\") " pod="kube-system/kube-proxy-5jn9n"
May 17 01:29:13.684590 kubelet[2459]: I0517 01:29:13.684582 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-cilium-cgroup\") pod \"cilium-t8znq\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") " pod="kube-system/cilium-t8znq"
May 17 01:29:13.684590 kubelet[2459]: I0517 01:29:13.684597 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f7fa71fe-4275-42f6-80fc-6e13bcbe921b-kube-proxy\") pod \"kube-proxy-5jn9n\" (UID: \"f7fa71fe-4275-42f6-80fc-6e13bcbe921b\") " pod="kube-system/kube-proxy-5jn9n"
May 17 01:29:13.684920 kubelet[2459]: I0517 01:29:13.684625 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-bpf-maps\") pod \"cilium-t8znq\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") " pod="kube-system/cilium-t8znq"
May 17 01:29:13.684920 kubelet[2459]: I0517 01:29:13.684649 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d935d46a-439e-4524-ba54-e7e3061e6e3a-cilium-config-path\") pod \"cilium-t8znq\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") " pod="kube-system/cilium-t8znq"
May 17 01:29:13.684920 kubelet[2459]: I0517 01:29:13.684675 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d935d46a-439e-4524-ba54-e7e3061e6e3a-hubble-tls\") pod \"cilium-t8znq\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") " pod="kube-system/cilium-t8znq"
May 17 01:29:13.684920 kubelet[2459]: I0517 01:29:13.684702 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85srr\" (UniqueName: \"kubernetes.io/projected/d935d46a-439e-4524-ba54-e7e3061e6e3a-kube-api-access-85srr\") pod \"cilium-t8znq\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") " pod="kube-system/cilium-t8znq"
May 17 01:29:13.684920 kubelet[2459]: I0517 01:29:13.684720 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7fa71fe-4275-42f6-80fc-6e13bcbe921b-xtables-lock\") pod \"kube-proxy-5jn9n\" (UID: \"f7fa71fe-4275-42f6-80fc-6e13bcbe921b\") " pod="kube-system/kube-proxy-5jn9n"
May 17 01:29:13.684920 kubelet[2459]: I0517 01:29:13.684739 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-host-proc-sys-net\") pod \"cilium-t8znq\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") " pod="kube-system/cilium-t8znq"
May 17 01:29:13.685050 kubelet[2459]: I0517 01:29:13.684752 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-lib-modules\") pod \"cilium-t8znq\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") " pod="kube-system/cilium-t8znq"
May 17 01:29:13.685050 kubelet[2459]: I0517 01:29:13.684775 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d935d46a-439e-4524-ba54-e7e3061e6e3a-clustermesh-secrets\") pod \"cilium-t8znq\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") " pod="kube-system/cilium-t8znq"
May 17 01:29:13.685050 kubelet[2459]: I0517 01:29:13.684801 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-hostproc\") pod \"cilium-t8znq\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") " pod="kube-system/cilium-t8znq"
May 17 01:29:13.685050 kubelet[2459]: I0517 01:29:13.684819 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-etc-cni-netd\") pod \"cilium-t8znq\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") " pod="kube-system/cilium-t8znq"
May 17 01:29:13.685050 kubelet[2459]: I0517 01:29:13.684831 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-cni-path\") pod \"cilium-t8znq\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") " pod="kube-system/cilium-t8znq"
May 17 01:29:13.685050 kubelet[2459]: I0517 01:29:13.684843 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-xtables-lock\") pod \"cilium-t8znq\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") " pod="kube-system/cilium-t8znq"
May 17 01:29:13.685176 kubelet[2459]: I0517 01:29:13.684856 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-cilium-run\") pod \"cilium-t8znq\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") " pod="kube-system/cilium-t8znq"
May 17 01:29:13.685176 kubelet[2459]: I0517 01:29:13.684867 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7fa71fe-4275-42f6-80fc-6e13bcbe921b-lib-modules\") pod \"kube-proxy-5jn9n\" (UID: \"f7fa71fe-4275-42f6-80fc-6e13bcbe921b\") " pod="kube-system/kube-proxy-5jn9n"
May 17 01:29:13.765990 systemd[1]: Created slice kubepods-besteffort-pod0f2771f2_6cd5_4e97_9053_2769d921241e.slice.
May 17 01:29:13.785892 kubelet[2459]: I0517 01:29:13.785843 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-449z8\" (UniqueName: \"kubernetes.io/projected/0f2771f2-6cd5-4e97-9053-2769d921241e-kube-api-access-449z8\") pod \"cilium-operator-6c4d7847fc-xps6q\" (UID: \"0f2771f2-6cd5-4e97-9053-2769d921241e\") " pod="kube-system/cilium-operator-6c4d7847fc-xps6q"
May 17 01:29:13.786190 kubelet[2459]: I0517 01:29:13.786145 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f2771f2-6cd5-4e97-9053-2769d921241e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xps6q\" (UID: \"0f2771f2-6cd5-4e97-9053-2769d921241e\") " pod="kube-system/cilium-operator-6c4d7847fc-xps6q"
May 17 01:29:13.787502 kubelet[2459]: I0517 01:29:13.787420 2459 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
May 17 01:29:13.968923 env[1563]: time="2025-05-17T01:29:13.968771056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jn9n,Uid:f7fa71fe-4275-42f6-80fc-6e13bcbe921b,Namespace:kube-system,Attempt:0,}"
May 17 01:29:13.971023 env[1563]: time="2025-05-17T01:29:13.970896336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t8znq,Uid:d935d46a-439e-4524-ba54-e7e3061e6e3a,Namespace:kube-system,Attempt:0,}"
May 17 01:29:14.008918 env[1563]: time="2025-05-17T01:29:14.008746524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:29:14.008918 env[1563]: time="2025-05-17T01:29:14.008843499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:29:14.008918 env[1563]: time="2025-05-17T01:29:14.008881761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:29:14.009557 env[1563]: time="2025-05-17T01:29:14.009237176Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5295c0f4fd3baa0ccfb222c049bb53d7278146e0b442ac8a20ae4ef29d3b4b57 pid=2621 runtime=io.containerd.runc.v2
May 17 01:29:14.009708 env[1563]: time="2025-05-17T01:29:14.009549020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:29:14.009708 env[1563]: time="2025-05-17T01:29:14.009654081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:29:14.009933 env[1563]: time="2025-05-17T01:29:14.009708804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:29:14.010196 env[1563]: time="2025-05-17T01:29:14.010099578Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc pid=2622 runtime=io.containerd.runc.v2
May 17 01:29:14.039050 systemd[1]: Started cri-containerd-2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc.scope.
May 17 01:29:14.041075 systemd[1]: Started cri-containerd-5295c0f4fd3baa0ccfb222c049bb53d7278146e0b442ac8a20ae4ef29d3b4b57.scope.
May 17 01:29:14.056882 env[1563]: time="2025-05-17T01:29:14.056848532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t8znq,Uid:d935d46a-439e-4524-ba54-e7e3061e6e3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\""
May 17 01:29:14.057866 env[1563]: time="2025-05-17T01:29:14.057835683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jn9n,Uid:f7fa71fe-4275-42f6-80fc-6e13bcbe921b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5295c0f4fd3baa0ccfb222c049bb53d7278146e0b442ac8a20ae4ef29d3b4b57\""
May 17 01:29:14.059095 env[1563]: time="2025-05-17T01:29:14.059063083Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 17 01:29:14.060179 env[1563]: time="2025-05-17T01:29:14.060155893Z" level=info msg="CreateContainer within sandbox \"5295c0f4fd3baa0ccfb222c049bb53d7278146e0b442ac8a20ae4ef29d3b4b57\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 01:29:14.067113 env[1563]: time="2025-05-17T01:29:14.067062586Z" level=info msg="CreateContainer within sandbox \"5295c0f4fd3baa0ccfb222c049bb53d7278146e0b442ac8a20ae4ef29d3b4b57\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aee63d0532aa56a7e979f8eba8dac4417739e593d844f35f84d648cbafc7105c\""
May 17 01:29:14.067420 env[1563]: time="2025-05-17T01:29:14.067370024Z" level=info msg="StartContainer for \"aee63d0532aa56a7e979f8eba8dac4417739e593d844f35f84d648cbafc7105c\""
May 17 01:29:14.069909 env[1563]: time="2025-05-17T01:29:14.069885992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xps6q,Uid:0f2771f2-6cd5-4e97-9053-2769d921241e,Namespace:kube-system,Attempt:0,}"
May 17 01:29:14.077034 env[1563]: time="2025-05-17T01:29:14.076991794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:29:14.077034 env[1563]: time="2025-05-17T01:29:14.077019126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:29:14.077034 env[1563]: time="2025-05-17T01:29:14.077029170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:29:14.077198 env[1563]: time="2025-05-17T01:29:14.077115437Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16 pid=2709 runtime=io.containerd.runc.v2
May 17 01:29:14.078108 systemd[1]: Started cri-containerd-aee63d0532aa56a7e979f8eba8dac4417739e593d844f35f84d648cbafc7105c.scope.
May 17 01:29:14.084378 systemd[1]: Started cri-containerd-fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16.scope.
May 17 01:29:14.097449 env[1563]: time="2025-05-17T01:29:14.097392560Z" level=info msg="StartContainer for \"aee63d0532aa56a7e979f8eba8dac4417739e593d844f35f84d648cbafc7105c\" returns successfully"
May 17 01:29:14.115870 env[1563]: time="2025-05-17T01:29:14.115780976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xps6q,Uid:0f2771f2-6cd5-4e97-9053-2769d921241e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16\""
May 17 01:29:15.004112 kubelet[2459]: I0517 01:29:15.004000 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5jn9n" podStartSLOduration=2.003958888 podStartE2EDuration="2.003958888s" podCreationTimestamp="2025-05-17 01:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:29:15.003882413 +0000 UTC m=+6.150060332" watchObservedRunningTime="2025-05-17 01:29:15.003958888 +0000 UTC m=+6.150136728"
May 17 01:29:17.973208 update_engine[1557]: I0517 01:29:17.973157 1557 update_attempter.cc:509] Updating boot flags...
May 17 01:29:18.965691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3832052593.mount: Deactivated successfully.
May 17 01:29:20.665484 env[1563]: time="2025-05-17T01:29:20.665419882Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:20.666125 env[1563]: time="2025-05-17T01:29:20.666082628Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:20.666887 env[1563]: time="2025-05-17T01:29:20.666845695Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:20.667200 env[1563]: time="2025-05-17T01:29:20.667152364Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 17 01:29:20.668141 env[1563]: time="2025-05-17T01:29:20.668092897Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 17 01:29:20.668822 env[1563]: time="2025-05-17T01:29:20.668777970Z" level=info msg="CreateContainer within sandbox \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 01:29:20.673786 env[1563]: time="2025-05-17T01:29:20.673741493Z" level=info msg="CreateContainer within sandbox \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38\""
May 17 01:29:20.674165 env[1563]: time="2025-05-17T01:29:20.674103170Z" level=info msg="StartContainer for \"66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38\""
May 17 01:29:20.696848 systemd[1]: Started cri-containerd-66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38.scope.
May 17 01:29:20.708420 env[1563]: time="2025-05-17T01:29:20.708365807Z" level=info msg="StartContainer for \"66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38\" returns successfully"
May 17 01:29:20.713917 systemd[1]: cri-containerd-66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38.scope: Deactivated successfully.
May 17 01:29:21.676393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38-rootfs.mount: Deactivated successfully.
May 17 01:29:21.887076 env[1563]: time="2025-05-17T01:29:21.886975755Z" level=info msg="shim disconnected" id=66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38
May 17 01:29:21.887898 env[1563]: time="2025-05-17T01:29:21.887078578Z" level=warning msg="cleaning up after shim disconnected" id=66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38 namespace=k8s.io
May 17 01:29:21.887898 env[1563]: time="2025-05-17T01:29:21.887108470Z" level=info msg="cleaning up dead shim"
May 17 01:29:21.902583 env[1563]: time="2025-05-17T01:29:21.902520235Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:29:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2975 runtime=io.containerd.runc.v2\n"
May 17 01:29:22.006256 env[1563]: time="2025-05-17T01:29:22.006170299Z" level=info msg="CreateContainer within sandbox \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 01:29:22.016687 env[1563]: time="2025-05-17T01:29:22.016620088Z" level=info msg="CreateContainer within sandbox \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0\""
May 17 01:29:22.016980 env[1563]: time="2025-05-17T01:29:22.016929622Z" level=info msg="StartContainer for \"e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0\""
May 17 01:29:22.025549 systemd[1]: Started cri-containerd-e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0.scope.
May 17 01:29:22.036523 env[1563]: time="2025-05-17T01:29:22.036499578Z" level=info msg="StartContainer for \"e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0\" returns successfully"
May 17 01:29:22.042671 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 01:29:22.042808 systemd[1]: Stopped systemd-sysctl.service.
May 17 01:29:22.042900 systemd[1]: Stopping systemd-sysctl.service...
May 17 01:29:22.043752 systemd[1]: Starting systemd-sysctl.service...
May 17 01:29:22.044391 systemd[1]: cri-containerd-e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0.scope: Deactivated successfully.
May 17 01:29:22.047760 systemd[1]: Finished systemd-sysctl.service.
May 17 01:29:22.053755 env[1563]: time="2025-05-17T01:29:22.053729854Z" level=info msg="shim disconnected" id=e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0
May 17 01:29:22.053842 env[1563]: time="2025-05-17T01:29:22.053757347Z" level=warning msg="cleaning up after shim disconnected" id=e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0 namespace=k8s.io
May 17 01:29:22.053842 env[1563]: time="2025-05-17T01:29:22.053764638Z" level=info msg="cleaning up dead shim"
May 17 01:29:22.057084 env[1563]: time="2025-05-17T01:29:22.057067779Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:29:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3040 runtime=io.containerd.runc.v2\n"
May 17 01:29:22.677045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0-rootfs.mount: Deactivated successfully.
May 17 01:29:23.007583 env[1563]: time="2025-05-17T01:29:23.007497716Z" level=info msg="CreateContainer within sandbox \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 01:29:23.016546 env[1563]: time="2025-05-17T01:29:23.016492828Z" level=info msg="CreateContainer within sandbox \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c\""
May 17 01:29:23.017015 env[1563]: time="2025-05-17T01:29:23.016943964Z" level=info msg="StartContainer for \"23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c\""
May 17 01:29:23.027590 systemd[1]: Started cri-containerd-23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c.scope.
May 17 01:29:23.041727 env[1563]: time="2025-05-17T01:29:23.041704259Z" level=info msg="StartContainer for \"23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c\" returns successfully"
May 17 01:29:23.041933 systemd[1]: cri-containerd-23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c.scope: Deactivated successfully.
May 17 01:29:23.105662 env[1563]: time="2025-05-17T01:29:23.105590000Z" level=info msg="shim disconnected" id=23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c
May 17 01:29:23.105662 env[1563]: time="2025-05-17T01:29:23.105616813Z" level=warning msg="cleaning up after shim disconnected" id=23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c namespace=k8s.io
May 17 01:29:23.105662 env[1563]: time="2025-05-17T01:29:23.105623971Z" level=info msg="cleaning up dead shim"
May 17 01:29:23.109291 env[1563]: time="2025-05-17T01:29:23.109271989Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:29:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3096 runtime=io.containerd.runc.v2\n"
May 17 01:29:23.563818 env[1563]: time="2025-05-17T01:29:23.563758799Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:23.564413 env[1563]: time="2025-05-17T01:29:23.564380774Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:23.565192 env[1563]: time="2025-05-17T01:29:23.565152754Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 01:29:23.565554 env[1563]: time="2025-05-17T01:29:23.565501828Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 17 01:29:23.566597 env[1563]: time="2025-05-17T01:29:23.566582848Z" level=info msg="CreateContainer within sandbox \"fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 17 01:29:23.571184 env[1563]: time="2025-05-17T01:29:23.571132853Z" level=info msg="CreateContainer within sandbox \"fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2e806f25cdab85ad324d5ed07ac57cd0377adefc2b386dbafb2a96d556ae69f5\""
May 17 01:29:23.571547 env[1563]: time="2025-05-17T01:29:23.571492580Z" level=info msg="StartContainer for \"2e806f25cdab85ad324d5ed07ac57cd0377adefc2b386dbafb2a96d556ae69f5\""
May 17 01:29:23.579807 systemd[1]: Started cri-containerd-2e806f25cdab85ad324d5ed07ac57cd0377adefc2b386dbafb2a96d556ae69f5.scope.
May 17 01:29:23.592855 env[1563]: time="2025-05-17T01:29:23.592830778Z" level=info msg="StartContainer for \"2e806f25cdab85ad324d5ed07ac57cd0377adefc2b386dbafb2a96d556ae69f5\" returns successfully"
May 17 01:29:23.679583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c-rootfs.mount: Deactivated successfully.
May 17 01:29:24.019487 env[1563]: time="2025-05-17T01:29:24.019372934Z" level=info msg="CreateContainer within sandbox \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 01:29:24.036591 env[1563]: time="2025-05-17T01:29:24.036446172Z" level=info msg="CreateContainer within sandbox \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e\""
May 17 01:29:24.037637 env[1563]: time="2025-05-17T01:29:24.037560098Z" level=info msg="StartContainer for \"d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e\""
May 17 01:29:24.055788 kubelet[2459]: I0517 01:29:24.055734 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xps6q" podStartSLOduration=1.606177761 podStartE2EDuration="11.055714002s" podCreationTimestamp="2025-05-17 01:29:13 +0000 UTC" firstStartedPulling="2025-05-17 01:29:14.116455517 +0000 UTC m=+5.262633301" lastFinishedPulling="2025-05-17 01:29:23.56599176 +0000 UTC m=+14.712169542" observedRunningTime="2025-05-17 01:29:24.028234286 +0000 UTC m=+15.174412135" watchObservedRunningTime="2025-05-17 01:29:24.055714002 +0000 UTC m=+15.201891798"
May 17 01:29:24.063490 systemd[1]: Started cri-containerd-d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e.scope.
May 17 01:29:24.078746 systemd[1]: cri-containerd-d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e.scope: Deactivated successfully.
May 17 01:29:24.092763 env[1563]: time="2025-05-17T01:29:24.092700239Z" level=info msg="StartContainer for \"d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e\" returns successfully"
May 17 01:29:24.246994 env[1563]: time="2025-05-17T01:29:24.246887350Z" level=info msg="shim disconnected" id=d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e
May 17 01:29:24.247441 env[1563]: time="2025-05-17T01:29:24.246994534Z" level=warning msg="cleaning up after shim disconnected" id=d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e namespace=k8s.io
May 17 01:29:24.247441 env[1563]: time="2025-05-17T01:29:24.247028300Z" level=info msg="cleaning up dead shim"
May 17 01:29:24.263427 env[1563]: time="2025-05-17T01:29:24.263281236Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:29:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3200 runtime=io.containerd.runc.v2\n"
May 17 01:29:24.677539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e-rootfs.mount: Deactivated successfully.
May 17 01:29:25.030754 env[1563]: time="2025-05-17T01:29:25.030659623Z" level=info msg="CreateContainer within sandbox \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 01:29:25.043730 env[1563]: time="2025-05-17T01:29:25.043704841Z" level=info msg="CreateContainer within sandbox \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0\""
May 17 01:29:25.043977 env[1563]: time="2025-05-17T01:29:25.043965050Z" level=info msg="StartContainer for \"75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0\""
May 17 01:29:25.052709 systemd[1]: Started cri-containerd-75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0.scope.
May 17 01:29:25.064760 env[1563]: time="2025-05-17T01:29:25.064733270Z" level=info msg="StartContainer for \"75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0\" returns successfully"
May 17 01:29:25.118407 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
May 17 01:29:25.141610 kubelet[2459]: I0517 01:29:25.141595 2459 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 17 01:29:25.158601 systemd[1]: Created slice kubepods-burstable-pod11abd5b4_41c0_45de_8a1a_94efa44c715b.slice.
May 17 01:29:25.160418 systemd[1]: Created slice kubepods-burstable-pod69605852_495a_4d5c_be9e_c938bf6f26f1.slice.
May 17 01:29:25.268009 kubelet[2459]: I0517 01:29:25.267957 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69605852-495a-4d5c-be9e-c938bf6f26f1-config-volume\") pod \"coredns-668d6bf9bc-4j58f\" (UID: \"69605852-495a-4d5c-be9e-c938bf6f26f1\") " pod="kube-system/coredns-668d6bf9bc-4j58f"
May 17 01:29:25.268009 kubelet[2459]: I0517 01:29:25.267982 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11abd5b4-41c0-45de-8a1a-94efa44c715b-config-volume\") pod \"coredns-668d6bf9bc-d29zr\" (UID: \"11abd5b4-41c0-45de-8a1a-94efa44c715b\") " pod="kube-system/coredns-668d6bf9bc-d29zr"
May 17 01:29:25.268009 kubelet[2459]: I0517 01:29:25.267992 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8wkx\" (UniqueName: \"kubernetes.io/projected/11abd5b4-41c0-45de-8a1a-94efa44c715b-kube-api-access-t8wkx\") pod \"coredns-668d6bf9bc-d29zr\" (UID: \"11abd5b4-41c0-45de-8a1a-94efa44c715b\") " pod="kube-system/coredns-668d6bf9bc-d29zr"
May 17 01:29:25.268009 kubelet[2459]: I0517 01:29:25.268004 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-875kp\" (UniqueName: \"kubernetes.io/projected/69605852-495a-4d5c-be9e-c938bf6f26f1-kube-api-access-875kp\") pod \"coredns-668d6bf9bc-4j58f\" (UID: \"69605852-495a-4d5c-be9e-c938bf6f26f1\") " pod="kube-system/coredns-668d6bf9bc-4j58f"
May 17 01:29:25.269384 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
May 17 01:29:25.461000 env[1563]: time="2025-05-17T01:29:25.460909634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d29zr,Uid:11abd5b4-41c0-45de-8a1a-94efa44c715b,Namespace:kube-system,Attempt:0,}"
May 17 01:29:25.463223 env[1563]: time="2025-05-17T01:29:25.463143889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4j58f,Uid:69605852-495a-4d5c-be9e-c938bf6f26f1,Namespace:kube-system,Attempt:0,}"
May 17 01:29:26.060749 kubelet[2459]: I0517 01:29:26.060688 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t8znq" podStartSLOduration=6.450555821 podStartE2EDuration="13.06067598s" podCreationTimestamp="2025-05-17 01:29:13 +0000 UTC" firstStartedPulling="2025-05-17 01:29:14.057829285 +0000 UTC m=+5.204007076" lastFinishedPulling="2025-05-17 01:29:20.667949453 +0000 UTC m=+11.814127235" observedRunningTime="2025-05-17 01:29:26.06052989 +0000 UTC m=+17.206707676" watchObservedRunningTime="2025-05-17 01:29:26.06067598 +0000 UTC m=+17.206853978"
May 17 01:29:26.859010 systemd-networkd[1323]: cilium_host: Link UP
May 17 01:29:26.859115 systemd-networkd[1323]: cilium_net: Link UP
May 17 01:29:26.866190 systemd-networkd[1323]: cilium_net: Gained carrier
May 17 01:29:26.873374 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
May 17 01:29:26.873448 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 17 01:29:26.873551 systemd-networkd[1323]: cilium_host: Gained carrier
May 17 01:29:26.919316 systemd-networkd[1323]: cilium_vxlan: Link UP
May 17 01:29:26.919319 systemd-networkd[1323]: cilium_vxlan: Gained carrier
May 17 01:29:26.976446 systemd-networkd[1323]: cilium_net: Gained IPv6LL
May 17 01:29:27.051353 kernel: NET: Registered PF_ALG protocol family
May 17 01:29:27.576503 systemd-networkd[1323]: cilium_host: Gained IPv6LL
May 17 01:29:27.591638 systemd-networkd[1323]: lxc_health: Link UP
May 17 01:29:27.613239 systemd-networkd[1323]: lxc_health: Gained carrier
May 17 01:29:27.613377 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 01:29:28.023973 systemd-networkd[1323]: lxc79e3d978cd02: Link UP
May 17 01:29:28.041367 kernel: eth0: renamed from tmp0d8ad
May 17 01:29:28.069282 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 17 01:29:28.069346 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc79e3d978cd02: link becomes ready
May 17 01:29:28.069337 systemd-networkd[1323]: lxc79e3d978cd02: Gained carrier
May 17 01:29:28.069461 systemd-networkd[1323]: lxcdd0ad57e8a7c: Link UP
May 17 01:29:28.089308 kernel: eth0: renamed from tmp019f9
May 17 01:29:28.110363 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdd0ad57e8a7c: link becomes ready
May 17 01:29:28.110356 systemd-networkd[1323]: lxcdd0ad57e8a7c: Gained carrier
May 17 01:29:28.792424 systemd-networkd[1323]: cilium_vxlan: Gained IPv6LL
May 17 01:29:29.039207 kubelet[2459]: I0517 01:29:29.039165 2459 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 01:29:29.048431 systemd-networkd[1323]: lxc_health: Gained IPv6LL
May 17 01:29:29.496463 systemd-networkd[1323]: lxcdd0ad57e8a7c: Gained IPv6LL
May 17 01:29:29.816433 systemd-networkd[1323]: lxc79e3d978cd02: Gained IPv6LL
May 17 01:29:30.332284 env[1563]: time="2025-05-17T01:29:30.332231784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:29:30.332284 env[1563]: time="2025-05-17T01:29:30.332253658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:29:30.332284 env[1563]: time="2025-05-17T01:29:30.332264445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:29:30.332549 env[1563]: time="2025-05-17T01:29:30.332339326Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d8ad8bb57129dc5c93410b457ad115a1537325b3cbbf565f96a7ab1ad32d49e pid=3885 runtime=io.containerd.runc.v2
May 17 01:29:30.332638 env[1563]: time="2025-05-17T01:29:30.332596523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:29:30.332638 env[1563]: time="2025-05-17T01:29:30.332613723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:29:30.332638 env[1563]: time="2025-05-17T01:29:30.332620476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:29:30.332755 env[1563]: time="2025-05-17T01:29:30.332676922Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/019f972dd105f289a1a22e46eb6fe14daea36957f831b47a19e106e35768dbef pid=3890 runtime=io.containerd.runc.v2
May 17 01:29:30.340644 systemd[1]: Started cri-containerd-019f972dd105f289a1a22e46eb6fe14daea36957f831b47a19e106e35768dbef.scope.
May 17 01:29:30.341254 systemd[1]: Started cri-containerd-0d8ad8bb57129dc5c93410b457ad115a1537325b3cbbf565f96a7ab1ad32d49e.scope.
May 17 01:29:30.362729 env[1563]: time="2025-05-17T01:29:30.362699907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4j58f,Uid:69605852-495a-4d5c-be9e-c938bf6f26f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"019f972dd105f289a1a22e46eb6fe14daea36957f831b47a19e106e35768dbef\""
May 17 01:29:30.362827 env[1563]: time="2025-05-17T01:29:30.362740196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d29zr,Uid:11abd5b4-41c0-45de-8a1a-94efa44c715b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d8ad8bb57129dc5c93410b457ad115a1537325b3cbbf565f96a7ab1ad32d49e\""
May 17 01:29:30.363842 env[1563]: time="2025-05-17T01:29:30.363826184Z" level=info msg="CreateContainer within sandbox \"019f972dd105f289a1a22e46eb6fe14daea36957f831b47a19e106e35768dbef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 01:29:30.363881 env[1563]: time="2025-05-17T01:29:30.363850366Z" level=info msg="CreateContainer within sandbox \"0d8ad8bb57129dc5c93410b457ad115a1537325b3cbbf565f96a7ab1ad32d49e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 01:29:30.369149 env[1563]: time="2025-05-17T01:29:30.369103223Z" level=info msg="CreateContainer within sandbox \"019f972dd105f289a1a22e46eb6fe14daea36957f831b47a19e106e35768dbef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c573b7fb52fb0ae1b1c20c6124caa6d303c410888cee252d2ee90ef74a020f5\""
May 17 01:29:30.369419 env[1563]: time="2025-05-17T01:29:30.369361721Z" level=info msg="StartContainer for \"1c573b7fb52fb0ae1b1c20c6124caa6d303c410888cee252d2ee90ef74a020f5\""
May 17 01:29:30.370003 env[1563]: time="2025-05-17T01:29:30.369961776Z" level=info msg="CreateContainer within sandbox \"0d8ad8bb57129dc5c93410b457ad115a1537325b3cbbf565f96a7ab1ad32d49e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0f932afd13dabcaf51182e36233995f78ec82a1853c2816fb3b9f6faf6f3ca9a\""
May 17 01:29:30.370136 env[1563]: time="2025-05-17T01:29:30.370124789Z" level=info msg="StartContainer for \"0f932afd13dabcaf51182e36233995f78ec82a1853c2816fb3b9f6faf6f3ca9a\""
May 17 01:29:30.377303 systemd[1]: Started cri-containerd-0f932afd13dabcaf51182e36233995f78ec82a1853c2816fb3b9f6faf6f3ca9a.scope.
May 17 01:29:30.377889 systemd[1]: Started cri-containerd-1c573b7fb52fb0ae1b1c20c6124caa6d303c410888cee252d2ee90ef74a020f5.scope.
May 17 01:29:30.389573 env[1563]: time="2025-05-17T01:29:30.389509834Z" level=info msg="StartContainer for \"1c573b7fb52fb0ae1b1c20c6124caa6d303c410888cee252d2ee90ef74a020f5\" returns successfully"
May 17 01:29:30.389669 env[1563]: time="2025-05-17T01:29:30.389620953Z" level=info msg="StartContainer for \"0f932afd13dabcaf51182e36233995f78ec82a1853c2816fb3b9f6faf6f3ca9a\" returns successfully"
May 17 01:29:31.069965 kubelet[2459]: I0517 01:29:31.069887 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d29zr" podStartSLOduration=18.069871037 podStartE2EDuration="18.069871037s" podCreationTimestamp="2025-05-17 01:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:29:31.069551207 +0000 UTC m=+22.215729002" watchObservedRunningTime="2025-05-17 01:29:31.069871037 +0000 UTC m=+22.216048823"
May 17 01:29:31.070291 kubelet[2459]: I0517 01:29:31.069974 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4j58f" podStartSLOduration=18.06996752 podStartE2EDuration="18.06996752s" podCreationTimestamp="2025-05-17 01:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:29:31.062594455 +0000 UTC m=+22.208772242" watchObservedRunningTime="2025-05-17 01:29:31.06996752 +0000 UTC m=+22.216145304"
May 17 01:29:32.434501 systemd[1]: Started sshd@7-145.40.90.133:22-218.92.0.157:10466.service.
May 17 01:29:33.518832 sshd[4048]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root
May 17 01:29:35.550584 sshd[4048]: Failed password for root from 218.92.0.157 port 10466 ssh2
May 17 01:29:37.694518 sshd[4048]: Failed password for root from 218.92.0.157 port 10466 ssh2
May 17 01:29:39.839537 sshd[4048]: Failed password for root from 218.92.0.157 port 10466 ssh2
May 17 01:29:40.134099 sshd[4048]: Received disconnect from 218.92.0.157 port 10466:11: [preauth]
May 17 01:29:40.134099 sshd[4048]: Disconnected from authenticating user root 218.92.0.157 port 10466 [preauth]
May 17 01:29:40.134602 sshd[4048]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root
May 17 01:29:40.136723 systemd[1]: sshd@7-145.40.90.133:22-218.92.0.157:10466.service: Deactivated successfully.
May 17 01:29:43.121068 kubelet[2459]: I0517 01:29:43.120958 2459 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 01:31:28.638141 systemd[1]: Started sshd@8-145.40.90.133:22-218.92.0.157:21083.service.
May 17 01:31:29.600329 sshd[4070]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root
May 17 01:31:31.888398 sshd[4070]: Failed password for root from 218.92.0.157 port 21083 ssh2
May 17 01:31:35.663870 sshd[4070]: Failed password for root from 218.92.0.157 port 21083 ssh2
May 17 01:31:37.928727 sshd[4070]: Failed password for root from 218.92.0.157 port 21083 ssh2
May 17 01:31:38.197089 sshd[4070]: Received disconnect from 218.92.0.157 port 21083:11: [preauth]
May 17 01:31:38.197089 sshd[4070]: Disconnected from authenticating user root 218.92.0.157 port 21083 [preauth]
May 17 01:31:38.197547 sshd[4070]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root
May 17 01:31:38.199721 systemd[1]: sshd@8-145.40.90.133:22-218.92.0.157:21083.service: Deactivated successfully.
May 17 01:33:24.598631 systemd[1]: Started sshd@9-145.40.90.133:22-218.92.0.157:15386.service.
May 17 01:33:25.563907 sshd[4092]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root
May 17 01:33:27.113699 sshd[4092]: Failed password for root from 218.92.0.157 port 15386 ssh2
May 17 01:33:27.749761 sshd[4092]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
May 17 01:33:29.907058 sshd[4092]: Failed password for root from 218.92.0.157 port 15386 ssh2
May 17 01:33:31.976659 sshd[4092]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
May 17 01:33:33.682133 sshd[4092]: Failed password for root from 218.92.0.157 port 15386 ssh2
May 17 01:33:34.161948 sshd[4092]: Received disconnect from 218.92.0.157 port 15386:11: [preauth]
May 17 01:33:34.161948 sshd[4092]: Disconnected from authenticating user root 218.92.0.157 port 15386 [preauth]
May 17 01:33:34.162561 sshd[4092]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root
May 17 01:33:34.164690 systemd[1]: sshd@9-145.40.90.133:22-218.92.0.157:15386.service: Deactivated successfully.
May 17 01:34:05.056675 update_engine[1557]: I0517 01:34:05.056562 1557 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 17 01:34:05.056675 update_engine[1557]: I0517 01:34:05.056641 1557 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 17 01:34:05.058269 update_engine[1557]: I0517 01:34:05.058195 1557 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 17 01:34:05.059254 update_engine[1557]: I0517 01:34:05.059181 1557 omaha_request_params.cc:62] Current group set to lts
May 17 01:34:05.059591 update_engine[1557]: I0517 01:34:05.059499 1557 update_attempter.cc:499] Already updated boot flags. Skipping.
May 17 01:34:05.059591 update_engine[1557]: I0517 01:34:05.059526 1557 update_attempter.cc:643] Scheduling an action processor start.
May 17 01:34:05.059591 update_engine[1557]: I0517 01:34:05.059569 1557 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 17 01:34:05.059995 update_engine[1557]: I0517 01:34:05.059638 1557 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 17 01:34:05.059995 update_engine[1557]: I0517 01:34:05.059782 1557 omaha_request_action.cc:270] Posting an Omaha request to disabled
May 17 01:34:05.059995 update_engine[1557]: I0517 01:34:05.059798 1557 omaha_request_action.cc:271] Request:
May 17 01:34:05.059995 update_engine[1557]:
May 17 01:34:05.059995 update_engine[1557]:
May 17 01:34:05.059995 update_engine[1557]:
May 17 01:34:05.059995 update_engine[1557]:
May 17 01:34:05.059995 update_engine[1557]:
May 17 01:34:05.059995 update_engine[1557]:
May 17 01:34:05.059995 update_engine[1557]:
May 17 01:34:05.059995 update_engine[1557]:
May 17 01:34:05.059995 update_engine[1557]: I0517 01:34:05.059814 1557 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 01:34:05.061159 locksmithd[1597]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 17 01:34:05.063047 update_engine[1557]: I0517 01:34:05.062964 1557 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 01:34:05.063264 update_engine[1557]: E0517 01:34:05.063190 1557 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 01:34:05.063419 update_engine[1557]: I0517 01:34:05.063386 1557 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 17 01:34:06.716496 systemd[1]: Started sshd@10-145.40.90.133:22-123.57.65.198:45528.service.
May 17 01:34:14.977598 update_engine[1557]: I0517 01:34:14.977483 1557 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 01:34:14.978528 update_engine[1557]: I0517 01:34:14.977970 1557 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 01:34:14.978528 update_engine[1557]: E0517 01:34:14.978170 1557 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 01:34:14.978528 update_engine[1557]: I0517 01:34:14.978360 1557 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 17 01:34:24.981600 update_engine[1557]: I0517 01:34:24.981483 1557 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 01:34:24.982591 update_engine[1557]: I0517 01:34:24.981971 1557 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 01:34:24.982591 update_engine[1557]: E0517 01:34:24.982177 1557 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 01:34:24.982591 update_engine[1557]: I0517 01:34:24.982370 1557 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 17 01:34:34.981701 update_engine[1557]: I0517 01:34:34.981499 1557 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 01:34:34.982614 update_engine[1557]: I0517 01:34:34.982039 1557 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 01:34:34.982614 update_engine[1557]: E0517 01:34:34.982255 1557 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 01:34:34.982614 update_engine[1557]: I0517 01:34:34.982438 1557 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 17 01:34:34.982614 update_engine[1557]: I0517 01:34:34.982456 1557 omaha_request_action.cc:621] Omaha request response:
May 17 01:34:34.982614 update_engine[1557]: E0517 01:34:34.982597 1557 omaha_request_action.cc:640] Omaha request network transfer failed.
May 17 01:34:34.983117 update_engine[1557]: I0517 01:34:34.982625 1557 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 17 01:34:34.983117 update_engine[1557]: I0517 01:34:34.982636 1557 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 17 01:34:34.983117 update_engine[1557]: I0517 01:34:34.982645 1557 update_attempter.cc:306] Processing Done.
May 17 01:34:34.983117 update_engine[1557]: E0517 01:34:34.982671 1557 update_attempter.cc:619] Update failed.
May 17 01:34:34.983117 update_engine[1557]: I0517 01:34:34.982681 1557 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 17 01:34:34.983117 update_engine[1557]: I0517 01:34:34.982688 1557 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 17 01:34:34.983117 update_engine[1557]: I0517 01:34:34.982698 1557 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 17 01:34:34.983117 update_engine[1557]: I0517 01:34:34.982852 1557 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 17 01:34:34.983117 update_engine[1557]: I0517 01:34:34.982905 1557 omaha_request_action.cc:270] Posting an Omaha request to disabled
May 17 01:34:34.983117 update_engine[1557]: I0517 01:34:34.982914 1557 omaha_request_action.cc:271] Request:
May 17 01:34:34.983117 update_engine[1557]:
May 17 01:34:34.983117 update_engine[1557]:
May 17 01:34:34.983117 update_engine[1557]:
May 17 01:34:34.983117 update_engine[1557]:
May 17 01:34:34.983117 update_engine[1557]:
May 17 01:34:34.983117 update_engine[1557]:
May 17 01:34:34.983117 update_engine[1557]: I0517 01:34:34.982925 1557 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 01:34:34.984767 update_engine[1557]: I0517 01:34:34.983270 1557 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 01:34:34.984767 update_engine[1557]: E0517 01:34:34.983523 1557 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 01:34:34.984767 update_engine[1557]: I0517 01:34:34.983706 1557 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 17 01:34:34.984767 update_engine[1557]: I0517 01:34:34.983726 1557 omaha_request_action.cc:621] Omaha request response:
May 17 01:34:34.984767 update_engine[1557]: I0517 01:34:34.983737 1557 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 17 01:34:34.984767 update_engine[1557]: I0517 01:34:34.983745 1557 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 17 01:34:34.984767 update_engine[1557]: I0517 01:34:34.983752 1557 update_attempter.cc:306] Processing Done.
May 17 01:34:34.984767 update_engine[1557]: I0517 01:34:34.983762 1557 update_attempter.cc:310] Error event sent.
May 17 01:34:34.984767 update_engine[1557]: I0517 01:34:34.983783 1557 update_check_scheduler.cc:74] Next update check in 42m10s
May 17 01:34:34.985633 locksmithd[1597]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 17 01:34:34.985633 locksmithd[1597]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 17 01:35:20.374934 systemd[1]: Started sshd@11-145.40.90.133:22-218.92.0.157:11569.service.
May 17 01:35:20.467610 systemd[1]: Started sshd@12-145.40.90.133:22-139.178.89.65:55604.service.
May 17 01:35:20.527664 sshd[4117]: Accepted publickey for core from 139.178.89.65 port 55604 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc
May 17 01:35:20.528941 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:35:20.533235 systemd-logind[1555]: New session 10 of user core.
May 17 01:35:20.534146 systemd[1]: Started session-10.scope.
May 17 01:35:20.691435 sshd[4117]: pam_unix(sshd:session): session closed for user core
May 17 01:35:20.693311 systemd[1]: sshd@12-145.40.90.133:22-139.178.89.65:55604.service: Deactivated successfully.
May 17 01:35:20.693843 systemd[1]: session-10.scope: Deactivated successfully.
May 17 01:35:20.694286 systemd-logind[1555]: Session 10 logged out. Waiting for processes to exit.
May 17 01:35:20.695039 systemd-logind[1555]: Removed session 10.
May 17 01:35:21.337522 sshd[4114]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root
May 17 01:35:23.143847 sshd[4114]: Failed password for root from 218.92.0.157 port 11569 ssh2
May 17 01:35:23.522348 sshd[4114]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
May 17 01:35:25.701481 systemd[1]: Started sshd@13-145.40.90.133:22-139.178.89.65:55606.service.
May 17 01:35:25.729474 sshd[4150]: Accepted publickey for core from 139.178.89.65 port 55606 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc
May 17 01:35:25.730429 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:35:25.733755 systemd-logind[1555]: New session 11 of user core.
May 17 01:35:25.734489 systemd[1]: Started session-11.scope.
May 17 01:35:25.823262 sshd[4150]: pam_unix(sshd:session): session closed for user core
May 17 01:35:25.824865 systemd[1]: sshd@13-145.40.90.133:22-139.178.89.65:55606.service: Deactivated successfully.
May 17 01:35:25.825381 systemd[1]: session-11.scope: Deactivated successfully.
May 17 01:35:25.825796 systemd-logind[1555]: Session 11 logged out. Waiting for processes to exit.
May 17 01:35:25.826195 systemd-logind[1555]: Removed session 11.
May 17 01:35:25.934539 sshd[4114]: Failed password for root from 218.92.0.157 port 11569 ssh2
May 17 01:35:29.378310 sshd[4114]: Failed password for root from 218.92.0.157 port 11569 ssh2
May 17 01:35:29.932576 sshd[4114]: Received disconnect from 218.92.0.157 port 11569:11: [preauth]
May 17 01:35:29.932576 sshd[4114]: Disconnected from authenticating user root 218.92.0.157 port 11569 [preauth]
May 17 01:35:29.933171 sshd[4114]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root
May 17 01:35:29.935189 systemd[1]: sshd@11-145.40.90.133:22-218.92.0.157:11569.service: Deactivated successfully.
May 17 01:35:30.833177 systemd[1]: Started sshd@14-145.40.90.133:22-139.178.89.65:35588.service.
May 17 01:35:30.861350 sshd[4176]: Accepted publickey for core from 139.178.89.65 port 35588 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc
May 17 01:35:30.862380 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:35:30.865837 systemd-logind[1555]: New session 12 of user core.
May 17 01:35:30.866574 systemd[1]: Started session-12.scope.
May 17 01:35:30.955289 sshd[4176]: pam_unix(sshd:session): session closed for user core
May 17 01:35:30.956926 systemd[1]: sshd@14-145.40.90.133:22-139.178.89.65:35588.service: Deactivated successfully.
May 17 01:35:30.957386 systemd[1]: session-12.scope: Deactivated successfully.
May 17 01:35:30.957822 systemd-logind[1555]: Session 12 logged out. Waiting for processes to exit.
May 17 01:35:30.958292 systemd-logind[1555]: Removed session 12.
May 17 01:35:35.966704 systemd[1]: Started sshd@15-145.40.90.133:22-139.178.89.65:35590.service.
May 17 01:35:35.998187 sshd[4202]: Accepted publickey for core from 139.178.89.65 port 35590 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc
May 17 01:35:35.999049 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:35:36.001970 systemd-logind[1555]: New session 13 of user core.
May 17 01:35:36.002586 systemd[1]: Started session-13.scope.
May 17 01:35:36.091707 sshd[4202]: pam_unix(sshd:session): session closed for user core
May 17 01:35:36.093759 systemd[1]: sshd@15-145.40.90.133:22-139.178.89.65:35590.service: Deactivated successfully.
May 17 01:35:36.094137 systemd[1]: session-13.scope: Deactivated successfully.
May 17 01:35:36.094519 systemd-logind[1555]: Session 13 logged out. Waiting for processes to exit.
May 17 01:35:36.095167 systemd[1]: Started sshd@16-145.40.90.133:22-139.178.89.65:35598.service.
May 17 01:35:36.095625 systemd-logind[1555]: Removed session 13.
May 17 01:35:36.124398 sshd[4227]: Accepted publickey for core from 139.178.89.65 port 35598 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc
May 17 01:35:36.125371 sshd[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:35:36.128544 systemd-logind[1555]: New session 14 of user core.
May 17 01:35:36.129279 systemd[1]: Started session-14.scope.
May 17 01:35:36.283462 sshd[4227]: pam_unix(sshd:session): session closed for user core
May 17 01:35:36.286239 systemd[1]: sshd@16-145.40.90.133:22-139.178.89.65:35598.service: Deactivated successfully.
May 17 01:35:36.286791 systemd[1]: session-14.scope: Deactivated successfully.
May 17 01:35:36.287229 systemd-logind[1555]: Session 14 logged out. Waiting for processes to exit.
May 17 01:35:36.288103 systemd[1]: Started sshd@17-145.40.90.133:22-139.178.89.65:35614.service.
May 17 01:35:36.288883 systemd-logind[1555]: Removed session 14.
May 17 01:35:36.318822 sshd[4250]: Accepted publickey for core from 139.178.89.65 port 35614 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc
May 17 01:35:36.322316 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:35:36.333235 systemd-logind[1555]: New session 15 of user core.
May 17 01:35:36.335917 systemd[1]: Started session-15.scope.
May 17 01:35:36.492710 sshd[4250]: pam_unix(sshd:session): session closed for user core
May 17 01:35:36.494434 systemd[1]: sshd@17-145.40.90.133:22-139.178.89.65:35614.service: Deactivated successfully.
May 17 01:35:36.494922 systemd[1]: session-15.scope: Deactivated successfully.
May 17 01:35:36.495268 systemd-logind[1555]: Session 15 logged out. Waiting for processes to exit.
May 17 01:35:36.495881 systemd-logind[1555]: Removed session 15.
May 17 01:35:41.502972 systemd[1]: Started sshd@18-145.40.90.133:22-139.178.89.65:33872.service.
May 17 01:35:41.532102 sshd[4279]: Accepted publickey for core from 139.178.89.65 port 33872 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc
May 17 01:35:41.533097 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:35:41.536446 systemd-logind[1555]: New session 16 of user core.
May 17 01:35:41.537235 systemd[1]: Started session-16.scope.
May 17 01:35:41.630292 sshd[4279]: pam_unix(sshd:session): session closed for user core
May 17 01:35:41.631863 systemd[1]: sshd@18-145.40.90.133:22-139.178.89.65:33872.service: Deactivated successfully.
May 17 01:35:41.632292 systemd[1]: session-16.scope: Deactivated successfully.
May 17 01:35:41.632728 systemd-logind[1555]: Session 16 logged out. Waiting for processes to exit.
May 17 01:35:41.633252 systemd-logind[1555]: Removed session 16.
May 17 01:35:46.639787 systemd[1]: Started sshd@19-145.40.90.133:22-139.178.89.65:53628.service.
May 17 01:35:46.667740 sshd[4304]: Accepted publickey for core from 139.178.89.65 port 53628 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc
May 17 01:35:46.668654 sshd[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:35:46.672021 systemd-logind[1555]: New session 17 of user core.
May 17 01:35:46.672640 systemd[1]: Started session-17.scope.
May 17 01:35:46.758394 sshd[4304]: pam_unix(sshd:session): session closed for user core
May 17 01:35:46.760117 systemd[1]: sshd@19-145.40.90.133:22-139.178.89.65:53628.service: Deactivated successfully.
May 17 01:35:46.760488 systemd[1]: session-17.scope: Deactivated successfully.
May 17 01:35:46.760830 systemd-logind[1555]: Session 17 logged out. Waiting for processes to exit.
May 17 01:35:46.761396 systemd[1]: Started sshd@20-145.40.90.133:22-139.178.89.65:53636.service.
May 17 01:35:46.761889 systemd-logind[1555]: Removed session 17.
May 17 01:35:46.789291 sshd[4327]: Accepted publickey for core from 139.178.89.65 port 53636 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc
May 17 01:35:46.790148 sshd[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:35:46.793139 systemd-logind[1555]: New session 18 of user core.
May 17 01:35:46.793804 systemd[1]: Started session-18.scope.
May 17 01:35:46.931953 sshd[4327]: pam_unix(sshd:session): session closed for user core
May 17 01:35:46.933691 systemd[1]: sshd@20-145.40.90.133:22-139.178.89.65:53636.service: Deactivated successfully.
May 17 01:35:46.934044 systemd[1]: session-18.scope: Deactivated successfully.
May 17 01:35:46.934493 systemd-logind[1555]: Session 18 logged out. Waiting for processes to exit.
May 17 01:35:46.935024 systemd[1]: Started sshd@21-145.40.90.133:22-139.178.89.65:53644.service.
May 17 01:35:46.935508 systemd-logind[1555]: Removed session 18.
May 17 01:35:46.962741 sshd[4349]: Accepted publickey for core from 139.178.89.65 port 53644 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc
May 17 01:35:46.963497 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:35:46.966092 systemd-logind[1555]: New session 19 of user core.
May 17 01:35:46.966902 systemd[1]: Started session-19.scope.
May 17 01:35:47.820575 sshd[4349]: pam_unix(sshd:session): session closed for user core
May 17 01:35:47.829727 systemd[1]: sshd@21-145.40.90.133:22-139.178.89.65:53644.service: Deactivated successfully.
May 17 01:35:47.831180 systemd[1]: session-19.scope: Deactivated successfully.
May 17 01:35:47.832493 systemd-logind[1555]: Session 19 logged out. Waiting for processes to exit.
May 17 01:35:47.834603 systemd[1]: Started sshd@22-145.40.90.133:22-139.178.89.65:53656.service.
May 17 01:35:47.835977 systemd-logind[1555]: Removed session 19.
May 17 01:35:47.876329 sshd[4385]: Accepted publickey for core from 139.178.89.65 port 53656 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc
May 17 01:35:47.878017 sshd[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:35:47.882773 systemd-logind[1555]: New session 20 of user core.
May 17 01:35:47.883808 systemd[1]: Started session-20.scope.
May 17 01:35:48.071885 sshd[4385]: pam_unix(sshd:session): session closed for user core
May 17 01:35:48.073830 systemd[1]: sshd@22-145.40.90.133:22-139.178.89.65:53656.service: Deactivated successfully.
May 17 01:35:48.074181 systemd[1]: session-20.scope: Deactivated successfully.
May 17 01:35:48.074581 systemd-logind[1555]: Session 20 logged out. Waiting for processes to exit.
May 17 01:35:48.075252 systemd[1]: Started sshd@23-145.40.90.133:22-139.178.89.65:53664.service.
May 17 01:35:48.075688 systemd-logind[1555]: Removed session 20.
May 17 01:35:48.104087 sshd[4409]: Accepted publickey for core from 139.178.89.65 port 53664 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc
May 17 01:35:48.107663 sshd[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:35:48.118488 systemd-logind[1555]: New session 21 of user core.
May 17 01:35:48.121088 systemd[1]: Started session-21.scope.
May 17 01:35:48.267144 sshd[4409]: pam_unix(sshd:session): session closed for user core
May 17 01:35:48.268621 systemd[1]: sshd@23-145.40.90.133:22-139.178.89.65:53664.service: Deactivated successfully.
May 17 01:35:48.269047 systemd[1]: session-21.scope: Deactivated successfully.
May 17 01:35:48.269386 systemd-logind[1555]: Session 21 logged out. Waiting for processes to exit.
May 17 01:35:48.269944 systemd-logind[1555]: Removed session 21.
May 17 01:35:53.277028 systemd[1]: Started sshd@24-145.40.90.133:22-139.178.89.65:53672.service.
May 17 01:35:53.304821 sshd[4437]: Accepted publickey for core from 139.178.89.65 port 53672 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc
May 17 01:35:53.305682 sshd[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:35:53.308852 systemd-logind[1555]: New session 22 of user core.
May 17 01:35:53.309490 systemd[1]: Started session-22.scope.
May 17 01:35:53.393709 sshd[4437]: pam_unix(sshd:session): session closed for user core
May 17 01:35:53.395077 systemd[1]: sshd@24-145.40.90.133:22-139.178.89.65:53672.service: Deactivated successfully.
May 17 01:35:53.395499 systemd[1]: session-22.scope: Deactivated successfully.
May 17 01:35:53.395910 systemd-logind[1555]: Session 22 logged out. Waiting for processes to exit.
May 17 01:35:53.396458 systemd-logind[1555]: Removed session 22.
May 17 01:35:58.403476 systemd[1]: Started sshd@25-145.40.90.133:22-139.178.89.65:60194.service.
May 17 01:35:58.431742 sshd[4460]: Accepted publickey for core from 139.178.89.65 port 60194 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc
May 17 01:35:58.432715 sshd[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:35:58.436248 systemd-logind[1555]: New session 23 of user core.
May 17 01:35:58.437016 systemd[1]: Started session-23.scope.
May 17 01:35:58.521215 sshd[4460]: pam_unix(sshd:session): session closed for user core
May 17 01:35:58.522757 systemd[1]: sshd@25-145.40.90.133:22-139.178.89.65:60194.service: Deactivated successfully.
May 17 01:35:58.523176 systemd[1]: session-23.scope: Deactivated successfully.
May 17 01:35:58.523533 systemd-logind[1555]: Session 23 logged out. Waiting for processes to exit.
May 17 01:35:58.524039 systemd-logind[1555]: Removed session 23.
May 17 01:36:03.531420 systemd[1]: Started sshd@26-145.40.90.133:22-139.178.89.65:60200.service.
May 17 01:36:03.559433 sshd[4482]: Accepted publickey for core from 139.178.89.65 port 60200 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc
May 17 01:36:03.560332 sshd[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:36:03.563154 systemd-logind[1555]: New session 24 of user core.
May 17 01:36:03.563787 systemd[1]: Started session-24.scope.
May 17 01:36:03.651644 sshd[4482]: pam_unix(sshd:session): session closed for user core
May 17 01:36:03.653306 systemd[1]: sshd@26-145.40.90.133:22-139.178.89.65:60200.service: Deactivated successfully.
May 17 01:36:03.653663 systemd[1]: session-24.scope: Deactivated successfully.
May 17 01:36:03.653976 systemd-logind[1555]: Session 24 logged out. Waiting for processes to exit.
May 17 01:36:03.654589 systemd[1]: Started sshd@27-145.40.90.133:22-139.178.89.65:60216.service.
May 17 01:36:03.654972 systemd-logind[1555]: Removed session 24.
May 17 01:36:03.682452 sshd[4504]: Accepted publickey for core from 139.178.89.65 port 60216 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc
May 17 01:36:03.683250 sshd[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:36:03.685874 systemd-logind[1555]: New session 25 of user core.
May 17 01:36:03.686427 systemd[1]: Started session-25.scope.
May 17 01:36:05.075051 env[1563]: time="2025-05-17T01:36:05.074943270Z" level=info msg="StopContainer for \"2e806f25cdab85ad324d5ed07ac57cd0377adefc2b386dbafb2a96d556ae69f5\" with timeout 30 (s)"
May 17 01:36:05.076093 env[1563]: time="2025-05-17T01:36:05.075611370Z" level=info msg="Stop container \"2e806f25cdab85ad324d5ed07ac57cd0377adefc2b386dbafb2a96d556ae69f5\" with signal terminated"
May 17 01:36:05.101033 systemd[1]: cri-containerd-2e806f25cdab85ad324d5ed07ac57cd0377adefc2b386dbafb2a96d556ae69f5.scope: Deactivated successfully.
May 17 01:36:05.120913 env[1563]: time="2025-05-17T01:36:05.120851096Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 01:36:05.125288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e806f25cdab85ad324d5ed07ac57cd0377adefc2b386dbafb2a96d556ae69f5-rootfs.mount: Deactivated successfully.
May 17 01:36:05.125663 env[1563]: time="2025-05-17T01:36:05.125638940Z" level=info msg="StopContainer for \"75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0\" with timeout 2 (s)"
May 17 01:36:05.125862 env[1563]: time="2025-05-17T01:36:05.125839406Z" level=info msg="Stop container \"75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0\" with signal terminated"
May 17 01:36:05.130156 systemd-networkd[1323]: lxc_health: Link DOWN
May 17 01:36:05.130162 systemd-networkd[1323]: lxc_health: Lost carrier
May 17 01:36:05.151025 env[1563]: time="2025-05-17T01:36:05.150956116Z" level=info msg="shim disconnected" id=2e806f25cdab85ad324d5ed07ac57cd0377adefc2b386dbafb2a96d556ae69f5
May 17 01:36:05.151025 env[1563]: time="2025-05-17T01:36:05.150991724Z" level=warning msg="cleaning up after shim disconnected" id=2e806f25cdab85ad324d5ed07ac57cd0377adefc2b386dbafb2a96d556ae69f5 namespace=k8s.io
May 17 01:36:05.151025 env[1563]: time="2025-05-17T01:36:05.151001163Z" level=info msg="cleaning up dead shim"
May 17 01:36:05.155759 env[1563]: time="2025-05-17T01:36:05.155707296Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:36:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4570 runtime=io.containerd.runc.v2\n"
May 17 01:36:05.156777 env[1563]: time="2025-05-17T01:36:05.156731031Z" level=info msg="StopContainer for \"2e806f25cdab85ad324d5ed07ac57cd0377adefc2b386dbafb2a96d556ae69f5\" returns successfully"
May 17 01:36:05.157201 env[1563]: time="2025-05-17T01:36:05.157182219Z" level=info msg="StopPodSandbox for \"fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16\""
May 17 01:36:05.157255 env[1563]: time="2025-05-17T01:36:05.157228880Z" level=info msg="Container to stop \"2e806f25cdab85ad324d5ed07ac57cd0377adefc2b386dbafb2a96d556ae69f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 01:36:05.159026 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16-shm.mount: Deactivated successfully.
May 17 01:36:05.161980 systemd[1]: cri-containerd-fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16.scope: Deactivated successfully.
May 17 01:36:05.175649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16-rootfs.mount: Deactivated successfully.
May 17 01:36:05.191881 env[1563]: time="2025-05-17T01:36:05.191836019Z" level=info msg="shim disconnected" id=fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16
May 17 01:36:05.191881 env[1563]: time="2025-05-17T01:36:05.191879837Z" level=warning msg="cleaning up after shim disconnected" id=fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16 namespace=k8s.io
May 17 01:36:05.192016 env[1563]: time="2025-05-17T01:36:05.191890128Z" level=info msg="cleaning up dead shim"
May 17 01:36:05.197081 env[1563]: time="2025-05-17T01:36:05.197051313Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:36:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4603 runtime=io.containerd.runc.v2\n"
May 17 01:36:05.197362 env[1563]: time="2025-05-17T01:36:05.197312406Z" level=info msg="TearDown network for sandbox \"fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16\" successfully"
May 17 01:36:05.197362 env[1563]: time="2025-05-17T01:36:05.197333701Z" level=info msg="StopPodSandbox for \"fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16\" returns successfully"
May 17 01:36:05.217916 systemd[1]: cri-containerd-75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0.scope: Deactivated successfully.
May 17 01:36:05.218332 systemd[1]: cri-containerd-75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0.scope: Consumed 6.453s CPU time.
May 17 01:36:05.250995 env[1563]: time="2025-05-17T01:36:05.250892378Z" level=info msg="shim disconnected" id=75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0
May 17 01:36:05.250995 env[1563]: time="2025-05-17T01:36:05.250991349Z" level=warning msg="cleaning up after shim disconnected" id=75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0 namespace=k8s.io
May 17 01:36:05.251448 env[1563]: time="2025-05-17T01:36:05.251015171Z" level=info msg="cleaning up dead shim"
May 17 01:36:05.252286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0-rootfs.mount: Deactivated successfully.
May 17 01:36:05.264644 env[1563]: time="2025-05-17T01:36:05.264552865Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:36:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4628 runtime=io.containerd.runc.v2\n"
May 17 01:36:05.266534 env[1563]: time="2025-05-17T01:36:05.266438649Z" level=info msg="StopContainer for \"75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0\" returns successfully"
May 17 01:36:05.267339 env[1563]: time="2025-05-17T01:36:05.267221766Z" level=info msg="StopPodSandbox for \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\""
May 17 01:36:05.267508 env[1563]: time="2025-05-17T01:36:05.267365599Z" level=info msg="Container to stop \"e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 01:36:05.267508 env[1563]: time="2025-05-17T01:36:05.267408424Z" level=info msg="Container to stop \"d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 01:36:05.267508 env[1563]: time="2025-05-17T01:36:05.267434436Z" level=info msg="Container to stop \"66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 01:36:05.267508 env[1563]: time="2025-05-17T01:36:05.267458536Z" level=info msg="Container to stop \"23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 01:36:05.267508 env[1563]: time="2025-05-17T01:36:05.267480981Z" level=info msg="Container to stop \"75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 01:36:05.278522 systemd[1]: cri-containerd-2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc.scope: Deactivated successfully.
May 17 01:36:05.318837 env[1563]: time="2025-05-17T01:36:05.318778408Z" level=info msg="shim disconnected" id=2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc
May 17 01:36:05.318991 env[1563]: time="2025-05-17T01:36:05.318834533Z" level=warning msg="cleaning up after shim disconnected" id=2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc namespace=k8s.io
May 17 01:36:05.318991 env[1563]: time="2025-05-17T01:36:05.318850606Z" level=info msg="cleaning up dead shim"
May 17 01:36:05.325050 env[1563]: time="2025-05-17T01:36:05.324996514Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:36:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4659 runtime=io.containerd.runc.v2\n"
May 17 01:36:05.325321 env[1563]: time="2025-05-17T01:36:05.325255341Z" level=info msg="TearDown network for sandbox \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\" successfully"
May 17 01:36:05.325321 env[1563]: time="2025-05-17T01:36:05.325279062Z" level=info msg="StopPodSandbox for \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\" returns successfully"
May 17 01:36:05.351464 kubelet[2459]: I0517 01:36:05.351408 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f2771f2-6cd5-4e97-9053-2769d921241e-cilium-config-path\") pod \"0f2771f2-6cd5-4e97-9053-2769d921241e\" (UID: \"0f2771f2-6cd5-4e97-9053-2769d921241e\") "
May 17 01:36:05.352107 kubelet[2459]: I0517 01:36:05.351485 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-449z8\" (UniqueName: \"kubernetes.io/projected/0f2771f2-6cd5-4e97-9053-2769d921241e-kube-api-access-449z8\") pod \"0f2771f2-6cd5-4e97-9053-2769d921241e\" (UID: \"0f2771f2-6cd5-4e97-9053-2769d921241e\") "
May 17 01:36:05.356386 kubelet[2459]: I0517 01:36:05.356318 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f2771f2-6cd5-4e97-9053-2769d921241e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0f2771f2-6cd5-4e97-9053-2769d921241e" (UID: "0f2771f2-6cd5-4e97-9053-2769d921241e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 17 01:36:05.357785 kubelet[2459]: I0517 01:36:05.357680 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f2771f2-6cd5-4e97-9053-2769d921241e-kube-api-access-449z8" (OuterVolumeSpecName: "kube-api-access-449z8") pod "0f2771f2-6cd5-4e97-9053-2769d921241e" (UID: "0f2771f2-6cd5-4e97-9053-2769d921241e"). InnerVolumeSpecName "kube-api-access-449z8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 01:36:05.452577 kubelet[2459]: I0517 01:36:05.452447 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-cilium-cgroup\") pod \"d935d46a-439e-4524-ba54-e7e3061e6e3a\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") "
May 17 01:36:05.452577 kubelet[2459]: I0517 01:36:05.452563 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d935d46a-439e-4524-ba54-e7e3061e6e3a-clustermesh-secrets\") pod \"d935d46a-439e-4524-ba54-e7e3061e6e3a\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") "
May 17 01:36:05.453015 kubelet[2459]: I0517 01:36:05.452631 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-lib-modules\") pod \"d935d46a-439e-4524-ba54-e7e3061e6e3a\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") "
May 17 01:36:05.453015 kubelet[2459]: I0517 01:36:05.452630 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d935d46a-439e-4524-ba54-e7e3061e6e3a" (UID: "d935d46a-439e-4524-ba54-e7e3061e6e3a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 01:36:05.453015 kubelet[2459]: I0517 01:36:05.452677 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-cni-path\") pod \"d935d46a-439e-4524-ba54-e7e3061e6e3a\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") "
May 17 01:36:05.453015 kubelet[2459]: I0517 01:36:05.452726 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-host-proc-sys-net\") pod \"d935d46a-439e-4524-ba54-e7e3061e6e3a\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") "
May 17 01:36:05.453015 kubelet[2459]: I0517 01:36:05.452772 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-xtables-lock\") pod \"d935d46a-439e-4524-ba54-e7e3061e6e3a\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") "
May 17 01:36:05.453015 kubelet[2459]: I0517 01:36:05.452774 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-cni-path" (OuterVolumeSpecName: "cni-path") pod "d935d46a-439e-4524-ba54-e7e3061e6e3a" (UID: "d935d46a-439e-4524-ba54-e7e3061e6e3a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 01:36:05.453858 kubelet[2459]: I0517 01:36:05.452781 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d935d46a-439e-4524-ba54-e7e3061e6e3a" (UID: "d935d46a-439e-4524-ba54-e7e3061e6e3a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 01:36:05.453858 kubelet[2459]: I0517 01:36:05.452818 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-cilium-run\") pod \"d935d46a-439e-4524-ba54-e7e3061e6e3a\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") "
May 17 01:36:05.453858 kubelet[2459]: I0517 01:36:05.452868 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d935d46a-439e-4524-ba54-e7e3061e6e3a" (UID: "d935d46a-439e-4524-ba54-e7e3061e6e3a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 01:36:05.453858 kubelet[2459]: I0517 01:36:05.452886 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85srr\" (UniqueName: \"kubernetes.io/projected/d935d46a-439e-4524-ba54-e7e3061e6e3a-kube-api-access-85srr\") pod \"d935d46a-439e-4524-ba54-e7e3061e6e3a\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") "
May 17 01:36:05.453858 kubelet[2459]: I0517 01:36:05.452881 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d935d46a-439e-4524-ba54-e7e3061e6e3a" (UID: "d935d46a-439e-4524-ba54-e7e3061e6e3a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 01:36:05.454500 kubelet[2459]: I0517 01:36:05.452936 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-etc-cni-netd\") pod \"d935d46a-439e-4524-ba54-e7e3061e6e3a\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") "
May 17 01:36:05.454500 kubelet[2459]: I0517 01:36:05.452947 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d935d46a-439e-4524-ba54-e7e3061e6e3a" (UID: "d935d46a-439e-4524-ba54-e7e3061e6e3a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 01:36:05.454500 kubelet[2459]: I0517 01:36:05.452979 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-hostproc\") pod \"d935d46a-439e-4524-ba54-e7e3061e6e3a\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") "
May 17 01:36:05.454500 kubelet[2459]: I0517 01:36:05.452998 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d935d46a-439e-4524-ba54-e7e3061e6e3a" (UID: "d935d46a-439e-4524-ba54-e7e3061e6e3a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 01:36:05.454500 kubelet[2459]: I0517 01:36:05.453027 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-host-proc-sys-kernel\") pod \"d935d46a-439e-4524-ba54-e7e3061e6e3a\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") "
May 17 01:36:05.455033 kubelet[2459]: I0517 01:36:05.453056 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-hostproc" (OuterVolumeSpecName: "hostproc") pod "d935d46a-439e-4524-ba54-e7e3061e6e3a" (UID: "d935d46a-439e-4524-ba54-e7e3061e6e3a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 01:36:05.455033 kubelet[2459]: I0517 01:36:05.453082 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d935d46a-439e-4524-ba54-e7e3061e6e3a-cilium-config-path\") pod \"d935d46a-439e-4524-ba54-e7e3061e6e3a\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") "
May 17 01:36:05.455033 kubelet[2459]: I0517 01:36:05.453114 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d935d46a-439e-4524-ba54-e7e3061e6e3a" (UID: "d935d46a-439e-4524-ba54-e7e3061e6e3a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 01:36:05.455033 kubelet[2459]: I0517 01:36:05.453135 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d935d46a-439e-4524-ba54-e7e3061e6e3a-hubble-tls\") pod \"d935d46a-439e-4524-ba54-e7e3061e6e3a\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") "
May 17 01:36:05.455033 kubelet[2459]: I0517 01:36:05.453269 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-bpf-maps\") pod \"d935d46a-439e-4524-ba54-e7e3061e6e3a\" (UID: \"d935d46a-439e-4524-ba54-e7e3061e6e3a\") "
May 17 01:36:05.455588 kubelet[2459]: I0517 01:36:05.453341 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d935d46a-439e-4524-ba54-e7e3061e6e3a" (UID: "d935d46a-439e-4524-ba54-e7e3061e6e3a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 01:36:05.455588 kubelet[2459]: I0517 01:36:05.453494 2459 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f2771f2-6cd5-4e97-9053-2769d921241e-cilium-config-path\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\""
May 17 01:36:05.455588 kubelet[2459]: I0517 01:36:05.453557 2459 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-bpf-maps\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\""
May 17 01:36:05.455588 kubelet[2459]: I0517 01:36:05.453615 2459 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-449z8\" (UniqueName: \"kubernetes.io/projected/0f2771f2-6cd5-4e97-9053-2769d921241e-kube-api-access-449z8\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\""
May 17 01:36:05.455588 kubelet[2459]: I0517 01:36:05.453674 2459 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-cilium-cgroup\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\""
May 17 01:36:05.455588 kubelet[2459]: I0517 01:36:05.453725 2459 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-lib-modules\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\""
May 17 01:36:05.455588 kubelet[2459]: I0517 01:36:05.453755 2459 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-cni-path\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\""
May 17 01:36:05.456275 kubelet[2459]: I0517 01:36:05.453781 2459 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-cilium-run\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\""
May 17 01:36:05.456275 kubelet[2459]: I0517 01:36:05.453806 2459 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-host-proc-sys-net\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\""
May 17 01:36:05.456275 kubelet[2459]: I0517 01:36:05.453840 2459 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-xtables-lock\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\""
May 17 01:36:05.456275 kubelet[2459]: I0517 01:36:05.453866 2459 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-etc-cni-netd\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\""
May 17 01:36:05.456275 kubelet[2459]: I0517 01:36:05.453892 2459 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\""
May 17 01:36:05.456275 kubelet[2459]: I0517 01:36:05.453937 2459 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d935d46a-439e-4524-ba54-e7e3061e6e3a-hostproc\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\""
May 17 01:36:05.458544 kubelet[2459]: I0517 01:36:05.458439 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d935d46a-439e-4524-ba54-e7e3061e6e3a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d935d46a-439e-4524-ba54-e7e3061e6e3a" (UID: "d935d46a-439e-4524-ba54-e7e3061e6e3a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 17 01:36:05.459563 kubelet[2459]: I0517 01:36:05.459467 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d935d46a-439e-4524-ba54-e7e3061e6e3a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d935d46a-439e-4524-ba54-e7e3061e6e3a" (UID: "d935d46a-439e-4524-ba54-e7e3061e6e3a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 17 01:36:05.459852 kubelet[2459]: I0517 01:36:05.459775 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d935d46a-439e-4524-ba54-e7e3061e6e3a-kube-api-access-85srr" (OuterVolumeSpecName: "kube-api-access-85srr") pod "d935d46a-439e-4524-ba54-e7e3061e6e3a" (UID: "d935d46a-439e-4524-ba54-e7e3061e6e3a"). InnerVolumeSpecName "kube-api-access-85srr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 01:36:05.460004 kubelet[2459]: I0517 01:36:05.459849 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d935d46a-439e-4524-ba54-e7e3061e6e3a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d935d46a-439e-4524-ba54-e7e3061e6e3a" (UID: "d935d46a-439e-4524-ba54-e7e3061e6e3a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 01:36:05.554443 kubelet[2459]: I0517 01:36:05.554315 2459 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d935d46a-439e-4524-ba54-e7e3061e6e3a-cilium-config-path\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\""
May 17 01:36:05.554443 kubelet[2459]: I0517 01:36:05.554397 2459 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d935d46a-439e-4524-ba54-e7e3061e6e3a-hubble-tls\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\""
May 17 01:36:05.554443 kubelet[2459]: I0517 01:36:05.554431 2459 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d935d46a-439e-4524-ba54-e7e3061e6e3a-clustermesh-secrets\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\""
May 17 01:36:05.554443 kubelet[2459]: I0517 01:36:05.554460 2459 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-85srr\" (UniqueName: \"kubernetes.io/projected/d935d46a-439e-4524-ba54-e7e3061e6e3a-kube-api-access-85srr\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\""
May 17 01:36:06.099060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc-rootfs.mount: Deactivated successfully.
May 17 01:36:06.099112 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc-shm.mount: Deactivated successfully.
May 17 01:36:06.099146 systemd[1]: var-lib-kubelet-pods-0f2771f2\x2d6cd5\x2d4e97\x2d9053\x2d2769d921241e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d449z8.mount: Deactivated successfully.
May 17 01:36:06.099179 systemd[1]: var-lib-kubelet-pods-d935d46a\x2d439e\x2d4524\x2dba54\x2de7e3061e6e3a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d85srr.mount: Deactivated successfully.
May 17 01:36:06.099213 systemd[1]: var-lib-kubelet-pods-d935d46a\x2d439e\x2d4524\x2dba54\x2de7e3061e6e3a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 17 01:36:06.099244 systemd[1]: var-lib-kubelet-pods-d935d46a\x2d439e\x2d4524\x2dba54\x2de7e3061e6e3a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 17 01:36:06.199935 kubelet[2459]: I0517 01:36:06.199820 2459 scope.go:117] "RemoveContainer" containerID="2e806f25cdab85ad324d5ed07ac57cd0377adefc2b386dbafb2a96d556ae69f5"
May 17 01:36:06.203192 env[1563]: time="2025-05-17T01:36:06.203093018Z" level=info msg="RemoveContainer for \"2e806f25cdab85ad324d5ed07ac57cd0377adefc2b386dbafb2a96d556ae69f5\""
May 17 01:36:06.208280 env[1563]: time="2025-05-17T01:36:06.208262101Z" level=info msg="RemoveContainer for \"2e806f25cdab85ad324d5ed07ac57cd0377adefc2b386dbafb2a96d556ae69f5\" returns successfully"
May 17 01:36:06.208395 kubelet[2459]: I0517 01:36:06.208382 2459 scope.go:117] "RemoveContainer" containerID="75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0"
May 17 01:36:06.208602 systemd[1]: Removed slice kubepods-besteffort-pod0f2771f2_6cd5_4e97_9053_2769d921241e.slice.
May 17 01:36:06.208654 systemd[1]: kubepods-besteffort-pod0f2771f2_6cd5_4e97_9053_2769d921241e.slice: Consumed 1.002s CPU time.
May 17 01:36:06.208873 env[1563]: time="2025-05-17T01:36:06.208861160Z" level=info msg="RemoveContainer for \"75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0\""
May 17 01:36:06.209891 systemd[1]: Removed slice kubepods-burstable-podd935d46a_439e_4524_ba54_e7e3061e6e3a.slice.
May 17 01:36:06.209947 systemd[1]: kubepods-burstable-podd935d46a_439e_4524_ba54_e7e3061e6e3a.slice: Consumed 6.508s CPU time.
May 17 01:36:06.209983 env[1563]: time="2025-05-17T01:36:06.209927602Z" level=info msg="RemoveContainer for \"75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0\" returns successfully"
May 17 01:36:06.210009 kubelet[2459]: I0517 01:36:06.209992 2459 scope.go:117] "RemoveContainer" containerID="d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e"
May 17 01:36:06.210483 env[1563]: time="2025-05-17T01:36:06.210443017Z" level=info msg="RemoveContainer for \"d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e\""
May 17 01:36:06.211454 env[1563]: time="2025-05-17T01:36:06.211438824Z" level=info msg="RemoveContainer for \"d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e\" returns successfully"
May 17 01:36:06.211507 kubelet[2459]: I0517 01:36:06.211498 2459 scope.go:117] "RemoveContainer" containerID="23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c"
May 17 01:36:06.211908 env[1563]: time="2025-05-17T01:36:06.211896640Z" level=info msg="RemoveContainer for \"23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c\""
May 17 01:36:06.212869 env[1563]: time="2025-05-17T01:36:06.212858283Z" level=info msg="RemoveContainer for \"23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c\" returns successfully"
May 17 01:36:06.212945 kubelet[2459]: I0517 01:36:06.212913 2459 scope.go:117] "RemoveContainer" containerID="e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0"
May 17 01:36:06.213362 env[1563]: time="2025-05-17T01:36:06.213306695Z" level=info msg="RemoveContainer for \"e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0\""
May 17 01:36:06.214402 env[1563]: time="2025-05-17T01:36:06.214388641Z" level=info msg="RemoveContainer for \"e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0\" returns successfully"
May 17 01:36:06.214457 kubelet[2459]: I0517 01:36:06.214448 2459 scope.go:117] "RemoveContainer" containerID="66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38"
May 17 01:36:06.214888 env[1563]: time="2025-05-17T01:36:06.214872867Z" level=info msg="RemoveContainer for \"66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38\""
May 17 01:36:06.216066 env[1563]: time="2025-05-17T01:36:06.216028445Z" level=info msg="RemoveContainer for \"66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38\" returns successfully"
May 17 01:36:06.216110 kubelet[2459]: I0517 01:36:06.216087 2459 scope.go:117] "RemoveContainer" containerID="75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0"
May 17 01:36:06.216220 env[1563]: time="2025-05-17T01:36:06.216183684Z" level=error msg="ContainerStatus for \"75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0\": not found"
May 17 01:36:06.216275 kubelet[2459]: E0517 01:36:06.216266 2459 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0\": not found" containerID="75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0"
May 17 01:36:06.216360 kubelet[2459]: I0517 01:36:06.216281 2459 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0"} err="failed to get container status \"75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0\": rpc error: code = NotFound desc = an error occurred when try to find container \"75d9b16b16cd2831f50a3679b76e4c298ab5801a0c6c2391f5de940a75d90bf0\": not found"
May 17 01:36:06.216360 kubelet[2459]: I0517 01:36:06.216328 2459 scope.go:117] "RemoveContainer" containerID="d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e"
May 17 01:36:06.216417 env[1563]: time="2025-05-17T01:36:06.216396137Z" level=error msg="ContainerStatus for \"d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e\": not found"
May 17 01:36:06.216498 kubelet[2459]: E0517 01:36:06.216455 2459 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e\": not found" containerID="d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e"
May 17 01:36:06.216498 kubelet[2459]: I0517 01:36:06.216467 2459 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e"} err="failed to get container status \"d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"d598ecd71555eb30f6c66bd77a735fa8093a8a472333babf6bbec794e3719b4e\": not found"
May 17 01:36:06.216498 kubelet[2459]: I0517 01:36:06.216477 2459 scope.go:117] "RemoveContainer" containerID="23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c"
May 17 01:36:06.216619 env[1563]: time="2025-05-17T01:36:06.216564203Z" level=error msg="ContainerStatus for \"23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c\": not found"
May 17 01:36:06.216653 kubelet[2459]: E0517 01:36:06.216618 2459 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c\": not found" containerID="23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c"
May 17 01:36:06.216653 kubelet[2459]: I0517 01:36:06.216628 2459 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c"} err="failed to get container status \"23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c\": rpc error: code = NotFound desc = an error occurred when try to find container \"23513e28edfb66e987248bcede4b28519fa615051da8cf572acb791e62ac388c\": not found"
May 17 01:36:06.216653 kubelet[2459]: I0517 01:36:06.216638 2459 scope.go:117] "RemoveContainer" containerID="e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0"
May 17 01:36:06.216763 env[1563]: time="2025-05-17T01:36:06.216710546Z" level=error msg="ContainerStatus for \"e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0\": not found"
May 17 01:36:06.216787 kubelet[2459]: E0517 01:36:06.216772 2459 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0\": not found" containerID="e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0"
May 17 01:36:06.216806 kubelet[2459]: I0517 01:36:06.216783 2459 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0"} err="failed to get container status \"e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0\": rpc error: code = NotFound desc = an error occurred when try to find container \"e16171f618f5514b80f8ba38940880b09c513eddf00b9982f0fdfc80cc1a5fd0\": not found"
May 17 01:36:06.216806 kubelet[2459]: I0517 01:36:06.216791 2459 scope.go:117] "RemoveContainer" containerID="66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38"
May 17 01:36:06.216885 env[1563]: time="2025-05-17T01:36:06.216861888Z" level=error msg="ContainerStatus for \"66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38\": not found"
May 17 01:36:06.216927 kubelet[2459]: E0517 01:36:06.216915 2459 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38\": not found" containerID="66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38"
May 17 01:36:06.216967 kubelet[2459]: I0517 01:36:06.216930 2459 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38"} err="failed to get container status \"66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38\": rpc error: code = NotFound desc = an error occurred when try to find container \"66ab080174a143416102ac17dd8ac35cc1202538ca8e0b1076c710ca97b5aa38\": not found"
May 17 01:36:06.722507 systemd[1]: sshd@10-145.40.90.133:22-123.57.65.198:45528.service: Deactivated successfully.
May 17 01:36:06.969843 kubelet[2459]: I0517 01:36:06.969727 2459 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f2771f2-6cd5-4e97-9053-2769d921241e" path="/var/lib/kubelet/pods/0f2771f2-6cd5-4e97-9053-2769d921241e/volumes" May 17 01:36:06.971225 kubelet[2459]: I0517 01:36:06.971137 2459 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d935d46a-439e-4524-ba54-e7e3061e6e3a" path="/var/lib/kubelet/pods/d935d46a-439e-4524-ba54-e7e3061e6e3a/volumes" May 17 01:36:07.017756 sshd[4504]: pam_unix(sshd:session): session closed for user core May 17 01:36:07.025334 systemd[1]: sshd@27-145.40.90.133:22-139.178.89.65:60216.service: Deactivated successfully. May 17 01:36:07.026118 systemd[1]: session-25.scope: Deactivated successfully. May 17 01:36:07.026616 systemd-logind[1555]: Session 25 logged out. Waiting for processes to exit. May 17 01:36:07.027147 systemd[1]: Started sshd@28-145.40.90.133:22-139.178.89.65:50710.service. May 17 01:36:07.027723 systemd-logind[1555]: Removed session 25. May 17 01:36:07.055433 sshd[4679]: Accepted publickey for core from 139.178.89.65 port 50710 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:36:07.056350 sshd[4679]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:36:07.059405 systemd-logind[1555]: New session 26 of user core. May 17 01:36:07.060452 systemd[1]: Started session-26.scope. May 17 01:36:07.603989 sshd[4679]: pam_unix(sshd:session): session closed for user core May 17 01:36:07.606106 systemd[1]: sshd@28-145.40.90.133:22-139.178.89.65:50710.service: Deactivated successfully. May 17 01:36:07.606531 systemd[1]: session-26.scope: Deactivated successfully. May 17 01:36:07.606917 systemd-logind[1555]: Session 26 logged out. Waiting for processes to exit. May 17 01:36:07.607612 systemd[1]: Started sshd@29-145.40.90.133:22-139.178.89.65:50714.service. May 17 01:36:07.608113 systemd-logind[1555]: Removed session 26. 
May 17 01:36:07.612921 kubelet[2459]: I0517 01:36:07.612896 2459 memory_manager.go:355] "RemoveStaleState removing state" podUID="d935d46a-439e-4524-ba54-e7e3061e6e3a" containerName="cilium-agent" May 17 01:36:07.612921 kubelet[2459]: I0517 01:36:07.612915 2459 memory_manager.go:355] "RemoveStaleState removing state" podUID="0f2771f2-6cd5-4e97-9053-2769d921241e" containerName="cilium-operator" May 17 01:36:07.616473 systemd[1]: Created slice kubepods-burstable-podb7b3ad01_95be_4ec9_a200_08905e7c92bb.slice. May 17 01:36:07.637321 sshd[4704]: Accepted publickey for core from 139.178.89.65 port 50714 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:36:07.640916 sshd[4704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:36:07.652027 systemd-logind[1555]: New session 27 of user core. May 17 01:36:07.654558 systemd[1]: Started session-27.scope. May 17 01:36:07.771311 kubelet[2459]: I0517 01:36:07.771266 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b7b3ad01-95be-4ec9-a200-08905e7c92bb-clustermesh-secrets\") pod \"cilium-s94sr\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " pod="kube-system/cilium-s94sr" May 17 01:36:07.771501 kubelet[2459]: I0517 01:36:07.771329 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cilium-config-path\") pod \"cilium-s94sr\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " pod="kube-system/cilium-s94sr" May 17 01:36:07.771501 kubelet[2459]: I0517 01:36:07.771398 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-hostproc\") pod \"cilium-s94sr\" (UID: 
\"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " pod="kube-system/cilium-s94sr" May 17 01:36:07.771501 kubelet[2459]: I0517 01:36:07.771440 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cilium-cgroup\") pod \"cilium-s94sr\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " pod="kube-system/cilium-s94sr" May 17 01:36:07.771501 kubelet[2459]: I0517 01:36:07.771463 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cilium-ipsec-secrets\") pod \"cilium-s94sr\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " pod="kube-system/cilium-s94sr" May 17 01:36:07.771798 kubelet[2459]: I0517 01:36:07.771510 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-etc-cni-netd\") pod \"cilium-s94sr\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " pod="kube-system/cilium-s94sr" May 17 01:36:07.771798 kubelet[2459]: I0517 01:36:07.771566 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-xtables-lock\") pod \"cilium-s94sr\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " pod="kube-system/cilium-s94sr" May 17 01:36:07.771798 kubelet[2459]: I0517 01:36:07.771601 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b7b3ad01-95be-4ec9-a200-08905e7c92bb-hubble-tls\") pod \"cilium-s94sr\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " pod="kube-system/cilium-s94sr" May 17 01:36:07.771798 kubelet[2459]: I0517 
01:36:07.771639 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-bpf-maps\") pod \"cilium-s94sr\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " pod="kube-system/cilium-s94sr" May 17 01:36:07.771798 kubelet[2459]: I0517 01:36:07.771692 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cni-path\") pod \"cilium-s94sr\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " pod="kube-system/cilium-s94sr" May 17 01:36:07.771798 kubelet[2459]: I0517 01:36:07.771731 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-lib-modules\") pod \"cilium-s94sr\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " pod="kube-system/cilium-s94sr" May 17 01:36:07.772071 kubelet[2459]: I0517 01:36:07.771769 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cilium-run\") pod \"cilium-s94sr\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " pod="kube-system/cilium-s94sr" May 17 01:36:07.772071 kubelet[2459]: I0517 01:36:07.771837 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7qhs\" (UniqueName: \"kubernetes.io/projected/b7b3ad01-95be-4ec9-a200-08905e7c92bb-kube-api-access-c7qhs\") pod \"cilium-s94sr\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " pod="kube-system/cilium-s94sr" May 17 01:36:07.772071 kubelet[2459]: I0517 01:36:07.771874 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-host-proc-sys-net\") pod \"cilium-s94sr\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " pod="kube-system/cilium-s94sr" May 17 01:36:07.772071 kubelet[2459]: I0517 01:36:07.771898 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-host-proc-sys-kernel\") pod \"cilium-s94sr\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " pod="kube-system/cilium-s94sr" May 17 01:36:07.808889 sshd[4704]: pam_unix(sshd:session): session closed for user core May 17 01:36:07.810874 systemd[1]: sshd@29-145.40.90.133:22-139.178.89.65:50714.service: Deactivated successfully. May 17 01:36:07.811274 systemd[1]: session-27.scope: Deactivated successfully. May 17 01:36:07.811757 systemd-logind[1555]: Session 27 logged out. Waiting for processes to exit. May 17 01:36:07.812415 systemd[1]: Started sshd@30-145.40.90.133:22-139.178.89.65:50730.service. May 17 01:36:07.812984 systemd-logind[1555]: Removed session 27. May 17 01:36:07.817797 kubelet[2459]: E0517 01:36:07.817771 2459 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-c7qhs lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-s94sr" podUID="b7b3ad01-95be-4ec9-a200-08905e7c92bb" May 17 01:36:07.841211 sshd[4729]: Accepted publickey for core from 139.178.89.65 port 50730 ssh2: RSA SHA256:dr8898nfQ8VsNqyYr3tPQc6zAjTUznXmSFSYfivFPZc May 17 01:36:07.842114 sshd[4729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:36:07.844888 systemd-logind[1555]: New session 28 of user core. 
May 17 01:36:07.845426 systemd[1]: Started session-28.scope. May 17 01:36:08.378049 kubelet[2459]: I0517 01:36:08.377932 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-hostproc\") pod \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " May 17 01:36:08.378049 kubelet[2459]: I0517 01:36:08.378023 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-host-proc-sys-net\") pod \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " May 17 01:36:08.379054 kubelet[2459]: I0517 01:36:08.378068 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-hostproc" (OuterVolumeSpecName: "hostproc") pod "b7b3ad01-95be-4ec9-a200-08905e7c92bb" (UID: "b7b3ad01-95be-4ec9-a200-08905e7c92bb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 01:36:08.379054 kubelet[2459]: I0517 01:36:08.378091 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cilium-config-path\") pod \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " May 17 01:36:08.379054 kubelet[2459]: I0517 01:36:08.378128 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b7b3ad01-95be-4ec9-a200-08905e7c92bb" (UID: "b7b3ad01-95be-4ec9-a200-08905e7c92bb"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 01:36:08.379054 kubelet[2459]: I0517 01:36:08.378239 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7qhs\" (UniqueName: \"kubernetes.io/projected/b7b3ad01-95be-4ec9-a200-08905e7c92bb-kube-api-access-c7qhs\") pod \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " May 17 01:36:08.379054 kubelet[2459]: I0517 01:36:08.378332 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b7b3ad01-95be-4ec9-a200-08905e7c92bb-hubble-tls\") pod \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " May 17 01:36:08.379610 kubelet[2459]: I0517 01:36:08.378385 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-lib-modules\") pod \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " May 17 01:36:08.379610 kubelet[2459]: I0517 01:36:08.378434 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-host-proc-sys-kernel\") pod \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " May 17 01:36:08.379610 kubelet[2459]: I0517 01:36:08.378484 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-etc-cni-netd\") pod \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " May 17 01:36:08.379610 kubelet[2459]: I0517 01:36:08.378529 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-xtables-lock\") pod \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " May 17 01:36:08.379610 kubelet[2459]: I0517 01:36:08.378565 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b7b3ad01-95be-4ec9-a200-08905e7c92bb" (UID: "b7b3ad01-95be-4ec9-a200-08905e7c92bb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 01:36:08.379610 kubelet[2459]: I0517 01:36:08.378586 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b7b3ad01-95be-4ec9-a200-08905e7c92bb-clustermesh-secrets\") pod \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " May 17 01:36:08.380215 kubelet[2459]: I0517 01:36:08.378653 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b7b3ad01-95be-4ec9-a200-08905e7c92bb" (UID: "b7b3ad01-95be-4ec9-a200-08905e7c92bb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 01:36:08.380215 kubelet[2459]: I0517 01:36:08.378670 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b7b3ad01-95be-4ec9-a200-08905e7c92bb" (UID: "b7b3ad01-95be-4ec9-a200-08905e7c92bb"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 01:36:08.380215 kubelet[2459]: I0517 01:36:08.378706 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-bpf-maps\") pod \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " May 17 01:36:08.380215 kubelet[2459]: I0517 01:36:08.378752 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b7b3ad01-95be-4ec9-a200-08905e7c92bb" (UID: "b7b3ad01-95be-4ec9-a200-08905e7c92bb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 01:36:08.380215 kubelet[2459]: I0517 01:36:08.378756 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b7b3ad01-95be-4ec9-a200-08905e7c92bb" (UID: "b7b3ad01-95be-4ec9-a200-08905e7c92bb"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 01:36:08.380760 kubelet[2459]: I0517 01:36:08.378846 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cilium-cgroup\") pod \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " May 17 01:36:08.380760 kubelet[2459]: I0517 01:36:08.378953 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cilium-ipsec-secrets\") pod \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " May 17 01:36:08.380760 kubelet[2459]: I0517 01:36:08.378943 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b7b3ad01-95be-4ec9-a200-08905e7c92bb" (UID: "b7b3ad01-95be-4ec9-a200-08905e7c92bb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 01:36:08.380760 kubelet[2459]: I0517 01:36:08.379046 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cni-path\") pod \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " May 17 01:36:08.380760 kubelet[2459]: I0517 01:36:08.379109 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cni-path" (OuterVolumeSpecName: "cni-path") pod "b7b3ad01-95be-4ec9-a200-08905e7c92bb" (UID: "b7b3ad01-95be-4ec9-a200-08905e7c92bb"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 01:36:08.380760 kubelet[2459]: I0517 01:36:08.379182 2459 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cilium-run\") pod \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\" (UID: \"b7b3ad01-95be-4ec9-a200-08905e7c92bb\") " May 17 01:36:08.381368 kubelet[2459]: I0517 01:36:08.379251 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b7b3ad01-95be-4ec9-a200-08905e7c92bb" (UID: "b7b3ad01-95be-4ec9-a200-08905e7c92bb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 01:36:08.381368 kubelet[2459]: I0517 01:36:08.379319 2459 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-host-proc-sys-net\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\"" May 17 01:36:08.381368 kubelet[2459]: I0517 01:36:08.379359 2459 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-lib-modules\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\"" May 17 01:36:08.381368 kubelet[2459]: I0517 01:36:08.379390 2459 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\"" May 17 01:36:08.381368 kubelet[2459]: I0517 01:36:08.379418 2459 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-etc-cni-netd\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\"" May 17 
01:36:08.381368 kubelet[2459]: I0517 01:36:08.379446 2459 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-xtables-lock\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\"" May 17 01:36:08.381368 kubelet[2459]: I0517 01:36:08.379471 2459 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-bpf-maps\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\"" May 17 01:36:08.382051 kubelet[2459]: I0517 01:36:08.379495 2459 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cilium-cgroup\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\"" May 17 01:36:08.382051 kubelet[2459]: I0517 01:36:08.379518 2459 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cni-path\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\"" May 17 01:36:08.382051 kubelet[2459]: I0517 01:36:08.379543 2459 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-hostproc\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\"" May 17 01:36:08.382838 kubelet[2459]: I0517 01:36:08.382742 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b7b3ad01-95be-4ec9-a200-08905e7c92bb" (UID: "b7b3ad01-95be-4ec9-a200-08905e7c92bb"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 01:36:08.383675 kubelet[2459]: I0517 01:36:08.383659 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b3ad01-95be-4ec9-a200-08905e7c92bb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b7b3ad01-95be-4ec9-a200-08905e7c92bb" (UID: "b7b3ad01-95be-4ec9-a200-08905e7c92bb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 01:36:08.383675 kubelet[2459]: I0517 01:36:08.383672 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b3ad01-95be-4ec9-a200-08905e7c92bb-kube-api-access-c7qhs" (OuterVolumeSpecName: "kube-api-access-c7qhs") pod "b7b3ad01-95be-4ec9-a200-08905e7c92bb" (UID: "b7b3ad01-95be-4ec9-a200-08905e7c92bb"). InnerVolumeSpecName "kube-api-access-c7qhs". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 01:36:08.383785 kubelet[2459]: I0517 01:36:08.383681 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b3ad01-95be-4ec9-a200-08905e7c92bb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b7b3ad01-95be-4ec9-a200-08905e7c92bb" (UID: "b7b3ad01-95be-4ec9-a200-08905e7c92bb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 01:36:08.383785 kubelet[2459]: I0517 01:36:08.383708 2459 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b7b3ad01-95be-4ec9-a200-08905e7c92bb" (UID: "b7b3ad01-95be-4ec9-a200-08905e7c92bb"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 01:36:08.384619 systemd[1]: var-lib-kubelet-pods-b7b3ad01\x2d95be\x2d4ec9\x2da200\x2d08905e7c92bb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc7qhs.mount: Deactivated successfully. May 17 01:36:08.384674 systemd[1]: var-lib-kubelet-pods-b7b3ad01\x2d95be\x2d4ec9\x2da200\x2d08905e7c92bb-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 17 01:36:08.384709 systemd[1]: var-lib-kubelet-pods-b7b3ad01\x2d95be\x2d4ec9\x2da200\x2d08905e7c92bb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 01:36:08.384741 systemd[1]: var-lib-kubelet-pods-b7b3ad01\x2d95be\x2d4ec9\x2da200\x2d08905e7c92bb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 01:36:08.479939 kubelet[2459]: I0517 01:36:08.479814 2459 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b7b3ad01-95be-4ec9-a200-08905e7c92bb-clustermesh-secrets\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\"" May 17 01:36:08.479939 kubelet[2459]: I0517 01:36:08.479894 2459 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cilium-ipsec-secrets\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\"" May 17 01:36:08.479939 kubelet[2459]: I0517 01:36:08.479948 2459 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cilium-run\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\"" May 17 01:36:08.480484 kubelet[2459]: I0517 01:36:08.479980 2459 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7b3ad01-95be-4ec9-a200-08905e7c92bb-cilium-config-path\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\"" May 
17 01:36:08.480484 kubelet[2459]: I0517 01:36:08.480010 2459 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c7qhs\" (UniqueName: \"kubernetes.io/projected/b7b3ad01-95be-4ec9-a200-08905e7c92bb-kube-api-access-c7qhs\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\"" May 17 01:36:08.480484 kubelet[2459]: I0517 01:36:08.480038 2459 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b7b3ad01-95be-4ec9-a200-08905e7c92bb-hubble-tls\") on node \"ci-3510.3.7-n-2b1b6103b5\" DevicePath \"\"" May 17 01:36:08.975906 env[1563]: time="2025-05-17T01:36:08.975842095Z" level=info msg="StopPodSandbox for \"fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16\"" May 17 01:36:08.976236 env[1563]: time="2025-05-17T01:36:08.975948820Z" level=info msg="TearDown network for sandbox \"fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16\" successfully" May 17 01:36:08.976236 env[1563]: time="2025-05-17T01:36:08.975968449Z" level=info msg="StopPodSandbox for \"fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16\" returns successfully" May 17 01:36:08.976280 env[1563]: time="2025-05-17T01:36:08.976253068Z" level=info msg="RemovePodSandbox for \"fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16\"" May 17 01:36:08.976302 env[1563]: time="2025-05-17T01:36:08.976273410Z" level=info msg="Forcibly stopping sandbox \"fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16\"" May 17 01:36:08.976409 env[1563]: time="2025-05-17T01:36:08.976329374Z" level=info msg="TearDown network for sandbox \"fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16\" successfully" May 17 01:36:08.978017 systemd[1]: Removed slice kubepods-burstable-podb7b3ad01_95be_4ec9_a200_08905e7c92bb.slice. 
May 17 01:36:08.978205 env[1563]: time="2025-05-17T01:36:08.978155006Z" level=info msg="RemovePodSandbox \"fe4d0adc21b19044a0a0d1e4af3bfaa4e1c16cf4549bf9d11616937153db5f16\" returns successfully" May 17 01:36:08.978372 env[1563]: time="2025-05-17T01:36:08.978341820Z" level=info msg="StopPodSandbox for \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\"" May 17 01:36:08.978415 env[1563]: time="2025-05-17T01:36:08.978396587Z" level=info msg="TearDown network for sandbox \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\" successfully" May 17 01:36:08.978415 env[1563]: time="2025-05-17T01:36:08.978413461Z" level=info msg="StopPodSandbox for \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\" returns successfully" May 17 01:36:08.978593 env[1563]: time="2025-05-17T01:36:08.978566318Z" level=info msg="RemovePodSandbox for \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\"" May 17 01:36:08.978618 env[1563]: time="2025-05-17T01:36:08.978595664Z" level=info msg="Forcibly stopping sandbox \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\"" May 17 01:36:08.978654 env[1563]: time="2025-05-17T01:36:08.978624609Z" level=info msg="TearDown network for sandbox \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\" successfully" May 17 01:36:08.979697 env[1563]: time="2025-05-17T01:36:08.979686050Z" level=info msg="RemovePodSandbox \"2229d1b31162483fe7c01540cb4f7cc7d2e0ed9dce190857b102764caf75d7bc\" returns successfully" May 17 01:36:09.109100 kubelet[2459]: E0517 01:36:09.108995 2459 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 01:36:09.255551 systemd[1]: Created slice kubepods-burstable-pod59654d32_8188_46d6_8097_9e280b257686.slice. 
May 17 01:36:09.388779 kubelet[2459]: I0517 01:36:09.388653 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/59654d32-8188-46d6-8097-9e280b257686-bpf-maps\") pod \"cilium-wzz7m\" (UID: \"59654d32-8188-46d6-8097-9e280b257686\") " pod="kube-system/cilium-wzz7m"
May 17 01:36:09.388779 kubelet[2459]: I0517 01:36:09.388754 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/59654d32-8188-46d6-8097-9e280b257686-host-proc-sys-net\") pod \"cilium-wzz7m\" (UID: \"59654d32-8188-46d6-8097-9e280b257686\") " pod="kube-system/cilium-wzz7m"
May 17 01:36:09.389769 kubelet[2459]: I0517 01:36:09.388816 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59654d32-8188-46d6-8097-9e280b257686-lib-modules\") pod \"cilium-wzz7m\" (UID: \"59654d32-8188-46d6-8097-9e280b257686\") " pod="kube-system/cilium-wzz7m"
May 17 01:36:09.389769 kubelet[2459]: I0517 01:36:09.388868 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/59654d32-8188-46d6-8097-9e280b257686-host-proc-sys-kernel\") pod \"cilium-wzz7m\" (UID: \"59654d32-8188-46d6-8097-9e280b257686\") " pod="kube-system/cilium-wzz7m"
May 17 01:36:09.389769 kubelet[2459]: I0517 01:36:09.388915 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/59654d32-8188-46d6-8097-9e280b257686-hubble-tls\") pod \"cilium-wzz7m\" (UID: \"59654d32-8188-46d6-8097-9e280b257686\") " pod="kube-system/cilium-wzz7m"
May 17 01:36:09.389769 kubelet[2459]: I0517 01:36:09.388959 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/59654d32-8188-46d6-8097-9e280b257686-cni-path\") pod \"cilium-wzz7m\" (UID: \"59654d32-8188-46d6-8097-9e280b257686\") " pod="kube-system/cilium-wzz7m"
May 17 01:36:09.389769 kubelet[2459]: I0517 01:36:09.389008 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59654d32-8188-46d6-8097-9e280b257686-xtables-lock\") pod \"cilium-wzz7m\" (UID: \"59654d32-8188-46d6-8097-9e280b257686\") " pod="kube-system/cilium-wzz7m"
May 17 01:36:09.389769 kubelet[2459]: I0517 01:36:09.389055 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/59654d32-8188-46d6-8097-9e280b257686-cilium-run\") pod \"cilium-wzz7m\" (UID: \"59654d32-8188-46d6-8097-9e280b257686\") " pod="kube-system/cilium-wzz7m"
May 17 01:36:09.390408 kubelet[2459]: I0517 01:36:09.389104 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/59654d32-8188-46d6-8097-9e280b257686-hostproc\") pod \"cilium-wzz7m\" (UID: \"59654d32-8188-46d6-8097-9e280b257686\") " pod="kube-system/cilium-wzz7m"
May 17 01:36:09.390408 kubelet[2459]: I0517 01:36:09.389153 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/59654d32-8188-46d6-8097-9e280b257686-cilium-ipsec-secrets\") pod \"cilium-wzz7m\" (UID: \"59654d32-8188-46d6-8097-9e280b257686\") " pod="kube-system/cilium-wzz7m"
May 17 01:36:09.390408 kubelet[2459]: I0517 01:36:09.389200 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vrzn\" (UniqueName: \"kubernetes.io/projected/59654d32-8188-46d6-8097-9e280b257686-kube-api-access-5vrzn\") pod \"cilium-wzz7m\" (UID: \"59654d32-8188-46d6-8097-9e280b257686\") " pod="kube-system/cilium-wzz7m"
May 17 01:36:09.390408 kubelet[2459]: I0517 01:36:09.389259 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/59654d32-8188-46d6-8097-9e280b257686-clustermesh-secrets\") pod \"cilium-wzz7m\" (UID: \"59654d32-8188-46d6-8097-9e280b257686\") " pod="kube-system/cilium-wzz7m"
May 17 01:36:09.390408 kubelet[2459]: I0517 01:36:09.389318 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59654d32-8188-46d6-8097-9e280b257686-cilium-config-path\") pod \"cilium-wzz7m\" (UID: \"59654d32-8188-46d6-8097-9e280b257686\") " pod="kube-system/cilium-wzz7m"
May 17 01:36:09.390910 kubelet[2459]: I0517 01:36:09.389369 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/59654d32-8188-46d6-8097-9e280b257686-cilium-cgroup\") pod \"cilium-wzz7m\" (UID: \"59654d32-8188-46d6-8097-9e280b257686\") " pod="kube-system/cilium-wzz7m"
May 17 01:36:09.390910 kubelet[2459]: I0517 01:36:09.389412 2459 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59654d32-8188-46d6-8097-9e280b257686-etc-cni-netd\") pod \"cilium-wzz7m\" (UID: \"59654d32-8188-46d6-8097-9e280b257686\") " pod="kube-system/cilium-wzz7m"
May 17 01:36:09.557657 env[1563]: time="2025-05-17T01:36:09.557501031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wzz7m,Uid:59654d32-8188-46d6-8097-9e280b257686,Namespace:kube-system,Attempt:0,}"
May 17 01:36:09.575180 env[1563]: time="2025-05-17T01:36:09.575008858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:36:09.575180 env[1563]: time="2025-05-17T01:36:09.575098844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:36:09.575180 env[1563]: time="2025-05-17T01:36:09.575133669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:36:09.575650 env[1563]: time="2025-05-17T01:36:09.575471782Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3ed7ba5b376e75b647e01631fd2f32a18cb38330c4acf1bae5faa452073252c pid=4771 runtime=io.containerd.runc.v2
May 17 01:36:09.607179 systemd[1]: Started cri-containerd-e3ed7ba5b376e75b647e01631fd2f32a18cb38330c4acf1bae5faa452073252c.scope.
May 17 01:36:09.651305 env[1563]: time="2025-05-17T01:36:09.651184933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wzz7m,Uid:59654d32-8188-46d6-8097-9e280b257686,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3ed7ba5b376e75b647e01631fd2f32a18cb38330c4acf1bae5faa452073252c\""
May 17 01:36:09.655887 env[1563]: time="2025-05-17T01:36:09.655806924Z" level=info msg="CreateContainer within sandbox \"e3ed7ba5b376e75b647e01631fd2f32a18cb38330c4acf1bae5faa452073252c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 01:36:09.670385 env[1563]: time="2025-05-17T01:36:09.670260758Z" level=info msg="CreateContainer within sandbox \"e3ed7ba5b376e75b647e01631fd2f32a18cb38330c4acf1bae5faa452073252c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0f01143e6fdacc04eac9efd77de35aef6b6e1382621a349ff344a69446dab660\""
May 17 01:36:09.671147 env[1563]: time="2025-05-17T01:36:09.671071193Z" level=info msg="StartContainer for \"0f01143e6fdacc04eac9efd77de35aef6b6e1382621a349ff344a69446dab660\""
May 17 01:36:09.703538 systemd[1]: Started cri-containerd-0f01143e6fdacc04eac9efd77de35aef6b6e1382621a349ff344a69446dab660.scope.
May 17 01:36:09.755005 env[1563]: time="2025-05-17T01:36:09.754908486Z" level=info msg="StartContainer for \"0f01143e6fdacc04eac9efd77de35aef6b6e1382621a349ff344a69446dab660\" returns successfully"
May 17 01:36:09.775900 systemd[1]: cri-containerd-0f01143e6fdacc04eac9efd77de35aef6b6e1382621a349ff344a69446dab660.scope: Deactivated successfully.
May 17 01:36:09.835625 env[1563]: time="2025-05-17T01:36:09.835353365Z" level=info msg="shim disconnected" id=0f01143e6fdacc04eac9efd77de35aef6b6e1382621a349ff344a69446dab660
May 17 01:36:09.835625 env[1563]: time="2025-05-17T01:36:09.835487263Z" level=warning msg="cleaning up after shim disconnected" id=0f01143e6fdacc04eac9efd77de35aef6b6e1382621a349ff344a69446dab660 namespace=k8s.io
May 17 01:36:09.835625 env[1563]: time="2025-05-17T01:36:09.835522762Z" level=info msg="cleaning up dead shim"
May 17 01:36:09.852181 env[1563]: time="2025-05-17T01:36:09.852097558Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:36:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4857 runtime=io.containerd.runc.v2\n"
May 17 01:36:10.229052 env[1563]: time="2025-05-17T01:36:10.228885325Z" level=info msg="CreateContainer within sandbox \"e3ed7ba5b376e75b647e01631fd2f32a18cb38330c4acf1bae5faa452073252c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 01:36:10.243974 env[1563]: time="2025-05-17T01:36:10.243837633Z" level=info msg="CreateContainer within sandbox \"e3ed7ba5b376e75b647e01631fd2f32a18cb38330c4acf1bae5faa452073252c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f3a08b61c5466757f00791a1a51dafae7a9f87fb693259afcc183d4f7cdf90e2\""
May 17 01:36:10.244852 env[1563]: time="2025-05-17T01:36:10.244795484Z" level=info msg="StartContainer for \"f3a08b61c5466757f00791a1a51dafae7a9f87fb693259afcc183d4f7cdf90e2\""
May 17 01:36:10.260644 systemd[1]: Started cri-containerd-f3a08b61c5466757f00791a1a51dafae7a9f87fb693259afcc183d4f7cdf90e2.scope.
May 17 01:36:10.280285 systemd[1]: cri-containerd-f3a08b61c5466757f00791a1a51dafae7a9f87fb693259afcc183d4f7cdf90e2.scope: Deactivated successfully.
May 17 01:36:10.291081 env[1563]: time="2025-05-17T01:36:10.291030967Z" level=info msg="StartContainer for \"f3a08b61c5466757f00791a1a51dafae7a9f87fb693259afcc183d4f7cdf90e2\" returns successfully"
May 17 01:36:10.319736 env[1563]: time="2025-05-17T01:36:10.319674143Z" level=info msg="shim disconnected" id=f3a08b61c5466757f00791a1a51dafae7a9f87fb693259afcc183d4f7cdf90e2
May 17 01:36:10.319736 env[1563]: time="2025-05-17T01:36:10.319706047Z" level=warning msg="cleaning up after shim disconnected" id=f3a08b61c5466757f00791a1a51dafae7a9f87fb693259afcc183d4f7cdf90e2 namespace=k8s.io
May 17 01:36:10.319736 env[1563]: time="2025-05-17T01:36:10.319714608Z" level=info msg="cleaning up dead shim"
May 17 01:36:10.324425 env[1563]: time="2025-05-17T01:36:10.324378616Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:36:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4918 runtime=io.containerd.runc.v2\n"
May 17 01:36:10.970168 kubelet[2459]: I0517 01:36:10.970049 2459 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7b3ad01-95be-4ec9-a200-08905e7c92bb" path="/var/lib/kubelet/pods/b7b3ad01-95be-4ec9-a200-08905e7c92bb/volumes"
May 17 01:36:11.225592 env[1563]: time="2025-05-17T01:36:11.225504914Z" level=info msg="CreateContainer within sandbox \"e3ed7ba5b376e75b647e01631fd2f32a18cb38330c4acf1bae5faa452073252c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 01:36:11.230923 env[1563]: time="2025-05-17T01:36:11.230902611Z" level=info msg="CreateContainer within sandbox \"e3ed7ba5b376e75b647e01631fd2f32a18cb38330c4acf1bae5faa452073252c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2e5a27277fddd712dcf2c1f9a885143ab68f1edab3d82f3caad5e49daf7f0f58\""
May 17 01:36:11.231187 env[1563]: time="2025-05-17T01:36:11.231171592Z" level=info msg="StartContainer for \"2e5a27277fddd712dcf2c1f9a885143ab68f1edab3d82f3caad5e49daf7f0f58\""
May 17 01:36:11.231864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3069620429.mount: Deactivated successfully.
May 17 01:36:11.241277 systemd[1]: Started cri-containerd-2e5a27277fddd712dcf2c1f9a885143ab68f1edab3d82f3caad5e49daf7f0f58.scope.
May 17 01:36:11.254968 env[1563]: time="2025-05-17T01:36:11.254915014Z" level=info msg="StartContainer for \"2e5a27277fddd712dcf2c1f9a885143ab68f1edab3d82f3caad5e49daf7f0f58\" returns successfully"
May 17 01:36:11.256355 systemd[1]: cri-containerd-2e5a27277fddd712dcf2c1f9a885143ab68f1edab3d82f3caad5e49daf7f0f58.scope: Deactivated successfully.
May 17 01:36:11.279852 env[1563]: time="2025-05-17T01:36:11.279788814Z" level=info msg="shim disconnected" id=2e5a27277fddd712dcf2c1f9a885143ab68f1edab3d82f3caad5e49daf7f0f58
May 17 01:36:11.279852 env[1563]: time="2025-05-17T01:36:11.279820522Z" level=warning msg="cleaning up after shim disconnected" id=2e5a27277fddd712dcf2c1f9a885143ab68f1edab3d82f3caad5e49daf7f0f58 namespace=k8s.io
May 17 01:36:11.279852 env[1563]: time="2025-05-17T01:36:11.279827099Z" level=info msg="cleaning up dead shim"
May 17 01:36:11.283946 env[1563]: time="2025-05-17T01:36:11.283893686Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:36:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4975 runtime=io.containerd.runc.v2\n"
May 17 01:36:11.502980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e5a27277fddd712dcf2c1f9a885143ab68f1edab3d82f3caad5e49daf7f0f58-rootfs.mount: Deactivated successfully.
May 17 01:36:12.228265 env[1563]: time="2025-05-17T01:36:12.228240981Z" level=info msg="CreateContainer within sandbox \"e3ed7ba5b376e75b647e01631fd2f32a18cb38330c4acf1bae5faa452073252c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 01:36:12.232532 env[1563]: time="2025-05-17T01:36:12.232481559Z" level=info msg="CreateContainer within sandbox \"e3ed7ba5b376e75b647e01631fd2f32a18cb38330c4acf1bae5faa452073252c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4206ad2f933a365e49f63722105f72439858626192edaac418841de7559d4023\""
May 17 01:36:12.232826 env[1563]: time="2025-05-17T01:36:12.232811320Z" level=info msg="StartContainer for \"4206ad2f933a365e49f63722105f72439858626192edaac418841de7559d4023\""
May 17 01:36:12.242369 systemd[1]: Started cri-containerd-4206ad2f933a365e49f63722105f72439858626192edaac418841de7559d4023.scope.
May 17 01:36:12.253265 env[1563]: time="2025-05-17T01:36:12.253239512Z" level=info msg="StartContainer for \"4206ad2f933a365e49f63722105f72439858626192edaac418841de7559d4023\" returns successfully"
May 17 01:36:12.253519 systemd[1]: cri-containerd-4206ad2f933a365e49f63722105f72439858626192edaac418841de7559d4023.scope: Deactivated successfully.
May 17 01:36:12.262876 env[1563]: time="2025-05-17T01:36:12.262847641Z" level=info msg="shim disconnected" id=4206ad2f933a365e49f63722105f72439858626192edaac418841de7559d4023
May 17 01:36:12.262876 env[1563]: time="2025-05-17T01:36:12.262875743Z" level=warning msg="cleaning up after shim disconnected" id=4206ad2f933a365e49f63722105f72439858626192edaac418841de7559d4023 namespace=k8s.io
May 17 01:36:12.262991 env[1563]: time="2025-05-17T01:36:12.262881918Z" level=info msg="cleaning up dead shim"
May 17 01:36:12.266256 env[1563]: time="2025-05-17T01:36:12.266239313Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:36:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5028 runtime=io.containerd.runc.v2\n"
May 17 01:36:12.502719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4206ad2f933a365e49f63722105f72439858626192edaac418841de7559d4023-rootfs.mount: Deactivated successfully.
May 17 01:36:13.236140 env[1563]: time="2025-05-17T01:36:13.236118346Z" level=info msg="CreateContainer within sandbox \"e3ed7ba5b376e75b647e01631fd2f32a18cb38330c4acf1bae5faa452073252c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 01:36:13.242050 env[1563]: time="2025-05-17T01:36:13.242025482Z" level=info msg="CreateContainer within sandbox \"e3ed7ba5b376e75b647e01631fd2f32a18cb38330c4acf1bae5faa452073252c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6b37448e69da85f89be8f461fdcb9f74d5de49433d0808f11e24e6973948d912\""
May 17 01:36:13.242355 env[1563]: time="2025-05-17T01:36:13.242318190Z" level=info msg="StartContainer for \"6b37448e69da85f89be8f461fdcb9f74d5de49433d0808f11e24e6973948d912\""
May 17 01:36:13.242966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037890428.mount: Deactivated successfully.
May 17 01:36:13.250632 systemd[1]: Started cri-containerd-6b37448e69da85f89be8f461fdcb9f74d5de49433d0808f11e24e6973948d912.scope.
May 17 01:36:13.263707 env[1563]: time="2025-05-17T01:36:13.263652397Z" level=info msg="StartContainer for \"6b37448e69da85f89be8f461fdcb9f74d5de49433d0808f11e24e6973948d912\" returns successfully"
May 17 01:36:13.410357 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 17 01:36:14.252243 kubelet[2459]: I0517 01:36:14.252198 2459 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wzz7m" podStartSLOduration=5.252180288 podStartE2EDuration="5.252180288s" podCreationTimestamp="2025-05-17 01:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:36:14.252039864 +0000 UTC m=+425.398217658" watchObservedRunningTime="2025-05-17 01:36:14.252180288 +0000 UTC m=+425.398358080"
May 17 01:36:16.732462 systemd-networkd[1323]: lxc_health: Link UP
May 17 01:36:16.758186 systemd-networkd[1323]: lxc_health: Gained carrier
May 17 01:36:16.758356 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 01:36:18.392425 systemd-networkd[1323]: lxc_health: Gained IPv6LL
May 17 01:36:22.393298 sshd[4729]: pam_unix(sshd:session): session closed for user core
May 17 01:36:22.394887 systemd[1]: sshd@30-145.40.90.133:22-139.178.89.65:50730.service: Deactivated successfully.
May 17 01:36:22.395359 systemd[1]: session-28.scope: Deactivated successfully.
May 17 01:36:22.395773 systemd-logind[1555]: Session 28 logged out. Waiting for processes to exit.
May 17 01:36:22.396225 systemd-logind[1555]: Removed session 28.