Jul 2 10:23:16.569926 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 10:23:16.569938 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 10:23:16.569945 kernel: BIOS-provided physical RAM map:
Jul 2 10:23:16.569949 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jul 2 10:23:16.569953 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jul 2 10:23:16.569956 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jul 2 10:23:16.569961 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jul 2 10:23:16.569965 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jul 2 10:23:16.569969 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b27fff] usable
Jul 2 10:23:16.569973 kernel: BIOS-e820: [mem 0x0000000081b28000-0x0000000081b28fff] ACPI NVS
Jul 2 10:23:16.569977 kernel: BIOS-e820: [mem 0x0000000081b29000-0x0000000081b29fff] reserved
Jul 2 10:23:16.569981 kernel: BIOS-e820: [mem 0x0000000081b2a000-0x000000008afccfff] usable
Jul 2 10:23:16.569985 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Jul 2 10:23:16.569989 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Jul 2 10:23:16.569994 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Jul 2 10:23:16.569999 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Jul 2 10:23:16.570003 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Jul 2 10:23:16.570008 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Jul 2 10:23:16.570012 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 2 10:23:16.570016 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jul 2 10:23:16.570020 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jul 2 10:23:16.570024 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jul 2 10:23:16.570029 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jul 2 10:23:16.570033 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Jul 2 10:23:16.570037 kernel: NX (Execute Disable) protection: active
Jul 2 10:23:16.570041 kernel: SMBIOS 3.2.1 present.
Jul 2 10:23:16.570046 kernel: DMI: Supermicro X11SCM-F/X11SCM-F, BIOS 1.9 09/16/2022
Jul 2 10:23:16.570051 kernel: tsc: Detected 3400.000 MHz processor
Jul 2 10:23:16.570055 kernel: tsc: Detected 3399.906 MHz TSC
Jul 2 10:23:16.570059 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 10:23:16.570064 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 10:23:16.570069 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Jul 2 10:23:16.570073 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 10:23:16.570077 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Jul 2 10:23:16.570082 kernel: Using GB pages for direct mapping
Jul 2 10:23:16.570086 kernel: ACPI: Early table checksum verification disabled
Jul 2 10:23:16.570091 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jul 2 10:23:16.570095 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jul 2 10:23:16.570100 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Jul 2 10:23:16.570104 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jul 2 10:23:16.570111 kernel: ACPI: FACS 0x000000008C66CF80 000040
Jul 2 10:23:16.570115 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Jul 2 10:23:16.570121 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Jul 2 10:23:16.570126 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jul 2 10:23:16.570130 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jul 2 10:23:16.570135 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jul 2 10:23:16.570140 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jul 2 10:23:16.570144 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jul 2 10:23:16.570149 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jul 2 10:23:16.570154 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 2 10:23:16.570159 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jul 2 10:23:16.570164 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jul 2 10:23:16.570169 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 2 10:23:16.570173 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 2 10:23:16.570178 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jul 2 10:23:16.570183 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jul 2 10:23:16.570188 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 2 10:23:16.570192 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jul 2 10:23:16.570198 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jul 2 10:23:16.570202 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Jul 2 10:23:16.570207 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jul 2 10:23:16.570212 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jul 2 10:23:16.570217 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jul 2 10:23:16.570222 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Jul 2 10:23:16.570228 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jul 2 10:23:16.570233 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jul 2 10:23:16.570238 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jul 2 10:23:16.570243 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jul 2 10:23:16.570248 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jul 2 10:23:16.570253 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Jul 2 10:23:16.570258 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Jul 2 10:23:16.570262 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Jul 2 10:23:16.570267 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Jul 2 10:23:16.570272 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Jul 2 10:23:16.570277 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Jul 2 10:23:16.570282 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Jul 2 10:23:16.570302 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Jul 2 10:23:16.570307 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Jul 2 10:23:16.570311 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Jul 2 10:23:16.570316 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Jul 2 10:23:16.570320 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Jul 2 10:23:16.570325 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Jul 2 10:23:16.570329 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Jul 2 10:23:16.570334 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Jul 2 10:23:16.570339 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Jul 2 10:23:16.570344 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Jul 2 10:23:16.570349 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Jul 2 10:23:16.570353 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Jul 2 10:23:16.570358 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Jul 2 10:23:16.570362 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Jul 2 10:23:16.570367 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Jul 2 10:23:16.570371 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Jul 2 10:23:16.570376 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Jul 2 10:23:16.570381 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Jul 2 10:23:16.570386 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Jul 2 10:23:16.570390 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Jul 2 10:23:16.570395 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Jul 2 10:23:16.570399 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Jul 2 10:23:16.570404 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Jul 2 10:23:16.570409 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Jul 2 10:23:16.570413 kernel: No NUMA configuration found
Jul 2 10:23:16.570418 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Jul 2 10:23:16.570423 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Jul 2 10:23:16.570428 kernel: Zone ranges:
Jul 2 10:23:16.570432 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 10:23:16.570437 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 2 10:23:16.570442 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Jul 2 10:23:16.570446 kernel: Movable zone start for each node
Jul 2 10:23:16.570451 kernel: Early memory node ranges
Jul 2 10:23:16.570455 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Jul 2 10:23:16.570460 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Jul 2 10:23:16.570464 kernel: node 0: [mem 0x0000000040400000-0x0000000081b27fff]
Jul 2 10:23:16.570470 kernel: node 0: [mem 0x0000000081b2a000-0x000000008afccfff]
Jul 2 10:23:16.570474 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Jul 2 10:23:16.570479 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Jul 2 10:23:16.570484 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Jul 2 10:23:16.570488 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Jul 2 10:23:16.570493 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 10:23:16.570501 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jul 2 10:23:16.570506 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jul 2 10:23:16.570511 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jul 2 10:23:16.570516 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Jul 2 10:23:16.570522 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Jul 2 10:23:16.570527 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Jul 2 10:23:16.570532 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Jul 2 10:23:16.570537 kernel: ACPI: PM-Timer IO Port: 0x1808
Jul 2 10:23:16.570542 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jul 2 10:23:16.570547 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jul 2 10:23:16.570551 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jul 2 10:23:16.570557 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jul 2 10:23:16.570562 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jul 2 10:23:16.570567 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jul 2 10:23:16.570572 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jul 2 10:23:16.570577 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jul 2 10:23:16.570581 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jul 2 10:23:16.570586 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jul 2 10:23:16.570591 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jul 2 10:23:16.570596 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jul 2 10:23:16.570602 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jul 2 10:23:16.570606 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jul 2 10:23:16.570611 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jul 2 10:23:16.570616 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jul 2 10:23:16.570621 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jul 2 10:23:16.570626 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 10:23:16.570631 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 10:23:16.570636 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 10:23:16.570641 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 10:23:16.570647 kernel: TSC deadline timer available
Jul 2 10:23:16.570652 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jul 2 10:23:16.570657 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Jul 2 10:23:16.570662 kernel: Booting paravirtualized kernel on bare hardware
Jul 2 10:23:16.570666 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 10:23:16.570672 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Jul 2 10:23:16.570676 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Jul 2 10:23:16.570681 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Jul 2 10:23:16.570686 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jul 2 10:23:16.570692 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Jul 2 10:23:16.570697 kernel: Policy zone: Normal
Jul 2 10:23:16.570702 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 10:23:16.570708 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
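As an aside on reading the firmware memory map above: each `BIOS-e820:` entry is an inclusive address range with a type, so the machine's firmware-reported usable memory can be totalled mechanically. A minimal sketch (not part of the boot log; the regex and function names are my own) that parses such lines from dmesg text:

```python
import re

# Matches "BIOS-e820: [mem 0xSTART-0xEND] TYPE"; the type may contain a space ("ACPI NVS").
E820_LINE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] ([\w ]+)")

def usable_bytes(dmesg_lines):
    """Sum the sizes of all ranges the firmware marked 'usable'."""
    total = 0
    for line in dmesg_lines:
        m = E820_LINE.search(line)
        if m and m.group(3).strip() == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1  # e820 ranges are inclusive on both ends
    return total

# Three entries taken verbatim from the map above:
sample = [
    "kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable",
    "kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved",
    "kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable",
]
print(usable_bytes(sample))  # 0x99800 + 0x3ff00000 bytes from the two usable ranges
```

The same map is exposed at runtime under `/sys/firmware/memmap/` on systems that support it.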
Jul 2 10:23:16.570713 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jul 2 10:23:16.570717 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jul 2 10:23:16.570722 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 10:23:16.570728 kernel: Memory: 32722604K/33452980K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 730116K reserved, 0K cma-reserved)
Jul 2 10:23:16.570733 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jul 2 10:23:16.570738 kernel: ftrace: allocating 34514 entries in 135 pages
Jul 2 10:23:16.570743 kernel: ftrace: allocated 135 pages with 4 groups
Jul 2 10:23:16.570748 kernel: rcu: Hierarchical RCU implementation.
Jul 2 10:23:16.570754 kernel: rcu: RCU event tracing is enabled.
Jul 2 10:23:16.570758 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jul 2 10:23:16.570763 kernel: Rude variant of Tasks RCU enabled.
Jul 2 10:23:16.570768 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 10:23:16.570774 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 10:23:16.570779 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jul 2 10:23:16.570784 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jul 2 10:23:16.570789 kernel: random: crng init done
Jul 2 10:23:16.570794 kernel: Console: colour dummy device 80x25
Jul 2 10:23:16.570799 kernel: printk: console [tty0] enabled
Jul 2 10:23:16.570804 kernel: printk: console [ttyS1] enabled
Jul 2 10:23:16.570808 kernel: ACPI: Core revision 20210730
Jul 2 10:23:16.570813 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
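The dentry- and inode-cache lines above are internally consistent, and the relationship is worth making explicit: the reported byte size is the entry count times one pointer-sized bucket head (8 bytes on x86_64), and `order: N` means 2^N contiguous 4 KiB pages. A small sketch checking both figures from the log (the helper names are mine):

```python
PAGE_SIZE = 4096  # x86_64 base page size

def hash_table_bytes(entries, bucket_size=8):
    # Each hash bucket head is one 8-byte pointer on x86_64.
    return entries * bucket_size

def order_pages_bytes(order):
    # "order: N" denotes an allocation of 2**N contiguous pages.
    return (1 << order) * PAGE_SIZE

# Figures from the log above:
assert hash_table_bytes(4194304) == 33554432 == order_pages_bytes(13)  # dentry cache
assert hash_table_bytes(2097152) == 16777216 == order_pages_bytes(12)  # inode cache
```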
Jul 2 10:23:16.570818 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 10:23:16.570824 kernel: DMAR: Host address width 39
Jul 2 10:23:16.570829 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jul 2 10:23:16.570834 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jul 2 10:23:16.570839 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Jul 2 10:23:16.570844 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Jul 2 10:23:16.570849 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jul 2 10:23:16.570854 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jul 2 10:23:16.570858 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jul 2 10:23:16.570863 kernel: x2apic enabled
Jul 2 10:23:16.570869 kernel: Switched APIC routing to cluster x2apic.
Jul 2 10:23:16.570874 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Jul 2 10:23:16.570879 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Jul 2 10:23:16.570884 kernel: CPU0: Thermal monitoring enabled (TM1)
Jul 2 10:23:16.570889 kernel: process: using mwait in idle threads
Jul 2 10:23:16.570894 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 2 10:23:16.570898 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jul 2 10:23:16.570903 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 10:23:16.570908 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Jul 2 10:23:16.570914 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jul 2 10:23:16.570919 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jul 2 10:23:16.570923 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jul 2 10:23:16.570928 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 10:23:16.570933 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jul 2 10:23:16.570938 kernel: RETBleed: Mitigation: Enhanced IBRS
Jul 2 10:23:16.570943 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 10:23:16.570948 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 2 10:23:16.570953 kernel: TAA: Mitigation: TSX disabled
Jul 2 10:23:16.570957 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jul 2 10:23:16.570962 kernel: SRBDS: Mitigation: Microcode
Jul 2 10:23:16.570968 kernel: GDS: Vulnerable: No microcode
Jul 2 10:23:16.570973 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 10:23:16.570978 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 10:23:16.570982 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 10:23:16.570987 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 2 10:23:16.570992 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 2 10:23:16.570997 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 10:23:16.571002 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 2 10:23:16.571007 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 2 10:23:16.571011 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
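Most of the vulnerability lines above follow a `NAME: STATUS: detail` shape, where the status is either `Mitigation` or `Vulnerable` (note `GDS: Vulnerable: No microcode` is the one unmitigated item on this host). A sketch classifying such lines from the log text (the function name is mine; the same statuses are exposed at runtime under `/sys/devices/system/cpu/vulnerabilities/`):

```python
def mitigation_status(line):
    """Split a 'NAME: STATUS: detail' mitigation line into (name, status)."""
    name, rest = line.split(": ", 1)
    # Some entries ("Spectre V2 : ...") put a space before the colon; strip it.
    return name.strip(), rest.split(":", 1)[0]

# Lines taken from the log above:
lines = [
    "TAA: Mitigation: TSX disabled",
    "SRBDS: Mitigation: Microcode",
    "GDS: Vulnerable: No microcode",
]
unmitigated = [mitigation_status(l)[0] for l in lines
               if mitigation_status(l)[1] == "Vulnerable"]
print(unmitigated)  # ['GDS']
```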
Jul 2 10:23:16.571016 kernel: Freeing SMP alternatives memory: 32K
Jul 2 10:23:16.571022 kernel: pid_max: default: 32768 minimum: 301
Jul 2 10:23:16.571027 kernel: LSM: Security Framework initializing
Jul 2 10:23:16.571032 kernel: SELinux: Initializing.
Jul 2 10:23:16.571036 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 10:23:16.571041 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 10:23:16.571046 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Jul 2 10:23:16.571051 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jul 2 10:23:16.571056 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jul 2 10:23:16.571061 kernel: ... version:                4
Jul 2 10:23:16.571066 kernel: ... bit width:              48
Jul 2 10:23:16.571071 kernel: ... generic registers:      4
Jul 2 10:23:16.571076 kernel: ... value mask:             0000ffffffffffff
Jul 2 10:23:16.571081 kernel: ... max period:             00007fffffffffff
Jul 2 10:23:16.571086 kernel: ... fixed-purpose events:   3
Jul 2 10:23:16.571091 kernel: ... event mask:             000000070000000f
Jul 2 10:23:16.571096 kernel: signal: max sigframe size: 2032
Jul 2 10:23:16.571101 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 10:23:16.571106 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jul 2 10:23:16.571111 kernel: smp: Bringing up secondary CPUs ...
Jul 2 10:23:16.571115 kernel: x86: Booting SMP configuration:
Jul 2 10:23:16.571121 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Jul 2 10:23:16.571126 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 2 10:23:16.571131 kernel: #9 #10 #11 #12 #13 #14 #15
Jul 2 10:23:16.571136 kernel: smp: Brought up 1 node, 16 CPUs
Jul 2 10:23:16.571141 kernel: smpboot: Max logical packages: 1
Jul 2 10:23:16.571146 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Jul 2 10:23:16.571151 kernel: devtmpfs: initialized
Jul 2 10:23:16.571156 kernel: x86/mm: Memory block size: 128MB
Jul 2 10:23:16.571161 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b28000-0x81b28fff] (4096 bytes)
Jul 2 10:23:16.571167 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Jul 2 10:23:16.571171 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 10:23:16.571176 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jul 2 10:23:16.571181 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 10:23:16.571186 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 10:23:16.571191 kernel: audit: initializing netlink subsys (disabled)
Jul 2 10:23:16.571196 kernel: audit: type=2000 audit(1719915790.041:1): state=initialized audit_enabled=0 res=1
Jul 2 10:23:16.571201 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 10:23:16.571206 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 10:23:16.571211 kernel: cpuidle: using governor menu
Jul 2 10:23:16.571216 kernel: ACPI: bus type PCI registered
Jul 2 10:23:16.571221 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 10:23:16.571228 kernel: dca service started, version 1.12.1
Jul 2 10:23:16.571233 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jul 2 10:23:16.571257 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Jul 2 10:23:16.571262 kernel: PCI: Using configuration type 1 for base access
Jul 2 10:23:16.571267 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jul 2 10:23:16.571272 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 10:23:16.571278 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 10:23:16.571283 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 10:23:16.571288 kernel: ACPI: Added _OSI(Module Device)
Jul 2 10:23:16.571307 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 10:23:16.571312 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 10:23:16.571317 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 10:23:16.571322 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 10:23:16.571327 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 10:23:16.571332 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 10:23:16.571337 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jul 2 10:23:16.571342 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 10:23:16.571347 kernel: ACPI: SSDT 0xFFFF9DD180220300 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Jul 2 10:23:16.571352 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Jul 2 10:23:16.571357 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 10:23:16.571362 kernel: ACPI: SSDT 0xFFFF9DD181AEDC00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Jul 2 10:23:16.571367 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 10:23:16.571371 kernel: ACPI: SSDT 0xFFFF9DD181A5C000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Jul 2 10:23:16.571376 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 10:23:16.571382 kernel: ACPI: SSDT 0xFFFF9DD181B54000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Jul 2 10:23:16.571387 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 10:23:16.571392 kernel: ACPI: SSDT 0xFFFF9DD180156000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Jul 2 10:23:16.571396 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 10:23:16.571401 kernel: ACPI: SSDT 0xFFFF9DD181AE8000 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Jul 2 10:23:16.571406 kernel: ACPI: Interpreter enabled
Jul 2 10:23:16.571411 kernel: ACPI: PM: (supports S0 S5)
Jul 2 10:23:16.571416 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 10:23:16.571421 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jul 2 10:23:16.571426 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Jul 2 10:23:16.571431 kernel: HEST: Table parsing has been initialized.
Jul 2 10:23:16.571436 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Jul 2 10:23:16.571441 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 10:23:16.571446 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Jul 2 10:23:16.571451 kernel: ACPI: PM: Power Resource [USBC]
Jul 2 10:23:16.571456 kernel: ACPI: PM: Power Resource [V0PR]
Jul 2 10:23:16.571461 kernel: ACPI: PM: Power Resource [V1PR]
Jul 2 10:23:16.571466 kernel: ACPI: PM: Power Resource [V2PR]
Jul 2 10:23:16.571471 kernel: ACPI: PM: Power Resource [WRST]
Jul 2 10:23:16.571476 kernel: ACPI: PM: Power Resource [FN00]
Jul 2 10:23:16.571481 kernel: ACPI: PM: Power Resource [FN01]
Jul 2 10:23:16.571486 kernel: ACPI: PM: Power Resource [FN02]
Jul 2 10:23:16.571491 kernel: ACPI: PM: Power Resource [FN03]
Jul 2 10:23:16.571496 kernel: ACPI: PM: Power Resource [FN04]
Jul 2 10:23:16.571501 kernel: ACPI: PM: Power Resource [PIN]
Jul 2 10:23:16.571505 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Jul 2 10:23:16.571570 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 10:23:16.571617 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Jul 2 10:23:16.571658 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Jul 2 10:23:16.571666 kernel: PCI host bridge to bus 0000:00
Jul 2 10:23:16.571709 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 10:23:16.571747 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 10:23:16.571785 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 10:23:16.571822 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Jul 2 10:23:16.571860 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Jul 2 10:23:16.571897 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Jul 2 10:23:16.571948 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Jul 2 10:23:16.571998 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jul 2 10:23:16.572042 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jul 2 10:23:16.572089 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Jul 2 10:23:16.572135 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Jul 2 10:23:16.572180 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Jul 2 10:23:16.572224 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Jul 2 10:23:16.572309 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Jul 2 10:23:16.572351 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Jul 2 10:23:16.572395 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jul 2 10:23:16.572443 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Jul 2 10:23:16.572485 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Jul 2 10:23:16.572526 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Jul 2 10:23:16.572573 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Jul 2 10:23:16.572615 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jul 2 10:23:16.572663 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Jul 2 10:23:16.572707 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jul 2 10:23:16.572752 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Jul 2 10:23:16.572794 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Jul 2 10:23:16.572835 kernel: pci 0000:00:16.0: PME# supported from D3hot
Jul 2 10:23:16.572881 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Jul 2 10:23:16.572923 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Jul 2 10:23:16.572965 kernel: pci 0000:00:16.1: PME# supported from D3hot
Jul 2 10:23:16.573012 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Jul 2 10:23:16.573054 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Jul 2 10:23:16.573095 kernel: pci 0000:00:16.4: PME# supported from D3hot
Jul 2 10:23:16.573140 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Jul 2 10:23:16.573182 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Jul 2 10:23:16.573225 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Jul 2 10:23:16.573293 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Jul 2 10:23:16.573337 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Jul 2 10:23:16.573380 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Jul 2 10:23:16.573423 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Jul 2 10:23:16.573466 kernel: pci 0000:00:17.0: PME# supported from D3hot
Jul 2 10:23:16.573514 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Jul 2 10:23:16.573557 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jul 2 10:23:16.573606 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Jul 2 10:23:16.573651 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Jul 2 10:23:16.573699 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Jul 2 10:23:16.573742 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Jul 2 10:23:16.573789 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Jul 2 10:23:16.573832 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jul 2 10:23:16.573881 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Jul 2 10:23:16.573925 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Jul 2 10:23:16.573972 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Jul 2 10:23:16.574015 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jul 2 10:23:16.574063 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Jul 2 10:23:16.574110 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Jul 2 10:23:16.574153 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Jul 2 10:23:16.574196 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Jul 2 10:23:16.574245 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Jul 2 10:23:16.574288 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Jul 2 10:23:16.574339 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Jul 2 10:23:16.574384 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Jul 2 10:23:16.574428 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Jul 2 10:23:16.574473 kernel: pci 0000:01:00.0: PME# supported from D3cold
Jul 2 10:23:16.574517 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jul 2 10:23:16.574560 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jul 2 10:23:16.574610 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Jul 2 10:23:16.574656 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Jul 2 10:23:16.574701 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Jul 2 10:23:16.574745 kernel: pci 0000:01:00.1: PME# supported from D3cold
Jul 2 10:23:16.574790 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jul 2 10:23:16.574834 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jul 2 10:23:16.574878 kernel: pci 0000:00:01.0: PCI
bridge to [bus 01] Jul 2 10:23:16.574920 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jul 2 10:23:16.574964 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 2 10:23:16.575008 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 2 10:23:16.575056 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jul 2 10:23:16.575101 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jul 2 10:23:16.575145 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jul 2 10:23:16.575190 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jul 2 10:23:16.575237 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jul 2 10:23:16.575281 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 2 10:23:16.575326 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 2 10:23:16.575370 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 2 10:23:16.575413 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 2 10:23:16.575461 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jul 2 10:23:16.575506 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jul 2 10:23:16.575550 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jul 2 10:23:16.575595 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jul 2 10:23:16.575641 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jul 2 10:23:16.575686 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jul 2 10:23:16.575729 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jul 2 10:23:16.575772 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 2 10:23:16.575815 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 2 10:23:16.575858 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 2 10:23:16.575905 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jul 2 10:23:16.575950 kernel: pci 0000:06:00.0: enabling Extended Tags Jul 2 
10:23:16.575996 kernel: pci 0000:06:00.0: supports D1 D2 Jul 2 10:23:16.576072 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 10:23:16.576156 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 2 10:23:16.576199 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jul 2 10:23:16.576245 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jul 2 10:23:16.576294 kernel: pci_bus 0000:07: extended config space not accessible Jul 2 10:23:16.576345 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jul 2 10:23:16.576395 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jul 2 10:23:16.576442 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jul 2 10:23:16.576489 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jul 2 10:23:16.576535 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 10:23:16.576581 kernel: pci 0000:07:00.0: supports D1 D2 Jul 2 10:23:16.576627 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 10:23:16.576671 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jul 2 10:23:16.576718 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jul 2 10:23:16.576764 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 2 10:23:16.576771 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jul 2 10:23:16.576777 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jul 2 10:23:16.576783 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jul 2 10:23:16.576789 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jul 2 10:23:16.576795 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jul 2 10:23:16.576800 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jul 2 10:23:16.576806 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jul 2 10:23:16.576812 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jul 2 10:23:16.576817 kernel: 
iommu: Default domain type: Translated Jul 2 10:23:16.576823 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 10:23:16.576869 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jul 2 10:23:16.576914 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 10:23:16.576962 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jul 2 10:23:16.576969 kernel: vgaarb: loaded Jul 2 10:23:16.576975 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 10:23:16.576981 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 10:23:16.576987 kernel: PTP clock support registered Jul 2 10:23:16.576993 kernel: PCI: Using ACPI for IRQ routing Jul 2 10:23:16.576998 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 10:23:16.577003 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jul 2 10:23:16.577010 kernel: e820: reserve RAM buffer [mem 0x81b28000-0x83ffffff] Jul 2 10:23:16.577015 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jul 2 10:23:16.577020 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jul 2 10:23:16.577025 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jul 2 10:23:16.577031 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jul 2 10:23:16.577036 kernel: clocksource: Switched to clocksource tsc-early Jul 2 10:23:16.577042 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 10:23:16.577047 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 10:23:16.577053 kernel: pnp: PnP ACPI init Jul 2 10:23:16.577096 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jul 2 10:23:16.577139 kernel: pnp 00:02: [dma 0 disabled] Jul 2 10:23:16.577182 kernel: pnp 00:03: [dma 0 disabled] Jul 2 10:23:16.577229 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jul 2 10:23:16.577270 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jul 2 10:23:16.577312 kernel: system 00:05: [io 
0x1854-0x1857] has been reserved Jul 2 10:23:16.577355 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jul 2 10:23:16.577394 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jul 2 10:23:16.577433 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jul 2 10:23:16.577473 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jul 2 10:23:16.577511 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jul 2 10:23:16.577550 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jul 2 10:23:16.577589 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jul 2 10:23:16.577627 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jul 2 10:23:16.577669 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jul 2 10:23:16.577709 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jul 2 10:23:16.577750 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jul 2 10:23:16.577788 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jul 2 10:23:16.577826 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jul 2 10:23:16.577865 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jul 2 10:23:16.577903 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jul 2 10:23:16.577944 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jul 2 10:23:16.577952 kernel: pnp: PnP ACPI: found 10 devices Jul 2 10:23:16.577959 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 10:23:16.577965 kernel: NET: Registered PF_INET protocol family Jul 2 10:23:16.577970 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 10:23:16.577976 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 2 10:23:16.577981 kernel: Table-perturb hash table entries: 
65536 (order: 6, 262144 bytes, linear) Jul 2 10:23:16.577987 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 10:23:16.577992 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Jul 2 10:23:16.577998 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jul 2 10:23:16.578003 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 2 10:23:16.578009 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 2 10:23:16.578015 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 10:23:16.578020 kernel: NET: Registered PF_XDP protocol family Jul 2 10:23:16.578065 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jul 2 10:23:16.578108 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jul 2 10:23:16.578152 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jul 2 10:23:16.578198 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jul 2 10:23:16.578245 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jul 2 10:23:16.578312 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jul 2 10:23:16.578358 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jul 2 10:23:16.578402 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 2 10:23:16.578445 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jul 2 10:23:16.578488 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 2 10:23:16.578531 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 2 10:23:16.578575 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 2 10:23:16.578618 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 2 10:23:16.578660 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 2 10:23:16.578703 kernel: pci 
0000:00:1b.5: PCI bridge to [bus 04] Jul 2 10:23:16.578747 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 2 10:23:16.578790 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 2 10:23:16.578833 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 2 10:23:16.578879 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jul 2 10:23:16.578923 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jul 2 10:23:16.578967 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 2 10:23:16.579009 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 2 10:23:16.579052 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jul 2 10:23:16.579094 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jul 2 10:23:16.579132 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 2 10:23:16.579171 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 10:23:16.579208 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 10:23:16.579271 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 10:23:16.579309 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jul 2 10:23:16.579346 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jul 2 10:23:16.579390 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jul 2 10:23:16.579431 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jul 2 10:23:16.579478 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jul 2 10:23:16.579520 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jul 2 10:23:16.579563 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jul 2 10:23:16.579603 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jul 2 10:23:16.579645 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jul 2 10:23:16.579686 kernel: pci_bus 0000:06: resource 1 [mem 
0x94000000-0x950fffff] Jul 2 10:23:16.579729 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jul 2 10:23:16.579772 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jul 2 10:23:16.579780 kernel: PCI: CLS 64 bytes, default 64 Jul 2 10:23:16.579786 kernel: DMAR: No ATSR found Jul 2 10:23:16.579792 kernel: DMAR: No SATC found Jul 2 10:23:16.579797 kernel: DMAR: dmar0: Using Queued invalidation Jul 2 10:23:16.579840 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jul 2 10:23:16.579885 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jul 2 10:23:16.579928 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jul 2 10:23:16.579971 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jul 2 10:23:16.580016 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jul 2 10:23:16.580058 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jul 2 10:23:16.580102 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jul 2 10:23:16.580143 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jul 2 10:23:16.580186 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jul 2 10:23:16.580229 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jul 2 10:23:16.580274 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jul 2 10:23:16.580315 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jul 2 10:23:16.580358 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jul 2 10:23:16.580403 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jul 2 10:23:16.580445 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jul 2 10:23:16.580488 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jul 2 10:23:16.580531 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jul 2 10:23:16.580574 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jul 2 10:23:16.580616 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jul 2 10:23:16.580659 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jul 2 10:23:16.580701 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jul 2 10:23:16.580747 kernel: pci 0000:01:00.0: Adding to iommu 
group 1 Jul 2 10:23:16.580792 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jul 2 10:23:16.580836 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jul 2 10:23:16.580882 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jul 2 10:23:16.580925 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jul 2 10:23:16.580972 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jul 2 10:23:16.580980 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jul 2 10:23:16.580986 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 2 10:23:16.580993 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jul 2 10:23:16.580998 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jul 2 10:23:16.581003 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jul 2 10:23:16.581009 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jul 2 10:23:16.581014 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jul 2 10:23:16.581061 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jul 2 10:23:16.581069 kernel: Initialise system trusted keyrings Jul 2 10:23:16.581074 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jul 2 10:23:16.581081 kernel: Key type asymmetric registered Jul 2 10:23:16.581086 kernel: Asymmetric key parser 'x509' registered Jul 2 10:23:16.581091 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 10:23:16.581097 kernel: io scheduler mq-deadline registered Jul 2 10:23:16.581102 kernel: io scheduler kyber registered Jul 2 10:23:16.581108 kernel: io scheduler bfq registered Jul 2 10:23:16.581152 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jul 2 10:23:16.581195 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jul 2 10:23:16.581242 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jul 2 10:23:16.581287 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jul 2 
10:23:16.581330 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jul 2 10:23:16.581374 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jul 2 10:23:16.581420 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jul 2 10:23:16.581428 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jul 2 10:23:16.581434 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jul 2 10:23:16.581439 kernel: pstore: Registered erst as persistent store backend Jul 2 10:23:16.581445 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 10:23:16.581451 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 10:23:16.581457 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 10:23:16.581462 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 2 10:23:16.581468 kernel: hpet_acpi_add: no address or irqs in _CRS Jul 2 10:23:16.581513 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jul 2 10:23:16.581521 kernel: i8042: PNP: No PS/2 controller found. 
Jul 2 10:23:16.581559 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jul 2 10:23:16.581600 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jul 2 10:23:16.581640 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-07-02T10:23:15 UTC (1719915795) Jul 2 10:23:16.581679 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jul 2 10:23:16.581687 kernel: fail to initialize ptp_kvm Jul 2 10:23:16.581692 kernel: intel_pstate: Intel P-state driver initializing Jul 2 10:23:16.581698 kernel: intel_pstate: Disabling energy efficiency optimization Jul 2 10:23:16.581703 kernel: intel_pstate: HWP enabled Jul 2 10:23:16.581708 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Jul 2 10:23:16.581714 kernel: vesafb: scrolling: redraw Jul 2 10:23:16.581720 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Jul 2 10:23:16.581726 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000009d37d6fd, using 768k, total 768k Jul 2 10:23:16.581731 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 10:23:16.581737 kernel: fb0: VESA VGA frame buffer device Jul 2 10:23:16.581742 kernel: NET: Registered PF_INET6 protocol family Jul 2 10:23:16.581747 kernel: Segment Routing with IPv6 Jul 2 10:23:16.581753 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 10:23:16.581758 kernel: NET: Registered PF_PACKET protocol family Jul 2 10:23:16.581763 kernel: Key type dns_resolver registered Jul 2 10:23:16.581770 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Jul 2 10:23:16.581775 kernel: microcode: Microcode Update Driver: v2.2. 
Jul 2 10:23:16.581780 kernel: IPI shorthand broadcast: enabled Jul 2 10:23:16.581786 kernel: sched_clock: Marking stable (1681509219, 1338831740)->(4459295829, -1438954870) Jul 2 10:23:16.581791 kernel: registered taskstats version 1 Jul 2 10:23:16.581796 kernel: Loading compiled-in X.509 certificates Jul 2 10:23:16.581802 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 10:23:16.581807 kernel: Key type .fscrypt registered Jul 2 10:23:16.581812 kernel: Key type fscrypt-provisioning registered Jul 2 10:23:16.581818 kernel: pstore: Using crash dump compression: deflate Jul 2 10:23:16.581824 kernel: ima: Allocated hash algorithm: sha1 Jul 2 10:23:16.581829 kernel: ima: No architecture policies found Jul 2 10:23:16.581834 kernel: clk: Disabling unused clocks Jul 2 10:23:16.581840 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 10:23:16.581845 kernel: Write protecting the kernel read-only data: 28672k Jul 2 10:23:16.581851 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 10:23:16.581856 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 10:23:16.581861 kernel: Run /init as init process Jul 2 10:23:16.581867 kernel: with arguments: Jul 2 10:23:16.581873 kernel: /init Jul 2 10:23:16.581878 kernel: with environment: Jul 2 10:23:16.581883 kernel: HOME=/ Jul 2 10:23:16.581888 kernel: TERM=linux Jul 2 10:23:16.581894 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 10:23:16.581900 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 10:23:16.581907 systemd[1]: Detected architecture x86-64. Jul 2 10:23:16.581914 systemd[1]: Running in initrd. 
Jul 2 10:23:16.581919 systemd[1]: No hostname configured, using default hostname. Jul 2 10:23:16.581925 systemd[1]: Hostname set to . Jul 2 10:23:16.581930 systemd[1]: Initializing machine ID from random generator. Jul 2 10:23:16.581936 systemd[1]: Queued start job for default target initrd.target. Jul 2 10:23:16.581941 systemd[1]: Started systemd-ask-password-console.path. Jul 2 10:23:16.581947 systemd[1]: Reached target cryptsetup.target. Jul 2 10:23:16.581952 systemd[1]: Reached target paths.target. Jul 2 10:23:16.581958 systemd[1]: Reached target slices.target. Jul 2 10:23:16.581964 systemd[1]: Reached target swap.target. Jul 2 10:23:16.581969 systemd[1]: Reached target timers.target. Jul 2 10:23:16.581975 systemd[1]: Listening on iscsid.socket. Jul 2 10:23:16.581981 systemd[1]: Listening on iscsiuio.socket. Jul 2 10:23:16.581986 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 10:23:16.581992 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 10:23:16.581998 systemd[1]: Listening on systemd-journald.socket. Jul 2 10:23:16.582004 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Jul 2 10:23:16.582009 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Jul 2 10:23:16.582015 kernel: clocksource: Switched to clocksource tsc Jul 2 10:23:16.582020 systemd[1]: Listening on systemd-networkd.socket. Jul 2 10:23:16.582026 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 10:23:16.582031 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 10:23:16.582037 systemd[1]: Reached target sockets.target. Jul 2 10:23:16.582042 systemd[1]: Starting kmod-static-nodes.service... Jul 2 10:23:16.582049 systemd[1]: Finished network-cleanup.service. Jul 2 10:23:16.582054 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 10:23:16.582060 systemd[1]: Starting systemd-journald.service... Jul 2 10:23:16.582065 systemd[1]: Starting systemd-modules-load.service... 
Jul 2 10:23:16.582073 systemd-journald[267]: Journal started Jul 2 10:23:16.582099 systemd-journald[267]: Runtime Journal (/run/log/journal/e73b221a0dc0437484b5dab3d973a375) is 8.0M, max 640.1M, 632.1M free. Jul 2 10:23:16.584462 systemd-modules-load[268]: Inserted module 'overlay' Jul 2 10:23:16.589000 audit: BPF prog-id=6 op=LOAD Jul 2 10:23:16.608286 kernel: audit: type=1334 audit(1719915796.589:2): prog-id=6 op=LOAD Jul 2 10:23:16.608305 systemd[1]: Starting systemd-resolved.service... Jul 2 10:23:16.657276 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 10:23:16.657292 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 10:23:16.690270 kernel: Bridge firewalling registered Jul 2 10:23:16.690286 systemd[1]: Started systemd-journald.service. Jul 2 10:23:16.704539 systemd-modules-load[268]: Inserted module 'br_netfilter' Jul 2 10:23:16.752171 kernel: audit: type=1130 audit(1719915796.711:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:16.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:16.707033 systemd-resolved[270]: Positive Trust Anchors: Jul 2 10:23:16.815312 kernel: SCSI subsystem initialized Jul 2 10:23:16.815323 kernel: audit: type=1130 audit(1719915796.766:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:16.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:23:16.707039 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 10:23:16.932315 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 10:23:16.932327 kernel: audit: type=1130 audit(1719915796.837:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:16.932335 kernel: device-mapper: uevent: version 1.0.3 Jul 2 10:23:16.932341 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 10:23:16.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:16.707059 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 10:23:17.006502 kernel: audit: type=1130 audit(1719915796.940:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:16.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:16.708623 systemd-resolved[270]: Defaulting to hostname 'linux'. 
Jul 2 10:23:17.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:16.712433 systemd[1]: Started systemd-resolved.service. Jul 2 10:23:17.114485 kernel: audit: type=1130 audit(1719915797.014:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:17.114497 kernel: audit: type=1130 audit(1719915797.067:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:17.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:16.767391 systemd[1]: Finished kmod-static-nodes.service. Jul 2 10:23:16.838386 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 10:23:16.933658 systemd-modules-load[268]: Inserted module 'dm_multipath' Jul 2 10:23:16.941556 systemd[1]: Finished systemd-modules-load.service. Jul 2 10:23:17.015601 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 10:23:17.068536 systemd[1]: Reached target nss-lookup.target. Jul 2 10:23:17.123840 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 10:23:17.143806 systemd[1]: Starting systemd-sysctl.service... Jul 2 10:23:17.144095 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 10:23:17.147080 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 10:23:17.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:23:17.147869 systemd[1]: Finished systemd-sysctl.service. Jul 2 10:23:17.196308 kernel: audit: type=1130 audit(1719915797.145:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:17.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:17.209588 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 10:23:17.275339 kernel: audit: type=1130 audit(1719915797.208:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:17.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:17.267874 systemd[1]: Starting dracut-cmdline.service... Jul 2 10:23:17.289340 dracut-cmdline[293]: dracut-dracut-053 Jul 2 10:23:17.289340 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Jul 2 10:23:17.289340 dracut-cmdline[293]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 10:23:17.360267 kernel: Loading iSCSI transport class v2.0-870. 
Jul 2 10:23:17.360280 kernel: iscsi: registered transport (tcp)
Jul 2 10:23:17.414810 kernel: iscsi: registered transport (qla4xxx)
Jul 2 10:23:17.414861 kernel: QLogic iSCSI HBA Driver
Jul 2 10:23:17.431131 systemd[1]: Finished dracut-cmdline.service.
Jul 2 10:23:17.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:17.431692 systemd[1]: Starting dracut-pre-udev.service...
Jul 2 10:23:17.488273 kernel: raid6: avx2x4 gen() 47752 MB/s
Jul 2 10:23:17.523303 kernel: raid6: avx2x4 xor() 21446 MB/s
Jul 2 10:23:17.558266 kernel: raid6: avx2x2 gen() 53816 MB/s
Jul 2 10:23:17.593307 kernel: raid6: avx2x2 xor() 32017 MB/s
Jul 2 10:23:17.628308 kernel: raid6: avx2x1 gen() 45228 MB/s
Jul 2 10:23:17.662231 kernel: raid6: avx2x1 xor() 27881 MB/s
Jul 2 10:23:17.696231 kernel: raid6: sse2x4 gen() 21377 MB/s
Jul 2 10:23:17.730268 kernel: raid6: sse2x4 xor() 11994 MB/s
Jul 2 10:23:17.764266 kernel: raid6: sse2x2 gen() 21651 MB/s
Jul 2 10:23:17.798269 kernel: raid6: sse2x2 xor() 13426 MB/s
Jul 2 10:23:17.832270 kernel: raid6: sse2x1 gen() 18294 MB/s
Jul 2 10:23:17.884191 kernel: raid6: sse2x1 xor() 8928 MB/s
Jul 2 10:23:17.884206 kernel: raid6: using algorithm avx2x2 gen() 53816 MB/s
Jul 2 10:23:17.884214 kernel: raid6: .... xor() 32017 MB/s, rmw enabled
Jul 2 10:23:17.902424 kernel: raid6: using avx2x2 recovery algorithm
Jul 2 10:23:17.948231 kernel: xor: automatically using best checksumming function avx
Jul 2 10:23:18.028235 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Jul 2 10:23:18.032870 systemd[1]: Finished dracut-pre-udev.service.
Jul 2 10:23:18.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:18.041000 audit: BPF prog-id=7 op=LOAD
Jul 2 10:23:18.041000 audit: BPF prog-id=8 op=LOAD
Jul 2 10:23:18.043328 systemd[1]: Starting systemd-udevd.service...
Jul 2 10:23:18.051689 systemd-udevd[472]: Using default interface naming scheme 'v252'.
Jul 2 10:23:18.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:18.058770 systemd[1]: Started systemd-udevd.service.
Jul 2 10:23:18.098355 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation
Jul 2 10:23:18.075328 systemd[1]: Starting dracut-pre-trigger.service...
Jul 2 10:23:18.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:18.103171 systemd[1]: Finished dracut-pre-trigger.service.
Jul 2 10:23:18.116896 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 10:23:18.167239 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 10:23:18.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:18.195237 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 10:23:18.213234 kernel: ACPI: bus type USB registered
Jul 2 10:23:18.213258 kernel: usbcore: registered new interface driver usbfs
Jul 2 10:23:18.249206 kernel: usbcore: registered new interface driver hub
Jul 2 10:23:18.249241 kernel: usbcore: registered new device driver usb
Jul 2 10:23:18.267234 kernel: libata version 3.00 loaded.
Jul 2 10:23:18.306435 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 10:23:18.306476 kernel: AES CTR mode by8 optimization enabled
Jul 2 10:23:18.342691 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Jul 2 10:23:18.342724 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Jul 2 10:23:18.343237 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Jul 2 10:23:18.343352 kernel: ahci 0000:00:17.0: version 3.0
Jul 2 10:23:18.348234 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016
Jul 2 10:23:18.348345 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Jul 2 10:23:18.376535 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Jul 2 10:23:18.382238 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Jul 2 10:23:18.382455 kernel: pps pps0: new PPS source ptp0
Jul 2 10:23:18.382595 kernel: igb 0000:03:00.0: added PHC on eth0
Jul 2 10:23:18.382731 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Jul 2 10:23:18.382906 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6b:0b:bc
Jul 2 10:23:18.382988 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Jul 2 10:23:18.383039 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Jul 2 10:23:18.420230 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Jul 2 10:23:18.420312 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Jul 2 10:23:18.420378 kernel: pps pps1: new PPS source ptp1
Jul 2 10:23:18.420445 kernel: igb 0000:04:00.0: added PHC on eth1
Jul 2 10:23:18.420512 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Jul 2 10:23:18.420574 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6b:0b:bd
Jul 2 10:23:18.420635 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Jul 2 10:23:18.420695 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Jul 2 10:23:18.447235 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Jul 2 10:23:18.478695 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Jul 2 10:23:18.478888 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Jul 2 10:23:18.509935 kernel: scsi host0: ahci
Jul 2 10:23:18.510214 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Jul 2 10:23:18.510377 kernel: scsi host1: ahci
Jul 2 10:23:18.545202 kernel: hub 1-0:1.0: USB hub found
Jul 2 10:23:18.545472 kernel: scsi host2: ahci
Jul 2 10:23:18.576080 kernel: hub 1-0:1.0: 16 ports detected
Jul 2 10:23:18.576310 kernel: scsi host3: ahci
Jul 2 10:23:18.593231 kernel: hub 2-0:1.0: USB hub found
Jul 2 10:23:18.593321 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Jul 2 10:23:18.607236 kernel: scsi host4: ahci
Jul 2 10:23:18.607313 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Jul 2 10:23:18.631466 kernel: hub 2-0:1.0: 10 ports detected
Jul 2 10:23:18.631664 kernel: scsi host5: ahci
Jul 2 10:23:18.847243 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Jul 2 10:23:18.847312 kernel: scsi host6: ahci
Jul 2 10:23:18.847345 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Jul 2 10:23:18.915496 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132
Jul 2 10:23:18.915514 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132
Jul 2 10:23:18.932407 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132
Jul 2 10:23:18.949152 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132
Jul 2 10:23:18.965927 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132
Jul 2 10:23:18.982615 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132
Jul 2 10:23:18.999180 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132
Jul 2 10:23:19.060055 kernel: hub 1-14:1.0: USB hub found
Jul 2 10:23:19.060255 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295
Jul 2 10:23:19.060402 kernel: hub 1-14:1.0: 4 ports detected
Jul 2 10:23:19.074234 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016
Jul 2 10:23:19.106879 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Jul 2 10:23:19.341267 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 2 10:23:19.341287 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 2 10:23:19.355267 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jul 2 10:23:19.370289 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Jul 2 10:23:19.370314 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jul 2 10:23:19.401297 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Jul 2 10:23:19.401371 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Jul 2 10:23:19.434263 kernel: port_module: 9 callbacks suppressed
Jul 2 10:23:19.434283 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Jul 2 10:23:19.434350 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Jul 2 10:23:19.465277 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Jul 2 10:23:19.465350 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 2 10:23:19.514267 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jul 2 10:23:19.514282 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 10:23:19.544083 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Jul 2 10:23:19.592125 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jul 2 10:23:19.592144 kernel: ata2.00: Features: NCQ-prio
Jul 2 10:23:19.624607 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jul 2 10:23:19.624651 kernel: ata1.00: Features: NCQ-prio
Jul 2 10:23:19.641270 kernel: ata2.00: configured for UDMA/133
Jul 2 10:23:19.657278 kernel: ata1.00: configured for UDMA/133
Jul 2 10:23:19.675270 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Jul 2 10:23:19.675376 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295
Jul 2 10:23:19.675437 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Jul 2 10:23:19.730232 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1
Jul 2 10:23:19.761718 kernel: usbcore: registered new interface driver usbhid
Jul 2 10:23:19.761735 kernel: usbhid: USB HID core driver
Jul 2 10:23:19.797231 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Jul 2 10:23:19.797288 kernel: ata1.00: Enabling discard_zeroes_data
Jul 2 10:23:19.812616 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0
Jul 2 10:23:19.812715 kernel: ata2.00: Enabling discard_zeroes_data
Jul 2 10:23:19.812728 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Jul 2 10:23:19.812842 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Jul 2 10:23:19.812948 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 2 10:23:19.813013 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 2 10:23:19.813070 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Jul 2 10:23:19.813126 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul 2 10:23:19.813191 kernel: ata1.00: Enabling discard_zeroes_data
Jul 2 10:23:19.815232 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 10:23:19.815248 kernel: GPT:9289727 != 937703087
Jul 2 10:23:19.815258 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 10:23:19.815265 kernel: GPT:9289727 != 937703087
Jul 2 10:23:19.815272 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 10:23:19.815278 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 10:23:19.815284 kernel: ata1.00: Enabling discard_zeroes_data
Jul 2 10:23:19.815291 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 2 10:23:19.880036 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Jul 2 10:23:19.880168 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
Jul 2 10:23:19.880247 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Jul 2 10:23:19.910082 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Jul 2 10:23:19.910170 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Jul 2 10:23:19.944450 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Jul 2 10:23:20.207313 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul 2 10:23:20.246316 kernel: ata2.00: Enabling discard_zeroes_data
Jul 2 10:23:20.263320 kernel: ata2.00: Enabling discard_zeroes_data
Jul 2 10:23:20.263350 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Jul 2 10:23:20.307277 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Jul 2 10:23:20.361475 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (556)
Jul 2 10:23:20.338496 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Jul 2 10:23:20.353642 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Jul 2 10:23:20.378789 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Jul 2 10:23:20.396520 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 2 10:23:20.464370 kernel: ata1.00: Enabling discard_zeroes_data
Jul 2 10:23:20.464384 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 10:23:20.464391 kernel: ata1.00: Enabling discard_zeroes_data
Jul 2 10:23:20.408346 systemd[1]: Starting disk-uuid.service...
Jul 2 10:23:20.482336 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 10:23:20.482384 disk-uuid[690]: Primary Header is updated.
Jul 2 10:23:20.482384 disk-uuid[690]: Secondary Entries is updated.
Jul 2 10:23:20.482384 disk-uuid[690]: Secondary Header is updated.
Jul 2 10:23:20.536273 kernel: ata1.00: Enabling discard_zeroes_data
Jul 2 10:23:20.536303 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 10:23:21.508470 kernel: ata1.00: Enabling discard_zeroes_data
Jul 2 10:23:21.527818 disk-uuid[692]: The operation has completed successfully.
Jul 2 10:23:21.536453 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 10:23:21.564135 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 10:23:21.661078 kernel: audit: type=1130 audit(1719915801.570:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:21.661094 kernel: audit: type=1131 audit(1719915801.570:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:21.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:21.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:21.564195 systemd[1]: Finished disk-uuid.service.
Jul 2 10:23:21.690315 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 2 10:23:21.578339 systemd[1]: Starting verity-setup.service...
Jul 2 10:23:21.723832 systemd[1]: Found device dev-mapper-usr.device.
Jul 2 10:23:21.724603 systemd[1]: Mounting sysusr-usr.mount...
Jul 2 10:23:21.744401 systemd[1]: Finished verity-setup.service.
Jul 2 10:23:21.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:21.805235 kernel: audit: type=1130 audit(1719915801.758:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:21.832619 systemd[1]: Mounted sysusr-usr.mount.
Jul 2 10:23:21.846336 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Jul 2 10:23:21.839527 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Jul 2 10:23:21.926267 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 10:23:21.926283 kernel: BTRFS info (device sda6): using free space tree
Jul 2 10:23:21.926290 kernel: BTRFS info (device sda6): has skinny extents
Jul 2 10:23:21.926297 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 2 10:23:21.839923 systemd[1]: Starting ignition-setup.service...
Jul 2 10:23:21.859530 systemd[1]: Starting parse-ip-for-networkd.service...
Jul 2 10:23:21.997265 kernel: audit: type=1130 audit(1719915801.950:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:21.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:21.934689 systemd[1]: Finished ignition-setup.service.
Jul 2 10:23:22.054311 kernel: audit: type=1130 audit(1719915802.005:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:22.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:21.951591 systemd[1]: Finished parse-ip-for-networkd.service.
Jul 2 10:23:22.062000 audit: BPF prog-id=9 op=LOAD
Jul 2 10:23:22.006941 systemd[1]: Starting ignition-fetch-offline.service...
Jul 2 10:23:22.102456 kernel: audit: type=1334 audit(1719915802.062:24): prog-id=9 op=LOAD
Jul 2 10:23:22.064096 systemd[1]: Starting systemd-networkd.service...
Jul 2 10:23:22.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:22.130449 ignition[867]: Ignition 2.14.0
Jul 2 10:23:22.172352 kernel: audit: type=1130 audit(1719915802.110:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:22.100120 systemd-networkd[878]: lo: Link UP
Jul 2 10:23:22.130454 ignition[867]: Stage: fetch-offline
Jul 2 10:23:22.236267 kernel: audit: type=1130 audit(1719915802.185:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:22.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:22.100122 systemd-networkd[878]: lo: Gained carrier
Jul 2 10:23:22.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:22.130481 ignition[867]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 10:23:22.311363 kernel: audit: type=1130 audit(1719915802.243:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:22.100429 systemd-networkd[878]: Enumeration completed
Jul 2 10:23:22.130497 ignition[867]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Jul 2 10:23:22.100499 systemd[1]: Started systemd-networkd.service.
Jul 2 10:23:22.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:22.138698 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 2 10:23:22.356418 iscsid[898]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 10:23:22.356418 iscsid[898]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Jul 2 10:23:22.356418 iscsid[898]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Jul 2 10:23:22.356418 iscsid[898]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jul 2 10:23:22.356418 iscsid[898]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 2 10:23:22.356418 iscsid[898]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 10:23:22.356418 iscsid[898]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Jul 2 10:23:22.505433 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Jul 2 10:23:22.505527 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready
Jul 2 10:23:22.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:22.101162 systemd-networkd[878]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 10:23:22.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:22.138760 ignition[867]: parsed url from cmdline: ""
Jul 2 10:23:22.111372 systemd[1]: Reached target network.target.
Jul 2 10:23:22.138762 ignition[867]: no config URL provided
Jul 2 10:23:22.142950 unknown[867]: fetched base config from "system"
Jul 2 10:23:22.138765 ignition[867]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 10:23:22.142953 unknown[867]: fetched user config from "system"
Jul 2 10:23:22.138788 ignition[867]: parsing config with SHA512: ef0e6f1a486a83d60bca8be726c1443cd74fb231fca84e691db5c6a8ad796e4ac4b7383bb3351cb1d6f94b9c62ff525efc7672bef5fb7750e634221bb1265d31
Jul 2 10:23:22.167777 systemd[1]: Starting iscsiuio.service...
Jul 2 10:23:22.143238 ignition[867]: fetch-offline: fetch-offline passed
Jul 2 10:23:22.179400 systemd[1]: Started iscsiuio.service.
Jul 2 10:23:22.143241 ignition[867]: POST message to Packet Timeline
Jul 2 10:23:22.186477 systemd[1]: Finished ignition-fetch-offline.service.
Jul 2 10:23:22.143246 ignition[867]: POST Status error: resource requires networking
Jul 2 10:23:22.244370 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 10:23:22.143287 ignition[867]: Ignition finished successfully
Jul 2 10:23:22.244815 systemd[1]: Starting ignition-kargs.service...
Jul 2 10:23:22.299872 ignition[888]: Ignition 2.14.0
Jul 2 10:23:22.695357 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Jul 2 10:23:22.302799 systemd[1]: Starting iscsid.service...
Jul 2 10:23:22.299875 ignition[888]: Stage: kargs
Jul 2 10:23:22.318620 systemd[1]: Started iscsid.service.
Jul 2 10:23:22.299937 ignition[888]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 10:23:22.332837 systemd[1]: Starting dracut-initqueue.service...
Jul 2 10:23:22.299946 ignition[888]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Jul 2 10:23:22.346594 systemd[1]: Finished dracut-initqueue.service.
Jul 2 10:23:22.302580 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 2 10:23:22.364526 systemd[1]: Reached target remote-fs-pre.target.
Jul 2 10:23:22.303167 ignition[888]: kargs: kargs passed
Jul 2 10:23:22.408405 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 10:23:22.303169 ignition[888]: POST message to Packet Timeline
Jul 2 10:23:22.408520 systemd[1]: Reached target remote-fs.target.
Jul 2 10:23:22.303179 ignition[888]: GET https://metadata.packet.net/metadata: attempt #1
Jul 2 10:23:22.446471 systemd-networkd[878]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 10:23:22.306341 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48271->[::1]:53: read: connection refused
Jul 2 10:23:22.459893 systemd[1]: Starting dracut-pre-mount.service...
Jul 2 10:23:22.506680 ignition[888]: GET https://metadata.packet.net/metadata: attempt #2
Jul 2 10:23:22.492586 systemd[1]: Finished dracut-pre-mount.service.
Jul 2 10:23:22.506998 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54863->[::1]:53: read: connection refused
Jul 2 10:23:22.691658 systemd-networkd[878]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 10:23:22.720523 systemd-networkd[878]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 10:23:22.748709 systemd-networkd[878]: enp1s0f1np1: Link UP
Jul 2 10:23:22.748883 systemd-networkd[878]: enp1s0f1np1: Gained carrier
Jul 2 10:23:22.907958 ignition[888]: GET https://metadata.packet.net/metadata: attempt #3
Jul 2 10:23:22.758494 systemd-networkd[878]: enp1s0f0np0: Link UP
Jul 2 10:23:22.909032 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60081->[::1]:53: read: connection refused
Jul 2 10:23:22.758689 systemd-networkd[878]: eno2: Link UP
Jul 2 10:23:22.758868 systemd-networkd[878]: eno1: Link UP
Jul 2 10:23:23.503183 systemd-networkd[878]: enp1s0f0np0: Gained carrier
Jul 2 10:23:23.513510 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready
Jul 2 10:23:23.536558 systemd-networkd[878]: enp1s0f0np0: DHCPv4 address 147.75.203.11/31, gateway 147.75.203.10 acquired from 145.40.83.140
Jul 2 10:23:23.709559 ignition[888]: GET https://metadata.packet.net/metadata: attempt #4
Jul 2 10:23:23.710757 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59778->[::1]:53: read: connection refused
Jul 2 10:23:24.696817 systemd-networkd[878]: enp1s0f1np1: Gained IPv6LL
Jul 2 10:23:24.952800 systemd-networkd[878]: enp1s0f0np0: Gained IPv6LL
Jul 2 10:23:25.312580 ignition[888]: GET https://metadata.packet.net/metadata: attempt #5
Jul 2 10:23:25.313894 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54943->[::1]:53: read: connection refused
Jul 2 10:23:28.517195 ignition[888]: GET https://metadata.packet.net/metadata: attempt #6
Jul 2 10:23:28.560959 ignition[888]: GET result: OK
Jul 2 10:23:28.789996 ignition[888]: Ignition finished successfully
Jul 2 10:23:28.794174 systemd[1]: Finished ignition-kargs.service.
Jul 2 10:23:28.878851 kernel: kauditd_printk_skb: 3 callbacks suppressed
Jul 2 10:23:28.878869 kernel: audit: type=1130 audit(1719915808.804:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:28.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:28.814275 ignition[915]: Ignition 2.14.0
Jul 2 10:23:28.807439 systemd[1]: Starting ignition-disks.service...
Jul 2 10:23:28.814279 ignition[915]: Stage: disks
Jul 2 10:23:28.814371 ignition[915]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 10:23:28.814380 ignition[915]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Jul 2 10:23:28.816654 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 2 10:23:28.817248 ignition[915]: disks: disks passed
Jul 2 10:23:28.817251 ignition[915]: POST message to Packet Timeline
Jul 2 10:23:28.817260 ignition[915]: GET https://metadata.packet.net/metadata: attempt #1
Jul 2 10:23:28.841812 ignition[915]: GET result: OK
Jul 2 10:23:29.033987 ignition[915]: Ignition finished successfully
Jul 2 10:23:29.037225 systemd[1]: Finished ignition-disks.service.
Jul 2 10:23:29.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:29.051792 systemd[1]: Reached target initrd-root-device.target.
Jul 2 10:23:29.136463 kernel: audit: type=1130 audit(1719915809.050:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:29.122414 systemd[1]: Reached target local-fs-pre.target.
Jul 2 10:23:29.122455 systemd[1]: Reached target local-fs.target.
Jul 2 10:23:29.144432 systemd[1]: Reached target sysinit.target.
Jul 2 10:23:29.158401 systemd[1]: Reached target basic.target.
Jul 2 10:23:29.159101 systemd[1]: Starting systemd-fsck-root.service...
Jul 2 10:23:29.185733 systemd-fsck[932]: ROOT: clean, 614/553520 files, 56020/553472 blocks
Jul 2 10:23:29.202825 systemd[1]: Finished systemd-fsck-root.service.
Jul 2 10:23:29.291619 kernel: audit: type=1130 audit(1719915809.210:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:29.291729 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jul 2 10:23:29.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:29.217119 systemd[1]: Mounting sysroot.mount...
Jul 2 10:23:29.299895 systemd[1]: Mounted sysroot.mount.
Jul 2 10:23:29.314537 systemd[1]: Reached target initrd-root-fs.target.
Jul 2 10:23:29.322161 systemd[1]: Mounting sysroot-usr.mount...
Jul 2 10:23:29.344225 systemd[1]: Starting flatcar-metadata-hostname.service...
Jul 2 10:23:29.358839 systemd[1]: Starting flatcar-static-network.service...
Jul 2 10:23:29.374403 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 10:23:29.374445 systemd[1]: Reached target ignition-diskful.target.
Jul 2 10:23:29.393632 systemd[1]: Mounted sysroot-usr.mount.
Jul 2 10:23:29.416900 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 10:23:29.552422 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (946)
Jul 2 10:23:29.552449 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 10:23:29.552465 kernel: BTRFS info (device sda6): using free space tree
Jul 2 10:23:29.552477 kernel: BTRFS info (device sda6): has skinny extents
Jul 2 10:23:29.552487 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 2 10:23:29.552554 coreos-metadata[941]: Jul 02 10:23:29.510 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jul 2 10:23:29.552554 coreos-metadata[941]: Jul 02 10:23:29.534 INFO Fetch successful
Jul 2 10:23:29.732787 kernel: audit: type=1130 audit(1719915809.559:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:29.732806 kernel: audit: type=1130 audit(1719915809.621:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:29.732834 kernel: audit: type=1131 audit(1719915809.621:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:29.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:29.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:29.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:29.732921 coreos-metadata[940]: Jul 02 10:23:29.510 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jul 2 10:23:29.732921 coreos-metadata[940]: Jul 02 10:23:29.569 INFO Fetch successful
Jul 2 10:23:29.732921 coreos-metadata[940]: Jul 02 10:23:29.601 INFO wrote hostname ci-3510.3.5-a-539a8ddad9 to /sysroot/etc/hostname
Jul 2 10:23:29.815541 kernel: audit: type=1130 audit(1719915809.740:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:29.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:29.430520 systemd[1]: Starting initrd-setup-root.service...
Jul 2 10:23:29.484569 systemd[1]: Finished initrd-setup-root.service.
Jul 2 10:23:29.857346 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 10:23:29.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:29.561624 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Jul 2 10:23:29.932480 kernel: audit: type=1130 audit(1719915809.864:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:29.932495 initrd-setup-root[959]: cut: /sysroot/etc/group: No such file or directory
Jul 2 10:23:29.561665 systemd[1]: Finished flatcar-static-network.service.
Jul 2 10:23:29.951446 initrd-setup-root[967]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 10:23:29.961425 ignition[1015]: INFO : Ignition 2.14.0
Jul 2 10:23:29.961425 ignition[1015]: INFO : Stage: mount
Jul 2 10:23:29.961425 ignition[1015]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 10:23:29.961425 ignition[1015]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Jul 2 10:23:29.961425 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 2 10:23:29.961425 ignition[1015]: INFO : mount: mount passed
Jul 2 10:23:29.961425 ignition[1015]: INFO : POST message to Packet Timeline
Jul 2 10:23:29.961425 ignition[1015]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jul 2 10:23:29.961425 ignition[1015]: INFO : GET result: OK
Jul 2 10:23:29.622607 systemd[1]: Finished flatcar-metadata-hostname.service.
Jul 2 10:23:30.058569 initrd-setup-root[975]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 10:23:29.741532 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 10:23:30.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:30.134475 ignition[1015]: INFO : Ignition finished successfully
Jul 2 10:23:30.149298 kernel: audit: type=1130 audit(1719915810.075:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:29.807830 systemd[1]: Starting ignition-mount.service...
Jul 2 10:23:29.835804 systemd[1]: Starting sysroot-boot.service...
Jul 2 10:23:30.244313 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1030)
Jul 2 10:23:30.244342 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 10:23:30.244358 kernel: BTRFS info (device sda6): using free space tree
Jul 2 10:23:30.244380 kernel: BTRFS info (device sda6): has skinny extents
Jul 2 10:23:30.244395 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 2 10:23:29.850827 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Jul 2 10:23:29.850930 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Jul 2 10:23:29.855000 systemd[1]: Finished sysroot-boot.service.
Jul 2 10:23:30.063979 systemd[1]: Finished ignition-mount.service.
Jul 2 10:23:30.314472 ignition[1049]: INFO : Ignition 2.14.0
Jul 2 10:23:30.314472 ignition[1049]: INFO : Stage: files
Jul 2 10:23:30.314472 ignition[1049]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 10:23:30.314472 ignition[1049]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Jul 2 10:23:30.314472 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 2 10:23:30.314472 ignition[1049]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 10:23:30.314472 ignition[1049]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 10:23:30.314472 ignition[1049]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 10:23:30.314472 ignition[1049]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 10:23:30.314472 ignition[1049]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 10:23:30.314472 ignition[1049]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 10:23:30.314472 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 10:23:30.314472 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 10:23:30.078846 systemd[1]: Starting ignition-files.service...
Jul 2 10:23:30.483493 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 10:23:30.143066 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 10:23:30.276594 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 10:23:30.311605 unknown[1049]: wrote ssh authorized keys file for user: core
Jul 2 10:23:30.544610 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 10:23:30.561483 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 10:23:30.561483 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 10:23:31.011840 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 10:23:31.053145 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 10:23:31.079363 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 10:23:31.079363 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 10:23:31.079363 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 10:23:31.079363 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 10:23:31.079363 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 10:23:31.079363 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 10:23:31.079363 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 10:23:31.079363 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 10:23:31.079363 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 10:23:31.079363 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 10:23:31.079363 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 10:23:31.079363 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 10:23:31.079363 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Jul 2 10:23:31.079363 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 10:23:31.079363 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem886996949"
Jul 2 10:23:31.345559 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1051)
Jul 2 10:23:31.345658 ignition[1049]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem886996949": device or resource busy
Jul 2 10:23:31.345658 ignition[1049]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem886996949", trying btrfs: device or resource busy
Jul 2 10:23:31.345658 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem886996949"
Jul 2 10:23:31.345658 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem886996949"
Jul 2 10:23:31.345658 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem886996949"
Jul 2 10:23:31.345658 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem886996949"
Jul 2 10:23:31.345658 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Jul 2 10:23:31.345658 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 10:23:31.345658 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jul 2 10:23:31.509296 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
Jul 2 10:23:31.586723 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 10:23:31.586723 ignition[1049]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Jul 2 10:23:31.586723 ignition[1049]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Jul 2 10:23:31.631412 ignition[1049]: INFO : files: op(11): [started] processing unit "packet-phone-home.service"
Jul 2 10:23:31.631412 ignition[1049]: INFO : files: op(11): [finished] processing unit "packet-phone-home.service"
Jul 2 10:23:31.631412 ignition[1049]: INFO : files: op(12): [started] processing unit "prepare-helm.service"
Jul 2 10:23:31.631412 ignition[1049]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 10:23:31.631412 ignition[1049]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 10:23:31.631412 ignition[1049]: INFO : files: op(12): [finished] processing unit "prepare-helm.service"
Jul 2 10:23:31.631412 ignition[1049]: INFO : files: op(14): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Jul 2 10:23:31.631412 ignition[1049]: INFO : files: op(14): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Jul 2 10:23:31.631412 ignition[1049]: INFO : files: op(15): [started] setting preset to enabled for "packet-phone-home.service"
Jul 2 10:23:31.631412 ignition[1049]: INFO : files: op(15): [finished] setting preset to enabled for "packet-phone-home.service"
Jul 2 10:23:31.631412 ignition[1049]: INFO : files: op(16): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 10:23:31.631412 ignition[1049]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 10:23:31.631412 ignition[1049]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 10:23:31.631412 ignition[1049]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 10:23:31.631412 ignition[1049]: INFO : files: files passed
Jul 2 10:23:31.631412 ignition[1049]: INFO : POST message to Packet Timeline
Jul 2 10:23:31.631412 ignition[1049]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jul 2 10:23:31.631412 ignition[1049]: INFO : GET result: OK
Jul 2 10:23:31.942434 kernel: audit: type=1130 audit(1719915811.839:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:31.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:31.830055 systemd[1]: Finished ignition-files.service.
Jul 2 10:23:31.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:31.959465 ignition[1049]: INFO : Ignition finished successfully
Jul 2 10:23:31.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:31.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:31.846366 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 2 10:23:31.993507 initrd-setup-root-after-ignition[1081]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 10:23:31.909502 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 2 10:23:32.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:31.909903 systemd[1]: Starting ignition-quench.service...
Jul 2 10:23:31.935615 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 2 10:23:31.952739 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 10:23:31.952807 systemd[1]: Finished ignition-quench.service.
Jul 2 10:23:32.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:31.967593 systemd[1]: Reached target ignition-complete.target.
Jul 2 10:23:31.984690 systemd[1]: Starting initrd-parse-etc.service...
Jul 2 10:23:32.006540 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 10:23:32.006595 systemd[1]: Finished initrd-parse-etc.service.
Jul 2 10:23:32.022534 systemd[1]: Reached target initrd-fs.target.
Jul 2 10:23:32.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.047439 systemd[1]: Reached target initrd.target.
Jul 2 10:23:32.062617 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 2 10:23:32.064591 systemd[1]: Starting dracut-pre-pivot.service...
Jul 2 10:23:32.077589 systemd[1]: Finished dracut-pre-pivot.service.
Jul 2 10:23:32.095871 systemd[1]: Starting initrd-cleanup.service...
Jul 2 10:23:32.115793 systemd[1]: Stopped target nss-lookup.target.
Jul 2 10:23:32.126482 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 2 10:23:32.144564 systemd[1]: Stopped target timers.target.
Jul 2 10:23:32.159568 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 10:23:32.159722 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 2 10:23:32.175033 systemd[1]: Stopped target initrd.target.
Jul 2 10:23:32.188817 systemd[1]: Stopped target basic.target.
Jul 2 10:23:32.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.202835 systemd[1]: Stopped target ignition-complete.target.
Jul 2 10:23:32.223812 systemd[1]: Stopped target ignition-diskful.target.
Jul 2 10:23:32.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.241826 systemd[1]: Stopped target initrd-root-device.target.
Jul 2 10:23:32.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.257831 systemd[1]: Stopped target remote-fs.target.
Jul 2 10:23:32.273804 systemd[1]: Stopped target remote-fs-pre.target.
Jul 2 10:23:32.288849 systemd[1]: Stopped target sysinit.target.
Jul 2 10:23:32.304846 systemd[1]: Stopped target local-fs.target.
Jul 2 10:23:32.319807 systemd[1]: Stopped target local-fs-pre.target.
Jul 2 10:23:32.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.337831 systemd[1]: Stopped target swap.target.
Jul 2 10:23:32.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.351705 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 10:23:32.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.352073 systemd[1]: Stopped dracut-pre-mount.service.
Jul 2 10:23:32.546349 ignition[1096]: INFO : Ignition 2.14.0
Jul 2 10:23:32.546349 ignition[1096]: INFO : Stage: umount
Jul 2 10:23:32.546349 ignition[1096]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 10:23:32.546349 ignition[1096]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Jul 2 10:23:32.546349 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 2 10:23:32.546349 ignition[1096]: INFO : umount: umount passed
Jul 2 10:23:32.546349 ignition[1096]: INFO : POST message to Packet Timeline
Jul 2 10:23:32.546349 ignition[1096]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jul 2 10:23:32.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.677893 iscsid[898]: iscsid shutting down.
Jul 2 10:23:32.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.368039 systemd[1]: Stopped target cryptsetup.target.
Jul 2 10:23:32.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.706794 ignition[1096]: INFO : GET result: OK
Jul 2 10:23:32.383708 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 10:23:32.384072 systemd[1]: Stopped dracut-initqueue.service.
Jul 2 10:23:32.401087 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 10:23:32.767446 ignition[1096]: INFO : Ignition finished successfully
Jul 2 10:23:32.401466 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 2 10:23:32.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.418031 systemd[1]: Stopped target paths.target.
Jul 2 10:23:32.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.432696 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 10:23:32.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.436575 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 2 10:23:32.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.829000 audit: BPF prog-id=6 op=UNLOAD
Jul 2 10:23:32.447764 systemd[1]: Stopped target slices.target.
Jul 2 10:23:32.462805 systemd[1]: Stopped target sockets.target.
Jul 2 10:23:32.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.477941 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 10:23:32.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.478352 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 2 10:23:32.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.494920 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 10:23:32.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.495286 systemd[1]: Stopped ignition-files.service.
Jul 2 10:23:32.509927 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 2 10:23:32.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.510310 systemd[1]: Stopped flatcar-metadata-hostname.service.
Jul 2 10:23:32.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.526975 systemd[1]: Stopping ignition-mount.service...
Jul 2 10:23:32.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.539614 systemd[1]: Stopping iscsid.service...
Jul 2 10:23:32.553927 systemd[1]: Stopping sysroot-boot.service...
Jul 2 10:23:32.572296 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 10:23:33.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.572459 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 2 10:23:32.591717 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 10:23:32.591949 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 2 10:23:33.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.627460 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 10:23:33.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.629141 systemd[1]: iscsid.service: Deactivated successfully.
Jul 2 10:23:33.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.629404 systemd[1]: Stopped iscsid.service.
Jul 2 10:23:32.638594 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 10:23:33.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.638839 systemd[1]: Closed iscsid.socket.
Jul 2 10:23:33.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.652761 systemd[1]: Stopping iscsiuio.service...
Jul 2 10:23:33.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.667871 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 2 10:23:33.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:33.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.668096 systemd[1]: Stopped iscsiuio.service.
Jul 2 10:23:32.685115 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 10:23:32.685352 systemd[1]: Finished initrd-cleanup.service.
Jul 2 10:23:32.701357 systemd[1]: Stopped target network.target.
Jul 2 10:23:32.714484 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 10:23:32.714590 systemd[1]: Closed iscsiuio.socket.
Jul 2 10:23:32.728722 systemd[1]: Stopping systemd-networkd.service...
Jul 2 10:23:32.743437 systemd-networkd[878]: enp1s0f1np1: DHCPv6 lease lost
Jul 2 10:23:32.743691 systemd[1]: Stopping systemd-resolved.service...
Jul 2 10:23:32.752428 systemd-networkd[878]: enp1s0f0np0: DHCPv6 lease lost
Jul 2 10:23:33.261000 audit: BPF prog-id=9 op=UNLOAD
Jul 2 10:23:33.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:32.757597 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 10:23:32.757640 systemd[1]: Stopped systemd-resolved.service.
Jul 2 10:23:32.781585 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 10:23:32.781645 systemd[1]: Stopped systemd-networkd.service.
Jul 2 10:23:32.798543 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 10:23:32.798589 systemd[1]: Stopped ignition-mount.service.
Jul 2 10:23:32.813534 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 10:23:32.813575 systemd[1]: Stopped sysroot-boot.service.
Jul 2 10:23:32.830622 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 10:23:32.830648 systemd[1]: Closed systemd-networkd.socket.
Jul 2 10:23:32.847401 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 10:23:32.847450 systemd[1]: Stopped ignition-disks.service.
Jul 2 10:23:32.864494 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 10:23:32.864577 systemd[1]: Stopped ignition-kargs.service.
Jul 2 10:23:32.880648 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 10:23:32.880799 systemd[1]: Stopped ignition-setup.service.
Jul 2 10:23:32.899727 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 10:23:32.899874 systemd[1]: Stopped initrd-setup-root.service.
Jul 2 10:23:32.918306 systemd[1]: Stopping network-cleanup.service...
Jul 2 10:23:32.931438 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 10:23:32.931591 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 2 10:23:33.367252 systemd-journald[267]: Received SIGTERM from PID 1 (n/a).
Jul 2 10:23:32.946597 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 10:23:32.946732 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 10:23:32.964963 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 10:23:32.965109 systemd[1]: Stopped systemd-modules-load.service.
Jul 2 10:23:32.981860 systemd[1]: Stopping systemd-udevd.service...
Jul 2 10:23:33.000255 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 2 10:23:33.001680 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 10:23:33.001739 systemd[1]: Stopped systemd-udevd.service.
Jul 2 10:23:33.017684 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 10:23:33.017713 systemd[1]: Closed systemd-udevd-control.socket.
Jul 2 10:23:33.035490 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 10:23:33.035526 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 2 10:23:33.051468 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 10:23:33.051555 systemd[1]: Stopped dracut-pre-udev.service.
Jul 2 10:23:33.069764 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 10:23:33.069934 systemd[1]: Stopped dracut-cmdline.service.
Jul 2 10:23:33.086356 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 10:23:33.086395 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 2 10:23:33.102863 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Jul 2 10:23:33.119348 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 10:23:33.119409 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Jul 2 10:23:33.135634 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 10:23:33.135717 systemd[1]: Stopped kmod-static-nodes.service.
Jul 2 10:23:33.153476 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 10:23:33.153622 systemd[1]: Stopped systemd-vconsole-setup.service.
Jul 2 10:23:33.172593 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 2 10:23:33.174240 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 10:23:33.174485 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Jul 2 10:23:33.254526 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 10:23:33.254566 systemd[1]: Stopped network-cleanup.service.
Jul 2 10:23:33.262584 systemd[1]: Reached target initrd-switch-root.target.
Jul 2 10:23:33.285851 systemd[1]: Starting initrd-switch-root.service...
Jul 2 10:23:33.312558 systemd[1]: Switching root.
Jul 2 10:23:33.368843 systemd-journald[267]: Journal stopped
Jul 2 10:23:37.254782 kernel: SELinux: Class mctp_socket not defined in policy.
Jul 2 10:23:37.254795 kernel: SELinux: Class anon_inode not defined in policy.
Jul 2 10:23:37.254803 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 2 10:23:37.254809 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 10:23:37.254814 kernel: SELinux: policy capability open_perms=1
Jul 2 10:23:37.254819 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 10:23:37.254825 kernel: SELinux: policy capability always_check_network=0
Jul 2 10:23:37.254830 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 10:23:37.254836 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 10:23:37.254842 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 10:23:37.254847 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 10:23:37.254853 systemd[1]: Successfully loaded SELinux policy in 292.366ms.
Jul 2 10:23:37.254860 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.704ms.
Jul 2 10:23:37.254867 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 10:23:37.254875 systemd[1]: Detected architecture x86-64.
Jul 2 10:23:37.254881 systemd[1]: Detected first boot.
Jul 2 10:23:37.254887 systemd[1]: Hostname set to .
Jul 2 10:23:37.254893 systemd[1]: Initializing machine ID from random generator.
Jul 2 10:23:37.254899 kernel: kauditd_printk_skb: 43 callbacks suppressed
Jul 2 10:23:37.254905 kernel: audit: type=1400 audit(1719915814.034:84): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Jul 2 10:23:37.254911 kernel: audit: type=1400 audit(1719915814.106:85): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 10:23:37.254918 kernel: audit: type=1400 audit(1719915814.106:86): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 10:23:37.254923 kernel: audit: type=1334 audit(1719915814.208:87): prog-id=10 op=LOAD
Jul 2 10:23:37.254929 kernel: audit: type=1334 audit(1719915814.208:88): prog-id=10 op=UNLOAD
Jul 2 10:23:37.254934 kernel: audit: type=1334 audit(1719915814.230:89): prog-id=11 op=LOAD
Jul 2 10:23:37.254940 kernel: audit: type=1334 audit(1719915814.230:90): prog-id=11 op=UNLOAD
Jul 2 10:23:37.254945 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Jul 2 10:23:37.254952 kernel: audit: type=1400 audit(1719915814.299:91): avc: denied { associate } for pid=1137 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Jul 2 10:23:37.254958 kernel: audit: type=1300 audit(1719915814.299:91): arch=c000003e syscall=188 success=yes exit=0 a0=c0001a78e2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1120 pid=1137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 10:23:37.254965 kernel: audit: type=1327 audit(1719915814.299:91): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 10:23:37.254971 systemd[1]: Populated /etc with preset unit settings.
Jul 2 10:23:37.254977 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 10:23:37.254983 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 10:23:37.254990 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 10:23:37.254997 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 10:23:37.255003 systemd[1]: Stopped initrd-switch-root.service.
Jul 2 10:23:37.255009 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 10:23:37.255016 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 2 10:23:37.255022 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 2 10:23:37.255028 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Jul 2 10:23:37.255036 systemd[1]: Created slice system-getty.slice.
Jul 2 10:23:37.255043 systemd[1]: Created slice system-modprobe.slice.
Jul 2 10:23:37.255050 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 2 10:23:37.255056 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Jul 2 10:23:37.255062 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Jul 2 10:23:37.255069 systemd[1]: Created slice user.slice.
Jul 2 10:23:37.255075 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 10:23:37.255081 systemd[1]: Started systemd-ask-password-wall.path.
Jul 2 10:23:37.255088 systemd[1]: Set up automount boot.automount.
Jul 2 10:23:37.255094 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Jul 2 10:23:37.255101 systemd[1]: Stopped target initrd-switch-root.target.
Jul 2 10:23:37.255107 systemd[1]: Stopped target initrd-fs.target.
Jul 2 10:23:37.255114 systemd[1]: Stopped target initrd-root-fs.target.
Jul 2 10:23:37.255120 systemd[1]: Reached target integritysetup.target.
Jul 2 10:23:37.255126 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 10:23:37.255133 systemd[1]: Reached target remote-fs.target.
Jul 2 10:23:37.255139 systemd[1]: Reached target slices.target.
Jul 2 10:23:37.255145 systemd[1]: Reached target swap.target.
Jul 2 10:23:37.255152 systemd[1]: Reached target torcx.target.
Jul 2 10:23:37.255159 systemd[1]: Reached target veritysetup.target.
Jul 2 10:23:37.255165 systemd[1]: Listening on systemd-coredump.socket.
Jul 2 10:23:37.255172 systemd[1]: Listening on systemd-initctl.socket.
Jul 2 10:23:37.255178 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 10:23:37.255185 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 10:23:37.255192 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 10:23:37.255199 systemd[1]: Listening on systemd-userdbd.socket.
Jul 2 10:23:37.255206 systemd[1]: Mounting dev-hugepages.mount...
Jul 2 10:23:37.255212 systemd[1]: Mounting dev-mqueue.mount...
Jul 2 10:23:37.255219 systemd[1]: Mounting media.mount...
Jul 2 10:23:37.255228 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 10:23:37.255239 systemd[1]: Mounting sys-kernel-debug.mount...
Jul 2 10:23:37.255246 systemd[1]: Mounting sys-kernel-tracing.mount...
Jul 2 10:23:37.255253 systemd[1]: Mounting tmp.mount...
Jul 2 10:23:37.255260 systemd[1]: Starting flatcar-tmpfiles.service...
Jul 2 10:23:37.255292 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 10:23:37.255299 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 10:23:37.255322 systemd[1]: Starting modprobe@configfs.service...
Jul 2 10:23:37.255328 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 10:23:37.255335 systemd[1]: Starting modprobe@drm.service...
Jul 2 10:23:37.255341 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 10:23:37.255349 systemd[1]: Starting modprobe@fuse.service...
Jul 2 10:23:37.255355 kernel: fuse: init (API version 7.34)
Jul 2 10:23:37.255361 systemd[1]: Starting modprobe@loop.service...
Jul 2 10:23:37.255367 kernel: loop: module loaded
Jul 2 10:23:37.255374 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 10:23:37.255380 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 10:23:37.255387 systemd[1]: Stopped systemd-fsck-root.service.
Jul 2 10:23:37.255393 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 10:23:37.255400 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 10:23:37.255407 systemd[1]: Stopped systemd-journald.service.
Jul 2 10:23:37.255414 systemd[1]: Starting systemd-journald.service...
Jul 2 10:23:37.255420 systemd[1]: Starting systemd-modules-load.service...
Jul 2 10:23:37.255429 systemd-journald[1248]: Journal started
Jul 2 10:23:37.255454 systemd-journald[1248]: Runtime Journal (/run/log/journal/0688cd44194847a3bbffd7d172f14190) is 8.0M, max 640.1M, 632.1M free.
Jul 2 10:23:33.780000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 10:23:34.034000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Jul 2 10:23:34.106000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 10:23:34.106000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 10:23:34.208000 audit: BPF prog-id=10 op=LOAD
Jul 2 10:23:34.208000 audit: BPF prog-id=10 op=UNLOAD
Jul 2 10:23:34.230000 audit: BPF prog-id=11 op=LOAD
Jul 2 10:23:34.230000 audit: BPF prog-id=11 op=UNLOAD
Jul 2 10:23:34.299000 audit[1137]: AVC avc: denied { associate } for pid=1137 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Jul 2 10:23:34.299000 audit[1137]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a78e2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1120 pid=1137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 10:23:34.299000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 10:23:34.323000 audit[1137]: AVC avc: denied { associate } for pid=1137 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Jul 2 10:23:34.323000 audit[1137]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a79b9 a2=1ed a3=0 items=2 ppid=1120 pid=1137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 10:23:34.323000 audit: CWD cwd="/"
Jul 2 10:23:34.323000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 10:23:34.323000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 10:23:34.323000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 10:23:35.853000 audit: BPF prog-id=12 op=LOAD
Jul 2 10:23:35.853000 audit: BPF prog-id=3 op=UNLOAD
Jul 2 10:23:35.854000 audit: BPF prog-id=13 op=LOAD
Jul 2 10:23:35.854000 audit: BPF prog-id=14 op=LOAD
Jul 2 10:23:35.854000 audit: BPF prog-id=4 op=UNLOAD
Jul 2 10:23:35.854000 audit: BPF prog-id=5 op=UNLOAD
Jul 2 10:23:35.854000 audit: BPF prog-id=15 op=LOAD
Jul 2 10:23:35.854000 audit: BPF prog-id=12 op=UNLOAD
Jul 2 10:23:35.854000 audit: BPF prog-id=16 op=LOAD
Jul 2 10:23:35.855000 audit: BPF prog-id=17 op=LOAD
Jul 2 10:23:35.855000 audit: BPF prog-id=13 op=UNLOAD
Jul 2 10:23:35.855000 audit: BPF prog-id=14 op=UNLOAD
Jul 2 10:23:35.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:35.906000 audit: BPF prog-id=15 op=UNLOAD
Jul 2 10:23:35.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:35.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.226000 audit: BPF prog-id=18 op=LOAD
Jul 2 10:23:37.227000 audit: BPF prog-id=19 op=LOAD
Jul 2 10:23:37.227000 audit: BPF prog-id=20 op=LOAD
Jul 2 10:23:37.227000 audit: BPF prog-id=16 op=UNLOAD
Jul 2 10:23:37.227000 audit: BPF prog-id=17 op=UNLOAD
Jul 2 10:23:37.251000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jul 2 10:23:37.251000 audit[1248]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffcac158050 a2=4000 a3=7ffcac1580ec items=0 ppid=1 pid=1248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 10:23:37.251000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jul 2 10:23:35.853280 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 10:23:34.298825 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 10:23:35.856512 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 10:23:34.299354 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Jul 2 10:23:34.299365 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Jul 2 10:23:34.299383 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:34Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Jul 2 10:23:34.299388 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:34Z" level=debug msg="skipped missing lower profile" missing profile=oem
Jul 2 10:23:34.299404 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:34Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Jul 2 10:23:34.299410 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:34Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Jul 2 10:23:34.299520 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:34Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Jul 2 10:23:34.299540 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Jul 2 10:23:34.299547 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Jul 2 10:23:34.300466 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:34Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Jul 2 10:23:34.300485 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:34Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Jul 2 10:23:34.300495 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5
Jul 2 10:23:34.300503 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Jul 2 10:23:34.300513 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5
Jul 2 10:23:34.300520 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Jul 2 10:23:35.510323 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:35Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 10:23:35.510464 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:35Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 10:23:35.510519 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:35Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 10:23:35.510614 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:35Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 10:23:35.510644 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:35Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Jul 2 10:23:35.510683 /usr/lib/systemd/system-generators/torcx-generator[1137]: time="2024-07-02T10:23:35Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Jul 2 10:23:37.286455 systemd[1]: Starting systemd-network-generator.service...
Jul 2 10:23:37.308293 systemd[1]: Starting systemd-remount-fs.service...
Jul 2 10:23:37.330278 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 10:23:37.363773 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 10:23:37.363796 systemd[1]: Stopped verity-setup.service.
Jul 2 10:23:37.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.398270 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 10:23:37.412275 systemd[1]: Started systemd-journald.service.
Jul 2 10:23:37.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.421783 systemd[1]: Mounted dev-hugepages.mount.
Jul 2 10:23:37.430523 systemd[1]: Mounted dev-mqueue.mount.
Jul 2 10:23:37.437509 systemd[1]: Mounted media.mount.
Jul 2 10:23:37.444512 systemd[1]: Mounted sys-kernel-debug.mount.
Jul 2 10:23:37.453493 systemd[1]: Mounted sys-kernel-tracing.mount.
Jul 2 10:23:37.462480 systemd[1]: Mounted tmp.mount.
Jul 2 10:23:37.469549 systemd[1]: Finished flatcar-tmpfiles.service.
Jul 2 10:23:37.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.478621 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 10:23:37.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.487651 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 10:23:37.487798 systemd[1]: Finished modprobe@configfs.service.
Jul 2 10:23:37.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.496737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 10:23:37.496927 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 10:23:37.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.505829 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 10:23:37.506054 systemd[1]: Finished modprobe@drm.service.
Jul 2 10:23:37.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.515097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 10:23:37.515475 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 10:23:37.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.524150 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 10:23:37.524571 systemd[1]: Finished modprobe@fuse.service.
Jul 2 10:23:37.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.534060 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 10:23:37.534433 systemd[1]: Finished modprobe@loop.service.
Jul 2 10:23:37.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.543175 systemd[1]: Finished systemd-modules-load.service.
Jul 2 10:23:37.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.552014 systemd[1]: Finished systemd-network-generator.service.
Jul 2 10:23:37.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.562006 systemd[1]: Finished systemd-remount-fs.service.
Jul 2 10:23:37.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.571058 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 10:23:37.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.580750 systemd[1]: Reached target network-pre.target.
Jul 2 10:23:37.592032 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Jul 2 10:23:37.603032 systemd[1]: Mounting sys-kernel-config.mount...
Jul 2 10:23:37.610505 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 10:23:37.613829 systemd[1]: Starting systemd-hwdb-update.service...
Jul 2 10:23:37.622833 systemd[1]: Starting systemd-journal-flush.service...
Jul 2 10:23:37.631510 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 10:23:37.633933 systemd[1]: Starting systemd-random-seed.service...
Jul 2 10:23:37.634926 systemd-journald[1248]: Time spent on flushing to /var/log/journal/0688cd44194847a3bbffd7d172f14190 is 15.650ms for 1588 entries.
Jul 2 10:23:37.634926 systemd-journald[1248]: System Journal (/var/log/journal/0688cd44194847a3bbffd7d172f14190) is 8.0M, max 195.6M, 187.6M free.
Jul 2 10:23:37.675646 systemd-journald[1248]: Received client request to flush runtime journal.
Jul 2 10:23:37.650373 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 10:23:37.650938 systemd[1]: Starting systemd-sysctl.service...
Jul 2 10:23:37.658846 systemd[1]: Starting systemd-sysusers.service...
Jul 2 10:23:37.665844 systemd[1]: Starting systemd-udev-settle.service...
Jul 2 10:23:37.673326 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Jul 2 10:23:37.681440 systemd[1]: Mounted sys-kernel-config.mount.
Jul 2 10:23:37.689467 systemd[1]: Finished systemd-journal-flush.service.
Jul 2 10:23:37.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.697487 systemd[1]: Finished systemd-random-seed.service.
Jul 2 10:23:37.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.705511 systemd[1]: Finished systemd-sysctl.service.
Jul 2 10:23:37.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.713454 systemd[1]: Finished systemd-sysusers.service.
Jul 2 10:23:37.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.722432 systemd[1]: Reached target first-boot-complete.target.
Jul 2 10:23:37.731209 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 10:23:37.741786 udevadm[1264]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 2 10:23:37.748031 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 10:23:37.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.935661 systemd[1]: Finished systemd-hwdb-update.service.
Jul 2 10:23:37.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.943000 audit: BPF prog-id=21 op=LOAD
Jul 2 10:23:37.944000 audit: BPF prog-id=22 op=LOAD
Jul 2 10:23:37.944000 audit: BPF prog-id=7 op=UNLOAD
Jul 2 10:23:37.944000 audit: BPF prog-id=8 op=UNLOAD
Jul 2 10:23:37.945635 systemd[1]: Starting systemd-udevd.service...
Jul 2 10:23:37.957898 systemd-udevd[1268]: Using default interface naming scheme 'v252'.
Jul 2 10:23:37.975936 systemd[1]: Started systemd-udevd.service.
Jul 2 10:23:37.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 10:23:37.986469 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped.
Jul 2 10:23:37.986000 audit: BPF prog-id=23 op=LOAD
Jul 2 10:23:37.987805 systemd[1]: Starting systemd-networkd.service...
Jul 2 10:23:38.008000 audit: BPF prog-id=24 op=LOAD Jul 2 10:23:38.023039 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jul 2 10:23:38.023111 kernel: ACPI: button: Sleep Button [SLPB] Jul 2 10:23:38.023141 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 2 10:23:38.039476 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1333) Jul 2 10:23:38.058000 audit: BPF prog-id=25 op=LOAD Jul 2 10:23:38.060348 kernel: ACPI: button: Power Button [PWRF] Jul 2 10:23:38.072000 audit: BPF prog-id=26 op=LOAD Jul 2 10:23:38.074178 systemd[1]: Starting systemd-userdbd.service... Jul 2 10:23:38.074261 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 10:23:38.006000 audit[1344]: AVC avc: denied { confidentiality } for pid=1344 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 10:23:38.105754 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 10:23:38.116234 kernel: IPMI message handler: version 39.2 Jul 2 10:23:38.123408 systemd[1]: Started systemd-userdbd.service. Jul 2 10:23:38.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:23:38.006000 audit[1344]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7ff069649010 a1=4d8bc a2=7ff06b2e3bc5 a3=5 items=42 ppid=1268 pid=1344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 10:23:38.006000 audit: CWD cwd="/" Jul 2 10:23:38.006000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=1 name=(null) inode=22853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=2 name=(null) inode=22853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=3 name=(null) inode=22854 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=4 name=(null) inode=22853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=5 name=(null) inode=22855 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=6 name=(null) inode=22853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=7 name=(null) inode=22856 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=8 name=(null) inode=22856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=9 name=(null) inode=22857 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=10 name=(null) inode=22856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=11 name=(null) inode=22858 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=12 name=(null) inode=22856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=13 name=(null) inode=22859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=14 name=(null) inode=22856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=15 name=(null) inode=22860 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=16 name=(null) inode=22856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=17 name=(null) inode=22861 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=18 name=(null) inode=22853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=19 name=(null) inode=22862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=20 name=(null) inode=22862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=21 name=(null) inode=22863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=22 name=(null) inode=22862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=23 name=(null) inode=22864 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=24 name=(null) inode=22862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=25 name=(null) inode=22865 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=26 name=(null) inode=22862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=27 name=(null) inode=22866 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=28 name=(null) inode=22862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=29 name=(null) inode=22867 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=30 name=(null) inode=22853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=31 name=(null) inode=22868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=32 name=(null) inode=22868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=33 name=(null) inode=22869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=34 name=(null) inode=22868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Jul 2 10:23:38.006000 audit: PATH item=35 name=(null) inode=22870 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=36 name=(null) inode=22868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=37 name=(null) inode=22871 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=38 name=(null) inode=22868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=39 name=(null) inode=22872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=40 name=(null) inode=22868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PATH item=41 name=(null) inode=22873 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:23:38.006000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 10:23:38.151232 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jul 2 10:23:38.151344 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jul 2 10:23:38.168240 kernel: ipmi device interface Jul 2 10:23:38.238554 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jul 2 10:23:38.238699 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jul 2 10:23:38.255232 
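The SYSCALL record that opens this audit event (arch=c000003e, syscall=175) can be decoded by hand: c000003e is AUDIT_ARCH_X86_64, and on that ABI syscall 175 is init_module, which is consistent with a udev worker loading a kernel module (the later auditctl record in this log uses syscall 44, sendto). A minimal sketch, mapping only the two syscall numbers that actually occur in this log rather than the full x86_64 table:

```python
# Decode the arch/syscall fields of the audit SYSCALL records in this log.
# Only the two syscall numbers seen here are mapped; a real tool would
# carry the complete x86_64 syscall table.
AUDIT_ARCH_X86_64 = 0xC000003E  # EM_X86_64 | 64BIT | little-endian flags

SYSCALLS_X86_64 = {
    175: "init_module",  # module load by the udev worker (this record)
    44: "sendto",        # netlink send by auditctl (later in the log)
}

def decode(arch_hex: str, nr: int) -> str:
    """Render 'arch/syscall' for an audit SYSCALL record."""
    arch = int(arch_hex, 16)
    name = "x86_64" if arch == AUDIT_ARCH_X86_64 else hex(arch)
    return f"{name}/{SYSCALLS_X86_64.get(nr, str(nr))}"

print(decode("c000003e", 175))  # x86_64/init_module
```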
kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Jul 2 10:23:38.278615 systemd-networkd[1312]: bond0: netdev ready Jul 2 10:23:38.281008 systemd-networkd[1312]: lo: Link UP Jul 2 10:23:38.281011 systemd-networkd[1312]: lo: Gained carrier Jul 2 10:23:38.281556 systemd-networkd[1312]: Enumeration completed Jul 2 10:23:38.281657 systemd[1]: Started systemd-networkd.service. Jul 2 10:23:38.282251 systemd-networkd[1312]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jul 2 10:23:38.282847 systemd-networkd[1312]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:42:de:19.network. Jul 2 10:23:38.293234 kernel: ipmi_si: IPMI System Interface driver Jul 2 10:23:38.293305 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jul 2 10:23:38.293419 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jul 2 10:23:38.293437 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jul 2 10:23:38.293453 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jul 2 10:23:38.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:38.357166 kernel: iTCO_vendor_support: vendor-support=0 Jul 2 10:23:38.357214 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jul 2 10:23:38.414047 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jul 2 10:23:38.414295 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jul 2 10:23:38.414338 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jul 2 10:23:38.434443 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 2 10:23:38.451291 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. 
Jul 2 10:23:38.484281 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Jul 2 10:23:38.497266 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 2 10:23:38.497291 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Jul 2 10:23:38.547293 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Jul 2 10:23:38.547318 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Jul 2 10:23:38.564619 systemd-networkd[1312]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:42:de:18.network. Jul 2 10:23:38.601267 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jul 2 10:23:38.601364 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Jul 2 10:23:38.642236 kernel: ipmi_ssif: IPMI SSIF Interface driver Jul 2 10:23:38.680022 kernel: intel_rapl_common: Found RAPL domain package Jul 2 10:23:38.680068 kernel: intel_rapl_common: Found RAPL domain core Jul 2 10:23:38.680091 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 2 10:23:38.701312 kernel: intel_rapl_common: Found RAPL domain dram Jul 2 10:23:38.755271 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jul 2 10:23:38.777275 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Jul 2 10:23:38.778461 systemd-networkd[1312]: bond0: Link UP Jul 2 10:23:38.778654 systemd-networkd[1312]: enp1s0f1np1: Link UP Jul 2 10:23:38.778785 systemd-networkd[1312]: enp1s0f1np1: Gained carrier Jul 2 10:23:38.779737 systemd-networkd[1312]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:42:de:18.network. Jul 2 10:23:38.816841 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Jul 2 10:23:38.816866 kernel: bond0: active interface up! 
Jul 2 10:23:38.839287 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Jul 2 10:23:38.944205 systemd-networkd[1312]: bond0: Gained carrier Jul 2 10:23:38.944347 systemd-networkd[1312]: enp1s0f0np0: Link UP Jul 2 10:23:38.944473 systemd-networkd[1312]: enp1s0f0np0: Gained carrier Jul 2 10:23:38.983348 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 10:23:38.983434 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Jul 2 10:23:38.992574 systemd-networkd[1312]: enp1s0f1np1: Link DOWN Jul 2 10:23:38.992577 systemd-networkd[1312]: enp1s0f1np1: Lost carrier Jul 2 10:23:39.001482 systemd[1]: Finished systemd-udev-settle.service. Jul 2 10:23:39.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:39.011140 systemd[1]: Starting lvm2-activation-early.service... Jul 2 10:23:39.027512 lvm[1372]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 10:23:39.058699 systemd[1]: Finished lvm2-activation-early.service. Jul 2 10:23:39.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:39.067402 systemd[1]: Reached target cryptsetup.target. Jul 2 10:23:39.084969 kernel: kauditd_printk_skb: 118 callbacks suppressed Jul 2 10:23:39.084995 kernel: audit: type=1130 audit(1719915819.066:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:39.140973 systemd[1]: Starting lvm2-activation.service... 
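The kauditd lines above embed raw epoch timestamps, e.g. audit(1719915819.066:158), where the value before the colon is seconds-plus-milliseconds since the epoch and the value after it is the event serial number. Converting the epoch (assuming UTC, which matches the -00 timezone in the kernel banner) reproduces the journal's wall-clock prefix:

```python
from datetime import datetime, timezone

# audit(<epoch.millis>:<serial>) -> wall-clock time, assuming UTC
stamp = "1719915819.066:158"
epoch_str, serial = stamp.split(":")
ts = datetime.fromtimestamp(float(epoch_str), tz=timezone.utc)
print(f"{ts:%b} {ts.day} {ts:%H:%M:%S} serial {serial}")  # Jul 2 10:23:39 serial 158
```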
Jul 2 10:23:39.143231 lvm[1374]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 10:23:39.177734 systemd[1]: Finished lvm2-activation.service. Jul 2 10:23:39.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:39.186352 systemd[1]: Reached target local-fs-pre.target. Jul 2 10:23:39.234292 kernel: audit: type=1130 audit(1719915819.185:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:39.242296 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 10:23:39.242311 systemd[1]: Reached target local-fs.target. Jul 2 10:23:39.251269 systemd[1]: Reached target machines.target. Jul 2 10:23:39.261028 systemd[1]: Starting ldconfig.service... Jul 2 10:23:39.267806 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 10:23:39.267827 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:23:39.268374 systemd[1]: Starting systemd-boot-update.service... Jul 2 10:23:39.275720 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 10:23:39.285822 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 10:23:39.286427 systemd[1]: Starting systemd-sysext.service... Jul 2 10:23:39.286622 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1376 (bootctl) Jul 2 10:23:39.287344 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Jul 2 10:23:39.302538 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 10:23:39.306895 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 10:23:39.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:39.370233 kernel: audit: type=1130 audit(1719915819.305:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:39.370270 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 2 10:23:39.370443 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 10:23:39.370525 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 10:23:39.389262 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Jul 2 10:23:39.410274 kernel: loop0: detected capacity change from 0 to 211296 Jul 2 10:23:39.417372 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Jul 2 10:23:39.417395 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Jul 2 10:23:39.417408 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Jul 2 10:23:39.417420 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Jul 2 10:23:39.419270 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Jul 2 10:23:39.421274 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Jul 2 10:23:39.424233 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Jul 2 10:23:39.426290 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Jul 2 10:23:39.428232 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Jul 2 10:23:39.429233 kernel: bond0: (slave enp1s0f1np1): link status up again 
after 200 ms Jul 2 10:23:39.449674 systemd-networkd[1312]: enp1s0f1np1: Link UP Jul 2 10:23:39.449677 systemd-networkd[1312]: enp1s0f1np1: Gained carrier Jul 2 10:23:39.488040 systemd-fsck[1385]: fsck.fat 4.2 (2021-01-31) Jul 2 10:23:39.488040 systemd-fsck[1385]: /dev/sda1: 789 files, 119238/258078 clusters Jul 2 10:23:39.514752 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 10:23:39.585232 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 10:23:39.603235 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Jul 2 10:23:39.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:39.669700 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 10:23:39.670006 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 10:23:39.673275 ldconfig[1375]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 10:23:39.731290 kernel: audit: type=1130 audit(1719915819.668:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:39.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:39.739397 systemd[1]: Finished ldconfig.service. 
Jul 2 10:23:39.800294 kernel: audit: type=1130 audit(1719915819.738:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:39.801309 systemd-networkd[1312]: bond0: Gained IPv6LL Jul 2 10:23:39.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:39.812423 systemd[1]: Mounting boot.mount... Jul 2 10:23:39.824230 kernel: loop1: detected capacity change from 0 to 211296 Jul 2 10:23:39.824252 kernel: audit: type=1130 audit(1719915819.810:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:39.835342 (sd-sysext)[1388]: Using extensions 'kubernetes'. Jul 2 10:23:39.835523 (sd-sysext)[1388]: Merged extensions into '/usr'. Jul 2 10:23:39.880958 systemd[1]: Mounted boot.mount. Jul 2 10:23:39.889348 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 10:23:39.890027 systemd[1]: Mounting usr-share-oem.mount... Jul 2 10:23:39.896431 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 10:23:39.897034 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 10:23:39.903841 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 10:23:39.910817 systemd[1]: Starting modprobe@loop.service... Jul 2 10:23:39.917353 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 10:23:39.917419 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:23:39.917483 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 10:23:39.919101 systemd[1]: Finished systemd-boot-update.service. Jul 2 10:23:39.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:39.927471 systemd[1]: Mounted usr-share-oem.mount. Jul 2 10:23:39.976277 kernel: audit: type=1130 audit(1719915819.926:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:39.983434 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 10:23:39.983496 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 10:23:39.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:39.992455 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 10:23:39.992515 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 10:23:39.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:40.071462 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 10:23:40.071520 systemd[1]: Finished modprobe@loop.service. 
Jul 2 10:23:40.091681 kernel: audit: type=1130 audit(1719915819.991:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:40.091699 kernel: audit: type=1131 audit(1719915819.991:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:40.091713 kernel: audit: type=1130 audit(1719915820.070:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:40.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:40.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:40.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:40.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:40.151536 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 2 10:23:40.151594 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 10:23:40.152048 systemd[1]: Finished systemd-sysext.service. Jul 2 10:23:40.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:40.161795 systemd[1]: Starting ensure-sysext.service... Jul 2 10:23:40.169751 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 10:23:40.175391 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 10:23:40.175922 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 10:23:40.176992 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 10:23:40.179701 systemd[1]: Reloading. Jul 2 10:23:40.201946 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2024-07-02T10:23:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 10:23:40.201973 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2024-07-02T10:23:40Z" level=info msg="torcx already run" Jul 2 10:23:40.254488 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 10:23:40.254496 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Jul 2 10:23:40.265660 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 10:23:40.305000 audit: BPF prog-id=27 op=LOAD Jul 2 10:23:40.306000 audit: BPF prog-id=28 op=LOAD Jul 2 10:23:40.306000 audit: BPF prog-id=21 op=UNLOAD Jul 2 10:23:40.306000 audit: BPF prog-id=22 op=UNLOAD Jul 2 10:23:40.307000 audit: BPF prog-id=29 op=LOAD Jul 2 10:23:40.307000 audit: BPF prog-id=23 op=UNLOAD Jul 2 10:23:40.307000 audit: BPF prog-id=30 op=LOAD Jul 2 10:23:40.307000 audit: BPF prog-id=24 op=UNLOAD Jul 2 10:23:40.307000 audit: BPF prog-id=31 op=LOAD Jul 2 10:23:40.307000 audit: BPF prog-id=32 op=LOAD Jul 2 10:23:40.307000 audit: BPF prog-id=25 op=UNLOAD Jul 2 10:23:40.307000 audit: BPF prog-id=26 op=UNLOAD Jul 2 10:23:40.308000 audit: BPF prog-id=33 op=LOAD Jul 2 10:23:40.308000 audit: BPF prog-id=18 op=UNLOAD Jul 2 10:23:40.308000 audit: BPF prog-id=34 op=LOAD Jul 2 10:23:40.308000 audit: BPF prog-id=35 op=LOAD Jul 2 10:23:40.308000 audit: BPF prog-id=19 op=UNLOAD Jul 2 10:23:40.308000 audit: BPF prog-id=20 op=UNLOAD Jul 2 10:23:40.311610 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 10:23:40.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:23:40.321073 systemd[1]: Starting audit-rules.service... Jul 2 10:23:40.328785 systemd[1]: Starting clean-ca-certificates.service... Jul 2 10:23:40.337878 systemd[1]: Starting systemd-journal-catalog-update.service... 
Jul 2 10:23:40.337000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 10:23:40.337000 audit[1493]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdba4edea0 a2=420 a3=0 items=0 ppid=1477 pid=1493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 10:23:40.337000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 10:23:40.339018 augenrules[1493]: No rules Jul 2 10:23:40.347266 systemd[1]: Starting systemd-resolved.service... Jul 2 10:23:40.355277 systemd[1]: Starting systemd-timesyncd.service... Jul 2 10:23:40.362810 systemd[1]: Starting systemd-update-utmp.service... Jul 2 10:23:40.369584 systemd[1]: Finished audit-rules.service. Jul 2 10:23:40.376439 systemd[1]: Finished clean-ca-certificates.service. Jul 2 10:23:40.384451 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 10:23:40.397659 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 10:23:40.397824 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 10:23:40.398429 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 10:23:40.405850 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 10:23:40.412838 systemd[1]: Starting modprobe@loop.service... Jul 2 10:23:40.419291 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 10:23:40.419357 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:23:40.420036 systemd[1]: Starting systemd-update-done.service... 
Jul 2 10:23:40.426019 systemd-resolved[1499]: Positive Trust Anchors: Jul 2 10:23:40.426026 systemd-resolved[1499]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 10:23:40.426045 systemd-resolved[1499]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 10:23:40.426331 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 10:23:40.426402 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 10:23:40.426982 systemd[1]: Started systemd-timesyncd.service. Jul 2 10:23:40.430104 systemd-resolved[1499]: Using system hostname 'ci-3510.3.5-a-539a8ddad9'. Jul 2 10:23:40.435540 systemd[1]: Started systemd-resolved.service. Jul 2 10:23:40.443657 systemd[1]: Finished systemd-update-utmp.service. Jul 2 10:23:40.452527 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 10:23:40.452593 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 10:23:40.460497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 10:23:40.460558 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 10:23:40.468520 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 10:23:40.468580 systemd[1]: Finished modprobe@loop.service. Jul 2 10:23:40.476522 systemd[1]: Finished systemd-update-done.service. Jul 2 10:23:40.485506 systemd[1]: Reached target network.target. Jul 2 10:23:40.493356 systemd[1]: Reached target nss-lookup.target. 
Jul 2 10:23:40.501358 systemd[1]: Reached target time-set.target. Jul 2 10:23:40.509338 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 10:23:40.509479 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 10:23:40.510113 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 10:23:40.517828 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 10:23:40.524810 systemd[1]: Starting modprobe@loop.service... Jul 2 10:23:40.531335 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 10:23:40.531399 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:23:40.531456 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 10:23:40.531499 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 10:23:40.532043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 10:23:40.532107 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 10:23:40.540535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 10:23:40.540597 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 10:23:40.548543 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 10:23:40.548608 systemd[1]: Finished modprobe@loop.service. Jul 2 10:23:40.556496 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 10:23:40.556567 systemd[1]: Reached target sysinit.target. Jul 2 10:23:40.564422 systemd[1]: Started motdgen.path. 
Jul 2 10:23:40.571385 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 10:23:40.581441 systemd[1]: Started logrotate.timer. Jul 2 10:23:40.588423 systemd[1]: Started mdadm.timer. Jul 2 10:23:40.595504 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 10:23:40.603335 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 10:23:40.603401 systemd[1]: Reached target paths.target. Jul 2 10:23:40.610344 systemd[1]: Reached target timers.target. Jul 2 10:23:40.617511 systemd[1]: Listening on dbus.socket. Jul 2 10:23:40.624835 systemd[1]: Starting docker.socket... Jul 2 10:23:40.632706 systemd[1]: Listening on sshd.socket. Jul 2 10:23:40.639391 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:23:40.639455 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 10:23:40.640144 systemd[1]: Listening on docker.socket. Jul 2 10:23:40.648154 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 10:23:40.648216 systemd[1]: Reached target sockets.target. Jul 2 10:23:40.656464 systemd[1]: Reached target basic.target. Jul 2 10:23:40.663461 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 10:23:40.663517 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 10:23:40.664081 systemd[1]: Starting containerd.service... Jul 2 10:23:40.671800 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 2 10:23:40.680897 systemd[1]: Starting coreos-metadata.service... Jul 2 10:23:40.687866 systemd[1]: Starting dbus.service... 
Jul 2 10:23:40.694061 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 10:23:40.699661 jq[1520]: false Jul 2 10:23:40.701047 systemd[1]: Starting extend-filesystems.service... Jul 2 10:23:40.702276 coreos-metadata[1513]: Jul 02 10:23:40.702 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 2 10:23:40.707289 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 10:23:40.708071 systemd[1]: Starting modprobe@drm.service... Jul 2 10:23:40.709052 dbus-daemon[1519]: [system] SELinux support is enabled Jul 2 10:23:40.709628 extend-filesystems[1521]: Found loop1 Jul 2 10:23:40.726437 extend-filesystems[1521]: Found sda Jul 2 10:23:40.726437 extend-filesystems[1521]: Found sda1 Jul 2 10:23:40.726437 extend-filesystems[1521]: Found sda2 Jul 2 10:23:40.726437 extend-filesystems[1521]: Found sda3 Jul 2 10:23:40.726437 extend-filesystems[1521]: Found usr Jul 2 10:23:40.726437 extend-filesystems[1521]: Found sda4 Jul 2 10:23:40.726437 extend-filesystems[1521]: Found sda6 Jul 2 10:23:40.726437 extend-filesystems[1521]: Found sda7 Jul 2 10:23:40.726437 extend-filesystems[1521]: Found sda9 Jul 2 10:23:40.726437 extend-filesystems[1521]: Checking size of /dev/sda9 Jul 2 10:23:40.726437 extend-filesystems[1521]: Resized partition /dev/sda9 Jul 2 10:23:40.860274 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Jul 2 10:23:40.860300 coreos-metadata[1516]: Jul 02 10:23:40.712 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 2 10:23:40.715042 systemd[1]: Starting motdgen.service... Jul 2 10:23:40.860458 extend-filesystems[1532]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 10:23:40.754144 systemd[1]: Starting prepare-helm.service... Jul 2 10:23:40.772055 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 10:23:40.790918 systemd[1]: Starting sshd-keygen.service... 
Jul 2 10:23:40.805947 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 10:23:40.824313 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:23:40.825118 systemd[1]: Starting tcsd.service... Jul 2 10:23:40.832577 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 10:23:40.876788 jq[1552]: true Jul 2 10:23:40.832970 systemd[1]: Starting update-engine.service... Jul 2 10:23:40.851943 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 10:23:40.869848 systemd[1]: Started dbus.service. Jul 2 10:23:40.884327 update_engine[1551]: I0702 10:23:40.883638 1551 main.cc:92] Flatcar Update Engine starting Jul 2 10:23:40.885229 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 10:23:40.885327 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 10:23:40.885567 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 10:23:40.885629 systemd[1]: Finished modprobe@drm.service. Jul 2 10:23:40.887354 update_engine[1551]: I0702 10:23:40.887344 1551 update_check_scheduler.cc:74] Next update check in 2m40s Jul 2 10:23:40.893526 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 10:23:40.893609 systemd[1]: Finished motdgen.service. Jul 2 10:23:40.901952 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 10:23:40.902048 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 10:23:40.911519 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 10:23:40.924955 jq[1556]: true Jul 2 10:23:40.925907 systemd[1]: Finished ensure-sysext.service. 
Jul 2 10:23:40.934169 env[1557]: time="2024-07-02T10:23:40.934094488Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 10:23:40.939473 tar[1554]: linux-amd64/helm Jul 2 10:23:40.940699 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jul 2 10:23:40.940808 systemd[1]: Condition check resulted in tcsd.service being skipped. Jul 2 10:23:40.942589 env[1557]: time="2024-07-02T10:23:40.942568598Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 10:23:40.942645 env[1557]: time="2024-07-02T10:23:40.942634909Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 10:23:40.943256 env[1557]: time="2024-07-02T10:23:40.943239256Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 10:23:40.943256 env[1557]: time="2024-07-02T10:23:40.943254749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 10:23:40.943377 env[1557]: time="2024-07-02T10:23:40.943364893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 10:23:40.943377 env[1557]: time="2024-07-02T10:23:40.943375520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jul 2 10:23:40.943445 env[1557]: time="2024-07-02T10:23:40.943383056Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 10:23:40.943445 env[1557]: time="2024-07-02T10:23:40.943388576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 10:23:40.943445 env[1557]: time="2024-07-02T10:23:40.943431520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 10:23:40.943560 env[1557]: time="2024-07-02T10:23:40.943550924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 10:23:40.943627 env[1557]: time="2024-07-02T10:23:40.943616503Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 10:23:40.943627 env[1557]: time="2024-07-02T10:23:40.943625912Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 10:23:40.943689 env[1557]: time="2024-07-02T10:23:40.943650901Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 10:23:40.943689 env[1557]: time="2024-07-02T10:23:40.943662205Z" level=info msg="metadata content store policy set" policy=shared Jul 2 10:23:40.945502 systemd[1]: Started update-engine.service. Jul 2 10:23:40.957197 env[1557]: time="2024-07-02T10:23:40.957157518Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 10:23:40.957197 env[1557]: time="2024-07-02T10:23:40.957176788Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jul 2 10:23:40.957197 env[1557]: time="2024-07-02T10:23:40.957186416Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 10:23:40.958533 env[1557]: time="2024-07-02T10:23:40.957205402Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 10:23:40.958533 env[1557]: time="2024-07-02T10:23:40.957215799Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 10:23:40.958533 env[1557]: time="2024-07-02T10:23:40.957224575Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 10:23:40.958533 env[1557]: time="2024-07-02T10:23:40.957238393Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 10:23:40.958533 env[1557]: time="2024-07-02T10:23:40.957248541Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 10:23:40.958533 env[1557]: time="2024-07-02T10:23:40.957259807Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 10:23:40.958533 env[1557]: time="2024-07-02T10:23:40.957269320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 10:23:40.958533 env[1557]: time="2024-07-02T10:23:40.957278482Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 10:23:40.958533 env[1557]: time="2024-07-02T10:23:40.957286736Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 10:23:40.958533 env[1557]: time="2024-07-02T10:23:40.958518972Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 2 10:23:40.958681 env[1557]: time="2024-07-02T10:23:40.958577635Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 10:23:40.959021 env[1557]: time="2024-07-02T10:23:40.959002041Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 10:23:40.959051 env[1557]: time="2024-07-02T10:23:40.959037537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 10:23:40.959070 env[1557]: time="2024-07-02T10:23:40.959058446Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 10:23:40.959109 env[1557]: time="2024-07-02T10:23:40.959100163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 10:23:40.959130 env[1557]: time="2024-07-02T10:23:40.959112797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 10:23:40.959155 env[1557]: time="2024-07-02T10:23:40.959146576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 10:23:40.959173 env[1557]: time="2024-07-02T10:23:40.959158204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 10:23:40.959194 env[1557]: time="2024-07-02T10:23:40.959167537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 10:23:40.959238 env[1557]: time="2024-07-02T10:23:40.959224324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 10:23:40.961161 env[1557]: time="2024-07-02T10:23:40.959240579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jul 2 10:23:40.961161 env[1557]: time="2024-07-02T10:23:40.959248020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 10:23:40.961161 env[1557]: time="2024-07-02T10:23:40.959256083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 10:23:40.959701 systemd[1]: Reached target network-online.target. Jul 2 10:23:40.961664 env[1557]: time="2024-07-02T10:23:40.961653057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 10:23:40.961689 env[1557]: time="2024-07-02T10:23:40.961667360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 10:23:40.961689 env[1557]: time="2024-07-02T10:23:40.961675933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 10:23:40.961689 env[1557]: time="2024-07-02T10:23:40.961683124Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 10:23:40.961733 env[1557]: time="2024-07-02T10:23:40.961691630Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 10:23:40.961733 env[1557]: time="2024-07-02T10:23:40.961699297Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 10:23:40.961733 env[1557]: time="2024-07-02T10:23:40.961709089Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 10:23:40.961733 env[1557]: time="2024-07-02T10:23:40.961729306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 10:23:40.961870 bash[1589]: Updated "/home/core/.ssh/authorized_keys" Jul 2 10:23:40.961956 env[1557]: time="2024-07-02T10:23:40.961839880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 10:23:40.961956 env[1557]: time="2024-07-02T10:23:40.961870968Z" level=info msg="Connect containerd service" Jul 2 10:23:40.961956 env[1557]: time="2024-07-02T10:23:40.961891277Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 10:23:40.964059 env[1557]: time="2024-07-02T10:23:40.962159891Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 10:23:40.964059 env[1557]: time="2024-07-02T10:23:40.962237585Z" level=info msg="Start subscribing containerd event" Jul 2 10:23:40.964059 env[1557]: time="2024-07-02T10:23:40.962266642Z" level=info msg="Start recovering state" Jul 2 10:23:40.964059 env[1557]: time="2024-07-02T10:23:40.962287678Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 10:23:40.964059 env[1557]: time="2024-07-02T10:23:40.962299705Z" level=info msg="Start event monitor" Jul 2 10:23:40.964059 env[1557]: time="2024-07-02T10:23:40.962310746Z" level=info msg="Start snapshots syncer" Jul 2 10:23:40.964059 env[1557]: time="2024-07-02T10:23:40.962310417Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 2 10:23:40.964059 env[1557]: time="2024-07-02T10:23:40.962317301Z" level=info msg="Start cni network conf syncer for default" Jul 2 10:23:40.964059 env[1557]: time="2024-07-02T10:23:40.962344622Z" level=info msg="containerd successfully booted in 0.028576s" Jul 2 10:23:40.964059 env[1557]: time="2024-07-02T10:23:40.962353894Z" level=info msg="Start streaming server" Jul 2 10:23:40.974235 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Jul 2 10:23:40.983158 systemd[1]: Starting kubelet.service... Jul 2 10:23:40.991194 systemd[1]: Started locksmithd.service. Jul 2 10:23:40.999307 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 10:23:40.999328 systemd[1]: Reached target system-config.target. Jul 2 10:23:41.008645 systemd[1]: Starting systemd-logind.service... Jul 2 10:23:41.015322 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 10:23:41.015348 systemd[1]: Reached target user-config.target. Jul 2 10:23:41.024435 systemd[1]: Started containerd.service. Jul 2 10:23:41.032629 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 10:23:41.033139 systemd-logind[1596]: Watching system buttons on /dev/input/event3 (Power Button) Jul 2 10:23:41.033150 systemd-logind[1596]: Watching system buttons on /dev/input/event2 (Sleep Button) Jul 2 10:23:41.033160 systemd-logind[1596]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jul 2 10:23:41.033272 systemd-logind[1596]: New seat seat0. Jul 2 10:23:41.043080 systemd[1]: Started systemd-logind.service. 
Jul 2 10:23:41.052698 locksmithd[1595]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 10:23:41.154155 sshd_keygen[1548]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 10:23:41.166062 systemd[1]: Finished sshd-keygen.service. Jul 2 10:23:41.174254 systemd[1]: Starting issuegen.service... Jul 2 10:23:41.182566 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 10:23:41.182693 systemd[1]: Finished issuegen.service. Jul 2 10:23:41.191442 systemd[1]: Starting systemd-user-sessions.service... Jul 2 10:23:41.199577 tar[1554]: linux-amd64/LICENSE Jul 2 10:23:41.199620 tar[1554]: linux-amd64/README.md Jul 2 10:23:41.201217 systemd[1]: Finished systemd-user-sessions.service. Jul 2 10:23:41.209657 systemd[1]: Finished prepare-helm.service. Jul 2 10:23:41.219117 systemd[1]: Started getty@tty1.service. Jul 2 10:23:41.227171 systemd[1]: Started serial-getty@ttyS1.service. Jul 2 10:23:41.235395 systemd[1]: Reached target getty.target. Jul 2 10:23:41.321282 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Jul 2 10:23:41.349106 extend-filesystems[1532]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 2 10:23:41.349106 extend-filesystems[1532]: old_desc_blocks = 1, new_desc_blocks = 56 Jul 2 10:23:41.349106 extend-filesystems[1532]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Jul 2 10:23:41.386316 extend-filesystems[1521]: Resized filesystem in /dev/sda9 Jul 2 10:23:41.386316 extend-filesystems[1521]: Found sdb Jul 2 10:23:41.349609 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 10:23:41.349700 systemd[1]: Finished extend-filesystems.service. Jul 2 10:23:41.872297 systemd[1]: Started kubelet.service. 
Jul 2 10:23:42.715109 kubelet[1631]: E0702 10:23:42.715032 1631 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 10:23:42.716674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 10:23:42.716767 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 10:23:46.249641 login[1627]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 10:23:46.255172 login[1626]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 10:23:46.257212 systemd-logind[1596]: New session 1 of user core. Jul 2 10:23:46.257748 systemd[1]: Created slice user-500.slice. Jul 2 10:23:46.258297 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 10:23:46.259510 systemd-logind[1596]: New session 2 of user core. Jul 2 10:23:46.263538 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 10:23:46.264189 systemd[1]: Starting user@500.service... Jul 2 10:23:46.266297 (systemd)[1650]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:23:46.380105 systemd[1650]: Queued start job for default target default.target. Jul 2 10:23:46.381529 systemd[1650]: Reached target paths.target. Jul 2 10:23:46.381595 systemd[1650]: Reached target sockets.target. Jul 2 10:23:46.381639 systemd[1650]: Reached target timers.target. Jul 2 10:23:46.381679 systemd[1650]: Reached target basic.target. Jul 2 10:23:46.381795 systemd[1650]: Reached target default.target. Jul 2 10:23:46.381878 systemd[1650]: Startup finished in 112ms. Jul 2 10:23:46.381991 systemd[1]: Started user@500.service. Jul 2 10:23:46.384816 systemd[1]: Started session-1.scope. 
Jul 2 10:23:46.386652 systemd[1]: Started session-2.scope. Jul 2 10:23:46.585609 coreos-metadata[1513]: Jul 02 10:23:46.585 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Jul 2 10:23:46.586396 coreos-metadata[1516]: Jul 02 10:23:46.585 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Jul 2 10:23:47.054501 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Jul 2 10:23:47.054658 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Jul 2 10:23:47.585664 coreos-metadata[1513]: Jul 02 10:23:47.585 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 2 10:23:47.586484 coreos-metadata[1516]: Jul 02 10:23:47.585 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 2 10:23:47.617125 systemd[1]: Created slice system-sshd.slice. Jul 2 10:23:47.617888 systemd[1]: Started sshd@0-147.75.203.11:22-139.178.68.195:42328.service. Jul 2 10:23:47.633282 coreos-metadata[1513]: Jul 02 10:23:47.633 INFO Fetch successful Jul 2 10:23:47.634838 coreos-metadata[1516]: Jul 02 10:23:47.634 INFO Fetch successful Jul 2 10:23:47.659580 unknown[1513]: wrote ssh authorized keys file for user: core Jul 2 10:23:47.660492 systemd[1]: Finished coreos-metadata.service. Jul 2 10:23:47.661780 systemd[1]: Started packet-phone-home.service. Jul 2 10:23:47.663481 sshd[1671]: Accepted publickey for core from 139.178.68.195 port 42328 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:23:47.664221 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:23:47.666615 systemd-logind[1596]: New session 3 of user core. Jul 2 10:23:47.667182 systemd[1]: Started session-3.scope. 
Jul 2 10:23:47.667720 curl[1676]: % Total % Received % Xferd Average Speed Time Time Time Current
Jul 2 10:23:47.667720 curl[1676]: Dload Upload Total Spent Left Speed
Jul 2 10:23:47.671884 update-ssh-keys[1675]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 10:23:47.672108 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Jul 2 10:23:47.672338 systemd[1]: Reached target multi-user.target.
Jul 2 10:23:47.673005 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 2 10:23:47.676999 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 2 10:23:47.677070 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 2 10:23:47.677145 systemd[1]: Startup finished in 1.864s (kernel) + 17.619s (initrd) + 14.212s (userspace) = 33.696s.
Jul 2 10:23:47.715054 systemd[1]: Started sshd@1-147.75.203.11:22-139.178.68.195:42344.service.
Jul 2 10:23:47.746524 sshd[1682]: Accepted publickey for core from 139.178.68.195 port 42344 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY
Jul 2 10:23:47.747201 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:23:47.749468 systemd-logind[1596]: New session 4 of user core.
Jul 2 10:23:47.750064 systemd[1]: Started session-4.scope.
Jul 2 10:23:47.800927 sshd[1682]: pam_unix(sshd:session): session closed for user core
Jul 2 10:23:47.803606 systemd[1]: sshd@1-147.75.203.11:22-139.178.68.195:42344.service: Deactivated successfully.
Jul 2 10:23:47.804172 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 10:23:47.804805 systemd-logind[1596]: Session 4 logged out. Waiting for processes to exit.
Jul 2 10:23:47.806027 systemd[1]: Started sshd@2-147.75.203.11:22-139.178.68.195:42348.service.
Jul 2 10:23:47.806938 systemd-logind[1596]: Removed session 4.
Jul 2 10:23:47.844680 sshd[1688]: Accepted publickey for core from 139.178.68.195 port 42348 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY
Jul 2 10:23:47.845360 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:23:47.847576 systemd-logind[1596]: New session 5 of user core.
Jul 2 10:23:47.848232 systemd[1]: Started session-5.scope.
Jul 2 10:23:47.850243 curl[1676]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Jul 2 10:23:47.850750 systemd[1]: packet-phone-home.service: Deactivated successfully.
Jul 2 10:23:47.896370 sshd[1688]: pam_unix(sshd:session): session closed for user core
Jul 2 10:23:47.900446 systemd[1]: sshd@2-147.75.203.11:22-139.178.68.195:42348.service: Deactivated successfully.
Jul 2 10:23:47.901442 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 10:23:47.902379 systemd-logind[1596]: Session 5 logged out. Waiting for processes to exit.
Jul 2 10:23:47.904357 systemd[1]: Started sshd@3-147.75.203.11:22-139.178.68.195:42364.service.
Jul 2 10:23:47.906305 systemd-logind[1596]: Removed session 5.
Jul 2 10:23:47.993849 sshd[1694]: Accepted publickey for core from 139.178.68.195 port 42364 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY
Jul 2 10:23:47.994860 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:23:47.998043 systemd-logind[1596]: New session 6 of user core.
Jul 2 10:23:47.998944 systemd[1]: Started session-6.scope.
Jul 2 10:23:48.054431 sshd[1694]: pam_unix(sshd:session): session closed for user core
Jul 2 10:23:48.056082 systemd[1]: sshd@3-147.75.203.11:22-139.178.68.195:42364.service: Deactivated successfully.
Jul 2 10:23:48.056447 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 10:23:48.056783 systemd-logind[1596]: Session 6 logged out. Waiting for processes to exit.
Jul 2 10:23:48.057392 systemd[1]: Started sshd@4-147.75.203.11:22-139.178.68.195:42376.service.
Jul 2 10:23:48.057841 systemd-logind[1596]: Removed session 6.
Jul 2 10:23:48.087740 sshd[1700]: Accepted publickey for core from 139.178.68.195 port 42376 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY
Jul 2 10:23:48.088623 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:23:48.091544 systemd-logind[1596]: New session 7 of user core.
Jul 2 10:23:48.092341 systemd[1]: Started session-7.scope.
Jul 2 10:23:47.407001 systemd-resolved[1499]: Clock change detected. Flushing caches.
Jul 2 10:23:47.447896 systemd-journald[1248]: Time jumped backwards, rotating.
Jul 2 10:23:47.407063 systemd-timesyncd[1500]: Contacted time server 5.161.184.148:123 (0.flatcar.pool.ntp.org).
Jul 2 10:23:47.407125 systemd-timesyncd[1500]: Initial clock synchronization to Tue 2024-07-02 10:23:47.406897 UTC.
Jul 2 10:23:47.457368 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 10:23:47.457660 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 10:23:47.496311 systemd[1]: Starting docker.service...
Jul 2 10:23:47.553255 env[1719]: time="2024-07-02T10:23:47.553210179Z" level=info msg="Starting up"
Jul 2 10:23:47.554361 env[1719]: time="2024-07-02T10:23:47.554340001Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 2 10:23:47.554361 env[1719]: time="2024-07-02T10:23:47.554357203Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 2 10:23:47.554452 env[1719]: time="2024-07-02T10:23:47.554376768Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 2 10:23:47.554452 env[1719]: time="2024-07-02T10:23:47.554393565Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 2 10:23:47.555866 env[1719]: time="2024-07-02T10:23:47.555817432Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 2 10:23:47.555866 env[1719]: time="2024-07-02T10:23:47.555834910Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 2 10:23:47.555866 env[1719]: time="2024-07-02T10:23:47.555850462Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 2 10:23:47.555866 env[1719]: time="2024-07-02T10:23:47.555861448Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 2 10:23:47.722907 env[1719]: time="2024-07-02T10:23:47.722694256Z" level=info msg="Loading containers: start."
Jul 2 10:23:47.921255 kernel: Initializing XFRM netlink socket
Jul 2 10:23:47.978483 env[1719]: time="2024-07-02T10:23:47.978420004Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 2 10:23:48.018457 systemd-networkd[1312]: docker0: Link UP
Jul 2 10:23:48.022634 env[1719]: time="2024-07-02T10:23:48.022591785Z" level=info msg="Loading containers: done."
Jul 2 10:23:48.039875 env[1719]: time="2024-07-02T10:23:48.039829044Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 10:23:48.039938 env[1719]: time="2024-07-02T10:23:48.039920093Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 2 10:23:48.040019 env[1719]: time="2024-07-02T10:23:48.039986107Z" level=info msg="Daemon has completed initialization"
Jul 2 10:23:48.046953 systemd[1]: Started docker.service.
Jul 2 10:23:48.050850 env[1719]: time="2024-07-02T10:23:48.050800027Z" level=info msg="API listen on /run/docker.sock"
Jul 2 10:23:49.161565 env[1557]: time="2024-07-02T10:23:49.161421177Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\""
Jul 2 10:23:49.811525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3623390912.mount: Deactivated successfully.
Jul 2 10:23:51.624855 env[1557]: time="2024-07-02T10:23:51.624827995Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:51.625542 env[1557]: time="2024-07-02T10:23:51.625520621Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:51.626897 env[1557]: time="2024-07-02T10:23:51.626885438Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:51.627714 env[1557]: time="2024-07-02T10:23:51.627700684Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:51.628285 env[1557]: time="2024-07-02T10:23:51.628227791Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\""
Jul 2 10:23:51.636367 env[1557]: time="2024-07-02T10:23:51.636333900Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\""
Jul 2 10:23:52.005860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 10:23:52.006649 systemd[1]: Stopped kubelet.service.
Jul 2 10:23:52.009660 systemd[1]: Starting kubelet.service...
Jul 2 10:23:52.212734 systemd[1]: Started kubelet.service.
Jul 2 10:23:52.262127 kubelet[1892]: E0702 10:23:52.262029 1892 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 10:23:52.265362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 10:23:52.265470 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 10:23:53.905544 env[1557]: time="2024-07-02T10:23:53.905481416Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:53.906139 env[1557]: time="2024-07-02T10:23:53.906105585Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:53.907303 env[1557]: time="2024-07-02T10:23:53.907254982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:53.908317 env[1557]: time="2024-07-02T10:23:53.908272099Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:53.908800 env[1557]: time="2024-07-02T10:23:53.908755002Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\""
Jul 2 10:23:53.914993 env[1557]: time="2024-07-02T10:23:53.914929129Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\""
Jul 2 10:23:55.390741 env[1557]: time="2024-07-02T10:23:55.390685861Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:55.391369 env[1557]: time="2024-07-02T10:23:55.391328358Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:55.392455 env[1557]: time="2024-07-02T10:23:55.392420133Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:55.393409 env[1557]: time="2024-07-02T10:23:55.393358270Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:55.393836 env[1557]: time="2024-07-02T10:23:55.393786542Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\""
Jul 2 10:23:55.400601 env[1557]: time="2024-07-02T10:23:55.400552038Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\""
Jul 2 10:23:56.535288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008251134.mount: Deactivated successfully.
Jul 2 10:23:56.889543 env[1557]: time="2024-07-02T10:23:56.889428380Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:56.890406 env[1557]: time="2024-07-02T10:23:56.890383416Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:56.891867 env[1557]: time="2024-07-02T10:23:56.891795395Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:56.893198 env[1557]: time="2024-07-02T10:23:56.893164308Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:56.893831 env[1557]: time="2024-07-02T10:23:56.893763133Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\""
Jul 2 10:23:56.906166 env[1557]: time="2024-07-02T10:23:56.906123028Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 2 10:23:57.528078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1004553311.mount: Deactivated successfully.
Jul 2 10:23:58.415637 env[1557]: time="2024-07-02T10:23:58.415582067Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:58.416288 env[1557]: time="2024-07-02T10:23:58.416233371Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:58.417496 env[1557]: time="2024-07-02T10:23:58.417448081Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:58.418783 env[1557]: time="2024-07-02T10:23:58.418749463Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:58.419146 env[1557]: time="2024-07-02T10:23:58.419109525Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jul 2 10:23:58.424916 env[1557]: time="2024-07-02T10:23:58.424901034Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 10:23:58.913419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1920318293.mount: Deactivated successfully.
Jul 2 10:23:58.914805 env[1557]: time="2024-07-02T10:23:58.914786959Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:58.915376 env[1557]: time="2024-07-02T10:23:58.915362470Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:58.915970 env[1557]: time="2024-07-02T10:23:58.915959018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:58.917099 env[1557]: time="2024-07-02T10:23:58.917089189Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:23:58.917292 env[1557]: time="2024-07-02T10:23:58.917279972Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 10:23:58.922624 env[1557]: time="2024-07-02T10:23:58.922523493Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 10:23:59.467448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount941058279.mount: Deactivated successfully.
Jul 2 10:24:01.466060 env[1557]: time="2024-07-02T10:24:01.466005697Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:24:01.466775 env[1557]: time="2024-07-02T10:24:01.466742280Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:24:01.467649 env[1557]: time="2024-07-02T10:24:01.467598414Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:24:01.468626 env[1557]: time="2024-07-02T10:24:01.468579141Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:24:01.469586 env[1557]: time="2024-07-02T10:24:01.469544161Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jul 2 10:24:02.504369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 10:24:02.504498 systemd[1]: Stopped kubelet.service.
Jul 2 10:24:02.505362 systemd[1]: Starting kubelet.service...
Jul 2 10:24:02.682988 systemd[1]: Started kubelet.service.
Jul 2 10:24:02.712165 kubelet[2063]: E0702 10:24:02.712089 2063 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 10:24:02.713132 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 10:24:02.713248 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 10:24:03.067195 systemd[1]: Stopped kubelet.service.
Jul 2 10:24:03.068547 systemd[1]: Starting kubelet.service...
Jul 2 10:24:03.081103 systemd[1]: Reloading.
Jul 2 10:24:03.110390 /usr/lib/systemd/system-generators/torcx-generator[2105]: time="2024-07-02T10:24:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 10:24:03.110406 /usr/lib/systemd/system-generators/torcx-generator[2105]: time="2024-07-02T10:24:03Z" level=info msg="torcx already run"
Jul 2 10:24:03.162802 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 10:24:03.162810 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 10:24:03.174286 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 10:24:03.252879 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 10:24:03.253073 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 10:24:03.253588 systemd[1]: Stopped kubelet.service.
Jul 2 10:24:03.257372 systemd[1]: Starting kubelet.service...
Jul 2 10:24:03.444464 systemd[1]: Started kubelet.service.
Jul 2 10:24:03.491659 kubelet[2171]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 10:24:03.491659 kubelet[2171]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 10:24:03.491659 kubelet[2171]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 10:24:03.491905 kubelet[2171]: I0702 10:24:03.491655 2171 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 10:24:03.702262 kubelet[2171]: I0702 10:24:03.702016 2171 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jul 2 10:24:03.702262 kubelet[2171]: I0702 10:24:03.702210 2171 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 10:24:03.702740 kubelet[2171]: I0702 10:24:03.702656 2171 server.go:919] "Client rotation is on, will bootstrap in background"
Jul 2 10:24:03.717043 kubelet[2171]: E0702 10:24:03.717006 2171 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.75.203.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.75.203.11:6443: connect: connection refused
Jul 2 10:24:03.719975 kubelet[2171]: I0702 10:24:03.719936 2171 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 10:24:03.742697 kubelet[2171]: I0702 10:24:03.742661 2171 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 10:24:03.742796 kubelet[2171]: I0702 10:24:03.742767 2171 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 10:24:03.742889 kubelet[2171]: I0702 10:24:03.742853 2171 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 10:24:03.742889 kubelet[2171]: I0702 10:24:03.742866 2171 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 10:24:03.742889 kubelet[2171]: I0702 10:24:03.742872 2171 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 10:24:03.743641 kubelet[2171]: I0702 10:24:03.743605 2171 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 10:24:03.743678 kubelet[2171]: I0702 10:24:03.743656 2171 kubelet.go:396] "Attempting to sync node with API server"
Jul 2 10:24:03.743678 kubelet[2171]: I0702 10:24:03.743665 2171 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 10:24:03.743715 kubelet[2171]: I0702 10:24:03.743679 2171 kubelet.go:312] "Adding apiserver pod source"
Jul 2 10:24:03.743715 kubelet[2171]: I0702 10:24:03.743686 2171 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 10:24:03.745739 kubelet[2171]: W0702 10:24:03.745708 2171 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://147.75.203.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.203.11:6443: connect: connection refused
Jul 2 10:24:03.745794 kubelet[2171]: E0702 10:24:03.745760 2171 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.203.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.203.11:6443: connect: connection refused
Jul 2 10:24:03.745861 kubelet[2171]: W0702 10:24:03.745832 2171 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://147.75.203.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-539a8ddad9&limit=500&resourceVersion=0": dial tcp 147.75.203.11:6443: connect: connection refused
Jul 2 10:24:03.745899 kubelet[2171]: E0702 10:24:03.745872 2171 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.203.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-539a8ddad9&limit=500&resourceVersion=0": dial tcp 147.75.203.11:6443: connect: connection refused
Jul 2 10:24:03.747149 kubelet[2171]: I0702 10:24:03.747130 2171 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 2 10:24:03.750788 kubelet[2171]: I0702 10:24:03.750748 2171 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 10:24:03.751775 kubelet[2171]: W0702 10:24:03.751736 2171 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 10:24:03.752079 kubelet[2171]: I0702 10:24:03.752069 2171 server.go:1256] "Started kubelet"
Jul 2 10:24:03.752148 kubelet[2171]: I0702 10:24:03.752133 2171 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 10:24:03.752148 kubelet[2171]: I0702 10:24:03.752137 2171 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 10:24:03.752304 kubelet[2171]: I0702 10:24:03.752294 2171 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 10:24:03.755271 kubelet[2171]: E0702 10:24:03.755263 2171 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 10:24:03.758781 kubelet[2171]: E0702 10:24:03.758745 2171 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.203.11:6443/api/v1/namespaces/default/events\": dial tcp 147.75.203.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.5-a-539a8ddad9.17de5e58ee6844c3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.5-a-539a8ddad9,UID:ci-3510.3.5-a-539a8ddad9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.5-a-539a8ddad9,},FirstTimestamp:2024-07-02 10:24:03.752051907 +0000 UTC m=+0.304008878,LastTimestamp:2024-07-02 10:24:03.752051907 +0000 UTC m=+0.304008878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.5-a-539a8ddad9,}"
Jul 2 10:24:03.761957 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 2 10:24:03.761990 kubelet[2171]: I0702 10:24:03.761966 2171 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 10:24:03.762016 kubelet[2171]: I0702 10:24:03.761989 2171 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 10:24:03.762098 kubelet[2171]: I0702 10:24:03.762089 2171 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 10:24:03.762137 kubelet[2171]: I0702 10:24:03.762120 2171 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 10:24:03.762171 kubelet[2171]: I0702 10:24:03.762155 2171 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 10:24:03.762322 kubelet[2171]: E0702 10:24:03.762312 2171 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-539a8ddad9?timeout=10s\": dial tcp 147.75.203.11:6443: connect: connection refused" interval="200ms"
Jul 2 10:24:03.762322 kubelet[2171]: W0702 10:24:03.762298 2171 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://147.75.203.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.203.11:6443: connect: connection refused
Jul 2 10:24:03.762407 kubelet[2171]: E0702 10:24:03.762336 2171 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.203.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.203.11:6443: connect: connection refused
Jul 2 10:24:03.762407 kubelet[2171]: I0702 10:24:03.762394 2171 factory.go:221] Registration of the systemd container factory successfully
Jul 2 10:24:03.762473 kubelet[2171]: I0702 10:24:03.762439 2171 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 10:24:03.762850 kubelet[2171]: I0702 10:24:03.762841 2171 factory.go:221] Registration of the containerd container factory successfully
Jul 2 10:24:03.770311 kubelet[2171]: I0702 10:24:03.770300 2171 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 10:24:03.770831 kubelet[2171]: I0702 10:24:03.770823 2171 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 10:24:03.770867 kubelet[2171]: I0702 10:24:03.770835 2171 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 10:24:03.770867 kubelet[2171]: I0702 10:24:03.770845 2171 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 10:24:03.770903 kubelet[2171]: E0702 10:24:03.770880 2171 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 10:24:03.771075 kubelet[2171]: W0702 10:24:03.771054 2171 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://147.75.203.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.203.11:6443: connect: connection refused
Jul 2 10:24:03.771103 kubelet[2171]: E0702 10:24:03.771083 2171 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.203.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.203.11:6443: connect: connection refused
Jul 2 10:24:03.776779 kubelet[2171]: I0702 10:24:03.776751 2171 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 10:24:03.776779 kubelet[2171]: I0702 10:24:03.776760 2171 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 10:24:03.776779 kubelet[2171]: I0702 10:24:03.776774 2171 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 10:24:03.777658 kubelet[2171]: I0702
10:24:03.777650 2171 policy_none.go:49] "None policy: Start" Jul 2 10:24:03.777878 kubelet[2171]: I0702 10:24:03.777871 2171 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 10:24:03.777922 kubelet[2171]: I0702 10:24:03.777882 2171 state_mem.go:35] "Initializing new in-memory state store" Jul 2 10:24:03.780782 systemd[1]: Created slice kubepods.slice. Jul 2 10:24:03.783554 systemd[1]: Created slice kubepods-besteffort.slice. Jul 2 10:24:03.805136 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 10:24:03.806001 kubelet[2171]: I0702 10:24:03.805987 2171 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 10:24:03.806180 kubelet[2171]: I0702 10:24:03.806170 2171 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 10:24:03.806942 kubelet[2171]: E0702 10:24:03.806929 2171 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.5-a-539a8ddad9\" not found" Jul 2 10:24:03.867070 kubelet[2171]: I0702 10:24:03.867017 2171 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:03.867818 kubelet[2171]: E0702 10:24:03.867779 2171 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.203.11:6443/api/v1/nodes\": dial tcp 147.75.203.11:6443: connect: connection refused" node="ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:03.872184 kubelet[2171]: I0702 10:24:03.872073 2171 topology_manager.go:215] "Topology Admit Handler" podUID="773c59be8c159c2bc07d553751871f96" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:03.875445 kubelet[2171]: I0702 10:24:03.875373 2171 topology_manager.go:215] "Topology Admit Handler" podUID="0ae94102c5a288258f54aedcdf992104" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:03.879082 kubelet[2171]: I0702 
10:24:03.878982 2171 topology_manager.go:215] "Topology Admit Handler" podUID="d589206371a9353bc4c1cbee8bbcde83" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:03.892175 systemd[1]: Created slice kubepods-burstable-pod773c59be8c159c2bc07d553751871f96.slice. Jul 2 10:24:03.922596 systemd[1]: Created slice kubepods-burstable-pod0ae94102c5a288258f54aedcdf992104.slice. Jul 2 10:24:03.944652 systemd[1]: Created slice kubepods-burstable-podd589206371a9353bc4c1cbee8bbcde83.slice. Jul 2 10:24:03.963567 kubelet[2171]: I0702 10:24:03.963365 2171 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/773c59be8c159c2bc07d553751871f96-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-539a8ddad9\" (UID: \"773c59be8c159c2bc07d553751871f96\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:03.963567 kubelet[2171]: I0702 10:24:03.963461 2171 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/773c59be8c159c2bc07d553751871f96-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-539a8ddad9\" (UID: \"773c59be8c159c2bc07d553751871f96\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:03.963882 kubelet[2171]: E0702 10:24:03.963627 2171 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-539a8ddad9?timeout=10s\": dial tcp 147.75.203.11:6443: connect: connection refused" interval="400ms" Jul 2 10:24:04.064632 kubelet[2171]: I0702 10:24:04.064505 2171 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0ae94102c5a288258f54aedcdf992104-flexvolume-dir\") pod 
\"kube-controller-manager-ci-3510.3.5-a-539a8ddad9\" (UID: \"0ae94102c5a288258f54aedcdf992104\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:04.064632 kubelet[2171]: I0702 10:24:04.064620 2171 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ae94102c5a288258f54aedcdf992104-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-539a8ddad9\" (UID: \"0ae94102c5a288258f54aedcdf992104\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:04.065034 kubelet[2171]: I0702 10:24:04.064692 2171 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d589206371a9353bc4c1cbee8bbcde83-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-539a8ddad9\" (UID: \"d589206371a9353bc4c1cbee8bbcde83\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:04.065034 kubelet[2171]: I0702 10:24:04.064910 2171 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ae94102c5a288258f54aedcdf992104-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-539a8ddad9\" (UID: \"0ae94102c5a288258f54aedcdf992104\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:04.065034 kubelet[2171]: I0702 10:24:04.065015 2171 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/773c59be8c159c2bc07d553751871f96-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-539a8ddad9\" (UID: \"773c59be8c159c2bc07d553751871f96\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:04.065378 kubelet[2171]: I0702 10:24:04.065112 2171 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ae94102c5a288258f54aedcdf992104-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-539a8ddad9\" (UID: \"0ae94102c5a288258f54aedcdf992104\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:04.065378 kubelet[2171]: I0702 10:24:04.065202 2171 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ae94102c5a288258f54aedcdf992104-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-539a8ddad9\" (UID: \"0ae94102c5a288258f54aedcdf992104\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:04.072537 kubelet[2171]: I0702 10:24:04.072451 2171 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:04.073207 kubelet[2171]: E0702 10:24:04.073118 2171 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.203.11:6443/api/v1/nodes\": dial tcp 147.75.203.11:6443: connect: connection refused" node="ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:04.217291 env[1557]: time="2024-07-02T10:24:04.217064799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-539a8ddad9,Uid:773c59be8c159c2bc07d553751871f96,Namespace:kube-system,Attempt:0,}" Jul 2 10:24:04.243106 env[1557]: time="2024-07-02T10:24:04.242957584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-539a8ddad9,Uid:0ae94102c5a288258f54aedcdf992104,Namespace:kube-system,Attempt:0,}" Jul 2 10:24:04.250117 env[1557]: time="2024-07-02T10:24:04.250008462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-539a8ddad9,Uid:d589206371a9353bc4c1cbee8bbcde83,Namespace:kube-system,Attempt:0,}" Jul 2 10:24:04.364857 
kubelet[2171]: E0702 10:24:04.364763 2171 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-539a8ddad9?timeout=10s\": dial tcp 147.75.203.11:6443: connect: connection refused" interval="800ms" Jul 2 10:24:04.477350 kubelet[2171]: I0702 10:24:04.477139 2171 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:04.477915 kubelet[2171]: E0702 10:24:04.477829 2171 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.203.11:6443/api/v1/nodes\": dial tcp 147.75.203.11:6443: connect: connection refused" node="ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:04.724183 kubelet[2171]: W0702 10:24:04.724007 2171 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://147.75.203.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.203.11:6443: connect: connection refused Jul 2 10:24:04.724183 kubelet[2171]: E0702 10:24:04.724172 2171 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.203.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.203.11:6443: connect: connection refused Jul 2 10:24:04.818952 kubelet[2171]: W0702 10:24:04.818785 2171 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://147.75.203.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.203.11:6443: connect: connection refused Jul 2 10:24:04.818952 kubelet[2171]: W0702 10:24:04.818785 2171 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://147.75.203.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
147.75.203.11:6443: connect: connection refused Jul 2 10:24:04.818952 kubelet[2171]: E0702 10:24:04.818930 2171 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.203.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.203.11:6443: connect: connection refused Jul 2 10:24:04.818952 kubelet[2171]: E0702 10:24:04.818943 2171 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.203.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.203.11:6443: connect: connection refused Jul 2 10:24:04.852400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1455807849.mount: Deactivated successfully. Jul 2 10:24:04.853619 env[1557]: time="2024-07-02T10:24:04.853570886Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:24:04.854807 env[1557]: time="2024-07-02T10:24:04.854766082Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:24:04.855413 env[1557]: time="2024-07-02T10:24:04.855378699Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:24:04.856042 env[1557]: time="2024-07-02T10:24:04.856003421Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:24:04.856467 env[1557]: time="2024-07-02T10:24:04.856425830Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:24:04.857634 env[1557]: time="2024-07-02T10:24:04.857593789Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:24:04.859264 env[1557]: time="2024-07-02T10:24:04.859225177Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:24:04.861300 env[1557]: time="2024-07-02T10:24:04.861288031Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:24:04.862049 env[1557]: time="2024-07-02T10:24:04.862039917Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:24:04.862419 env[1557]: time="2024-07-02T10:24:04.862409655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:24:04.862781 env[1557]: time="2024-07-02T10:24:04.862757478Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:24:04.863163 env[1557]: time="2024-07-02T10:24:04.863150719Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:24:04.866809 env[1557]: time="2024-07-02T10:24:04.866776785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:24:04.866809 env[1557]: time="2024-07-02T10:24:04.866797409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:24:04.866809 env[1557]: time="2024-07-02T10:24:04.866804124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:24:04.866897 env[1557]: time="2024-07-02T10:24:04.866867442Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3740b82a408b7dc19ff3e490eddf8fa13d245c17a45b68b012e98fee9ba5077b pid=2220 runtime=io.containerd.runc.v2 Jul 2 10:24:04.868860 env[1557]: time="2024-07-02T10:24:04.868792803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:24:04.868860 env[1557]: time="2024-07-02T10:24:04.868816307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:24:04.868860 env[1557]: time="2024-07-02T10:24:04.868826191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:24:04.869015 env[1557]: time="2024-07-02T10:24:04.868955242Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c55450526e56c8a517ea41d6bbdd9bab0c3503ef0dffd9e6c87cb8cecb70c4ef pid=2242 runtime=io.containerd.runc.v2 Jul 2 10:24:04.869720 env[1557]: time="2024-07-02T10:24:04.869666822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:24:04.869720 env[1557]: time="2024-07-02T10:24:04.869683898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:24:04.869720 env[1557]: time="2024-07-02T10:24:04.869690559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:24:04.869799 env[1557]: time="2024-07-02T10:24:04.869752064Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/48f66ef0eb10f8622e034ef2d7f2c74268264d6212ef55b7311da37deb317667 pid=2254 runtime=io.containerd.runc.v2 Jul 2 10:24:04.873184 systemd[1]: Started cri-containerd-3740b82a408b7dc19ff3e490eddf8fa13d245c17a45b68b012e98fee9ba5077b.scope. Jul 2 10:24:04.875444 systemd[1]: Started cri-containerd-48f66ef0eb10f8622e034ef2d7f2c74268264d6212ef55b7311da37deb317667.scope. Jul 2 10:24:04.876087 systemd[1]: Started cri-containerd-c55450526e56c8a517ea41d6bbdd9bab0c3503ef0dffd9e6c87cb8cecb70c4ef.scope. 
Jul 2 10:24:04.896346 env[1557]: time="2024-07-02T10:24:04.896317581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-539a8ddad9,Uid:773c59be8c159c2bc07d553751871f96,Namespace:kube-system,Attempt:0,} returns sandbox id \"3740b82a408b7dc19ff3e490eddf8fa13d245c17a45b68b012e98fee9ba5077b\"" Jul 2 10:24:04.897196 env[1557]: time="2024-07-02T10:24:04.897176287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-539a8ddad9,Uid:0ae94102c5a288258f54aedcdf992104,Namespace:kube-system,Attempt:0,} returns sandbox id \"c55450526e56c8a517ea41d6bbdd9bab0c3503ef0dffd9e6c87cb8cecb70c4ef\"" Jul 2 10:24:04.898192 env[1557]: time="2024-07-02T10:24:04.898176956Z" level=info msg="CreateContainer within sandbox \"3740b82a408b7dc19ff3e490eddf8fa13d245c17a45b68b012e98fee9ba5077b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 10:24:04.898192 env[1557]: time="2024-07-02T10:24:04.898175415Z" level=info msg="CreateContainer within sandbox \"c55450526e56c8a517ea41d6bbdd9bab0c3503ef0dffd9e6c87cb8cecb70c4ef\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 10:24:04.898528 env[1557]: time="2024-07-02T10:24:04.898510622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-539a8ddad9,Uid:d589206371a9353bc4c1cbee8bbcde83,Namespace:kube-system,Attempt:0,} returns sandbox id \"48f66ef0eb10f8622e034ef2d7f2c74268264d6212ef55b7311da37deb317667\"" Jul 2 10:24:04.899406 env[1557]: time="2024-07-02T10:24:04.899392254Z" level=info msg="CreateContainer within sandbox \"48f66ef0eb10f8622e034ef2d7f2c74268264d6212ef55b7311da37deb317667\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 10:24:04.903401 env[1557]: time="2024-07-02T10:24:04.903386071Z" level=info msg="CreateContainer within sandbox \"3740b82a408b7dc19ff3e490eddf8fa13d245c17a45b68b012e98fee9ba5077b\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"036e719a1f827919025f0f9bda5eef84e3e68bf95e1e8bf5d03f333f7f011c2a\"" Jul 2 10:24:04.903830 env[1557]: time="2024-07-02T10:24:04.903811059Z" level=info msg="StartContainer for \"036e719a1f827919025f0f9bda5eef84e3e68bf95e1e8bf5d03f333f7f011c2a\"" Jul 2 10:24:04.905456 env[1557]: time="2024-07-02T10:24:04.905429285Z" level=info msg="CreateContainer within sandbox \"c55450526e56c8a517ea41d6bbdd9bab0c3503ef0dffd9e6c87cb8cecb70c4ef\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4aefe71bf4c7aaa9d3e63bb3f291fe5c24e0cc7b3e7fa58a1c2de12a9e8e499e\"" Jul 2 10:24:04.905673 env[1557]: time="2024-07-02T10:24:04.905661990Z" level=info msg="StartContainer for \"4aefe71bf4c7aaa9d3e63bb3f291fe5c24e0cc7b3e7fa58a1c2de12a9e8e499e\"" Jul 2 10:24:04.906604 env[1557]: time="2024-07-02T10:24:04.906584167Z" level=info msg="CreateContainer within sandbox \"48f66ef0eb10f8622e034ef2d7f2c74268264d6212ef55b7311da37deb317667\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9415081ae576489a55aad4d88496029cff3ecc139464ddb27c3c067ea9764b53\"" Jul 2 10:24:04.906765 env[1557]: time="2024-07-02T10:24:04.906754337Z" level=info msg="StartContainer for \"9415081ae576489a55aad4d88496029cff3ecc139464ddb27c3c067ea9764b53\"" Jul 2 10:24:04.911884 systemd[1]: Started cri-containerd-036e719a1f827919025f0f9bda5eef84e3e68bf95e1e8bf5d03f333f7f011c2a.scope. Jul 2 10:24:04.913338 systemd[1]: Started cri-containerd-4aefe71bf4c7aaa9d3e63bb3f291fe5c24e0cc7b3e7fa58a1c2de12a9e8e499e.scope. Jul 2 10:24:04.914647 systemd[1]: Started cri-containerd-9415081ae576489a55aad4d88496029cff3ecc139464ddb27c3c067ea9764b53.scope. 
Jul 2 10:24:04.914754 kubelet[2171]: W0702 10:24:04.914666 2171 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://147.75.203.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-539a8ddad9&limit=500&resourceVersion=0": dial tcp 147.75.203.11:6443: connect: connection refused Jul 2 10:24:04.914754 kubelet[2171]: E0702 10:24:04.914709 2171 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.203.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-539a8ddad9&limit=500&resourceVersion=0": dial tcp 147.75.203.11:6443: connect: connection refused Jul 2 10:24:04.936633 env[1557]: time="2024-07-02T10:24:04.936582345Z" level=info msg="StartContainer for \"4aefe71bf4c7aaa9d3e63bb3f291fe5c24e0cc7b3e7fa58a1c2de12a9e8e499e\" returns successfully" Jul 2 10:24:04.936743 env[1557]: time="2024-07-02T10:24:04.936683230Z" level=info msg="StartContainer for \"036e719a1f827919025f0f9bda5eef84e3e68bf95e1e8bf5d03f333f7f011c2a\" returns successfully" Jul 2 10:24:04.938848 env[1557]: time="2024-07-02T10:24:04.938815452Z" level=info msg="StartContainer for \"9415081ae576489a55aad4d88496029cff3ecc139464ddb27c3c067ea9764b53\" returns successfully" Jul 2 10:24:05.279776 kubelet[2171]: I0702 10:24:05.279763 2171 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:05.651768 kubelet[2171]: E0702 10:24:05.651707 2171 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.5-a-539a8ddad9\" not found" node="ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:05.764910 kubelet[2171]: I0702 10:24:05.764847 2171 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.5-a-539a8ddad9" Jul 2 10:24:05.793726 kubelet[2171]: E0702 10:24:05.793696 2171 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"ci-3510.3.5-a-539a8ddad9\" not found" Jul 2 10:24:05.894237 kubelet[2171]: E0702 10:24:05.894141 2171 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-539a8ddad9\" not found" Jul 2 10:24:05.994881 kubelet[2171]: E0702 10:24:05.994684 2171 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-539a8ddad9\" not found" Jul 2 10:24:06.095259 kubelet[2171]: E0702 10:24:06.095122 2171 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-539a8ddad9\" not found" Jul 2 10:24:06.195588 kubelet[2171]: E0702 10:24:06.195459 2171 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-539a8ddad9\" not found" Jul 2 10:24:06.296338 kubelet[2171]: E0702 10:24:06.296225 2171 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-539a8ddad9\" not found" Jul 2 10:24:06.396577 kubelet[2171]: E0702 10:24:06.396474 2171 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-539a8ddad9\" not found" Jul 2 10:24:06.497197 kubelet[2171]: E0702 10:24:06.497081 2171 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-539a8ddad9\" not found" Jul 2 10:24:06.598442 kubelet[2171]: E0702 10:24:06.598232 2171 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-539a8ddad9\" not found" Jul 2 10:24:06.698467 kubelet[2171]: E0702 10:24:06.698380 2171 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-539a8ddad9\" not found" Jul 2 10:24:06.798758 kubelet[2171]: E0702 10:24:06.798699 2171 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-539a8ddad9\" not found" Jul 2 10:24:06.899854 kubelet[2171]: E0702 10:24:06.899696 2171 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-539a8ddad9\" not found" Jul 2 10:24:07.000668 kubelet[2171]: E0702 10:24:07.000579 2171 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-539a8ddad9\" not found" Jul 2 10:24:07.101568 kubelet[2171]: E0702 10:24:07.101508 2171 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-539a8ddad9\" not found" Jul 2 10:24:07.746630 kubelet[2171]: I0702 10:24:07.746510 2171 apiserver.go:52] "Watching apiserver" Jul 2 10:24:07.762536 kubelet[2171]: I0702 10:24:07.762426 2171 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 10:24:08.647269 systemd[1]: Reloading. Jul 2 10:24:08.679642 /usr/lib/systemd/system-generators/torcx-generator[2505]: time="2024-07-02T10:24:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 10:24:08.679669 /usr/lib/systemd/system-generators/torcx-generator[2505]: time="2024-07-02T10:24:08Z" level=info msg="torcx already run" Jul 2 10:24:08.732947 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 10:24:08.732955 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 10:24:08.744207 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 10:24:08.810986 systemd[1]: Stopping kubelet.service... 
Jul 2 10:24:08.824700 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 10:24:08.824815 systemd[1]: Stopped kubelet.service. Jul 2 10:24:08.825697 systemd[1]: Starting kubelet.service... Jul 2 10:24:09.010978 systemd[1]: Started kubelet.service. Jul 2 10:24:09.048505 kubelet[2569]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 10:24:09.048505 kubelet[2569]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 10:24:09.048505 kubelet[2569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 10:24:09.048786 kubelet[2569]: I0702 10:24:09.048542 2569 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 10:24:09.053369 kubelet[2569]: I0702 10:24:09.053321 2569 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 10:24:09.053369 kubelet[2569]: I0702 10:24:09.053339 2569 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 10:24:09.053551 kubelet[2569]: I0702 10:24:09.053516 2569 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 10:24:09.054746 kubelet[2569]: I0702 10:24:09.054705 2569 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 2 10:24:09.056132 kubelet[2569]: I0702 10:24:09.056088 2569 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 10:24:09.076672 kubelet[2569]: I0702 10:24:09.076659 2569 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 10:24:09.076795 kubelet[2569]: I0702 10:24:09.076788 2569 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 10:24:09.076914 kubelet[2569]: I0702 10:24:09.076907 2569 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 10:24:09.076993 kubelet[2569]: I0702 10:24:09.076924 2569 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 10:24:09.076993 kubelet[2569]: I0702 10:24:09.076933 2569 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 10:24:09.076993 kubelet[2569]: I0702 10:24:09.076955 2569 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 10:24:09.077061 kubelet[2569]: I0702 10:24:09.077011 2569 kubelet.go:396] "Attempting to sync node with API server"
Jul 2 10:24:09.077061 kubelet[2569]: I0702 10:24:09.077022 2569 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 10:24:09.077061 kubelet[2569]: I0702 10:24:09.077039 2569 kubelet.go:312] "Adding apiserver pod source"
Jul 2 10:24:09.077061 kubelet[2569]: I0702 10:24:09.077048 2569 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 10:24:09.077861 kubelet[2569]: I0702 10:24:09.077833 2569 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 2 10:24:09.078079 kubelet[2569]: I0702 10:24:09.078066 2569 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 10:24:09.078983 kubelet[2569]: I0702 10:24:09.078970 2569 server.go:1256] "Started kubelet"
Jul 2 10:24:09.079036 kubelet[2569]: I0702 10:24:09.079027 2569 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 10:24:09.079077 kubelet[2569]: I0702 10:24:09.079042 2569 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 10:24:09.079202 kubelet[2569]: I0702 10:24:09.079188 2569 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 10:24:09.079781 kubelet[2569]: I0702 10:24:09.079770 2569 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 10:24:09.079836 kubelet[2569]: I0702 10:24:09.079826 2569 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 10:24:09.079836 kubelet[2569]: E0702 10:24:09.079835 2569 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-539a8ddad9\" not found"
Jul 2 10:24:09.079955 kubelet[2569]: I0702 10:24:09.079855 2569 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 10:24:09.080015 kubelet[2569]: I0702 10:24:09.080006 2569 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 10:24:09.080343 kubelet[2569]: I0702 10:24:09.080327 2569 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 10:24:09.080405 kubelet[2569]: E0702 10:24:09.080393 2569 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 10:24:09.081122 kubelet[2569]: I0702 10:24:09.081112 2569 factory.go:221] Registration of the containerd container factory successfully
Jul 2 10:24:09.081184 kubelet[2569]: I0702 10:24:09.081124 2569 factory.go:221] Registration of the systemd container factory successfully
Jul 2 10:24:09.081233 kubelet[2569]: I0702 10:24:09.081218 2569 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 10:24:09.087184 kubelet[2569]: I0702 10:24:09.087163 2569 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 10:24:09.087884 kubelet[2569]: I0702 10:24:09.087871 2569 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 10:24:09.087933 kubelet[2569]: I0702 10:24:09.087894 2569 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 10:24:09.087933 kubelet[2569]: I0702 10:24:09.087911 2569 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 10:24:09.087977 kubelet[2569]: E0702 10:24:09.087965 2569 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 10:24:09.088194 sudo[2599]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 2 10:24:09.088409 sudo[2599]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jul 2 10:24:09.100051 kubelet[2569]: I0702 10:24:09.100032 2569 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 10:24:09.100051 kubelet[2569]: I0702 10:24:09.100048 2569 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 10:24:09.100175 kubelet[2569]: I0702 10:24:09.100060 2569 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 10:24:09.100202 kubelet[2569]: I0702 10:24:09.100173 2569 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 10:24:09.100202 kubelet[2569]: I0702 10:24:09.100191 2569 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 10:24:09.100202 kubelet[2569]: I0702 10:24:09.100197 2569 policy_none.go:49] "None policy: Start"
Jul 2 10:24:09.100601 kubelet[2569]: I0702 10:24:09.100559 2569 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 10:24:09.100601 kubelet[2569]: I0702 10:24:09.100574 2569 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 10:24:09.100676 kubelet[2569]: I0702 10:24:09.100664 2569 state_mem.go:75] "Updated machine memory state"
Jul 2 10:24:09.102892 kubelet[2569]: I0702 10:24:09.102881 2569 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 10:24:09.103024 kubelet[2569]: I0702 10:24:09.103016 2569 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 10:24:09.181580 kubelet[2569]: I0702 10:24:09.181537 2569 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-539a8ddad9"
Jul 2 10:24:09.186450 kubelet[2569]: I0702 10:24:09.186409 2569 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.5-a-539a8ddad9"
Jul 2 10:24:09.186497 kubelet[2569]: I0702 10:24:09.186452 2569 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.5-a-539a8ddad9"
Jul 2 10:24:09.188510 kubelet[2569]: I0702 10:24:09.188486 2569 topology_manager.go:215] "Topology Admit Handler" podUID="773c59be8c159c2bc07d553751871f96" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-539a8ddad9"
Jul 2 10:24:09.188596 kubelet[2569]: I0702 10:24:09.188590 2569 topology_manager.go:215] "Topology Admit Handler" podUID="0ae94102c5a288258f54aedcdf992104" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-539a8ddad9"
Jul 2 10:24:09.188680 kubelet[2569]: I0702 10:24:09.188638 2569 topology_manager.go:215] "Topology Admit Handler" podUID="d589206371a9353bc4c1cbee8bbcde83" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-539a8ddad9"
Jul 2 10:24:09.193071 kubelet[2569]: W0702 10:24:09.193058 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 10:24:09.193930 kubelet[2569]: W0702 10:24:09.193920 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 10:24:09.193993 kubelet[2569]: W0702 10:24:09.193987 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 10:24:09.381213 kubelet[2569]: I0702 10:24:09.381097 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/773c59be8c159c2bc07d553751871f96-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-539a8ddad9\" (UID: \"773c59be8c159c2bc07d553751871f96\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-539a8ddad9"
Jul 2 10:24:09.381213 kubelet[2569]: I0702 10:24:09.381123 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/773c59be8c159c2bc07d553751871f96-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-539a8ddad9\" (UID: \"773c59be8c159c2bc07d553751871f96\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-539a8ddad9"
Jul 2 10:24:09.381351 kubelet[2569]: I0702 10:24:09.381213 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ae94102c5a288258f54aedcdf992104-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-539a8ddad9\" (UID: \"0ae94102c5a288258f54aedcdf992104\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-539a8ddad9"
Jul 2 10:24:09.381351 kubelet[2569]: I0702 10:24:09.381267 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ae94102c5a288258f54aedcdf992104-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-539a8ddad9\" (UID: \"0ae94102c5a288258f54aedcdf992104\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-539a8ddad9"
Jul 2 10:24:09.381351 kubelet[2569]: I0702 10:24:09.381295 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d589206371a9353bc4c1cbee8bbcde83-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-539a8ddad9\" (UID: \"d589206371a9353bc4c1cbee8bbcde83\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-539a8ddad9"
Jul 2 10:24:09.381351 kubelet[2569]: I0702 10:24:09.381309 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/773c59be8c159c2bc07d553751871f96-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-539a8ddad9\" (UID: \"773c59be8c159c2bc07d553751871f96\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-539a8ddad9"
Jul 2 10:24:09.381351 kubelet[2569]: I0702 10:24:09.381321 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ae94102c5a288258f54aedcdf992104-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-539a8ddad9\" (UID: \"0ae94102c5a288258f54aedcdf992104\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-539a8ddad9"
Jul 2 10:24:09.381457 kubelet[2569]: I0702 10:24:09.381332 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0ae94102c5a288258f54aedcdf992104-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-539a8ddad9\" (UID: \"0ae94102c5a288258f54aedcdf992104\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-539a8ddad9"
Jul 2 10:24:09.381457 kubelet[2569]: I0702 10:24:09.381343 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ae94102c5a288258f54aedcdf992104-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-539a8ddad9\" (UID: \"0ae94102c5a288258f54aedcdf992104\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-539a8ddad9"
Jul 2 10:24:09.424198 sudo[2599]: pam_unix(sudo:session): session closed for user root
Jul 2 10:24:10.078376 kubelet[2569]: I0702 10:24:10.078258 2569 apiserver.go:52] "Watching apiserver"
Jul 2 10:24:10.114245 kubelet[2569]: I0702 10:24:10.114200 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.5-a-539a8ddad9" podStartSLOduration=1.114151553 podStartE2EDuration="1.114151553s" podCreationTimestamp="2024-07-02 10:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:24:10.11413705 +0000 UTC m=+1.100332932" watchObservedRunningTime="2024-07-02 10:24:10.114151553 +0000 UTC m=+1.100347435"
Jul 2 10:24:10.123475 kubelet[2569]: I0702 10:24:10.123436 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.5-a-539a8ddad9" podStartSLOduration=1.123404101 podStartE2EDuration="1.123404101s" podCreationTimestamp="2024-07-02 10:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:24:10.11895021 +0000 UTC m=+1.105146091" watchObservedRunningTime="2024-07-02 10:24:10.123404101 +0000 UTC m=+1.109599984"
Jul 2 10:24:10.128510 kubelet[2569]: I0702 10:24:10.128457 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-539a8ddad9" podStartSLOduration=1.128416168 podStartE2EDuration="1.128416168s" podCreationTimestamp="2024-07-02 10:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:24:10.12350823 +0000 UTC m=+1.109704107" watchObservedRunningTime="2024-07-02 10:24:10.128416168 +0000 UTC m=+1.114612046"
Jul 2 10:24:10.180705 kubelet[2569]: I0702 10:24:10.180650 2569 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 10:24:12.014102 sudo[1704]: pam_unix(sudo:session): session closed for user root
Jul 2 10:24:12.015101 sshd[1700]: pam_unix(sshd:session): session closed for user core
Jul 2 10:24:12.017007 systemd[1]: sshd@4-147.75.203.11:22-139.178.68.195:42376.service: Deactivated successfully.
Jul 2 10:24:12.017595 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 10:24:12.017705 systemd[1]: session-7.scope: Consumed 3.419s CPU time.
Jul 2 10:24:12.018091 systemd-logind[1596]: Session 7 logged out. Waiting for processes to exit.
Jul 2 10:24:12.018927 systemd-logind[1596]: Removed session 7.
Jul 2 10:24:21.811177 kubelet[2569]: I0702 10:24:21.811105 2569 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 2 10:24:21.812355 env[1557]: time="2024-07-02T10:24:21.811881457Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 2 10:24:21.813184 kubelet[2569]: I0702 10:24:21.812356 2569 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 10:24:22.427729 kubelet[2569]: I0702 10:24:22.427640 2569 topology_manager.go:215] "Topology Admit Handler" podUID="e0bdf200-d7c3-4e71-aa69-cc7df655e816" podNamespace="kube-system" podName="kube-proxy-7clbk"
Jul 2 10:24:22.433906 kubelet[2569]: I0702 10:24:22.433857 2569 topology_manager.go:215] "Topology Admit Handler" podUID="fbbfeff7-e2e7-457d-988c-1c18bcb243b0" podNamespace="kube-system" podName="cilium-fjbc9"
Jul 2 10:24:22.439192 systemd[1]: Created slice kubepods-besteffort-pode0bdf200_d7c3_4e71_aa69_cc7df655e816.slice.
Jul 2 10:24:22.457465 systemd[1]: Created slice kubepods-burstable-podfbbfeff7_e2e7_457d_988c_1c18bcb243b0.slice.
Jul 2 10:24:22.471940 kubelet[2569]: I0702 10:24:22.471859 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-etc-cni-netd\") pod \"cilium-fjbc9\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " pod="kube-system/cilium-fjbc9"
Jul 2 10:24:22.472161 kubelet[2569]: I0702 10:24:22.471978 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-clustermesh-secrets\") pod \"cilium-fjbc9\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " pod="kube-system/cilium-fjbc9"
Jul 2 10:24:22.472161 kubelet[2569]: I0702 10:24:22.472046 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-cilium-config-path\") pod \"cilium-fjbc9\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " pod="kube-system/cilium-fjbc9"
Jul 2 10:24:22.472352 kubelet[2569]: I0702 10:24:22.472161 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-hubble-tls\") pod \"cilium-fjbc9\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " pod="kube-system/cilium-fjbc9"
Jul 2 10:24:22.472352 kubelet[2569]: I0702 10:24:22.472245 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0bdf200-d7c3-4e71-aa69-cc7df655e816-xtables-lock\") pod \"kube-proxy-7clbk\" (UID: \"e0bdf200-d7c3-4e71-aa69-cc7df655e816\") " pod="kube-system/kube-proxy-7clbk"
Jul 2 10:24:22.472506 kubelet[2569]: I0702 10:24:22.472381 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-cni-path\") pod \"cilium-fjbc9\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " pod="kube-system/cilium-fjbc9"
Jul 2 10:24:22.472591 kubelet[2569]: I0702 10:24:22.472526 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-xtables-lock\") pod \"cilium-fjbc9\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " pod="kube-system/cilium-fjbc9"
Jul 2 10:24:22.472667 kubelet[2569]: I0702 10:24:22.472622 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0bdf200-d7c3-4e71-aa69-cc7df655e816-kube-proxy\") pod \"kube-proxy-7clbk\" (UID: \"e0bdf200-d7c3-4e71-aa69-cc7df655e816\") " pod="kube-system/kube-proxy-7clbk"
Jul 2 10:24:22.472744 kubelet[2569]: I0702 10:24:22.472676 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-bpf-maps\") pod \"cilium-fjbc9\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " pod="kube-system/cilium-fjbc9"
Jul 2 10:24:22.472827 kubelet[2569]: I0702 10:24:22.472748 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-host-proc-sys-kernel\") pod \"cilium-fjbc9\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " pod="kube-system/cilium-fjbc9"
Jul 2 10:24:22.472903 kubelet[2569]: I0702 10:24:22.472872 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-cilium-run\") pod \"cilium-fjbc9\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " pod="kube-system/cilium-fjbc9"
Jul 2 10:24:22.472980 kubelet[2569]: I0702 10:24:22.472940 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-lib-modules\") pod \"cilium-fjbc9\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " pod="kube-system/cilium-fjbc9"
Jul 2 10:24:22.473056 kubelet[2569]: I0702 10:24:22.472993 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-host-proc-sys-net\") pod \"cilium-fjbc9\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " pod="kube-system/cilium-fjbc9"
Jul 2 10:24:22.473192 kubelet[2569]: I0702 10:24:22.473127 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b28w\" (UniqueName: \"kubernetes.io/projected/e0bdf200-d7c3-4e71-aa69-cc7df655e816-kube-api-access-8b28w\") pod \"kube-proxy-7clbk\" (UID: \"e0bdf200-d7c3-4e71-aa69-cc7df655e816\") " pod="kube-system/kube-proxy-7clbk"
Jul 2 10:24:22.473296 kubelet[2569]: I0702 10:24:22.473235 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-cilium-cgroup\") pod \"cilium-fjbc9\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " pod="kube-system/cilium-fjbc9"
Jul 2 10:24:22.473371 kubelet[2569]: I0702 10:24:22.473294 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsg74\" (UniqueName: \"kubernetes.io/projected/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-kube-api-access-jsg74\") pod \"cilium-fjbc9\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " pod="kube-system/cilium-fjbc9"
Jul 2 10:24:22.473445 kubelet[2569]: I0702 10:24:22.473386 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-hostproc\") pod \"cilium-fjbc9\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " pod="kube-system/cilium-fjbc9"
Jul 2 10:24:22.473523 kubelet[2569]: I0702 10:24:22.473443 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0bdf200-d7c3-4e71-aa69-cc7df655e816-lib-modules\") pod \"kube-proxy-7clbk\" (UID: \"e0bdf200-d7c3-4e71-aa69-cc7df655e816\") " pod="kube-system/kube-proxy-7clbk"
Jul 2 10:24:22.589926 kubelet[2569]: E0702 10:24:22.589846 2569 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 2 10:24:22.589926 kubelet[2569]: E0702 10:24:22.589934 2569 projected.go:200] Error preparing data for projected volume kube-api-access-jsg74 for pod kube-system/cilium-fjbc9: configmap "kube-root-ca.crt" not found
Jul 2 10:24:22.590531 kubelet[2569]: E0702 10:24:22.590030 2569 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 2 10:24:22.590531 kubelet[2569]: E0702 10:24:22.590107 2569 projected.go:200] Error preparing data for projected volume kube-api-access-8b28w for pod kube-system/kube-proxy-7clbk: configmap "kube-root-ca.crt" not found
Jul 2 10:24:22.590531 kubelet[2569]: E0702 10:24:22.590135 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-kube-api-access-jsg74 podName:fbbfeff7-e2e7-457d-988c-1c18bcb243b0 nodeName:}" failed. No retries permitted until 2024-07-02 10:24:23.090063074 +0000 UTC m=+14.076259122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jsg74" (UniqueName: "kubernetes.io/projected/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-kube-api-access-jsg74") pod "cilium-fjbc9" (UID: "fbbfeff7-e2e7-457d-988c-1c18bcb243b0") : configmap "kube-root-ca.crt" not found
Jul 2 10:24:22.590531 kubelet[2569]: E0702 10:24:22.590275 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0bdf200-d7c3-4e71-aa69-cc7df655e816-kube-api-access-8b28w podName:e0bdf200-d7c3-4e71-aa69-cc7df655e816 nodeName:}" failed. No retries permitted until 2024-07-02 10:24:23.09021668 +0000 UTC m=+14.076412630 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8b28w" (UniqueName: "kubernetes.io/projected/e0bdf200-d7c3-4e71-aa69-cc7df655e816-kube-api-access-8b28w") pod "kube-proxy-7clbk" (UID: "e0bdf200-d7c3-4e71-aa69-cc7df655e816") : configmap "kube-root-ca.crt" not found
Jul 2 10:24:22.919958 kubelet[2569]: I0702 10:24:22.919873 2569 topology_manager.go:215] "Topology Admit Handler" podUID="d466e82f-8ae3-40d7-a9d6-1867da7d990b" podNamespace="kube-system" podName="cilium-operator-5cc964979-7jxkv"
Jul 2 10:24:22.932591 systemd[1]: Created slice kubepods-besteffort-podd466e82f_8ae3_40d7_a9d6_1867da7d990b.slice.
Jul 2 10:24:22.977278 kubelet[2569]: I0702 10:24:22.977246 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d466e82f-8ae3-40d7-a9d6-1867da7d990b-cilium-config-path\") pod \"cilium-operator-5cc964979-7jxkv\" (UID: \"d466e82f-8ae3-40d7-a9d6-1867da7d990b\") " pod="kube-system/cilium-operator-5cc964979-7jxkv"
Jul 2 10:24:22.977431 kubelet[2569]: I0702 10:24:22.977315 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjqlh\" (UniqueName: \"kubernetes.io/projected/d466e82f-8ae3-40d7-a9d6-1867da7d990b-kube-api-access-rjqlh\") pod \"cilium-operator-5cc964979-7jxkv\" (UID: \"d466e82f-8ae3-40d7-a9d6-1867da7d990b\") " pod="kube-system/cilium-operator-5cc964979-7jxkv"
Jul 2 10:24:23.238033 env[1557]: time="2024-07-02T10:24:23.237814707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-7jxkv,Uid:d466e82f-8ae3-40d7-a9d6-1867da7d990b,Namespace:kube-system,Attempt:0,}"
Jul 2 10:24:23.264488 env[1557]: time="2024-07-02T10:24:23.264315592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 10:24:23.264488 env[1557]: time="2024-07-02T10:24:23.264411725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 10:24:23.264488 env[1557]: time="2024-07-02T10:24:23.264450710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 10:24:23.265029 env[1557]: time="2024-07-02T10:24:23.264866917Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/afe50d810489de9caf9c92265f90b8d798c115b72a833a7536c9354e1617dd19 pid=2726 runtime=io.containerd.runc.v2
Jul 2 10:24:23.294065 systemd[1]: Started cri-containerd-afe50d810489de9caf9c92265f90b8d798c115b72a833a7536c9354e1617dd19.scope.
Jul 2 10:24:23.354944 env[1557]: time="2024-07-02T10:24:23.354851137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7clbk,Uid:e0bdf200-d7c3-4e71-aa69-cc7df655e816,Namespace:kube-system,Attempt:0,}"
Jul 2 10:24:23.361249 env[1557]: time="2024-07-02T10:24:23.361217429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fjbc9,Uid:fbbfeff7-e2e7-457d-988c-1c18bcb243b0,Namespace:kube-system,Attempt:0,}"
Jul 2 10:24:23.361841 env[1557]: time="2024-07-02T10:24:23.361815053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 10:24:23.361841 env[1557]: time="2024-07-02T10:24:23.361837137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 10:24:23.361909 env[1557]: time="2024-07-02T10:24:23.361844096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 10:24:23.361930 env[1557]: time="2024-07-02T10:24:23.361910009Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/239b4cc0f52cdeadf19929509221c09bdd872c260f5d5a10a21653d12eeee433 pid=2760 runtime=io.containerd.runc.v2
Jul 2 10:24:23.364847 env[1557]: time="2024-07-02T10:24:23.364797671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-7jxkv,Uid:d466e82f-8ae3-40d7-a9d6-1867da7d990b,Namespace:kube-system,Attempt:0,} returns sandbox id \"afe50d810489de9caf9c92265f90b8d798c115b72a833a7536c9354e1617dd19\""
Jul 2 10:24:23.365833 env[1557]: time="2024-07-02T10:24:23.365814975Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 2 10:24:23.367435 systemd[1]: Started cri-containerd-239b4cc0f52cdeadf19929509221c09bdd872c260f5d5a10a21653d12eeee433.scope.
Jul 2 10:24:23.367557 env[1557]: time="2024-07-02T10:24:23.367435775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 10:24:23.367557 env[1557]: time="2024-07-02T10:24:23.367463835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 10:24:23.367557 env[1557]: time="2024-07-02T10:24:23.367470968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 10:24:23.367557 env[1557]: time="2024-07-02T10:24:23.367538057Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623 pid=2791 runtime=io.containerd.runc.v2
Jul 2 10:24:23.373047 systemd[1]: Started cri-containerd-a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623.scope.
Jul 2 10:24:23.379101 env[1557]: time="2024-07-02T10:24:23.379076890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7clbk,Uid:e0bdf200-d7c3-4e71-aa69-cc7df655e816,Namespace:kube-system,Attempt:0,} returns sandbox id \"239b4cc0f52cdeadf19929509221c09bdd872c260f5d5a10a21653d12eeee433\""
Jul 2 10:24:23.380314 env[1557]: time="2024-07-02T10:24:23.380295296Z" level=info msg="CreateContainer within sandbox \"239b4cc0f52cdeadf19929509221c09bdd872c260f5d5a10a21653d12eeee433\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 10:24:23.383814 env[1557]: time="2024-07-02T10:24:23.383792998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fjbc9,Uid:fbbfeff7-e2e7-457d-988c-1c18bcb243b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623\""
Jul 2 10:24:23.386032 env[1557]: time="2024-07-02T10:24:23.385988045Z" level=info msg="CreateContainer within sandbox \"239b4cc0f52cdeadf19929509221c09bdd872c260f5d5a10a21653d12eeee433\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"80e46947b6f42f5aed10174e631d628164cf9352f6821d50486ea9fc33f18c88\""
Jul 2 10:24:23.386257 env[1557]: time="2024-07-02T10:24:23.386208548Z" level=info msg="StartContainer for \"80e46947b6f42f5aed10174e631d628164cf9352f6821d50486ea9fc33f18c88\""
Jul 2 10:24:23.393942 systemd[1]: Started cri-containerd-80e46947b6f42f5aed10174e631d628164cf9352f6821d50486ea9fc33f18c88.scope.
Jul 2 10:24:23.407576 env[1557]: time="2024-07-02T10:24:23.407521776Z" level=info msg="StartContainer for \"80e46947b6f42f5aed10174e631d628164cf9352f6821d50486ea9fc33f18c88\" returns successfully"
Jul 2 10:24:24.719551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1822100425.mount: Deactivated successfully.
Jul 2 10:24:25.170176 env[1557]: time="2024-07-02T10:24:25.170106906Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:24:25.170770 env[1557]: time="2024-07-02T10:24:25.170734205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:24:25.171519 env[1557]: time="2024-07-02T10:24:25.171506544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:24:25.171914 env[1557]: time="2024-07-02T10:24:25.171897798Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 2 10:24:25.172246 env[1557]: time="2024-07-02T10:24:25.172219797Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 2 10:24:25.173061 env[1557]: time="2024-07-02T10:24:25.173044672Z" level=info msg="CreateContainer within sandbox \"afe50d810489de9caf9c92265f90b8d798c115b72a833a7536c9354e1617dd19\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 2 10:24:25.178057 env[1557]: time="2024-07-02T10:24:25.178017156Z" level=info msg="CreateContainer within sandbox \"afe50d810489de9caf9c92265f90b8d798c115b72a833a7536c9354e1617dd19\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5\""
Jul 2 10:24:25.178284 env[1557]: time="2024-07-02T10:24:25.178239105Z" level=info msg="StartContainer for \"90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5\""
Jul 2 10:24:25.186286 systemd[1]: Started cri-containerd-90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5.scope.
Jul 2 10:24:25.199333 env[1557]: time="2024-07-02T10:24:25.199282417Z" level=info msg="StartContainer for \"90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5\" returns successfully"
Jul 2 10:24:25.265211 update_engine[1551]: I0702 10:24:25.265144 1551 update_attempter.cc:509] Updating boot flags...
Jul 2 10:24:26.160947 kubelet[2569]: I0702 10:24:26.160834 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-7jxkv" podStartSLOduration=2.3542097220000002 podStartE2EDuration="4.160736862s" podCreationTimestamp="2024-07-02 10:24:22 +0000 UTC" firstStartedPulling="2024-07-02 10:24:23.365596515 +0000 UTC m=+14.351792397" lastFinishedPulling="2024-07-02 10:24:25.172123659 +0000 UTC m=+16.158319537" observedRunningTime="2024-07-02 10:24:26.160527074 +0000 UTC m=+17.146723032" watchObservedRunningTime="2024-07-02 10:24:26.160736862 +0000 UTC m=+17.146932813"
Jul 2 10:24:26.162048 kubelet[2569]: I0702 10:24:26.161266 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7clbk" podStartSLOduration=4.161202373 podStartE2EDuration="4.161202373s" podCreationTimestamp="2024-07-02 10:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:24:24.152978003 +0000 UTC m=+15.139173955" watchObservedRunningTime="2024-07-02 10:24:26.161202373 +0000 UTC m=+17.147398304"
Jul 2 10:24:29.030421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080792594.mount: Deactivated successfully.
Jul 2 10:24:30.714801 env[1557]: time="2024-07-02T10:24:30.714772849Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:24:30.715504 env[1557]: time="2024-07-02T10:24:30.715476284Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:24:30.716327 env[1557]: time="2024-07-02T10:24:30.716313692Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:24:30.716841 env[1557]: time="2024-07-02T10:24:30.716826477Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jul 2 10:24:30.718099 env[1557]: time="2024-07-02T10:24:30.718083256Z" level=info msg="CreateContainer within sandbox \"a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 10:24:30.723015 env[1557]: time="2024-07-02T10:24:30.722994083Z" level=info msg="CreateContainer within sandbox \"a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"3bff2b8a5834991e2a537afc46ea1701b6357db2942a0b1b328c72a8d2eab75d\"" Jul 2 10:24:30.723411 env[1557]: time="2024-07-02T10:24:30.723364317Z" level=info msg="StartContainer for \"3bff2b8a5834991e2a537afc46ea1701b6357db2942a0b1b328c72a8d2eab75d\"" Jul 2 10:24:30.734296 systemd[1]: Started cri-containerd-3bff2b8a5834991e2a537afc46ea1701b6357db2942a0b1b328c72a8d2eab75d.scope. Jul 2 10:24:30.745047 env[1557]: time="2024-07-02T10:24:30.745021812Z" level=info msg="StartContainer for \"3bff2b8a5834991e2a537afc46ea1701b6357db2942a0b1b328c72a8d2eab75d\" returns successfully" Jul 2 10:24:30.749812 systemd[1]: cri-containerd-3bff2b8a5834991e2a537afc46ea1701b6357db2942a0b1b328c72a8d2eab75d.scope: Deactivated successfully. Jul 2 10:24:31.727116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bff2b8a5834991e2a537afc46ea1701b6357db2942a0b1b328c72a8d2eab75d-rootfs.mount: Deactivated successfully. Jul 2 10:24:31.920340 env[1557]: time="2024-07-02T10:24:31.920214433Z" level=info msg="shim disconnected" id=3bff2b8a5834991e2a537afc46ea1701b6357db2942a0b1b328c72a8d2eab75d Jul 2 10:24:31.921156 env[1557]: time="2024-07-02T10:24:31.920340790Z" level=warning msg="cleaning up after shim disconnected" id=3bff2b8a5834991e2a537afc46ea1701b6357db2942a0b1b328c72a8d2eab75d namespace=k8s.io Jul 2 10:24:31.921156 env[1557]: time="2024-07-02T10:24:31.920383972Z" level=info msg="cleaning up dead shim" Jul 2 10:24:31.935824 env[1557]: time="2024-07-02T10:24:31.935721550Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:24:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3124 runtime=io.containerd.runc.v2\n" Jul 2 10:24:32.161512 env[1557]: time="2024-07-02T10:24:32.161427793Z" level=info msg="CreateContainer within sandbox \"a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 10:24:32.178670 env[1557]: time="2024-07-02T10:24:32.178009377Z" level=info 
msg="CreateContainer within sandbox \"a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d4f730a962474144e99dcc20d08bd250da81b527c59f5b41f1975b08dda9d5e5\"" Jul 2 10:24:32.179668 env[1557]: time="2024-07-02T10:24:32.179588946Z" level=info msg="StartContainer for \"d4f730a962474144e99dcc20d08bd250da81b527c59f5b41f1975b08dda9d5e5\"" Jul 2 10:24:32.219979 systemd[1]: Started cri-containerd-d4f730a962474144e99dcc20d08bd250da81b527c59f5b41f1975b08dda9d5e5.scope. Jul 2 10:24:32.262465 env[1557]: time="2024-07-02T10:24:32.262369955Z" level=info msg="StartContainer for \"d4f730a962474144e99dcc20d08bd250da81b527c59f5b41f1975b08dda9d5e5\" returns successfully" Jul 2 10:24:32.278690 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 10:24:32.279473 systemd[1]: Stopped systemd-sysctl.service. Jul 2 10:24:32.279830 systemd[1]: Stopping systemd-sysctl.service... Jul 2 10:24:32.282502 systemd[1]: Starting systemd-sysctl.service... Jul 2 10:24:32.283139 systemd[1]: cri-containerd-d4f730a962474144e99dcc20d08bd250da81b527c59f5b41f1975b08dda9d5e5.scope: Deactivated successfully. Jul 2 10:24:32.295245 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 10:24:32.309023 env[1557]: time="2024-07-02T10:24:32.308918846Z" level=info msg="shim disconnected" id=d4f730a962474144e99dcc20d08bd250da81b527c59f5b41f1975b08dda9d5e5 Jul 2 10:24:32.309023 env[1557]: time="2024-07-02T10:24:32.308987563Z" level=warning msg="cleaning up after shim disconnected" id=d4f730a962474144e99dcc20d08bd250da81b527c59f5b41f1975b08dda9d5e5 namespace=k8s.io Jul 2 10:24:32.309023 env[1557]: time="2024-07-02T10:24:32.309006527Z" level=info msg="cleaning up dead shim" Jul 2 10:24:32.319834 env[1557]: time="2024-07-02T10:24:32.319748884Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:24:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3189 runtime=io.containerd.runc.v2\n" Jul 2 10:24:32.726397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4f730a962474144e99dcc20d08bd250da81b527c59f5b41f1975b08dda9d5e5-rootfs.mount: Deactivated successfully. Jul 2 10:24:33.169134 env[1557]: time="2024-07-02T10:24:33.169029935Z" level=info msg="CreateContainer within sandbox \"a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 10:24:33.182339 env[1557]: time="2024-07-02T10:24:33.182276351Z" level=info msg="CreateContainer within sandbox \"a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c79005d029e93b6d97ff3a2fb3e209538e727beb12bdaba153da429ec7af09b2\"" Jul 2 10:24:33.182739 env[1557]: time="2024-07-02T10:24:33.182684815Z" level=info msg="StartContainer for \"c79005d029e93b6d97ff3a2fb3e209538e727beb12bdaba153da429ec7af09b2\"" Jul 2 10:24:33.183552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851678838.mount: Deactivated successfully. Jul 2 10:24:33.192110 systemd[1]: Started cri-containerd-c79005d029e93b6d97ff3a2fb3e209538e727beb12bdaba153da429ec7af09b2.scope. 
Jul 2 10:24:33.203910 env[1557]: time="2024-07-02T10:24:33.203884524Z" level=info msg="StartContainer for \"c79005d029e93b6d97ff3a2fb3e209538e727beb12bdaba153da429ec7af09b2\" returns successfully" Jul 2 10:24:33.205380 systemd[1]: cri-containerd-c79005d029e93b6d97ff3a2fb3e209538e727beb12bdaba153da429ec7af09b2.scope: Deactivated successfully. Jul 2 10:24:33.232603 env[1557]: time="2024-07-02T10:24:33.232575016Z" level=info msg="shim disconnected" id=c79005d029e93b6d97ff3a2fb3e209538e727beb12bdaba153da429ec7af09b2 Jul 2 10:24:33.232603 env[1557]: time="2024-07-02T10:24:33.232603961Z" level=warning msg="cleaning up after shim disconnected" id=c79005d029e93b6d97ff3a2fb3e209538e727beb12bdaba153da429ec7af09b2 namespace=k8s.io Jul 2 10:24:33.232727 env[1557]: time="2024-07-02T10:24:33.232610854Z" level=info msg="cleaning up dead shim" Jul 2 10:24:33.236887 env[1557]: time="2024-07-02T10:24:33.236835380Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:24:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3245 runtime=io.containerd.runc.v2\n" Jul 2 10:24:33.726748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c79005d029e93b6d97ff3a2fb3e209538e727beb12bdaba153da429ec7af09b2-rootfs.mount: Deactivated successfully. 
Jul 2 10:24:34.179123 env[1557]: time="2024-07-02T10:24:34.179034787Z" level=info msg="CreateContainer within sandbox \"a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 10:24:34.194799 env[1557]: time="2024-07-02T10:24:34.194677011Z" level=info msg="CreateContainer within sandbox \"a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1eab0e7ac35ad3074cdb5f6855b1fab2be86a2136da517f8f0f8072b1b2e85d7\"" Jul 2 10:24:34.195454 env[1557]: time="2024-07-02T10:24:34.195438234Z" level=info msg="StartContainer for \"1eab0e7ac35ad3074cdb5f6855b1fab2be86a2136da517f8f0f8072b1b2e85d7\"" Jul 2 10:24:34.204046 systemd[1]: Started cri-containerd-1eab0e7ac35ad3074cdb5f6855b1fab2be86a2136da517f8f0f8072b1b2e85d7.scope. Jul 2 10:24:34.214367 env[1557]: time="2024-07-02T10:24:34.214343453Z" level=info msg="StartContainer for \"1eab0e7ac35ad3074cdb5f6855b1fab2be86a2136da517f8f0f8072b1b2e85d7\" returns successfully" Jul 2 10:24:34.214787 systemd[1]: cri-containerd-1eab0e7ac35ad3074cdb5f6855b1fab2be86a2136da517f8f0f8072b1b2e85d7.scope: Deactivated successfully. 
Jul 2 10:24:34.223881 env[1557]: time="2024-07-02T10:24:34.223830410Z" level=info msg="shim disconnected" id=1eab0e7ac35ad3074cdb5f6855b1fab2be86a2136da517f8f0f8072b1b2e85d7 Jul 2 10:24:34.223881 env[1557]: time="2024-07-02T10:24:34.223857773Z" level=warning msg="cleaning up after shim disconnected" id=1eab0e7ac35ad3074cdb5f6855b1fab2be86a2136da517f8f0f8072b1b2e85d7 namespace=k8s.io Jul 2 10:24:34.223881 env[1557]: time="2024-07-02T10:24:34.223863706Z" level=info msg="cleaning up dead shim" Jul 2 10:24:34.227215 env[1557]: time="2024-07-02T10:24:34.227189081Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:24:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3298 runtime=io.containerd.runc.v2\n" Jul 2 10:24:34.727133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1eab0e7ac35ad3074cdb5f6855b1fab2be86a2136da517f8f0f8072b1b2e85d7-rootfs.mount: Deactivated successfully. Jul 2 10:24:35.185997 env[1557]: time="2024-07-02T10:24:35.185895360Z" level=info msg="CreateContainer within sandbox \"a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 10:24:35.205845 env[1557]: time="2024-07-02T10:24:35.205821170Z" level=info msg="CreateContainer within sandbox \"a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"67c7c54b5a2f5ce9d2816c95f92020fa5b9acf08681b3182ee8f2eda2ebc5bcf\"" Jul 2 10:24:35.206182 env[1557]: time="2024-07-02T10:24:35.206167146Z" level=info msg="StartContainer for \"67c7c54b5a2f5ce9d2816c95f92020fa5b9acf08681b3182ee8f2eda2ebc5bcf\"" Jul 2 10:24:35.215246 systemd[1]: Started cri-containerd-67c7c54b5a2f5ce9d2816c95f92020fa5b9acf08681b3182ee8f2eda2ebc5bcf.scope. 
Jul 2 10:24:35.227260 env[1557]: time="2024-07-02T10:24:35.227230404Z" level=info msg="StartContainer for \"67c7c54b5a2f5ce9d2816c95f92020fa5b9acf08681b3182ee8f2eda2ebc5bcf\" returns successfully" Jul 2 10:24:35.264528 kubelet[2569]: I0702 10:24:35.264488 2569 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 10:24:35.280155 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Jul 2 10:24:35.280311 kubelet[2569]: I0702 10:24:35.280296 2569 topology_manager.go:215] "Topology Admit Handler" podUID="393194a4-edcd-49cc-a54f-0e5ea71d7fd0" podNamespace="kube-system" podName="coredns-76f75df574-qkvbs" Jul 2 10:24:35.280413 kubelet[2569]: I0702 10:24:35.280404 2569 topology_manager.go:215] "Topology Admit Handler" podUID="93f7fc7b-5590-426f-acc7-656599c6c4d8" podNamespace="kube-system" podName="coredns-76f75df574-mbv7n" Jul 2 10:24:35.283237 systemd[1]: Created slice kubepods-burstable-pod393194a4_edcd_49cc_a54f_0e5ea71d7fd0.slice. Jul 2 10:24:35.285292 systemd[1]: Created slice kubepods-burstable-pod93f7fc7b_5590_426f_acc7_656599c6c4d8.slice. 
Jul 2 10:24:35.362063 kubelet[2569]: I0702 10:24:35.362041 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbcj8\" (UniqueName: \"kubernetes.io/projected/393194a4-edcd-49cc-a54f-0e5ea71d7fd0-kube-api-access-hbcj8\") pod \"coredns-76f75df574-qkvbs\" (UID: \"393194a4-edcd-49cc-a54f-0e5ea71d7fd0\") " pod="kube-system/coredns-76f75df574-qkvbs" Jul 2 10:24:35.362167 kubelet[2569]: I0702 10:24:35.362086 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mq4d\" (UniqueName: \"kubernetes.io/projected/93f7fc7b-5590-426f-acc7-656599c6c4d8-kube-api-access-2mq4d\") pod \"coredns-76f75df574-mbv7n\" (UID: \"93f7fc7b-5590-426f-acc7-656599c6c4d8\") " pod="kube-system/coredns-76f75df574-mbv7n" Jul 2 10:24:35.362167 kubelet[2569]: I0702 10:24:35.362111 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93f7fc7b-5590-426f-acc7-656599c6c4d8-config-volume\") pod \"coredns-76f75df574-mbv7n\" (UID: \"93f7fc7b-5590-426f-acc7-656599c6c4d8\") " pod="kube-system/coredns-76f75df574-mbv7n" Jul 2 10:24:35.362167 kubelet[2569]: I0702 10:24:35.362133 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/393194a4-edcd-49cc-a54f-0e5ea71d7fd0-config-volume\") pod \"coredns-76f75df574-qkvbs\" (UID: \"393194a4-edcd-49cc-a54f-0e5ea71d7fd0\") " pod="kube-system/coredns-76f75df574-qkvbs" Jul 2 10:24:35.434216 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Jul 2 10:24:35.586294 env[1557]: time="2024-07-02T10:24:35.586137207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qkvbs,Uid:393194a4-edcd-49cc-a54f-0e5ea71d7fd0,Namespace:kube-system,Attempt:0,}" Jul 2 10:24:35.588311 env[1557]: time="2024-07-02T10:24:35.588233204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mbv7n,Uid:93f7fc7b-5590-426f-acc7-656599c6c4d8,Namespace:kube-system,Attempt:0,}" Jul 2 10:24:36.208282 kubelet[2569]: I0702 10:24:36.208261 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-fjbc9" podStartSLOduration=6.8755261 podStartE2EDuration="14.208233443s" podCreationTimestamp="2024-07-02 10:24:22 +0000 UTC" firstStartedPulling="2024-07-02 10:24:23.384310367 +0000 UTC m=+14.370506245" lastFinishedPulling="2024-07-02 10:24:30.717017706 +0000 UTC m=+21.703213588" observedRunningTime="2024-07-02 10:24:36.207841838 +0000 UTC m=+27.194037721" watchObservedRunningTime="2024-07-02 10:24:36.208233443 +0000 UTC m=+27.194429324" Jul 2 10:24:37.037170 systemd-networkd[1312]: cilium_host: Link UP Jul 2 10:24:37.037268 systemd-networkd[1312]: cilium_net: Link UP Jul 2 10:24:37.037271 systemd-networkd[1312]: cilium_net: Gained carrier Jul 2 10:24:37.037376 systemd-networkd[1312]: cilium_host: Gained carrier Jul 2 10:24:37.045165 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 10:24:37.045161 systemd-networkd[1312]: cilium_host: Gained IPv6LL Jul 2 10:24:37.087480 systemd-networkd[1312]: cilium_vxlan: Link UP Jul 2 10:24:37.087483 systemd-networkd[1312]: cilium_vxlan: Gained carrier Jul 2 10:24:37.223218 kernel: NET: Registered PF_ALG protocol family Jul 2 10:24:37.821106 systemd-networkd[1312]: lxc_health: Link UP Jul 2 10:24:37.841069 systemd-networkd[1312]: lxc_health: Gained carrier Jul 2 10:24:37.841183 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 10:24:37.964281 systemd-networkd[1312]: 
cilium_net: Gained IPv6LL Jul 2 10:24:38.131775 systemd-networkd[1312]: lxc5c5b1fa4d536: Link UP Jul 2 10:24:38.131862 systemd-networkd[1312]: lxc1d1720cf975b: Link UP Jul 2 10:24:38.168170 kernel: eth0: renamed from tmp892ab Jul 2 10:24:38.183243 kernel: eth0: renamed from tmp76ec8 Jul 2 10:24:38.207577 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 10:24:38.207637 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1d1720cf975b: link becomes ready Jul 2 10:24:38.207655 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 10:24:38.221722 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5c5b1fa4d536: link becomes ready Jul 2 10:24:38.222206 systemd-networkd[1312]: lxc1d1720cf975b: Gained carrier Jul 2 10:24:38.222317 systemd-networkd[1312]: lxc5c5b1fa4d536: Gained carrier Jul 2 10:24:38.349227 systemd-networkd[1312]: cilium_vxlan: Gained IPv6LL Jul 2 10:24:39.372378 systemd-networkd[1312]: lxc_health: Gained IPv6LL Jul 2 10:24:39.884278 systemd-networkd[1312]: lxc1d1720cf975b: Gained IPv6LL Jul 2 10:24:40.268268 systemd-networkd[1312]: lxc5c5b1fa4d536: Gained IPv6LL Jul 2 10:24:40.534481 env[1557]: time="2024-07-02T10:24:40.534420418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:24:40.534481 env[1557]: time="2024-07-02T10:24:40.534443094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:24:40.534481 env[1557]: time="2024-07-02T10:24:40.534449914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:24:40.534713 env[1557]: time="2024-07-02T10:24:40.534519778Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/76ec833f3cd837bb27633a140627d0f2db08c48ae3928329290afc201df6ed14 pid=3989 runtime=io.containerd.runc.v2 Jul 2 10:24:40.534871 env[1557]: time="2024-07-02T10:24:40.534849408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:24:40.534871 env[1557]: time="2024-07-02T10:24:40.534866341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:24:40.534914 env[1557]: time="2024-07-02T10:24:40.534873244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:24:40.534942 env[1557]: time="2024-07-02T10:24:40.534928116Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/892ab4180fda0bbc48cd41253313f76770354450ef86c7389eca26324271aa64 pid=3996 runtime=io.containerd.runc.v2 Jul 2 10:24:40.543112 systemd[1]: Started cri-containerd-76ec833f3cd837bb27633a140627d0f2db08c48ae3928329290afc201df6ed14.scope. Jul 2 10:24:40.543834 systemd[1]: Started cri-containerd-892ab4180fda0bbc48cd41253313f76770354450ef86c7389eca26324271aa64.scope. 
Jul 2 10:24:40.563955 env[1557]: time="2024-07-02T10:24:40.563930093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qkvbs,Uid:393194a4-edcd-49cc-a54f-0e5ea71d7fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"76ec833f3cd837bb27633a140627d0f2db08c48ae3928329290afc201df6ed14\"" Jul 2 10:24:40.564345 env[1557]: time="2024-07-02T10:24:40.564328993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mbv7n,Uid:93f7fc7b-5590-426f-acc7-656599c6c4d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"892ab4180fda0bbc48cd41253313f76770354450ef86c7389eca26324271aa64\"" Jul 2 10:24:40.565168 env[1557]: time="2024-07-02T10:24:40.565153946Z" level=info msg="CreateContainer within sandbox \"76ec833f3cd837bb27633a140627d0f2db08c48ae3928329290afc201df6ed14\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 10:24:40.565305 env[1557]: time="2024-07-02T10:24:40.565294054Z" level=info msg="CreateContainer within sandbox \"892ab4180fda0bbc48cd41253313f76770354450ef86c7389eca26324271aa64\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 10:24:40.570578 env[1557]: time="2024-07-02T10:24:40.570529985Z" level=info msg="CreateContainer within sandbox \"76ec833f3cd837bb27633a140627d0f2db08c48ae3928329290afc201df6ed14\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c442afafde80d5a0f35a5258095078819a011995360a79a7b7cba66bd7bfcf26\"" Jul 2 10:24:40.570838 env[1557]: time="2024-07-02T10:24:40.570776170Z" level=info msg="StartContainer for \"c442afafde80d5a0f35a5258095078819a011995360a79a7b7cba66bd7bfcf26\"" Jul 2 10:24:40.571563 env[1557]: time="2024-07-02T10:24:40.571538246Z" level=info msg="CreateContainer within sandbox \"892ab4180fda0bbc48cd41253313f76770354450ef86c7389eca26324271aa64\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c1c789d35f7715003c5084089f634d6edb034b3df2bbe339ba272d304884728d\"" Jul 2 10:24:40.571786 env[1557]: 
time="2024-07-02T10:24:40.571744148Z" level=info msg="StartContainer for \"c1c789d35f7715003c5084089f634d6edb034b3df2bbe339ba272d304884728d\"" Jul 2 10:24:40.587805 systemd[1]: Started cri-containerd-c442afafde80d5a0f35a5258095078819a011995360a79a7b7cba66bd7bfcf26.scope. Jul 2 10:24:40.589488 systemd[1]: Started cri-containerd-c1c789d35f7715003c5084089f634d6edb034b3df2bbe339ba272d304884728d.scope. Jul 2 10:24:40.600308 env[1557]: time="2024-07-02T10:24:40.600257887Z" level=info msg="StartContainer for \"c442afafde80d5a0f35a5258095078819a011995360a79a7b7cba66bd7bfcf26\" returns successfully" Jul 2 10:24:40.600931 env[1557]: time="2024-07-02T10:24:40.600914806Z" level=info msg="StartContainer for \"c1c789d35f7715003c5084089f634d6edb034b3df2bbe339ba272d304884728d\" returns successfully" Jul 2 10:24:41.216949 kubelet[2569]: I0702 10:24:41.216934 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-mbv7n" podStartSLOduration=19.216911242 podStartE2EDuration="19.216911242s" podCreationTimestamp="2024-07-02 10:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:24:41.216640839 +0000 UTC m=+32.202836721" watchObservedRunningTime="2024-07-02 10:24:41.216911242 +0000 UTC m=+32.203107123" Jul 2 10:24:41.222057 kubelet[2569]: I0702 10:24:41.222040 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qkvbs" podStartSLOduration=19.222013786 podStartE2EDuration="19.222013786s" podCreationTimestamp="2024-07-02 10:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:24:41.221685948 +0000 UTC m=+32.207881830" watchObservedRunningTime="2024-07-02 10:24:41.222013786 +0000 UTC m=+32.208209664" Jul 2 10:26:20.291132 update_engine[1551]: I0702 10:26:20.291070 1551 
prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 2 10:26:20.291934 update_engine[1551]: I0702 10:26:20.291136 1551 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 2 10:26:20.291934 update_engine[1551]: I0702 10:26:20.291908 1551 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 2 10:26:20.292653 update_engine[1551]: I0702 10:26:20.292618 1551 omaha_request_params.cc:62] Current group set to lts Jul 2 10:26:20.292897 update_engine[1551]: I0702 10:26:20.292867 1551 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 2 10:26:20.292897 update_engine[1551]: I0702 10:26:20.292884 1551 update_attempter.cc:643] Scheduling an action processor start. Jul 2 10:26:20.293167 update_engine[1551]: I0702 10:26:20.292918 1551 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 10:26:20.293167 update_engine[1551]: I0702 10:26:20.292977 1551 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 2 10:26:20.293167 update_engine[1551]: I0702 10:26:20.293119 1551 omaha_request_action.cc:270] Posting an Omaha request to disabled Jul 2 10:26:20.293167 update_engine[1551]: I0702 10:26:20.293136 1551 omaha_request_action.cc:271] Request: Jul 2 10:26:20.293167 update_engine[1551]: Jul 2 10:26:20.293167 update_engine[1551]: Jul 2 10:26:20.293167 update_engine[1551]: Jul 2 10:26:20.293167 update_engine[1551]: Jul 2 10:26:20.293167 update_engine[1551]: Jul 2 10:26:20.293167 update_engine[1551]: Jul 2 10:26:20.293167 update_engine[1551]: Jul 2 10:26:20.293167 update_engine[1551]: Jul 2 10:26:20.293167 update_engine[1551]: I0702 10:26:20.293159 1551 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 10:26:20.294358 locksmithd[1595]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 2 10:26:20.295633 update_engine[1551]: I0702 
10:26:20.295599 1551 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 10:26:20.295830 update_engine[1551]: E0702 10:26:20.295800 1551 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 10:26:20.295950 update_engine[1551]: I0702 10:26:20.295944 1551 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 2 10:26:30.275104 update_engine[1551]: I0702 10:26:30.275063 1551 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 10:26:30.275566 update_engine[1551]: I0702 10:26:30.275314 1551 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 10:26:30.275566 update_engine[1551]: E0702 10:26:30.275416 1551 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 10:26:30.275566 update_engine[1551]: I0702 10:26:30.275498 1551 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 2 10:26:40.274727 update_engine[1551]: I0702 10:26:40.274638 1551 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 10:26:40.275219 update_engine[1551]: I0702 10:26:40.274904 1551 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 10:26:40.275219 update_engine[1551]: E0702 10:26:40.275017 1551 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 10:26:40.275219 update_engine[1551]: I0702 10:26:40.275115 1551 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 2 10:26:50.275312 update_engine[1551]: I0702 10:26:50.275269 1551 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 10:26:50.275829 update_engine[1551]: I0702 10:26:50.275530 1551 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 10:26:50.275829 update_engine[1551]: E0702 10:26:50.275655 1551 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 10:26:50.275829 update_engine[1551]: I0702 10:26:50.275738 1551 libcurl_http_fetcher.cc:297] 
Transfer resulted in an error (0), 0 bytes downloaded Jul 2 10:26:50.275829 update_engine[1551]: I0702 10:26:50.275748 1551 omaha_request_action.cc:621] Omaha request response: Jul 2 10:26:50.275829 update_engine[1551]: E0702 10:26:50.275825 1551 omaha_request_action.cc:640] Omaha request network transfer failed. Jul 2 10:26:50.276124 update_engine[1551]: I0702 10:26:50.275841 1551 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 2 10:26:50.276124 update_engine[1551]: I0702 10:26:50.275848 1551 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 10:26:50.276124 update_engine[1551]: I0702 10:26:50.275854 1551 update_attempter.cc:306] Processing Done. Jul 2 10:26:50.276124 update_engine[1551]: E0702 10:26:50.275869 1551 update_attempter.cc:619] Update failed. Jul 2 10:26:50.276124 update_engine[1551]: I0702 10:26:50.275875 1551 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 2 10:26:50.276124 update_engine[1551]: I0702 10:26:50.275882 1551 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 2 10:26:50.276124 update_engine[1551]: I0702 10:26:50.275888 1551 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 2 10:26:50.276124 update_engine[1551]: I0702 10:26:50.275983 1551 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 10:26:50.276124 update_engine[1551]: I0702 10:26:50.276013 1551 omaha_request_action.cc:270] Posting an Omaha request to disabled Jul 2 10:26:50.276124 update_engine[1551]: I0702 10:26:50.276021 1551 omaha_request_action.cc:271] Request: Jul 2 10:26:50.276124 update_engine[1551]: Jul 2 10:26:50.276124 update_engine[1551]: Jul 2 10:26:50.276124 update_engine[1551]: Jul 2 10:26:50.276124 update_engine[1551]: Jul 2 10:26:50.276124 update_engine[1551]: Jul 2 10:26:50.276124 update_engine[1551]: Jul 2 10:26:50.276124 update_engine[1551]: I0702 10:26:50.276027 1551 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 10:26:50.276855 update_engine[1551]: I0702 10:26:50.276227 1551 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 10:26:50.276855 update_engine[1551]: E0702 10:26:50.276324 1551 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 10:26:50.276855 update_engine[1551]: I0702 10:26:50.276403 1551 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 10:26:50.276855 update_engine[1551]: I0702 10:26:50.276412 1551 omaha_request_action.cc:621] Omaha request response: Jul 2 10:26:50.276855 update_engine[1551]: I0702 10:26:50.276419 1551 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 10:26:50.276855 update_engine[1551]: I0702 10:26:50.276425 1551 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 10:26:50.276855 update_engine[1551]: I0702 10:26:50.276430 1551 update_attempter.cc:306] Processing Done. Jul 2 10:26:50.276855 update_engine[1551]: I0702 10:26:50.276436 1551 update_attempter.cc:310] Error event sent. 
Jul 2 10:26:50.276855 update_engine[1551]: I0702 10:26:50.276448 1551 update_check_scheduler.cc:74] Next update check in 48m53s Jul 2 10:26:50.277149 locksmithd[1595]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 2 10:26:50.277149 locksmithd[1595]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 2 10:30:38.496513 systemd[1]: Started sshd@5-147.75.203.11:22-139.178.68.195:34224.service. Jul 2 10:30:38.527340 sshd[4206]: Accepted publickey for core from 139.178.68.195 port 34224 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:30:38.528249 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:30:38.531585 systemd-logind[1596]: New session 8 of user core. Jul 2 10:30:38.532186 systemd[1]: Started session-8.scope. Jul 2 10:30:38.667318 sshd[4206]: pam_unix(sshd:session): session closed for user core Jul 2 10:30:38.668924 systemd[1]: sshd@5-147.75.203.11:22-139.178.68.195:34224.service: Deactivated successfully. Jul 2 10:30:38.669411 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 10:30:38.669851 systemd-logind[1596]: Session 8 logged out. Waiting for processes to exit. Jul 2 10:30:38.670440 systemd-logind[1596]: Removed session 8. Jul 2 10:30:43.677100 systemd[1]: Started sshd@6-147.75.203.11:22-139.178.68.195:36716.service. Jul 2 10:30:43.707801 sshd[4238]: Accepted publickey for core from 139.178.68.195 port 36716 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:30:43.708685 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:30:43.711820 systemd-logind[1596]: New session 9 of user core. Jul 2 10:30:43.712389 systemd[1]: Started session-9.scope. 
Jul 2 10:30:43.800516 sshd[4238]: pam_unix(sshd:session): session closed for user core Jul 2 10:30:43.801960 systemd[1]: sshd@6-147.75.203.11:22-139.178.68.195:36716.service: Deactivated successfully. Jul 2 10:30:43.802416 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 10:30:43.802868 systemd-logind[1596]: Session 9 logged out. Waiting for processes to exit. Jul 2 10:30:43.803482 systemd-logind[1596]: Removed session 9. Jul 2 10:30:48.809980 systemd[1]: Started sshd@7-147.75.203.11:22-139.178.68.195:36730.service. Jul 2 10:30:48.840332 sshd[4266]: Accepted publickey for core from 139.178.68.195 port 36730 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:30:48.841248 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:30:48.844231 systemd-logind[1596]: New session 10 of user core. Jul 2 10:30:48.844954 systemd[1]: Started session-10.scope. Jul 2 10:30:48.935713 sshd[4266]: pam_unix(sshd:session): session closed for user core Jul 2 10:30:48.937400 systemd[1]: sshd@7-147.75.203.11:22-139.178.68.195:36730.service: Deactivated successfully. Jul 2 10:30:48.937900 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 10:30:48.938366 systemd-logind[1596]: Session 10 logged out. Waiting for processes to exit. Jul 2 10:30:48.939007 systemd-logind[1596]: Removed session 10. Jul 2 10:30:53.946325 systemd[1]: Started sshd@8-147.75.203.11:22-139.178.68.195:33744.service. Jul 2 10:30:53.979440 sshd[4294]: Accepted publickey for core from 139.178.68.195 port 33744 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:30:53.980201 sshd[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:30:53.982855 systemd-logind[1596]: New session 11 of user core. Jul 2 10:30:53.983315 systemd[1]: Started session-11.scope. 
Jul 2 10:30:54.068024 sshd[4294]: pam_unix(sshd:session): session closed for user core Jul 2 10:30:54.069811 systemd[1]: sshd@8-147.75.203.11:22-139.178.68.195:33744.service: Deactivated successfully. Jul 2 10:30:54.070217 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 10:30:54.070591 systemd-logind[1596]: Session 11 logged out. Waiting for processes to exit. Jul 2 10:30:54.071111 systemd[1]: Started sshd@9-147.75.203.11:22-139.178.68.195:33754.service. Jul 2 10:30:54.071694 systemd-logind[1596]: Removed session 11. Jul 2 10:30:54.101508 sshd[4320]: Accepted publickey for core from 139.178.68.195 port 33754 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:30:54.102383 sshd[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:30:54.105241 systemd-logind[1596]: New session 12 of user core. Jul 2 10:30:54.105842 systemd[1]: Started session-12.scope. Jul 2 10:30:54.209348 sshd[4320]: pam_unix(sshd:session): session closed for user core Jul 2 10:30:54.211542 systemd[1]: sshd@9-147.75.203.11:22-139.178.68.195:33754.service: Deactivated successfully. Jul 2 10:30:54.211952 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 10:30:54.212352 systemd-logind[1596]: Session 12 logged out. Waiting for processes to exit. Jul 2 10:30:54.212957 systemd[1]: Started sshd@10-147.75.203.11:22-139.178.68.195:33762.service. Jul 2 10:30:54.213474 systemd-logind[1596]: Removed session 12. Jul 2 10:30:54.243540 sshd[4345]: Accepted publickey for core from 139.178.68.195 port 33762 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:30:54.244336 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:30:54.246783 systemd-logind[1596]: New session 13 of user core. Jul 2 10:30:54.247323 systemd[1]: Started session-13.scope. 
Jul 2 10:30:54.337321 sshd[4345]: pam_unix(sshd:session): session closed for user core Jul 2 10:30:54.338890 systemd[1]: sshd@10-147.75.203.11:22-139.178.68.195:33762.service: Deactivated successfully. Jul 2 10:30:54.339366 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 10:30:54.339766 systemd-logind[1596]: Session 13 logged out. Waiting for processes to exit. Jul 2 10:30:54.340354 systemd-logind[1596]: Removed session 13. Jul 2 10:30:59.349175 systemd[1]: Started sshd@11-147.75.203.11:22-139.178.68.195:33768.service. Jul 2 10:30:59.382306 sshd[4369]: Accepted publickey for core from 139.178.68.195 port 33768 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:30:59.383107 sshd[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:30:59.385561 systemd-logind[1596]: New session 14 of user core. Jul 2 10:30:59.386183 systemd[1]: Started session-14.scope. Jul 2 10:30:59.470784 sshd[4369]: pam_unix(sshd:session): session closed for user core Jul 2 10:30:59.472284 systemd[1]: sshd@11-147.75.203.11:22-139.178.68.195:33768.service: Deactivated successfully. Jul 2 10:30:59.472724 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 10:30:59.473037 systemd-logind[1596]: Session 14 logged out. Waiting for processes to exit. Jul 2 10:30:59.473607 systemd-logind[1596]: Removed session 14. Jul 2 10:31:04.482568 systemd[1]: Started sshd@12-147.75.203.11:22-139.178.68.195:35244.service. Jul 2 10:31:04.554500 sshd[4394]: Accepted publickey for core from 139.178.68.195 port 35244 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:31:04.555913 sshd[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:04.560526 systemd-logind[1596]: New session 15 of user core. Jul 2 10:31:04.561500 systemd[1]: Started session-15.scope. 
Jul 2 10:31:04.652491 sshd[4394]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:04.654528 systemd[1]: sshd@12-147.75.203.11:22-139.178.68.195:35244.service: Deactivated successfully. Jul 2 10:31:04.654982 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 10:31:04.655387 systemd-logind[1596]: Session 15 logged out. Waiting for processes to exit. Jul 2 10:31:04.656044 systemd[1]: Started sshd@13-147.75.203.11:22-139.178.68.195:35256.service. Jul 2 10:31:04.656599 systemd-logind[1596]: Removed session 15. Jul 2 10:31:04.687496 sshd[4420]: Accepted publickey for core from 139.178.68.195 port 35256 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:31:04.688414 sshd[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:04.691490 systemd-logind[1596]: New session 16 of user core. Jul 2 10:31:04.692460 systemd[1]: Started session-16.scope. Jul 2 10:31:04.834109 sshd[4420]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:04.836059 systemd[1]: sshd@13-147.75.203.11:22-139.178.68.195:35256.service: Deactivated successfully. Jul 2 10:31:04.836506 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 10:31:04.836858 systemd-logind[1596]: Session 16 logged out. Waiting for processes to exit. Jul 2 10:31:04.837494 systemd[1]: Started sshd@14-147.75.203.11:22-139.178.68.195:35260.service. Jul 2 10:31:04.837971 systemd-logind[1596]: Removed session 16. Jul 2 10:31:04.867844 sshd[4443]: Accepted publickey for core from 139.178.68.195 port 35260 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:31:04.868750 sshd[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:04.871939 systemd-logind[1596]: New session 17 of user core. Jul 2 10:31:04.872924 systemd[1]: Started session-17.scope. 
Jul 2 10:31:05.786130 sshd[4443]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:05.789288 systemd[1]: sshd@14-147.75.203.11:22-139.178.68.195:35260.service: Deactivated successfully. Jul 2 10:31:05.790047 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 10:31:05.790624 systemd-logind[1596]: Session 17 logged out. Waiting for processes to exit. Jul 2 10:31:05.791896 systemd[1]: Started sshd@15-147.75.203.11:22-139.178.68.195:35264.service. Jul 2 10:31:05.792581 systemd-logind[1596]: Removed session 17. Jul 2 10:31:05.851599 sshd[4474]: Accepted publickey for core from 139.178.68.195 port 35264 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:31:05.853181 sshd[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:05.857209 systemd-logind[1596]: New session 18 of user core. Jul 2 10:31:05.858365 systemd[1]: Started session-18.scope. Jul 2 10:31:06.030166 sshd[4474]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:06.037534 systemd[1]: sshd@15-147.75.203.11:22-139.178.68.195:35264.service: Deactivated successfully. Jul 2 10:31:06.039267 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 10:31:06.041083 systemd-logind[1596]: Session 18 logged out. Waiting for processes to exit. Jul 2 10:31:06.044031 systemd[1]: Started sshd@16-147.75.203.11:22-139.178.68.195:35266.service. Jul 2 10:31:06.046695 systemd-logind[1596]: Removed session 18. Jul 2 10:31:06.113339 sshd[4502]: Accepted publickey for core from 139.178.68.195 port 35266 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:31:06.114391 sshd[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:06.117692 systemd-logind[1596]: New session 19 of user core. Jul 2 10:31:06.118458 systemd[1]: Started session-19.scope. 
Jul 2 10:31:06.208205 sshd[4502]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:06.209762 systemd[1]: sshd@16-147.75.203.11:22-139.178.68.195:35266.service: Deactivated successfully. Jul 2 10:31:06.210223 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 10:31:06.210652 systemd-logind[1596]: Session 19 logged out. Waiting for processes to exit. Jul 2 10:31:06.211110 systemd-logind[1596]: Removed session 19. Jul 2 10:31:11.219497 systemd[1]: Started sshd@17-147.75.203.11:22-139.178.68.195:35274.service. Jul 2 10:31:11.316223 sshd[4532]: Accepted publickey for core from 139.178.68.195 port 35274 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:31:11.320270 sshd[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:11.331108 systemd-logind[1596]: New session 20 of user core. Jul 2 10:31:11.334421 systemd[1]: Started session-20.scope. Jul 2 10:31:11.443782 sshd[4532]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:11.449902 systemd[1]: sshd@17-147.75.203.11:22-139.178.68.195:35274.service: Deactivated successfully. Jul 2 10:31:11.452233 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 10:31:11.454164 systemd-logind[1596]: Session 20 logged out. Waiting for processes to exit. Jul 2 10:31:11.456213 systemd-logind[1596]: Removed session 20. Jul 2 10:31:16.452457 systemd[1]: Started sshd@18-147.75.203.11:22-139.178.68.195:49136.service. Jul 2 10:31:16.482763 sshd[4555]: Accepted publickey for core from 139.178.68.195 port 49136 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:31:16.483645 sshd[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:16.486784 systemd-logind[1596]: New session 21 of user core. Jul 2 10:31:16.487389 systemd[1]: Started session-21.scope. 
Jul 2 10:31:16.578012 sshd[4555]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:16.579638 systemd[1]: sshd@18-147.75.203.11:22-139.178.68.195:49136.service: Deactivated successfully. Jul 2 10:31:16.580127 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 10:31:16.580560 systemd-logind[1596]: Session 21 logged out. Waiting for processes to exit. Jul 2 10:31:16.581056 systemd-logind[1596]: Removed session 21. Jul 2 10:31:21.587936 systemd[1]: Started sshd@19-147.75.203.11:22-139.178.68.195:49146.service. Jul 2 10:31:21.618136 sshd[4580]: Accepted publickey for core from 139.178.68.195 port 49146 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:31:21.618996 sshd[4580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:21.621543 systemd-logind[1596]: New session 22 of user core. Jul 2 10:31:21.621968 systemd[1]: Started session-22.scope. Jul 2 10:31:21.703618 sshd[4580]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:21.705822 systemd[1]: sshd@19-147.75.203.11:22-139.178.68.195:49146.service: Deactivated successfully. Jul 2 10:31:21.706236 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 10:31:21.706621 systemd-logind[1596]: Session 22 logged out. Waiting for processes to exit. Jul 2 10:31:21.707249 systemd[1]: Started sshd@20-147.75.203.11:22-139.178.68.195:49148.service. Jul 2 10:31:21.707757 systemd-logind[1596]: Removed session 22. Jul 2 10:31:21.736719 sshd[4602]: Accepted publickey for core from 139.178.68.195 port 49148 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:31:21.737516 sshd[4602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:21.740229 systemd-logind[1596]: New session 23 of user core. Jul 2 10:31:21.740785 systemd[1]: Started session-23.scope. 
Jul 2 10:31:23.120806 env[1557]: time="2024-07-02T10:31:23.120705796Z" level=info msg="StopContainer for \"90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5\" with timeout 30 (s)" Jul 2 10:31:23.121879 env[1557]: time="2024-07-02T10:31:23.121372052Z" level=info msg="Stop container \"90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5\" with signal terminated" Jul 2 10:31:23.137307 systemd[1]: cri-containerd-90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5.scope: Deactivated successfully. Jul 2 10:31:23.148872 env[1557]: time="2024-07-02T10:31:23.148817736Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 10:31:23.152118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5-rootfs.mount: Deactivated successfully. 
Jul 2 10:31:23.153729 env[1557]: time="2024-07-02T10:31:23.153700558Z" level=info msg="StopContainer for \"67c7c54b5a2f5ce9d2816c95f92020fa5b9acf08681b3182ee8f2eda2ebc5bcf\" with timeout 2 (s)" Jul 2 10:31:23.153907 env[1557]: time="2024-07-02T10:31:23.153885328Z" level=info msg="Stop container \"67c7c54b5a2f5ce9d2816c95f92020fa5b9acf08681b3182ee8f2eda2ebc5bcf\" with signal terminated" Jul 2 10:31:23.159024 systemd-networkd[1312]: lxc_health: Link DOWN Jul 2 10:31:23.159031 systemd-networkd[1312]: lxc_health: Lost carrier Jul 2 10:31:23.165673 env[1557]: time="2024-07-02T10:31:23.165635003Z" level=info msg="shim disconnected" id=90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5 Jul 2 10:31:23.165751 env[1557]: time="2024-07-02T10:31:23.165678437Z" level=warning msg="cleaning up after shim disconnected" id=90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5 namespace=k8s.io Jul 2 10:31:23.165751 env[1557]: time="2024-07-02T10:31:23.165689921Z" level=info msg="cleaning up dead shim" Jul 2 10:31:23.171960 env[1557]: time="2024-07-02T10:31:23.171910453Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:31:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4669 runtime=io.containerd.runc.v2\n" Jul 2 10:31:23.173152 env[1557]: time="2024-07-02T10:31:23.173119482Z" level=info msg="StopContainer for \"90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5\" returns successfully" Jul 2 10:31:23.173644 env[1557]: time="2024-07-02T10:31:23.173619533Z" level=info msg="StopPodSandbox for \"afe50d810489de9caf9c92265f90b8d798c115b72a833a7536c9354e1617dd19\"" Jul 2 10:31:23.173703 env[1557]: time="2024-07-02T10:31:23.173679766Z" level=info msg="Container to stop \"90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 10:31:23.175894 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-afe50d810489de9caf9c92265f90b8d798c115b72a833a7536c9354e1617dd19-shm.mount: Deactivated successfully. Jul 2 10:31:23.179250 systemd[1]: cri-containerd-afe50d810489de9caf9c92265f90b8d798c115b72a833a7536c9354e1617dd19.scope: Deactivated successfully. Jul 2 10:31:23.195389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afe50d810489de9caf9c92265f90b8d798c115b72a833a7536c9354e1617dd19-rootfs.mount: Deactivated successfully. Jul 2 10:31:23.217084 env[1557]: time="2024-07-02T10:31:23.217017142Z" level=info msg="shim disconnected" id=afe50d810489de9caf9c92265f90b8d798c115b72a833a7536c9354e1617dd19 Jul 2 10:31:23.217234 env[1557]: time="2024-07-02T10:31:23.217085008Z" level=warning msg="cleaning up after shim disconnected" id=afe50d810489de9caf9c92265f90b8d798c115b72a833a7536c9354e1617dd19 namespace=k8s.io Jul 2 10:31:23.217234 env[1557]: time="2024-07-02T10:31:23.217100658Z" level=info msg="cleaning up dead shim" Jul 2 10:31:23.226356 env[1557]: time="2024-07-02T10:31:23.226260481Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:31:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4704 runtime=io.containerd.runc.v2\n" Jul 2 10:31:23.226847 env[1557]: time="2024-07-02T10:31:23.226787785Z" level=info msg="TearDown network for sandbox \"afe50d810489de9caf9c92265f90b8d798c115b72a833a7536c9354e1617dd19\" successfully" Jul 2 10:31:23.226847 env[1557]: time="2024-07-02T10:31:23.226836160Z" level=info msg="StopPodSandbox for \"afe50d810489de9caf9c92265f90b8d798c115b72a833a7536c9354e1617dd19\" returns successfully" Jul 2 10:31:23.241847 systemd[1]: cri-containerd-67c7c54b5a2f5ce9d2816c95f92020fa5b9acf08681b3182ee8f2eda2ebc5bcf.scope: Deactivated successfully. Jul 2 10:31:23.242328 systemd[1]: cri-containerd-67c7c54b5a2f5ce9d2816c95f92020fa5b9acf08681b3182ee8f2eda2ebc5bcf.scope: Consumed 6.636s CPU time. 
Jul 2 10:31:23.255335 kubelet[2569]: I0702 10:31:23.255246 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d466e82f-8ae3-40d7-a9d6-1867da7d990b-cilium-config-path\") pod \"d466e82f-8ae3-40d7-a9d6-1867da7d990b\" (UID: \"d466e82f-8ae3-40d7-a9d6-1867da7d990b\") " Jul 2 10:31:23.256000 kubelet[2569]: I0702 10:31:23.255384 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjqlh\" (UniqueName: \"kubernetes.io/projected/d466e82f-8ae3-40d7-a9d6-1867da7d990b-kube-api-access-rjqlh\") pod \"d466e82f-8ae3-40d7-a9d6-1867da7d990b\" (UID: \"d466e82f-8ae3-40d7-a9d6-1867da7d990b\") " Jul 2 10:31:23.259304 kubelet[2569]: I0702 10:31:23.259220 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d466e82f-8ae3-40d7-a9d6-1867da7d990b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d466e82f-8ae3-40d7-a9d6-1867da7d990b" (UID: "d466e82f-8ae3-40d7-a9d6-1867da7d990b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 10:31:23.264896 systemd[1]: var-lib-kubelet-pods-d466e82f\x2d8ae3\x2d40d7\x2da9d6\x2d1867da7d990b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drjqlh.mount: Deactivated successfully. 
Jul 2 10:31:23.273624 env[1557]: time="2024-07-02T10:31:23.273542300Z" level=info msg="shim disconnected" id=67c7c54b5a2f5ce9d2816c95f92020fa5b9acf08681b3182ee8f2eda2ebc5bcf Jul 2 10:31:23.273853 env[1557]: time="2024-07-02T10:31:23.273628647Z" level=warning msg="cleaning up after shim disconnected" id=67c7c54b5a2f5ce9d2816c95f92020fa5b9acf08681b3182ee8f2eda2ebc5bcf namespace=k8s.io Jul 2 10:31:23.273853 env[1557]: time="2024-07-02T10:31:23.273654628Z" level=info msg="cleaning up dead shim" Jul 2 10:31:23.275971 kubelet[2569]: I0702 10:31:23.275885 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d466e82f-8ae3-40d7-a9d6-1867da7d990b-kube-api-access-rjqlh" (OuterVolumeSpecName: "kube-api-access-rjqlh") pod "d466e82f-8ae3-40d7-a9d6-1867da7d990b" (UID: "d466e82f-8ae3-40d7-a9d6-1867da7d990b"). InnerVolumeSpecName "kube-api-access-rjqlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 10:31:23.286116 env[1557]: time="2024-07-02T10:31:23.286028644Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:31:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4730 runtime=io.containerd.runc.v2\n" Jul 2 10:31:23.287988 env[1557]: time="2024-07-02T10:31:23.287896952Z" level=info msg="StopContainer for \"67c7c54b5a2f5ce9d2816c95f92020fa5b9acf08681b3182ee8f2eda2ebc5bcf\" returns successfully" Jul 2 10:31:23.288741 env[1557]: time="2024-07-02T10:31:23.288651823Z" level=info msg="StopPodSandbox for \"a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623\"" Jul 2 10:31:23.288890 env[1557]: time="2024-07-02T10:31:23.288774117Z" level=info msg="Container to stop \"3bff2b8a5834991e2a537afc46ea1701b6357db2942a0b1b328c72a8d2eab75d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 10:31:23.288890 env[1557]: time="2024-07-02T10:31:23.288809270Z" level=info msg="Container to stop \"67c7c54b5a2f5ce9d2816c95f92020fa5b9acf08681b3182ee8f2eda2ebc5bcf\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 10:31:23.288890 env[1557]: time="2024-07-02T10:31:23.288833125Z" level=info msg="Container to stop \"d4f730a962474144e99dcc20d08bd250da81b527c59f5b41f1975b08dda9d5e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 10:31:23.288890 env[1557]: time="2024-07-02T10:31:23.288867692Z" level=info msg="Container to stop \"c79005d029e93b6d97ff3a2fb3e209538e727beb12bdaba153da429ec7af09b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 10:31:23.289229 env[1557]: time="2024-07-02T10:31:23.288889674Z" level=info msg="Container to stop \"1eab0e7ac35ad3074cdb5f6855b1fab2be86a2136da517f8f0f8072b1b2e85d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 10:31:23.298457 systemd[1]: cri-containerd-a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623.scope: Deactivated successfully. Jul 2 10:31:23.341142 env[1557]: time="2024-07-02T10:31:23.341056353Z" level=info msg="shim disconnected" id=a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623 Jul 2 10:31:23.341395 env[1557]: time="2024-07-02T10:31:23.341157744Z" level=warning msg="cleaning up after shim disconnected" id=a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623 namespace=k8s.io Jul 2 10:31:23.341395 env[1557]: time="2024-07-02T10:31:23.341186436Z" level=info msg="cleaning up dead shim" Jul 2 10:31:23.350122 env[1557]: time="2024-07-02T10:31:23.350040281Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:31:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4761 runtime=io.containerd.runc.v2\n" Jul 2 10:31:23.350593 env[1557]: time="2024-07-02T10:31:23.350549104Z" level=info msg="TearDown network for sandbox \"a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623\" successfully" Jul 2 10:31:23.350705 env[1557]: time="2024-07-02T10:31:23.350590388Z" level=info msg="StopPodSandbox for 
\"a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623\" returns successfully" Jul 2 10:31:23.355809 kubelet[2569]: I0702 10:31:23.355770 2569 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rjqlh\" (UniqueName: \"kubernetes.io/projected/d466e82f-8ae3-40d7-a9d6-1867da7d990b-kube-api-access-rjqlh\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:23.356048 kubelet[2569]: I0702 10:31:23.355821 2569 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d466e82f-8ae3-40d7-a9d6-1867da7d990b-cilium-config-path\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:23.358895 kubelet[2569]: I0702 10:31:23.358825 2569 scope.go:117] "RemoveContainer" containerID="90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5" Jul 2 10:31:23.361244 env[1557]: time="2024-07-02T10:31:23.361123954Z" level=info msg="RemoveContainer for \"90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5\"" Jul 2 10:31:23.365802 env[1557]: time="2024-07-02T10:31:23.365733957Z" level=info msg="RemoveContainer for \"90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5\" returns successfully" Jul 2 10:31:23.366264 kubelet[2569]: I0702 10:31:23.366190 2569 scope.go:117] "RemoveContainer" containerID="90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5" Jul 2 10:31:23.366779 env[1557]: time="2024-07-02T10:31:23.366629460Z" level=error msg="ContainerStatus for \"90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5\": not found" Jul 2 10:31:23.367099 systemd[1]: Removed slice kubepods-besteffort-podd466e82f_8ae3_40d7_a9d6_1867da7d990b.slice. 
Jul 2 10:31:23.367405 systemd[1]: kubepods-besteffort-podd466e82f_8ae3_40d7_a9d6_1867da7d990b.slice: Consumed 1.015s CPU time. Jul 2 10:31:23.367621 kubelet[2569]: E0702 10:31:23.367105 2569 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5\": not found" containerID="90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5" Jul 2 10:31:23.367621 kubelet[2569]: I0702 10:31:23.367357 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5"} err="failed to get container status \"90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"90ed26d787f84a6d5c98a3e21b4f32f76b0fe6816a395f09960cad12cb5a28b5\": not found" Jul 2 10:31:23.456296 kubelet[2569]: I0702 10:31:23.456074 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-xtables-lock\") pod \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " Jul 2 10:31:23.456296 kubelet[2569]: I0702 10:31:23.456232 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-lib-modules\") pod \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " Jul 2 10:31:23.456296 kubelet[2569]: I0702 10:31:23.456219 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fbbfeff7-e2e7-457d-988c-1c18bcb243b0" (UID: 
"fbbfeff7-e2e7-457d-988c-1c18bcb243b0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:23.456987 kubelet[2569]: I0702 10:31:23.456343 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-host-proc-sys-kernel\") pod \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " Jul 2 10:31:23.456987 kubelet[2569]: I0702 10:31:23.456340 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fbbfeff7-e2e7-457d-988c-1c18bcb243b0" (UID: "fbbfeff7-e2e7-457d-988c-1c18bcb243b0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:23.456987 kubelet[2569]: I0702 10:31:23.456470 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-clustermesh-secrets\") pod \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " Jul 2 10:31:23.456987 kubelet[2569]: I0702 10:31:23.456450 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fbbfeff7-e2e7-457d-988c-1c18bcb243b0" (UID: "fbbfeff7-e2e7-457d-988c-1c18bcb243b0"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:23.456987 kubelet[2569]: I0702 10:31:23.456580 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-cni-path\") pod \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " Jul 2 10:31:23.457888 kubelet[2569]: I0702 10:31:23.456687 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsg74\" (UniqueName: \"kubernetes.io/projected/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-kube-api-access-jsg74\") pod \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " Jul 2 10:31:23.457888 kubelet[2569]: I0702 10:31:23.456672 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-cni-path" (OuterVolumeSpecName: "cni-path") pod "fbbfeff7-e2e7-457d-988c-1c18bcb243b0" (UID: "fbbfeff7-e2e7-457d-988c-1c18bcb243b0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:23.457888 kubelet[2569]: I0702 10:31:23.456794 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-hostproc\") pod \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " Jul 2 10:31:23.457888 kubelet[2569]: I0702 10:31:23.456862 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-hostproc" (OuterVolumeSpecName: "hostproc") pod "fbbfeff7-e2e7-457d-988c-1c18bcb243b0" (UID: "fbbfeff7-e2e7-457d-988c-1c18bcb243b0"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:23.457888 kubelet[2569]: I0702 10:31:23.456901 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-etc-cni-netd\") pod \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " Jul 2 10:31:23.457888 kubelet[2569]: I0702 10:31:23.456997 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-bpf-maps\") pod \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " Jul 2 10:31:23.458721 kubelet[2569]: I0702 10:31:23.456992 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fbbfeff7-e2e7-457d-988c-1c18bcb243b0" (UID: "fbbfeff7-e2e7-457d-988c-1c18bcb243b0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:23.458721 kubelet[2569]: I0702 10:31:23.457045 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fbbfeff7-e2e7-457d-988c-1c18bcb243b0" (UID: "fbbfeff7-e2e7-457d-988c-1c18bcb243b0"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:23.458721 kubelet[2569]: I0702 10:31:23.457108 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-hubble-tls\") pod \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " Jul 2 10:31:23.458721 kubelet[2569]: I0702 10:31:23.457229 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-host-proc-sys-net\") pod \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " Jul 2 10:31:23.458721 kubelet[2569]: I0702 10:31:23.457328 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-cilium-cgroup\") pod \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " Jul 2 10:31:23.459266 kubelet[2569]: I0702 10:31:23.457315 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fbbfeff7-e2e7-457d-988c-1c18bcb243b0" (UID: "fbbfeff7-e2e7-457d-988c-1c18bcb243b0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:23.459266 kubelet[2569]: I0702 10:31:23.457412 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fbbfeff7-e2e7-457d-988c-1c18bcb243b0" (UID: "fbbfeff7-e2e7-457d-988c-1c18bcb243b0"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:23.459266 kubelet[2569]: I0702 10:31:23.457466 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-cilium-config-path\") pod \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " Jul 2 10:31:23.459266 kubelet[2569]: I0702 10:31:23.457571 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-cilium-run\") pod \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\" (UID: \"fbbfeff7-e2e7-457d-988c-1c18bcb243b0\") " Jul 2 10:31:23.459266 kubelet[2569]: I0702 10:31:23.457677 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fbbfeff7-e2e7-457d-988c-1c18bcb243b0" (UID: "fbbfeff7-e2e7-457d-988c-1c18bcb243b0"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:23.459756 kubelet[2569]: I0702 10:31:23.457710 2569 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-xtables-lock\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:23.459756 kubelet[2569]: I0702 10:31:23.457778 2569 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-lib-modules\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:23.459756 kubelet[2569]: I0702 10:31:23.457907 2569 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-host-proc-sys-kernel\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:23.459756 kubelet[2569]: I0702 10:31:23.457962 2569 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-cni-path\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:23.459756 kubelet[2569]: I0702 10:31:23.458025 2569 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-hostproc\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:23.459756 kubelet[2569]: I0702 10:31:23.458083 2569 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-etc-cni-netd\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:23.459756 kubelet[2569]: I0702 10:31:23.458160 2569 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-bpf-maps\") on node \"ci-3510.3.5-a-539a8ddad9\" 
DevicePath \"\"" Jul 2 10:31:23.459756 kubelet[2569]: I0702 10:31:23.458236 2569 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-host-proc-sys-net\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:23.460554 kubelet[2569]: I0702 10:31:23.458295 2569 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-cilium-cgroup\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:23.463029 kubelet[2569]: I0702 10:31:23.462930 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fbbfeff7-e2e7-457d-988c-1c18bcb243b0" (UID: "fbbfeff7-e2e7-457d-988c-1c18bcb243b0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 10:31:23.463417 kubelet[2569]: I0702 10:31:23.463297 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fbbfeff7-e2e7-457d-988c-1c18bcb243b0" (UID: "fbbfeff7-e2e7-457d-988c-1c18bcb243b0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 10:31:23.464099 kubelet[2569]: I0702 10:31:23.463984 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fbbfeff7-e2e7-457d-988c-1c18bcb243b0" (UID: "fbbfeff7-e2e7-457d-988c-1c18bcb243b0"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 10:31:23.464829 kubelet[2569]: I0702 10:31:23.464722 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-kube-api-access-jsg74" (OuterVolumeSpecName: "kube-api-access-jsg74") pod "fbbfeff7-e2e7-457d-988c-1c18bcb243b0" (UID: "fbbfeff7-e2e7-457d-988c-1c18bcb243b0"). InnerVolumeSpecName "kube-api-access-jsg74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 10:31:23.559527 kubelet[2569]: I0702 10:31:23.559411 2569 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-clustermesh-secrets\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:23.559527 kubelet[2569]: I0702 10:31:23.559512 2569 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jsg74\" (UniqueName: \"kubernetes.io/projected/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-kube-api-access-jsg74\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:23.559930 kubelet[2569]: I0702 10:31:23.559582 2569 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-hubble-tls\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:23.559930 kubelet[2569]: I0702 10:31:23.559654 2569 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-cilium-config-path\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:23.559930 kubelet[2569]: I0702 10:31:23.559698 2569 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fbbfeff7-e2e7-457d-988c-1c18bcb243b0-cilium-run\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:24.139665 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-67c7c54b5a2f5ce9d2816c95f92020fa5b9acf08681b3182ee8f2eda2ebc5bcf-rootfs.mount: Deactivated successfully. Jul 2 10:31:24.139721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623-rootfs.mount: Deactivated successfully. Jul 2 10:31:24.139755 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a523d45ef58d81bd59f161bf5556a2fab0e10da0e17f9ab0d27962b023de6623-shm.mount: Deactivated successfully. Jul 2 10:31:24.139790 systemd[1]: var-lib-kubelet-pods-fbbfeff7\x2de2e7\x2d457d\x2d988c\x2d1c18bcb243b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djsg74.mount: Deactivated successfully. Jul 2 10:31:24.139822 systemd[1]: var-lib-kubelet-pods-fbbfeff7\x2de2e7\x2d457d\x2d988c\x2d1c18bcb243b0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 10:31:24.139856 systemd[1]: var-lib-kubelet-pods-fbbfeff7\x2de2e7\x2d457d\x2d988c\x2d1c18bcb243b0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 10:31:24.232341 kubelet[2569]: E0702 10:31:24.232298 2569 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 10:31:24.371504 kubelet[2569]: I0702 10:31:24.371445 2569 scope.go:117] "RemoveContainer" containerID="67c7c54b5a2f5ce9d2816c95f92020fa5b9acf08681b3182ee8f2eda2ebc5bcf" Jul 2 10:31:24.374488 env[1557]: time="2024-07-02T10:31:24.374404100Z" level=info msg="RemoveContainer for \"67c7c54b5a2f5ce9d2816c95f92020fa5b9acf08681b3182ee8f2eda2ebc5bcf\"" Jul 2 10:31:24.377306 env[1557]: time="2024-07-02T10:31:24.377295104Z" level=info msg="RemoveContainer for \"67c7c54b5a2f5ce9d2816c95f92020fa5b9acf08681b3182ee8f2eda2ebc5bcf\" returns successfully" Jul 2 10:31:24.377415 kubelet[2569]: I0702 10:31:24.377406 2569 scope.go:117] "RemoveContainer" containerID="1eab0e7ac35ad3074cdb5f6855b1fab2be86a2136da517f8f0f8072b1b2e85d7" Jul 2 10:31:24.378004 env[1557]: time="2024-07-02T10:31:24.377992273Z" level=info msg="RemoveContainer for \"1eab0e7ac35ad3074cdb5f6855b1fab2be86a2136da517f8f0f8072b1b2e85d7\"" Jul 2 10:31:24.378085 systemd[1]: Removed slice kubepods-burstable-podfbbfeff7_e2e7_457d_988c_1c18bcb243b0.slice. Jul 2 10:31:24.378151 systemd[1]: kubepods-burstable-podfbbfeff7_e2e7_457d_988c_1c18bcb243b0.slice: Consumed 6.709s CPU time. 
Jul 2 10:31:24.379402 env[1557]: time="2024-07-02T10:31:24.379362364Z" level=info msg="RemoveContainer for \"1eab0e7ac35ad3074cdb5f6855b1fab2be86a2136da517f8f0f8072b1b2e85d7\" returns successfully" Jul 2 10:31:24.379500 kubelet[2569]: I0702 10:31:24.379455 2569 scope.go:117] "RemoveContainer" containerID="c79005d029e93b6d97ff3a2fb3e209538e727beb12bdaba153da429ec7af09b2" Jul 2 10:31:24.379973 env[1557]: time="2024-07-02T10:31:24.379960559Z" level=info msg="RemoveContainer for \"c79005d029e93b6d97ff3a2fb3e209538e727beb12bdaba153da429ec7af09b2\"" Jul 2 10:31:24.397937 env[1557]: time="2024-07-02T10:31:24.397864831Z" level=info msg="RemoveContainer for \"c79005d029e93b6d97ff3a2fb3e209538e727beb12bdaba153da429ec7af09b2\" returns successfully" Jul 2 10:31:24.397998 kubelet[2569]: I0702 10:31:24.397953 2569 scope.go:117] "RemoveContainer" containerID="d4f730a962474144e99dcc20d08bd250da81b527c59f5b41f1975b08dda9d5e5" Jul 2 10:31:24.398460 env[1557]: time="2024-07-02T10:31:24.398445518Z" level=info msg="RemoveContainer for \"d4f730a962474144e99dcc20d08bd250da81b527c59f5b41f1975b08dda9d5e5\"" Jul 2 10:31:24.399879 env[1557]: time="2024-07-02T10:31:24.399862886Z" level=info msg="RemoveContainer for \"d4f730a962474144e99dcc20d08bd250da81b527c59f5b41f1975b08dda9d5e5\" returns successfully" Jul 2 10:31:24.399939 kubelet[2569]: I0702 10:31:24.399931 2569 scope.go:117] "RemoveContainer" containerID="3bff2b8a5834991e2a537afc46ea1701b6357db2942a0b1b328c72a8d2eab75d" Jul 2 10:31:24.400526 env[1557]: time="2024-07-02T10:31:24.400482092Z" level=info msg="RemoveContainer for \"3bff2b8a5834991e2a537afc46ea1701b6357db2942a0b1b328c72a8d2eab75d\"" Jul 2 10:31:24.401706 env[1557]: time="2024-07-02T10:31:24.401662423Z" level=info msg="RemoveContainer for \"3bff2b8a5834991e2a537afc46ea1701b6357db2942a0b1b328c72a8d2eab75d\" returns successfully" Jul 2 10:31:25.059297 sshd[4602]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:25.061320 systemd[1]: 
sshd@20-147.75.203.11:22-139.178.68.195:49148.service: Deactivated successfully. Jul 2 10:31:25.061774 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 10:31:25.062270 systemd-logind[1596]: Session 23 logged out. Waiting for processes to exit. Jul 2 10:31:25.062949 systemd[1]: Started sshd@21-147.75.203.11:22-139.178.68.195:48182.service. Jul 2 10:31:25.063456 systemd-logind[1596]: Removed session 23. Jul 2 10:31:25.090870 kubelet[2569]: I0702 10:31:25.090855 2569 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d466e82f-8ae3-40d7-a9d6-1867da7d990b" path="/var/lib/kubelet/pods/d466e82f-8ae3-40d7-a9d6-1867da7d990b/volumes" Jul 2 10:31:25.091080 kubelet[2569]: I0702 10:31:25.091075 2569 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fbbfeff7-e2e7-457d-988c-1c18bcb243b0" path="/var/lib/kubelet/pods/fbbfeff7-e2e7-457d-988c-1c18bcb243b0/volumes" Jul 2 10:31:25.093010 sshd[4780]: Accepted publickey for core from 139.178.68.195 port 48182 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:31:25.093856 sshd[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:25.096378 systemd-logind[1596]: New session 24 of user core. Jul 2 10:31:25.097061 systemd[1]: Started session-24.scope. Jul 2 10:31:25.413994 sshd[4780]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:25.415854 systemd[1]: sshd@21-147.75.203.11:22-139.178.68.195:48182.service: Deactivated successfully. Jul 2 10:31:25.416277 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 10:31:25.416578 systemd-logind[1596]: Session 24 logged out. Waiting for processes to exit. Jul 2 10:31:25.417311 systemd[1]: Started sshd@22-147.75.203.11:22-139.178.68.195:48194.service. Jul 2 10:31:25.417850 systemd-logind[1596]: Removed session 24. 
Jul 2 10:31:25.421482 kubelet[2569]: I0702 10:31:25.421456 2569 topology_manager.go:215] "Topology Admit Handler" podUID="79e3915f-21a9-4a77-83b5-72b755cf03d2" podNamespace="kube-system" podName="cilium-4wr5p" Jul 2 10:31:25.421696 kubelet[2569]: E0702 10:31:25.421503 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d466e82f-8ae3-40d7-a9d6-1867da7d990b" containerName="cilium-operator" Jul 2 10:31:25.421696 kubelet[2569]: E0702 10:31:25.421512 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbbfeff7-e2e7-457d-988c-1c18bcb243b0" containerName="cilium-agent" Jul 2 10:31:25.421696 kubelet[2569]: E0702 10:31:25.421518 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbbfeff7-e2e7-457d-988c-1c18bcb243b0" containerName="mount-cgroup" Jul 2 10:31:25.421696 kubelet[2569]: E0702 10:31:25.421525 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbbfeff7-e2e7-457d-988c-1c18bcb243b0" containerName="apply-sysctl-overwrites" Jul 2 10:31:25.421696 kubelet[2569]: E0702 10:31:25.421531 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbbfeff7-e2e7-457d-988c-1c18bcb243b0" containerName="mount-bpf-fs" Jul 2 10:31:25.421696 kubelet[2569]: E0702 10:31:25.421537 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbbfeff7-e2e7-457d-988c-1c18bcb243b0" containerName="clean-cilium-state" Jul 2 10:31:25.421696 kubelet[2569]: I0702 10:31:25.421556 2569 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbbfeff7-e2e7-457d-988c-1c18bcb243b0" containerName="cilium-agent" Jul 2 10:31:25.421696 kubelet[2569]: I0702 10:31:25.421563 2569 memory_manager.go:354] "RemoveStaleState removing state" podUID="d466e82f-8ae3-40d7-a9d6-1867da7d990b" containerName="cilium-operator" Jul 2 10:31:25.425037 systemd[1]: Created slice kubepods-burstable-pod79e3915f_21a9_4a77_83b5_72b755cf03d2.slice. 
Jul 2 10:31:25.448215 sshd[4804]: Accepted publickey for core from 139.178.68.195 port 48194 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:31:25.448940 sshd[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:25.451529 systemd-logind[1596]: New session 25 of user core. Jul 2 10:31:25.451991 systemd[1]: Started session-25.scope. Jul 2 10:31:25.473019 kubelet[2569]: I0702 10:31:25.472977 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/79e3915f-21a9-4a77-83b5-72b755cf03d2-cilium-ipsec-secrets\") pod \"cilium-4wr5p\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " pod="kube-system/cilium-4wr5p" Jul 2 10:31:25.473019 kubelet[2569]: I0702 10:31:25.473001 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swwzj\" (UniqueName: \"kubernetes.io/projected/79e3915f-21a9-4a77-83b5-72b755cf03d2-kube-api-access-swwzj\") pod \"cilium-4wr5p\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " pod="kube-system/cilium-4wr5p" Jul 2 10:31:25.473019 kubelet[2569]: I0702 10:31:25.473017 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-cilium-cgroup\") pod \"cilium-4wr5p\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " pod="kube-system/cilium-4wr5p" Jul 2 10:31:25.473115 kubelet[2569]: I0702 10:31:25.473031 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-host-proc-sys-net\") pod \"cilium-4wr5p\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " pod="kube-system/cilium-4wr5p" Jul 2 10:31:25.473115 kubelet[2569]: I0702 10:31:25.473044 2569 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79e3915f-21a9-4a77-83b5-72b755cf03d2-hubble-tls\") pod \"cilium-4wr5p\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " pod="kube-system/cilium-4wr5p" Jul 2 10:31:25.473115 kubelet[2569]: I0702 10:31:25.473056 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-cilium-run\") pod \"cilium-4wr5p\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " pod="kube-system/cilium-4wr5p" Jul 2 10:31:25.473115 kubelet[2569]: I0702 10:31:25.473086 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-xtables-lock\") pod \"cilium-4wr5p\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " pod="kube-system/cilium-4wr5p" Jul 2 10:31:25.473115 kubelet[2569]: I0702 10:31:25.473113 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79e3915f-21a9-4a77-83b5-72b755cf03d2-clustermesh-secrets\") pod \"cilium-4wr5p\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " pod="kube-system/cilium-4wr5p" Jul 2 10:31:25.473224 kubelet[2569]: I0702 10:31:25.473157 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-etc-cni-netd\") pod \"cilium-4wr5p\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " pod="kube-system/cilium-4wr5p" Jul 2 10:31:25.473224 kubelet[2569]: I0702 10:31:25.473196 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-cni-path\") pod \"cilium-4wr5p\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " pod="kube-system/cilium-4wr5p" Jul 2 10:31:25.473224 kubelet[2569]: I0702 10:31:25.473222 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-hostproc\") pod \"cilium-4wr5p\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " pod="kube-system/cilium-4wr5p" Jul 2 10:31:25.473286 kubelet[2569]: I0702 10:31:25.473240 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-lib-modules\") pod \"cilium-4wr5p\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " pod="kube-system/cilium-4wr5p" Jul 2 10:31:25.473286 kubelet[2569]: I0702 10:31:25.473262 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79e3915f-21a9-4a77-83b5-72b755cf03d2-cilium-config-path\") pod \"cilium-4wr5p\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " pod="kube-system/cilium-4wr5p" Jul 2 10:31:25.473328 kubelet[2569]: I0702 10:31:25.473286 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-host-proc-sys-kernel\") pod \"cilium-4wr5p\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " pod="kube-system/cilium-4wr5p" Jul 2 10:31:25.473328 kubelet[2569]: I0702 10:31:25.473323 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-bpf-maps\") pod \"cilium-4wr5p\" (UID: 
\"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " pod="kube-system/cilium-4wr5p" Jul 2 10:31:25.587786 sshd[4804]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:25.589595 systemd[1]: sshd@22-147.75.203.11:22-139.178.68.195:48194.service: Deactivated successfully. Jul 2 10:31:25.590043 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 10:31:25.590476 systemd-logind[1596]: Session 25 logged out. Waiting for processes to exit. Jul 2 10:31:25.591098 systemd[1]: Started sshd@23-147.75.203.11:22-139.178.68.195:48200.service. Jul 2 10:31:25.591556 systemd-logind[1596]: Removed session 25. Jul 2 10:31:25.594403 env[1557]: time="2024-07-02T10:31:25.594376901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4wr5p,Uid:79e3915f-21a9-4a77-83b5-72b755cf03d2,Namespace:kube-system,Attempt:0,}" Jul 2 10:31:25.599970 env[1557]: time="2024-07-02T10:31:25.599931648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:31:25.599970 env[1557]: time="2024-07-02T10:31:25.599957661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:31:25.600084 env[1557]: time="2024-07-02T10:31:25.599970149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:31:25.600084 env[1557]: time="2024-07-02T10:31:25.600051484Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0572b52a4b5842dd4553c65b48ed7de1b3e803a8a24058f756722e591361a64e pid=4843 runtime=io.containerd.runc.v2 Jul 2 10:31:25.607078 systemd[1]: Started cri-containerd-0572b52a4b5842dd4553c65b48ed7de1b3e803a8a24058f756722e591361a64e.scope. 
Jul 2 10:31:25.618821 env[1557]: time="2024-07-02T10:31:25.618794182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4wr5p,Uid:79e3915f-21a9-4a77-83b5-72b755cf03d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0572b52a4b5842dd4553c65b48ed7de1b3e803a8a24058f756722e591361a64e\"" Jul 2 10:31:25.619996 env[1557]: time="2024-07-02T10:31:25.619980100Z" level=info msg="CreateContainer within sandbox \"0572b52a4b5842dd4553c65b48ed7de1b3e803a8a24058f756722e591361a64e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 10:31:25.620997 sshd[4834]: Accepted publickey for core from 139.178.68.195 port 48200 ssh2: RSA SHA256:4N6puFKrvBMrXqGwHM53c9EV3cTC5UljK2kulpsohkY Jul 2 10:31:25.621981 sshd[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:25.624229 systemd-logind[1596]: New session 26 of user core. Jul 2 10:31:25.624699 env[1557]: time="2024-07-02T10:31:25.624652426Z" level=info msg="CreateContainer within sandbox \"0572b52a4b5842dd4553c65b48ed7de1b3e803a8a24058f756722e591361a64e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab\"" Jul 2 10:31:25.624751 systemd[1]: Started session-26.scope. Jul 2 10:31:25.625004 env[1557]: time="2024-07-02T10:31:25.624987958Z" level=info msg="StartContainer for \"e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab\"" Jul 2 10:31:25.633118 systemd[1]: Started cri-containerd-e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab.scope. Jul 2 10:31:25.639277 systemd[1]: cri-containerd-e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab.scope: Deactivated successfully. Jul 2 10:31:25.639419 systemd[1]: Stopped cri-containerd-e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab.scope. 
Jul 2 10:31:25.646348 env[1557]: time="2024-07-02T10:31:25.646285120Z" level=info msg="shim disconnected" id=e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab Jul 2 10:31:25.646348 env[1557]: time="2024-07-02T10:31:25.646315187Z" level=warning msg="cleaning up after shim disconnected" id=e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab namespace=k8s.io Jul 2 10:31:25.646348 env[1557]: time="2024-07-02T10:31:25.646320919Z" level=info msg="cleaning up dead shim" Jul 2 10:31:25.650110 env[1557]: time="2024-07-02T10:31:25.650088383Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:31:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4901 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T10:31:25Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 10:31:25.650326 env[1557]: time="2024-07-02T10:31:25.650227306Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" Jul 2 10:31:25.650388 env[1557]: time="2024-07-02T10:31:25.650359038Z" level=error msg="Failed to pipe stdout of container \"e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab\"" error="reading from a closed fifo" Jul 2 10:31:25.650422 env[1557]: time="2024-07-02T10:31:25.650378355Z" level=error msg="Failed to pipe stderr of container \"e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab\"" error="reading from a closed fifo" Jul 2 10:31:25.650984 env[1557]: time="2024-07-02T10:31:25.650930050Z" level=error msg="StartContainer for \"e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Jul 2 10:31:25.651080 kubelet[2569]: E0702 10:31:25.651066 2569 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab" Jul 2 10:31:25.651197 kubelet[2569]: E0702 10:31:25.651159 2569 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 10:31:25.651197 kubelet[2569]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 10:31:25.651197 kubelet[2569]: rm /hostbin/cilium-mount Jul 2 10:31:25.651274 kubelet[2569]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-swwzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-4wr5p_kube-system(79e3915f-21a9-4a77-83b5-72b755cf03d2): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 10:31:25.651274 kubelet[2569]: E0702 10:31:25.651187 2569 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4wr5p" podUID="79e3915f-21a9-4a77-83b5-72b755cf03d2" Jul 2 10:31:26.380697 env[1557]: time="2024-07-02T10:31:26.380559428Z" level=info msg="StopPodSandbox for \"0572b52a4b5842dd4553c65b48ed7de1b3e803a8a24058f756722e591361a64e\"" Jul 2 10:31:26.380981 env[1557]: time="2024-07-02T10:31:26.380722374Z" level=info msg="Container to stop \"e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 10:31:26.389911 systemd[1]: cri-containerd-0572b52a4b5842dd4553c65b48ed7de1b3e803a8a24058f756722e591361a64e.scope: Deactivated successfully. 
Jul 2 10:31:26.399506 env[1557]: time="2024-07-02T10:31:26.399451524Z" level=info msg="shim disconnected" id=0572b52a4b5842dd4553c65b48ed7de1b3e803a8a24058f756722e591361a64e Jul 2 10:31:26.399506 env[1557]: time="2024-07-02T10:31:26.399482388Z" level=warning msg="cleaning up after shim disconnected" id=0572b52a4b5842dd4553c65b48ed7de1b3e803a8a24058f756722e591361a64e namespace=k8s.io Jul 2 10:31:26.399506 env[1557]: time="2024-07-02T10:31:26.399492299Z" level=info msg="cleaning up dead shim" Jul 2 10:31:26.403297 env[1557]: time="2024-07-02T10:31:26.403251862Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:31:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4947 runtime=io.containerd.runc.v2\n" Jul 2 10:31:26.403441 env[1557]: time="2024-07-02T10:31:26.403409112Z" level=info msg="TearDown network for sandbox \"0572b52a4b5842dd4553c65b48ed7de1b3e803a8a24058f756722e591361a64e\" successfully" Jul 2 10:31:26.403441 env[1557]: time="2024-07-02T10:31:26.403421656Z" level=info msg="StopPodSandbox for \"0572b52a4b5842dd4553c65b48ed7de1b3e803a8a24058f756722e591361a64e\" returns successfully" Jul 2 10:31:26.481006 kubelet[2569]: I0702 10:31:26.480905 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79e3915f-21a9-4a77-83b5-72b755cf03d2-clustermesh-secrets\") pod \"79e3915f-21a9-4a77-83b5-72b755cf03d2\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " Jul 2 10:31:26.481006 kubelet[2569]: I0702 10:31:26.481007 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-cilium-cgroup\") pod \"79e3915f-21a9-4a77-83b5-72b755cf03d2\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " Jul 2 10:31:26.482060 kubelet[2569]: I0702 10:31:26.481069 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-host-proc-sys-kernel\") pod \"79e3915f-21a9-4a77-83b5-72b755cf03d2\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " Jul 2 10:31:26.482060 kubelet[2569]: I0702 10:31:26.481124 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-bpf-maps\") pod \"79e3915f-21a9-4a77-83b5-72b755cf03d2\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " Jul 2 10:31:26.482060 kubelet[2569]: I0702 10:31:26.481111 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "79e3915f-21a9-4a77-83b5-72b755cf03d2" (UID: "79e3915f-21a9-4a77-83b5-72b755cf03d2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:26.482060 kubelet[2569]: I0702 10:31:26.481234 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79e3915f-21a9-4a77-83b5-72b755cf03d2-hubble-tls\") pod \"79e3915f-21a9-4a77-83b5-72b755cf03d2\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " Jul 2 10:31:26.482060 kubelet[2569]: I0702 10:31:26.481238 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "79e3915f-21a9-4a77-83b5-72b755cf03d2" (UID: "79e3915f-21a9-4a77-83b5-72b755cf03d2"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:26.482060 kubelet[2569]: I0702 10:31:26.481228 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "79e3915f-21a9-4a77-83b5-72b755cf03d2" (UID: "79e3915f-21a9-4a77-83b5-72b755cf03d2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:26.482060 kubelet[2569]: I0702 10:31:26.481312 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-etc-cni-netd\") pod \"79e3915f-21a9-4a77-83b5-72b755cf03d2\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " Jul 2 10:31:26.482060 kubelet[2569]: I0702 10:31:26.481402 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swwzj\" (UniqueName: \"kubernetes.io/projected/79e3915f-21a9-4a77-83b5-72b755cf03d2-kube-api-access-swwzj\") pod \"79e3915f-21a9-4a77-83b5-72b755cf03d2\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " Jul 2 10:31:26.482060 kubelet[2569]: I0702 10:31:26.481474 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "79e3915f-21a9-4a77-83b5-72b755cf03d2" (UID: "79e3915f-21a9-4a77-83b5-72b755cf03d2"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:26.482060 kubelet[2569]: I0702 10:31:26.481513 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/79e3915f-21a9-4a77-83b5-72b755cf03d2-cilium-ipsec-secrets\") pod \"79e3915f-21a9-4a77-83b5-72b755cf03d2\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " Jul 2 10:31:26.482060 kubelet[2569]: I0702 10:31:26.481594 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-hostproc\") pod \"79e3915f-21a9-4a77-83b5-72b755cf03d2\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " Jul 2 10:31:26.482060 kubelet[2569]: I0702 10:31:26.481666 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-host-proc-sys-net\") pod \"79e3915f-21a9-4a77-83b5-72b755cf03d2\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " Jul 2 10:31:26.482060 kubelet[2569]: I0702 10:31:26.481754 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-cni-path\") pod \"79e3915f-21a9-4a77-83b5-72b755cf03d2\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " Jul 2 10:31:26.482060 kubelet[2569]: I0702 10:31:26.481750 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-hostproc" (OuterVolumeSpecName: "hostproc") pod "79e3915f-21a9-4a77-83b5-72b755cf03d2" (UID: "79e3915f-21a9-4a77-83b5-72b755cf03d2"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:26.482060 kubelet[2569]: I0702 10:31:26.481854 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79e3915f-21a9-4a77-83b5-72b755cf03d2-cilium-config-path\") pod \"79e3915f-21a9-4a77-83b5-72b755cf03d2\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.481822 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "79e3915f-21a9-4a77-83b5-72b755cf03d2" (UID: "79e3915f-21a9-4a77-83b5-72b755cf03d2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.481934 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-lib-modules\") pod \"79e3915f-21a9-4a77-83b5-72b755cf03d2\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.481923 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-cni-path" (OuterVolumeSpecName: "cni-path") pod "79e3915f-21a9-4a77-83b5-72b755cf03d2" (UID: "79e3915f-21a9-4a77-83b5-72b755cf03d2"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.482021 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-cilium-run\") pod \"79e3915f-21a9-4a77-83b5-72b755cf03d2\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.482036 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "79e3915f-21a9-4a77-83b5-72b755cf03d2" (UID: "79e3915f-21a9-4a77-83b5-72b755cf03d2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.482090 2569 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-xtables-lock\") pod \"79e3915f-21a9-4a77-83b5-72b755cf03d2\" (UID: \"79e3915f-21a9-4a77-83b5-72b755cf03d2\") " Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.482123 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "79e3915f-21a9-4a77-83b5-72b755cf03d2" (UID: "79e3915f-21a9-4a77-83b5-72b755cf03d2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.482161 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "79e3915f-21a9-4a77-83b5-72b755cf03d2" (UID: "79e3915f-21a9-4a77-83b5-72b755cf03d2"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.482288 2569 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-host-proc-sys-net\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.482333 2569 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-cni-path\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.482372 2569 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-cilium-run\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.482404 2569 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-xtables-lock\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.482437 2569 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-lib-modules\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.482467 2569 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-cilium-cgroup\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.482498 2569 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-host-proc-sys-kernel\") on node 
\"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.482527 2569 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-bpf-maps\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:26.483665 kubelet[2569]: I0702 10:31:26.482562 2569 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-etc-cni-netd\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:26.485307 kubelet[2569]: I0702 10:31:26.482593 2569 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79e3915f-21a9-4a77-83b5-72b755cf03d2-hostproc\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:26.486868 kubelet[2569]: I0702 10:31:26.486828 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e3915f-21a9-4a77-83b5-72b755cf03d2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "79e3915f-21a9-4a77-83b5-72b755cf03d2" (UID: "79e3915f-21a9-4a77-83b5-72b755cf03d2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 10:31:26.487091 kubelet[2569]: I0702 10:31:26.487054 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e3915f-21a9-4a77-83b5-72b755cf03d2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "79e3915f-21a9-4a77-83b5-72b755cf03d2" (UID: "79e3915f-21a9-4a77-83b5-72b755cf03d2"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 10:31:26.487091 kubelet[2569]: I0702 10:31:26.487054 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e3915f-21a9-4a77-83b5-72b755cf03d2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "79e3915f-21a9-4a77-83b5-72b755cf03d2" (UID: "79e3915f-21a9-4a77-83b5-72b755cf03d2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 10:31:26.487091 kubelet[2569]: I0702 10:31:26.487068 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e3915f-21a9-4a77-83b5-72b755cf03d2-kube-api-access-swwzj" (OuterVolumeSpecName: "kube-api-access-swwzj") pod "79e3915f-21a9-4a77-83b5-72b755cf03d2" (UID: "79e3915f-21a9-4a77-83b5-72b755cf03d2"). InnerVolumeSpecName "kube-api-access-swwzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 10:31:26.487091 kubelet[2569]: I0702 10:31:26.487076 2569 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e3915f-21a9-4a77-83b5-72b755cf03d2-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "79e3915f-21a9-4a77-83b5-72b755cf03d2" (UID: "79e3915f-21a9-4a77-83b5-72b755cf03d2"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 10:31:26.580948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0572b52a4b5842dd4553c65b48ed7de1b3e803a8a24058f756722e591361a64e-rootfs.mount: Deactivated successfully. Jul 2 10:31:26.581003 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0572b52a4b5842dd4553c65b48ed7de1b3e803a8a24058f756722e591361a64e-shm.mount: Deactivated successfully. Jul 2 10:31:26.581038 systemd[1]: var-lib-kubelet-pods-79e3915f\x2d21a9\x2d4a77\x2d83b5\x2d72b755cf03d2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dswwzj.mount: Deactivated successfully. 
Jul 2 10:31:26.581073 systemd[1]: var-lib-kubelet-pods-79e3915f\x2d21a9\x2d4a77\x2d83b5\x2d72b755cf03d2-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 10:31:26.581105 systemd[1]: var-lib-kubelet-pods-79e3915f\x2d21a9\x2d4a77\x2d83b5\x2d72b755cf03d2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 10:31:26.581136 systemd[1]: var-lib-kubelet-pods-79e3915f\x2d21a9\x2d4a77\x2d83b5\x2d72b755cf03d2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 10:31:26.582912 kubelet[2569]: I0702 10:31:26.582867 2569 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79e3915f-21a9-4a77-83b5-72b755cf03d2-cilium-config-path\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:26.582912 kubelet[2569]: I0702 10:31:26.582884 2569 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79e3915f-21a9-4a77-83b5-72b755cf03d2-clustermesh-secrets\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:26.582912 kubelet[2569]: I0702 10:31:26.582892 2569 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79e3915f-21a9-4a77-83b5-72b755cf03d2-hubble-tls\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:26.582912 kubelet[2569]: I0702 10:31:26.582900 2569 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-swwzj\" (UniqueName: \"kubernetes.io/projected/79e3915f-21a9-4a77-83b5-72b755cf03d2-kube-api-access-swwzj\") on node \"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:26.582912 kubelet[2569]: I0702 10:31:26.582906 2569 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/79e3915f-21a9-4a77-83b5-72b755cf03d2-cilium-ipsec-secrets\") on node 
\"ci-3510.3.5-a-539a8ddad9\" DevicePath \"\"" Jul 2 10:31:27.104299 systemd[1]: Removed slice kubepods-burstable-pod79e3915f_21a9_4a77_83b5_72b755cf03d2.slice. Jul 2 10:31:27.385790 kubelet[2569]: I0702 10:31:27.385593 2569 scope.go:117] "RemoveContainer" containerID="e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab" Jul 2 10:31:27.388258 env[1557]: time="2024-07-02T10:31:27.388177491Z" level=info msg="RemoveContainer for \"e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab\"" Jul 2 10:31:27.392602 env[1557]: time="2024-07-02T10:31:27.392500721Z" level=info msg="RemoveContainer for \"e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab\" returns successfully" Jul 2 10:31:27.414629 kubelet[2569]: I0702 10:31:27.414607 2569 topology_manager.go:215] "Topology Admit Handler" podUID="a206110f-44ab-442f-957d-c2b864b678b3" podNamespace="kube-system" podName="cilium-nq6x9" Jul 2 10:31:27.414751 kubelet[2569]: E0702 10:31:27.414640 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79e3915f-21a9-4a77-83b5-72b755cf03d2" containerName="mount-cgroup" Jul 2 10:31:27.414751 kubelet[2569]: I0702 10:31:27.414656 2569 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e3915f-21a9-4a77-83b5-72b755cf03d2" containerName="mount-cgroup" Jul 2 10:31:27.417699 systemd[1]: Created slice kubepods-burstable-poda206110f_44ab_442f_957d_c2b864b678b3.slice. 
Jul 2 10:31:27.489537 kubelet[2569]: I0702 10:31:27.489475 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a206110f-44ab-442f-957d-c2b864b678b3-host-proc-sys-net\") pod \"cilium-nq6x9\" (UID: \"a206110f-44ab-442f-957d-c2b864b678b3\") " pod="kube-system/cilium-nq6x9" Jul 2 10:31:27.489537 kubelet[2569]: I0702 10:31:27.489527 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a206110f-44ab-442f-957d-c2b864b678b3-cilium-run\") pod \"cilium-nq6x9\" (UID: \"a206110f-44ab-442f-957d-c2b864b678b3\") " pod="kube-system/cilium-nq6x9" Jul 2 10:31:27.489930 kubelet[2569]: I0702 10:31:27.489583 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a206110f-44ab-442f-957d-c2b864b678b3-cilium-cgroup\") pod \"cilium-nq6x9\" (UID: \"a206110f-44ab-442f-957d-c2b864b678b3\") " pod="kube-system/cilium-nq6x9" Jul 2 10:31:27.489930 kubelet[2569]: I0702 10:31:27.489638 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a206110f-44ab-442f-957d-c2b864b678b3-xtables-lock\") pod \"cilium-nq6x9\" (UID: \"a206110f-44ab-442f-957d-c2b864b678b3\") " pod="kube-system/cilium-nq6x9" Jul 2 10:31:27.489930 kubelet[2569]: I0702 10:31:27.489694 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a206110f-44ab-442f-957d-c2b864b678b3-cni-path\") pod \"cilium-nq6x9\" (UID: \"a206110f-44ab-442f-957d-c2b864b678b3\") " pod="kube-system/cilium-nq6x9" Jul 2 10:31:27.489930 kubelet[2569]: I0702 10:31:27.489759 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a206110f-44ab-442f-957d-c2b864b678b3-lib-modules\") pod \"cilium-nq6x9\" (UID: \"a206110f-44ab-442f-957d-c2b864b678b3\") " pod="kube-system/cilium-nq6x9" Jul 2 10:31:27.489930 kubelet[2569]: I0702 10:31:27.489802 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a206110f-44ab-442f-957d-c2b864b678b3-hubble-tls\") pod \"cilium-nq6x9\" (UID: \"a206110f-44ab-442f-957d-c2b864b678b3\") " pod="kube-system/cilium-nq6x9" Jul 2 10:31:27.489930 kubelet[2569]: I0702 10:31:27.489884 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a206110f-44ab-442f-957d-c2b864b678b3-clustermesh-secrets\") pod \"cilium-nq6x9\" (UID: \"a206110f-44ab-442f-957d-c2b864b678b3\") " pod="kube-system/cilium-nq6x9" Jul 2 10:31:27.489930 kubelet[2569]: I0702 10:31:27.489917 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a206110f-44ab-442f-957d-c2b864b678b3-host-proc-sys-kernel\") pod \"cilium-nq6x9\" (UID: \"a206110f-44ab-442f-957d-c2b864b678b3\") " pod="kube-system/cilium-nq6x9" Jul 2 10:31:27.490202 kubelet[2569]: I0702 10:31:27.489980 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a206110f-44ab-442f-957d-c2b864b678b3-hostproc\") pod \"cilium-nq6x9\" (UID: \"a206110f-44ab-442f-957d-c2b864b678b3\") " pod="kube-system/cilium-nq6x9" Jul 2 10:31:27.490202 kubelet[2569]: I0702 10:31:27.490009 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a206110f-44ab-442f-957d-c2b864b678b3-bpf-maps\") pod \"cilium-nq6x9\" 
(UID: \"a206110f-44ab-442f-957d-c2b864b678b3\") " pod="kube-system/cilium-nq6x9" Jul 2 10:31:27.490202 kubelet[2569]: I0702 10:31:27.490036 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a206110f-44ab-442f-957d-c2b864b678b3-etc-cni-netd\") pod \"cilium-nq6x9\" (UID: \"a206110f-44ab-442f-957d-c2b864b678b3\") " pod="kube-system/cilium-nq6x9" Jul 2 10:31:27.490202 kubelet[2569]: I0702 10:31:27.490089 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a206110f-44ab-442f-957d-c2b864b678b3-cilium-ipsec-secrets\") pod \"cilium-nq6x9\" (UID: \"a206110f-44ab-442f-957d-c2b864b678b3\") " pod="kube-system/cilium-nq6x9" Jul 2 10:31:27.490202 kubelet[2569]: I0702 10:31:27.490180 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a206110f-44ab-442f-957d-c2b864b678b3-cilium-config-path\") pod \"cilium-nq6x9\" (UID: \"a206110f-44ab-442f-957d-c2b864b678b3\") " pod="kube-system/cilium-nq6x9" Jul 2 10:31:27.490423 kubelet[2569]: I0702 10:31:27.490227 2569 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2vgh\" (UniqueName: \"kubernetes.io/projected/a206110f-44ab-442f-957d-c2b864b678b3-kube-api-access-n2vgh\") pod \"cilium-nq6x9\" (UID: \"a206110f-44ab-442f-957d-c2b864b678b3\") " pod="kube-system/cilium-nq6x9" Jul 2 10:31:27.720075 env[1557]: time="2024-07-02T10:31:27.719925139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nq6x9,Uid:a206110f-44ab-442f-957d-c2b864b678b3,Namespace:kube-system,Attempt:0,}" Jul 2 10:31:27.739391 env[1557]: time="2024-07-02T10:31:27.739095927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:31:27.739391 env[1557]: time="2024-07-02T10:31:27.739256990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:31:27.739391 env[1557]: time="2024-07-02T10:31:27.739323867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:31:27.739987 env[1557]: time="2024-07-02T10:31:27.739810922Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/840ddf0ca33eb1cb1b19db97ca261bbff286d4dccd9fe8fcf65607dbfe55cbf2 pid=4974 runtime=io.containerd.runc.v2 Jul 2 10:31:27.772813 systemd[1]: Started cri-containerd-840ddf0ca33eb1cb1b19db97ca261bbff286d4dccd9fe8fcf65607dbfe55cbf2.scope. Jul 2 10:31:27.805127 env[1557]: time="2024-07-02T10:31:27.805025573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nq6x9,Uid:a206110f-44ab-442f-957d-c2b864b678b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"840ddf0ca33eb1cb1b19db97ca261bbff286d4dccd9fe8fcf65607dbfe55cbf2\"" Jul 2 10:31:27.809393 env[1557]: time="2024-07-02T10:31:27.809308068Z" level=info msg="CreateContainer within sandbox \"840ddf0ca33eb1cb1b19db97ca261bbff286d4dccd9fe8fcf65607dbfe55cbf2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 10:31:27.820804 env[1557]: time="2024-07-02T10:31:27.820695104Z" level=info msg="CreateContainer within sandbox \"840ddf0ca33eb1cb1b19db97ca261bbff286d4dccd9fe8fcf65607dbfe55cbf2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"526e4cf67aa070df0c110e560c382af1b7adf31f248fde01ce0e21356d867673\"" Jul 2 10:31:27.821399 env[1557]: time="2024-07-02T10:31:27.821320154Z" level=info msg="StartContainer for \"526e4cf67aa070df0c110e560c382af1b7adf31f248fde01ce0e21356d867673\"" Jul 2 10:31:27.846851 systemd[1]: Started 
cri-containerd-526e4cf67aa070df0c110e560c382af1b7adf31f248fde01ce0e21356d867673.scope.
Jul 2 10:31:27.886302 env[1557]: time="2024-07-02T10:31:27.886202111Z" level=info msg="StartContainer for \"526e4cf67aa070df0c110e560c382af1b7adf31f248fde01ce0e21356d867673\" returns successfully"
Jul 2 10:31:27.902837 systemd[1]: cri-containerd-526e4cf67aa070df0c110e560c382af1b7adf31f248fde01ce0e21356d867673.scope: Deactivated successfully.
Jul 2 10:31:27.937615 env[1557]: time="2024-07-02T10:31:27.937534926Z" level=info msg="shim disconnected" id=526e4cf67aa070df0c110e560c382af1b7adf31f248fde01ce0e21356d867673
Jul 2 10:31:27.937615 env[1557]: time="2024-07-02T10:31:27.937613162Z" level=warning msg="cleaning up after shim disconnected" id=526e4cf67aa070df0c110e560c382af1b7adf31f248fde01ce0e21356d867673 namespace=k8s.io
Jul 2 10:31:27.937966 env[1557]: time="2024-07-02T10:31:27.937634717Z" level=info msg="cleaning up dead shim"
Jul 2 10:31:27.949499 env[1557]: time="2024-07-02T10:31:27.949359143Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:31:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5057 runtime=io.containerd.runc.v2\n"
Jul 2 10:31:28.397365 env[1557]: time="2024-07-02T10:31:28.397232592Z" level=info msg="CreateContainer within sandbox \"840ddf0ca33eb1cb1b19db97ca261bbff286d4dccd9fe8fcf65607dbfe55cbf2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 10:31:28.411790 env[1557]: time="2024-07-02T10:31:28.411670327Z" level=info msg="CreateContainer within sandbox \"840ddf0ca33eb1cb1b19db97ca261bbff286d4dccd9fe8fcf65607dbfe55cbf2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fa37e78407b87ddc8467be850703941ed673a3b7de9fa595143661416e03b598\""
Jul 2 10:31:28.412689 env[1557]: time="2024-07-02T10:31:28.412581661Z" level=info msg="StartContainer for \"fa37e78407b87ddc8467be850703941ed673a3b7de9fa595143661416e03b598\""
Jul 2 10:31:28.449253 systemd[1]: Started cri-containerd-fa37e78407b87ddc8467be850703941ed673a3b7de9fa595143661416e03b598.scope.
Jul 2 10:31:28.489961 env[1557]: time="2024-07-02T10:31:28.489854726Z" level=info msg="StartContainer for \"fa37e78407b87ddc8467be850703941ed673a3b7de9fa595143661416e03b598\" returns successfully"
Jul 2 10:31:28.503880 systemd[1]: cri-containerd-fa37e78407b87ddc8467be850703941ed673a3b7de9fa595143661416e03b598.scope: Deactivated successfully.
Jul 2 10:31:28.507078 kubelet[2569]: I0702 10:31:28.507008 2569 setters.go:568] "Node became not ready" node="ci-3510.3.5-a-539a8ddad9" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T10:31:28Z","lastTransitionTime":"2024-07-02T10:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 2 10:31:28.531085 env[1557]: time="2024-07-02T10:31:28.531037529Z" level=info msg="shim disconnected" id=fa37e78407b87ddc8467be850703941ed673a3b7de9fa595143661416e03b598
Jul 2 10:31:28.531272 env[1557]: time="2024-07-02T10:31:28.531090742Z" level=warning msg="cleaning up after shim disconnected" id=fa37e78407b87ddc8467be850703941ed673a3b7de9fa595143661416e03b598 namespace=k8s.io
Jul 2 10:31:28.531272 env[1557]: time="2024-07-02T10:31:28.531105170Z" level=info msg="cleaning up dead shim"
Jul 2 10:31:28.538828 env[1557]: time="2024-07-02T10:31:28.538760980Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:31:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5118 runtime=io.containerd.runc.v2\n"
Jul 2 10:31:28.752839 kubelet[2569]: W0702 10:31:28.752622 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79e3915f_21a9_4a77_83b5_72b755cf03d2.slice/cri-containerd-e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab.scope WatchSource:0}: container "e3e8bbf64e33942dd1a73b61427af258f9f264df62734361837eb2b8a75974ab" in namespace "k8s.io": not found
Jul 2 10:31:29.089934 kubelet[2569]: I0702 10:31:29.089891 2569 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="79e3915f-21a9-4a77-83b5-72b755cf03d2" path="/var/lib/kubelet/pods/79e3915f-21a9-4a77-83b5-72b755cf03d2/volumes"
Jul 2 10:31:29.232927 kubelet[2569]: E0702 10:31:29.232850 2569 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 10:31:29.406721 env[1557]: time="2024-07-02T10:31:29.406505817Z" level=info msg="CreateContainer within sandbox \"840ddf0ca33eb1cb1b19db97ca261bbff286d4dccd9fe8fcf65607dbfe55cbf2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 10:31:29.419999 env[1557]: time="2024-07-02T10:31:29.419959008Z" level=info msg="CreateContainer within sandbox \"840ddf0ca33eb1cb1b19db97ca261bbff286d4dccd9fe8fcf65607dbfe55cbf2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dec12a54a6c130f3a3c1b19a9ddb14c8f6f808c479278d5bb1ab10c17ef450ef\""
Jul 2 10:31:29.420220 env[1557]: time="2024-07-02T10:31:29.420178745Z" level=info msg="StartContainer for \"dec12a54a6c130f3a3c1b19a9ddb14c8f6f808c479278d5bb1ab10c17ef450ef\""
Jul 2 10:31:29.429746 systemd[1]: Started cri-containerd-dec12a54a6c130f3a3c1b19a9ddb14c8f6f808c479278d5bb1ab10c17ef450ef.scope.
Jul 2 10:31:29.442097 env[1557]: time="2024-07-02T10:31:29.442074862Z" level=info msg="StartContainer for \"dec12a54a6c130f3a3c1b19a9ddb14c8f6f808c479278d5bb1ab10c17ef450ef\" returns successfully"
Jul 2 10:31:29.443450 systemd[1]: cri-containerd-dec12a54a6c130f3a3c1b19a9ddb14c8f6f808c479278d5bb1ab10c17ef450ef.scope: Deactivated successfully.
Jul 2 10:31:29.453529 env[1557]: time="2024-07-02T10:31:29.453502199Z" level=info msg="shim disconnected" id=dec12a54a6c130f3a3c1b19a9ddb14c8f6f808c479278d5bb1ab10c17ef450ef
Jul 2 10:31:29.453619 env[1557]: time="2024-07-02T10:31:29.453530721Z" level=warning msg="cleaning up after shim disconnected" id=dec12a54a6c130f3a3c1b19a9ddb14c8f6f808c479278d5bb1ab10c17ef450ef namespace=k8s.io
Jul 2 10:31:29.453619 env[1557]: time="2024-07-02T10:31:29.453536364Z" level=info msg="cleaning up dead shim"
Jul 2 10:31:29.457196 env[1557]: time="2024-07-02T10:31:29.457151344Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:31:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5174 runtime=io.containerd.runc.v2\n"
Jul 2 10:31:29.603250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dec12a54a6c130f3a3c1b19a9ddb14c8f6f808c479278d5bb1ab10c17ef450ef-rootfs.mount: Deactivated successfully.
Jul 2 10:31:30.415799 env[1557]: time="2024-07-02T10:31:30.415690885Z" level=info msg="CreateContainer within sandbox \"840ddf0ca33eb1cb1b19db97ca261bbff286d4dccd9fe8fcf65607dbfe55cbf2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 10:31:30.425010 env[1557]: time="2024-07-02T10:31:30.424966642Z" level=info msg="CreateContainer within sandbox \"840ddf0ca33eb1cb1b19db97ca261bbff286d4dccd9fe8fcf65607dbfe55cbf2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3ec25efb9b7dabbdd4f37572dc36216365417fe1ffff6a7e72812ee65c10a5dd\""
Jul 2 10:31:30.425371 env[1557]: time="2024-07-02T10:31:30.425313626Z" level=info msg="StartContainer for \"3ec25efb9b7dabbdd4f37572dc36216365417fe1ffff6a7e72812ee65c10a5dd\""
Jul 2 10:31:30.435441 systemd[1]: Started cri-containerd-3ec25efb9b7dabbdd4f37572dc36216365417fe1ffff6a7e72812ee65c10a5dd.scope.
Jul 2 10:31:30.447506 env[1557]: time="2024-07-02T10:31:30.447484420Z" level=info msg="StartContainer for \"3ec25efb9b7dabbdd4f37572dc36216365417fe1ffff6a7e72812ee65c10a5dd\" returns successfully"
Jul 2 10:31:30.447884 systemd[1]: cri-containerd-3ec25efb9b7dabbdd4f37572dc36216365417fe1ffff6a7e72812ee65c10a5dd.scope: Deactivated successfully.
Jul 2 10:31:30.456731 env[1557]: time="2024-07-02T10:31:30.456701728Z" level=info msg="shim disconnected" id=3ec25efb9b7dabbdd4f37572dc36216365417fe1ffff6a7e72812ee65c10a5dd
Jul 2 10:31:30.456731 env[1557]: time="2024-07-02T10:31:30.456731134Z" level=warning msg="cleaning up after shim disconnected" id=3ec25efb9b7dabbdd4f37572dc36216365417fe1ffff6a7e72812ee65c10a5dd namespace=k8s.io
Jul 2 10:31:30.456843 env[1557]: time="2024-07-02T10:31:30.456737304Z" level=info msg="cleaning up dead shim"
Jul 2 10:31:30.460204 env[1557]: time="2024-07-02T10:31:30.460151545Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:31:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5226 runtime=io.containerd.runc.v2\n"
Jul 2 10:31:30.603598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ec25efb9b7dabbdd4f37572dc36216365417fe1ffff6a7e72812ee65c10a5dd-rootfs.mount: Deactivated successfully.
Jul 2 10:31:31.425365 env[1557]: time="2024-07-02T10:31:31.425275000Z" level=info msg="CreateContainer within sandbox \"840ddf0ca33eb1cb1b19db97ca261bbff286d4dccd9fe8fcf65607dbfe55cbf2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 10:31:31.437287 env[1557]: time="2024-07-02T10:31:31.437235377Z" level=info msg="CreateContainer within sandbox \"840ddf0ca33eb1cb1b19db97ca261bbff286d4dccd9fe8fcf65607dbfe55cbf2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f049bce40929a3ce574d4038a3b35d97d7d643ec4ffc0321461ad9b4295f5816\""
Jul 2 10:31:31.437702 env[1557]: time="2024-07-02T10:31:31.437647132Z" level=info msg="StartContainer for \"f049bce40929a3ce574d4038a3b35d97d7d643ec4ffc0321461ad9b4295f5816\""
Jul 2 10:31:31.439349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3888027376.mount: Deactivated successfully.
Jul 2 10:31:31.447214 systemd[1]: Started cri-containerd-f049bce40929a3ce574d4038a3b35d97d7d643ec4ffc0321461ad9b4295f5816.scope.
Jul 2 10:31:31.459891 env[1557]: time="2024-07-02T10:31:31.459867180Z" level=info msg="StartContainer for \"f049bce40929a3ce574d4038a3b35d97d7d643ec4ffc0321461ad9b4295f5816\" returns successfully"
Jul 2 10:31:31.612153 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 2 10:31:31.868165 kubelet[2569]: W0702 10:31:31.868074 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda206110f_44ab_442f_957d_c2b864b678b3.slice/cri-containerd-526e4cf67aa070df0c110e560c382af1b7adf31f248fde01ce0e21356d867673.scope WatchSource:0}: task 526e4cf67aa070df0c110e560c382af1b7adf31f248fde01ce0e21356d867673 not found: not found
Jul 2 10:31:32.447595 kubelet[2569]: I0702 10:31:32.447526 2569 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-nq6x9" podStartSLOduration=5.447479485 podStartE2EDuration="5.447479485s" podCreationTimestamp="2024-07-02 10:31:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:31:32.447038723 +0000 UTC m=+443.433234605" watchObservedRunningTime="2024-07-02 10:31:32.447479485 +0000 UTC m=+443.433675363"
Jul 2 10:31:34.492852 systemd-networkd[1312]: lxc_health: Link UP
Jul 2 10:31:34.514159 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 10:31:34.514172 systemd-networkd[1312]: lxc_health: Gained carrier
Jul 2 10:31:34.978387 kubelet[2569]: W0702 10:31:34.978321 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda206110f_44ab_442f_957d_c2b864b678b3.slice/cri-containerd-fa37e78407b87ddc8467be850703941ed673a3b7de9fa595143661416e03b598.scope WatchSource:0}: task fa37e78407b87ddc8467be850703941ed673a3b7de9fa595143661416e03b598 not found: not found
Jul 2 10:31:36.012278 systemd-networkd[1312]: lxc_health: Gained IPv6LL
Jul 2 10:31:38.084303 kubelet[2569]: W0702 10:31:38.084171 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda206110f_44ab_442f_957d_c2b864b678b3.slice/cri-containerd-dec12a54a6c130f3a3c1b19a9ddb14c8f6f808c479278d5bb1ab10c17ef450ef.scope WatchSource:0}: task dec12a54a6c130f3a3c1b19a9ddb14c8f6f808c479278d5bb1ab10c17ef450ef not found: not found
Jul 2 10:31:40.363992 sshd[4834]: pam_unix(sshd:session): session closed for user core
Jul 2 10:31:40.365339 systemd[1]: sshd@23-147.75.203.11:22-139.178.68.195:48200.service: Deactivated successfully.
Jul 2 10:31:40.365783 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 10:31:40.366106 systemd-logind[1596]: Session 26 logged out. Waiting for processes to exit.
Jul 2 10:31:40.366636 systemd-logind[1596]: Removed session 26.
Jul 2 10:31:41.195309 kubelet[2569]: W0702 10:31:41.195193 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda206110f_44ab_442f_957d_c2b864b678b3.slice/cri-containerd-3ec25efb9b7dabbdd4f37572dc36216365417fe1ffff6a7e72812ee65c10a5dd.scope WatchSource:0}: task 3ec25efb9b7dabbdd4f37572dc36216365417fe1ffff6a7e72812ee65c10a5dd not found: not found