Jul 2 11:30:19.558853 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024 Jul 2 11:30:19.558866 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 11:30:19.558873 kernel: BIOS-provided physical RAM map: Jul 2 11:30:19.558877 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Jul 2 11:30:19.558880 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Jul 2 11:30:19.558884 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Jul 2 11:30:19.558889 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Jul 2 11:30:19.558893 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Jul 2 11:30:19.558897 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819ccfff] usable Jul 2 11:30:19.558901 kernel: BIOS-e820: [mem 0x00000000819cd000-0x00000000819cdfff] ACPI NVS Jul 2 11:30:19.558906 kernel: BIOS-e820: [mem 0x00000000819ce000-0x00000000819cefff] reserved Jul 2 11:30:19.558910 kernel: BIOS-e820: [mem 0x00000000819cf000-0x000000008afccfff] usable Jul 2 11:30:19.558913 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Jul 2 11:30:19.558918 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Jul 2 11:30:19.558923 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Jul 2 11:30:19.558928 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Jul 2 11:30:19.558932 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Jul 2 11:30:19.558937 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Jul 2 11:30:19.558941 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 2 11:30:19.558945 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Jul 2 11:30:19.558950 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Jul 2 11:30:19.558954 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 2 11:30:19.558958 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Jul 2 11:30:19.558963 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Jul 2 11:30:19.558967 kernel: NX (Execute Disable) protection: active Jul 2 11:30:19.558971 kernel: SMBIOS 3.2.1 present. 
Jul 2 11:30:19.558976 kernel: DMI: Supermicro SYS-5019C-MR/X11SCM-F, BIOS 1.9 09/16/2022 Jul 2 11:30:19.558981 kernel: tsc: Detected 3400.000 MHz processor Jul 2 11:30:19.558985 kernel: tsc: Detected 3399.906 MHz TSC Jul 2 11:30:19.558990 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 11:30:19.558995 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 11:30:19.558999 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Jul 2 11:30:19.559004 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 11:30:19.559008 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Jul 2 11:30:19.559013 kernel: Using GB pages for direct mapping Jul 2 11:30:19.559017 kernel: ACPI: Early table checksum verification disabled Jul 2 11:30:19.559022 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Jul 2 11:30:19.559027 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Jul 2 11:30:19.559031 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Jul 2 11:30:19.559036 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Jul 2 11:30:19.559042 kernel: ACPI: FACS 0x000000008C66CF80 000040 Jul 2 11:30:19.559047 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Jul 2 11:30:19.559053 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Jul 2 11:30:19.559058 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Jul 2 11:30:19.559063 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Jul 2 11:30:19.559068 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Jul 2 11:30:19.559072 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Jul 2 11:30:19.559077 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Jul 2 11:30:19.559082 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Jul 2 11:30:19.559087 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 2 11:30:19.559093 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Jul 2 11:30:19.559098 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Jul 2 11:30:19.559102 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 2 11:30:19.559107 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 2 11:30:19.559112 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Jul 2 11:30:19.559117 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Jul 2 11:30:19.559122 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 2 11:30:19.559127 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Jul 2 11:30:19.559132 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Jul 2 11:30:19.559137 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Jul 2 11:30:19.559142 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Jul 2 11:30:19.559147 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Jul 2 11:30:19.559152 kernel: ACPI: SSDT 
0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Jul 2 11:30:19.559157 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Jul 2 11:30:19.559161 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Jul 2 11:30:19.559166 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Jul 2 11:30:19.559171 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Jul 2 11:30:19.559177 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Jul 2 11:30:19.559182 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Jul 2 11:30:19.559187 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Jul 2 11:30:19.559191 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Jul 2 11:30:19.559196 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Jul 2 11:30:19.559201 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Jul 2 11:30:19.559206 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Jul 2 11:30:19.559211 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Jul 2 11:30:19.559216 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Jul 2 11:30:19.559221 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Jul 2 11:30:19.559229 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Jul 2 11:30:19.559234 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Jul 2 11:30:19.559239 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Jul 2 11:30:19.559243 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Jul 2 11:30:19.559248 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Jul 2 11:30:19.559253 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Jul 2 11:30:19.559258 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Jul 2 11:30:19.559264 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Jul 2 11:30:19.559269 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Jul 2 11:30:19.559288 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Jul 2 11:30:19.559293 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Jul 2 11:30:19.559298 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Jul 2 11:30:19.559302 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Jul 2 11:30:19.559307 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Jul 2 11:30:19.559312 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Jul 2 11:30:19.559317 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Jul 2 11:30:19.559322 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Jul 2 11:30:19.559327 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Jul 2 11:30:19.559332 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Jul 2 11:30:19.559336 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Jul 2 11:30:19.559341 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Jul 2 11:30:19.559346 kernel: ACPI: Reserving HEST table memory at [mem 
0x8c599ff8-0x8c59a273] Jul 2 11:30:19.559350 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Jul 2 11:30:19.559355 kernel: No NUMA configuration found Jul 2 11:30:19.559360 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Jul 2 11:30:19.559366 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Jul 2 11:30:19.559370 kernel: Zone ranges: Jul 2 11:30:19.559375 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 11:30:19.559380 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 2 11:30:19.559385 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Jul 2 11:30:19.559389 kernel: Movable zone start for each node Jul 2 11:30:19.559394 kernel: Early memory node ranges Jul 2 11:30:19.559399 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jul 2 11:30:19.559404 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jul 2 11:30:19.559409 kernel: node 0: [mem 0x0000000040400000-0x00000000819ccfff] Jul 2 11:30:19.559414 kernel: node 0: [mem 0x00000000819cf000-0x000000008afccfff] Jul 2 11:30:19.559419 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Jul 2 11:30:19.559424 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Jul 2 11:30:19.559429 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Jul 2 11:30:19.559433 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Jul 2 11:30:19.559438 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 11:30:19.559446 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jul 2 11:30:19.559452 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jul 2 11:30:19.559457 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jul 2 11:30:19.559462 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Jul 2 11:30:19.559468 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Jul 2 11:30:19.559473 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Jul 2 11:30:19.559478 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Jul 2 11:30:19.559484 kernel: ACPI: PM-Timer IO Port: 0x1808 Jul 2 11:30:19.559489 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 2 11:30:19.559494 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 2 11:30:19.559499 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 2 11:30:19.559505 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 2 11:30:19.559510 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jul 2 11:30:19.559515 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jul 2 11:30:19.559520 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jul 2 11:30:19.559525 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 2 11:30:19.559530 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jul 2 11:30:19.559535 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 2 11:30:19.559540 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jul 2 11:30:19.559545 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 2 11:30:19.559552 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 2 11:30:19.559557 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 2 11:30:19.559562 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 2 11:30:19.559567 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 2 11:30:19.559572 kernel: IOAPIC[0]: apic_id 2, version 32, address 
0xfec00000, GSI 0-119 Jul 2 11:30:19.559577 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 2 11:30:19.559582 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 11:30:19.559587 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 11:30:19.559592 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 2 11:30:19.559598 kernel: TSC deadline timer available Jul 2 11:30:19.559603 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Jul 2 11:30:19.559609 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Jul 2 11:30:19.559614 kernel: Booting paravirtualized kernel on bare hardware Jul 2 11:30:19.559619 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 11:30:19.559624 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Jul 2 11:30:19.559629 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Jul 2 11:30:19.559634 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Jul 2 11:30:19.559639 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jul 2 11:30:19.559645 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Jul 2 11:30:19.559650 kernel: Policy zone: Normal Jul 2 11:30:19.559656 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 11:30:19.559661 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 11:30:19.559666 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jul 2 11:30:19.559672 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jul 2 11:30:19.559677 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 11:30:19.559683 kernel: Memory: 32722604K/33452980K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 730116K reserved, 0K cma-reserved) Jul 2 11:30:19.559688 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jul 2 11:30:19.559693 kernel: ftrace: allocating 34514 entries in 135 pages Jul 2 11:30:19.559698 kernel: ftrace: allocated 135 pages with 4 groups Jul 2 11:30:19.559703 kernel: rcu: Hierarchical RCU implementation. Jul 2 11:30:19.559709 kernel: rcu: RCU event tracing is enabled. Jul 2 11:30:19.559714 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jul 2 11:30:19.559719 kernel: Rude variant of Tasks RCU enabled. Jul 2 11:30:19.559724 kernel: Tracing variant of Tasks RCU enabled. Jul 2 11:30:19.559730 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 2 11:30:19.559736 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jul 2 11:30:19.559741 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jul 2 11:30:19.559746 kernel: random: crng init done Jul 2 11:30:19.559751 kernel: Console: colour dummy device 80x25 Jul 2 11:30:19.559756 kernel: printk: console [tty0] enabled Jul 2 11:30:19.559761 kernel: printk: console [ttyS1] enabled Jul 2 11:30:19.559767 kernel: ACPI: Core revision 20210730 Jul 2 11:30:19.559772 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Jul 2 11:30:19.559777 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 11:30:19.559783 kernel: DMAR: Host address width 39 Jul 2 11:30:19.559788 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jul 2 11:30:19.559793 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jul 2 11:30:19.559798 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Jul 2 11:30:19.559803 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Jul 2 11:30:19.559808 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jul 2 11:30:19.559814 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Jul 2 11:30:19.559819 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jul 2 11:30:19.559824 kernel: x2apic enabled Jul 2 11:30:19.559830 kernel: Switched APIC routing to cluster x2apic. Jul 2 11:30:19.559835 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jul 2 11:30:19.559840 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jul 2 11:30:19.559845 kernel: CPU0: Thermal monitoring enabled (TM1) Jul 2 11:30:19.559850 kernel: process: using mwait in idle threads Jul 2 11:30:19.559856 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 2 11:30:19.559861 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jul 2 11:30:19.559866 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 11:30:19.559871 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Jul 2 11:30:19.559877 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 2 11:30:19.559882 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 2 11:30:19.559887 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 2 11:30:19.559892 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 11:30:19.559897 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 2 11:30:19.559902 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 2 11:30:19.559907 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 11:30:19.559912 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 2 11:30:19.559917 kernel: TAA: Mitigation: TSX disabled Jul 2 11:30:19.559922 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jul 2 11:30:19.559927 kernel: SRBDS: Mitigation: Microcode Jul 2 11:30:19.559933 kernel: GDS: Vulnerable: No microcode Jul 2 11:30:19.559938 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 11:30:19.559943 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 11:30:19.559948 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 11:30:19.559953 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jul 2 11:30:19.559958 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jul 2 11:30:19.559963 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 11:30:19.559968 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jul 2 11:30:19.559973 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jul 2 11:30:19.559978 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Jul 2 11:30:19.559984 kernel: Freeing SMP alternatives memory: 32K Jul 2 11:30:19.559989 kernel: pid_max: default: 32768 minimum: 301 Jul 2 11:30:19.559994 kernel: LSM: Security Framework initializing Jul 2 11:30:19.559999 kernel: SELinux: Initializing. Jul 2 11:30:19.560004 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 11:30:19.560010 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 11:30:19.560015 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jul 2 11:30:19.560020 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 2 11:30:19.560025 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jul 2 11:30:19.560030 kernel: ... version: 4 Jul 2 11:30:19.560035 kernel: ... bit width: 48 Jul 2 11:30:19.560040 kernel: ... generic registers: 4 Jul 2 11:30:19.560046 kernel: ... value mask: 0000ffffffffffff Jul 2 11:30:19.560051 kernel: ... max period: 00007fffffffffff Jul 2 11:30:19.560056 kernel: ... fixed-purpose events: 3 Jul 2 11:30:19.560061 kernel: ... event mask: 000000070000000f Jul 2 11:30:19.560066 kernel: signal: max sigframe size: 2032 Jul 2 11:30:19.560071 kernel: rcu: Hierarchical SRCU implementation. Jul 2 11:30:19.560077 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jul 2 11:30:19.560082 kernel: smp: Bringing up secondary CPUs ... Jul 2 11:30:19.560087 kernel: x86: Booting SMP configuration: Jul 2 11:30:19.560093 kernel: .... 
node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 Jul 2 11:30:19.560098 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jul 2 11:30:19.560103 kernel: #9 #10 #11 #12 #13 #14 #15 Jul 2 11:30:19.560108 kernel: smp: Brought up 1 node, 16 CPUs Jul 2 11:30:19.560113 kernel: smpboot: Max logical packages: 1 Jul 2 11:30:19.560118 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jul 2 11:30:19.560124 kernel: devtmpfs: initialized Jul 2 11:30:19.560129 kernel: x86/mm: Memory block size: 128MB Jul 2 11:30:19.560134 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819cd000-0x819cdfff] (4096 bytes) Jul 2 11:30:19.560140 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Jul 2 11:30:19.560145 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 11:30:19.560150 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jul 2 11:30:19.560155 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 11:30:19.560160 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 11:30:19.560166 kernel: audit: initializing netlink subsys (disabled) Jul 2 11:30:19.560171 kernel: audit: type=2000 audit(1719919813.041:1): state=initialized audit_enabled=0 res=1 Jul 2 11:30:19.560176 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 11:30:19.560181 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 11:30:19.560187 kernel: cpuidle: using governor menu Jul 2 11:30:19.560192 kernel: ACPI: bus type PCI registered Jul 2 11:30:19.560197 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 11:30:19.560202 kernel: dca service started, version 1.12.1 Jul 2 11:30:19.560207 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jul 2 11:30:19.560212 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 Jul 2 11:30:19.560217 kernel: PCI: Using configuration type 1 for base access Jul 2 11:30:19.560222 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jul 2 11:30:19.560229 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 2 11:30:19.560256 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 11:30:19.560261 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 11:30:19.560266 kernel: ACPI: Added _OSI(Module Device) Jul 2 11:30:19.560272 kernel: ACPI: Added _OSI(Processor Device) Jul 2 11:30:19.560277 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 11:30:19.560282 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 11:30:19.560301 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 2 11:30:19.560306 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 2 11:30:19.560311 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 2 11:30:19.560317 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jul 2 11:30:19.560322 kernel: ACPI: Dynamic OEM Table Load: Jul 2 11:30:19.560327 kernel: ACPI: SSDT 0xFFFFA029C0221400 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jul 2 11:30:19.560332 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked Jul 2 11:30:19.560338 kernel: ACPI: Dynamic OEM Table Load: Jul 2 11:30:19.560343 kernel: ACPI: SSDT 0xFFFFA029C1AEEC00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jul 2 11:30:19.560348 kernel: ACPI: Dynamic OEM Table Load: Jul 2 11:30:19.560353 kernel: ACPI: SSDT 0xFFFFA029C1A61800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jul 2 11:30:19.560358 kernel: ACPI: Dynamic OEM Table Load: Jul 2 11:30:19.560364 kernel: ACPI: SSDT 0xFFFFA029C1B50800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jul 2 11:30:19.560369 kernel: ACPI: Dynamic OEM Table Load: Jul 2 11:30:19.560374 kernel: ACPI: SSDT 0xFFFFA029C0154000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jul 2 11:30:19.560379 kernel: ACPI: Dynamic OEM Table Load: Jul 2 11:30:19.560384 kernel: ACPI: SSDT 0xFFFFA029C1AEFC00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jul 2 11:30:19.560389 kernel: ACPI: Interpreter enabled Jul 2 11:30:19.560394 kernel: ACPI: PM: (supports S0 S5) Jul 2 11:30:19.560399 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 11:30:19.560404 kernel: HEST: Enabling Firmware First mode for corrected errors. Jul 2 11:30:19.560410 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jul 2 11:30:19.560415 kernel: HEST: Table parsing has been initialized. Jul 2 11:30:19.560420 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Jul 2 11:30:19.560426 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 11:30:19.560431 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jul 2 11:30:19.560436 kernel: ACPI: PM: Power Resource [USBC] Jul 2 11:30:19.560441 kernel: ACPI: PM: Power Resource [V0PR] Jul 2 11:30:19.560446 kernel: ACPI: PM: Power Resource [V1PR] Jul 2 11:30:19.560451 kernel: ACPI: PM: Power Resource [V2PR] Jul 2 11:30:19.560456 kernel: ACPI: PM: Power Resource [WRST] Jul 2 11:30:19.560462 kernel: ACPI: PM: Power Resource [FN00] Jul 2 11:30:19.560467 kernel: ACPI: PM: Power Resource [FN01] Jul 2 11:30:19.560472 kernel: ACPI: PM: Power Resource [FN02] Jul 2 11:30:19.560477 kernel: ACPI: PM: Power Resource [FN03] Jul 2 11:30:19.560482 kernel: ACPI: PM: Power Resource [FN04] Jul 2 11:30:19.560487 kernel: ACPI: PM: Power Resource [PIN] Jul 2 11:30:19.560492 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jul 2 11:30:19.560555 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 11:30:19.560603 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jul 2 11:30:19.560644 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jul 2 11:30:19.560652 kernel: PCI host bridge to bus 0000:00 Jul 2 11:30:19.560695 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 11:30:19.560733 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 11:30:19.560769 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 11:30:19.560805 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Jul 2 11:30:19.560845 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jul 2 11:30:19.560881 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jul 2 11:30:19.560931 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jul 2 11:30:19.560982 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jul 2 11:30:19.561026 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jul 2 11:30:19.561072 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jul 2 11:30:19.561116 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Jul 2 11:30:19.561163 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jul 2 11:30:19.561206 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Jul 2 11:30:19.561271 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jul 2 11:30:19.561314 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Jul 2 11:30:19.561359 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jul 2 11:30:19.561407 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jul 2 11:30:19.561451 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Jul 2 11:30:19.561493 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Jul 2 11:30:19.561540 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jul 2 11:30:19.561585 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jul 2 11:30:19.561633 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jul 2 11:30:19.561678 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jul 2 11:30:19.561724 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jul 2 11:30:19.561767 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Jul 2 11:30:19.561809 
kernel: pci 0000:00:16.0: PME# supported from D3hot Jul 2 11:30:19.561854 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jul 2 11:30:19.561896 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Jul 2 11:30:19.561938 kernel: pci 0000:00:16.1: PME# supported from D3hot Jul 2 11:30:19.561986 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jul 2 11:30:19.562028 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Jul 2 11:30:19.562070 kernel: pci 0000:00:16.4: PME# supported from D3hot Jul 2 11:30:19.562115 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jul 2 11:30:19.562158 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Jul 2 11:30:19.562202 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Jul 2 11:30:19.562252 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Jul 2 11:30:19.562298 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Jul 2 11:30:19.562340 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Jul 2 11:30:19.562383 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Jul 2 11:30:19.562425 kernel: pci 0000:00:17.0: PME# supported from D3hot Jul 2 11:30:19.562473 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jul 2 11:30:19.562517 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jul 2 11:30:19.562564 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jul 2 11:30:19.562609 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jul 2 11:30:19.562657 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jul 2 11:30:19.562703 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jul 2 11:30:19.562749 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jul 2 11:30:19.562793 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jul 2 11:30:19.562841 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Jul 2 11:30:19.562886 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Jul 2 11:30:19.562934 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jul 2 11:30:19.562977 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jul 2 11:30:19.563027 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jul 2 11:30:19.563073 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jul 2 11:30:19.563117 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Jul 2 11:30:19.563160 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jul 2 11:30:19.563209 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jul 2 11:30:19.563255 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jul 2 11:30:19.563307 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Jul 2 11:30:19.563352 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jul 2 11:30:19.563396 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Jul 2 11:30:19.563441 kernel: pci 0000:01:00.0: PME# supported from D3cold Jul 2 11:30:19.563485 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jul 2 11:30:19.563530 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jul 2 11:30:19.563578 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Jul 2 11:30:19.563625 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jul 2 11:30:19.563669 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff 
pref] Jul 2 11:30:19.563714 kernel: pci 0000:01:00.1: PME# supported from D3cold Jul 2 11:30:19.563757 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jul 2 11:30:19.563802 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jul 2 11:30:19.563845 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 2 11:30:19.563888 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jul 2 11:30:19.563933 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 2 11:30:19.563978 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 2 11:30:19.564098 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jul 2 11:30:19.564143 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jul 2 11:30:19.564188 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jul 2 11:30:19.564234 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jul 2 11:30:19.564280 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jul 2 11:30:19.564324 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 2 11:30:19.564370 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 2 11:30:19.564413 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 2 11:30:19.564457 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 2 11:30:19.564505 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jul 2 11:30:19.564551 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jul 2 11:30:19.564596 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jul 2 11:30:19.564639 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jul 2 11:30:19.564686 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jul 2 11:30:19.564730 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jul 2 11:30:19.564774 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jul 2 11:30:19.564817 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 2 11:30:19.564859 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 2 11:30:19.564903 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 2 11:30:19.564951 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jul 2 11:30:19.564996 kernel: pci 0000:06:00.0: enabling Extended Tags Jul 2 11:30:19.565042 kernel: pci 0000:06:00.0: supports D1 D2 Jul 2 11:30:19.565086 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 11:30:19.565128 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 2 11:30:19.565172 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jul 2 11:30:19.565214 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jul 2 11:30:19.565267 kernel: pci_bus 0000:07: extended config space not accessible Jul 2 11:30:19.565318 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jul 2 11:30:19.565369 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jul 2 11:30:19.565415 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jul 2 11:30:19.565463 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jul 2 11:30:19.565509 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 11:30:19.565556 kernel: pci 0000:07:00.0: supports D1 D2 Jul 2 11:30:19.565602 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 11:30:19.565650 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jul 2 11:30:19.565696 kernel: pci 0000:06:00.0: 
bridge window [io 0x3000-0x3fff] Jul 2 11:30:19.565741 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 2 11:30:19.565749 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jul 2 11:30:19.565755 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jul 2 11:30:19.565760 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jul 2 11:30:19.565766 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jul 2 11:30:19.565772 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jul 2 11:30:19.565777 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jul 2 11:30:19.565783 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jul 2 11:30:19.565790 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jul 2 11:30:19.565795 kernel: iommu: Default domain type: Translated Jul 2 11:30:19.565801 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 11:30:19.565847 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jul 2 11:30:19.565895 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 11:30:19.565941 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jul 2 11:30:19.565949 kernel: vgaarb: loaded Jul 2 11:30:19.565956 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 11:30:19.565962 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 11:30:19.565968 kernel: PTP clock support registered Jul 2 11:30:19.565974 kernel: PCI: Using ACPI for IRQ routing Jul 2 11:30:19.565979 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 11:30:19.565985 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jul 2 11:30:19.565990 kernel: e820: reserve RAM buffer [mem 0x819cd000-0x83ffffff] Jul 2 11:30:19.565996 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jul 2 11:30:19.566001 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jul 2 11:30:19.566007 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jul 2 11:30:19.566013 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jul 2 11:30:19.566018 kernel: clocksource: Switched to clocksource tsc-early Jul 2 11:30:19.566024 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 11:30:19.566030 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 11:30:19.566035 kernel: pnp: PnP ACPI init Jul 2 11:30:19.566081 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jul 2 11:30:19.566123 kernel: pnp 00:02: [dma 0 disabled] Jul 2 11:30:19.566166 kernel: pnp 00:03: [dma 0 disabled] Jul 2 11:30:19.566212 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jul 2 11:30:19.566254 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jul 2 11:30:19.566316 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jul 2 11:30:19.566357 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jul 2 11:30:19.566396 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jul 2 11:30:19.566433 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jul 2 11:30:19.566473 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jul 2 11:30:19.566510 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jul 2 11:30:19.566547 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jul 2 11:30:19.566586 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jul 2 11:30:19.566623 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could 
not be reserved Jul 2 11:30:19.566665 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jul 2 11:30:19.566703 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jul 2 11:30:19.566743 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jul 2 11:30:19.566780 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jul 2 11:30:19.566818 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jul 2 11:30:19.566855 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jul 2 11:30:19.566893 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jul 2 11:30:19.566935 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jul 2 11:30:19.566942 kernel: pnp: PnP ACPI: found 10 devices Jul 2 11:30:19.566949 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 11:30:19.566955 kernel: NET: Registered PF_INET protocol family Jul 2 11:30:19.566961 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 11:30:19.566966 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 2 11:30:19.566972 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 11:30:19.566977 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 11:30:19.566983 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Jul 2 11:30:19.566988 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jul 2 11:30:19.566994 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 2 11:30:19.567001 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 2 11:30:19.567006 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 11:30:19.567012 kernel: NET: Registered PF_XDP protocol family Jul 2 11:30:19.567054 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jul 2 11:30:19.567097 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jul 2 11:30:19.567139 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jul 2 11:30:19.567184 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jul 2 11:30:19.567230 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jul 2 11:30:19.567323 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jul 2 11:30:19.567367 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jul 2 11:30:19.567409 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 2 11:30:19.567453 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jul 2 11:30:19.567495 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 2 11:30:19.567538 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 2 11:30:19.567582 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 2 11:30:19.567625 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 2 11:30:19.567668 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 2 11:30:19.567711 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jul 2 11:30:19.567753 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 2 11:30:19.567796 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 2 11:30:19.567838 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 2 11:30:19.567903 kernel: pci 
0000:06:00.0: PCI bridge to [bus 07] Jul 2 11:30:19.567947 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jul 2 11:30:19.567991 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 2 11:30:19.568035 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 2 11:30:19.568079 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jul 2 11:30:19.568122 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jul 2 11:30:19.568161 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 2 11:30:19.568199 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 11:30:19.568240 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 11:30:19.568279 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 11:30:19.568317 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jul 2 11:30:19.568354 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jul 2 11:30:19.568398 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jul 2 11:30:19.568439 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jul 2 11:30:19.568485 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jul 2 11:30:19.568527 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jul 2 11:30:19.568570 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jul 2 11:30:19.568611 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jul 2 11:30:19.568654 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jul 2 11:30:19.568694 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jul 2 11:30:19.568737 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jul 2 11:30:19.568779 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jul 2 11:30:19.568788 kernel: PCI: CLS 64 bytes, default 64 Jul 2 11:30:19.568794 kernel: DMAR: No ATSR found Jul 2 11:30:19.568799 kernel: DMAR: No SATC found Jul 2 11:30:19.568805 kernel: DMAR: dmar0: Using Queued invalidation Jul 2 11:30:19.568850 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jul 2 11:30:19.568894 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jul 2 11:30:19.568938 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jul 2 11:30:19.568981 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jul 2 11:30:19.569027 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jul 2 11:30:19.569070 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jul 2 11:30:19.569112 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jul 2 11:30:19.569156 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jul 2 11:30:19.569199 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jul 2 11:30:19.569244 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jul 2 11:30:19.569287 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jul 2 11:30:19.569330 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jul 2 11:30:19.569372 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jul 2 11:30:19.569419 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jul 2 11:30:19.569461 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jul 2 11:30:19.569504 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jul 2 11:30:19.569547 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jul 2 11:30:19.569589 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jul 2 11:30:19.569632 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jul 2 11:30:19.569675 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jul 2 11:30:19.569717 
kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jul 2 11:30:19.569763 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jul 2 11:30:19.569809 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jul 2 11:30:19.569853 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jul 2 11:30:19.569897 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jul 2 11:30:19.569943 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jul 2 11:30:19.569989 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jul 2 11:30:19.569997 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jul 2 11:30:19.570003 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 2 11:30:19.570010 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jul 2 11:30:19.570016 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jul 2 11:30:19.570021 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jul 2 11:30:19.570027 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jul 2 11:30:19.570033 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jul 2 11:30:19.570080 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jul 2 11:30:19.570088 kernel: Initialise system trusted keyrings Jul 2 11:30:19.570094 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jul 2 11:30:19.570101 kernel: Key type asymmetric registered Jul 2 11:30:19.570106 kernel: Asymmetric key parser 'x509' registered Jul 2 11:30:19.570111 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 11:30:19.570117 kernel: io scheduler mq-deadline registered Jul 2 11:30:19.570123 kernel: io scheduler kyber registered Jul 2 11:30:19.570128 kernel: io scheduler bfq registered Jul 2 11:30:19.570171 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jul 2 11:30:19.570216 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jul 2 11:30:19.570262 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jul 2 11:30:19.570307 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jul 2 11:30:19.570351 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jul 2 11:30:19.570394 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jul 2 11:30:19.570442 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jul 2 11:30:19.570450 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jul 2 11:30:19.570456 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jul 2 11:30:19.570462 kernel: pstore: Registered erst as persistent store backend Jul 2 11:30:19.570469 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 11:30:19.570475 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 11:30:19.570481 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 11:30:19.570486 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 2 11:30:19.570492 kernel: hpet_acpi_add: no address or irqs in _CRS Jul 2 11:30:19.570537 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jul 2 11:30:19.570545 kernel: i8042: PNP: No PS/2 controller found. 
Jul 2 11:30:19.570585 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jul 2 11:30:19.570626 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jul 2 11:30:19.570666 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-07-02T11:30:18 UTC (1719919818) Jul 2 11:30:19.570705 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jul 2 11:30:19.570713 kernel: fail to initialize ptp_kvm Jul 2 11:30:19.570718 kernel: intel_pstate: Intel P-state driver initializing Jul 2 11:30:19.570724 kernel: intel_pstate: Disabling energy efficiency optimization Jul 2 11:30:19.570730 kernel: intel_pstate: HWP enabled Jul 2 11:30:19.570735 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Jul 2 11:30:19.570741 kernel: vesafb: scrolling: redraw Jul 2 11:30:19.570748 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Jul 2 11:30:19.570753 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000006ef972ae, using 768k, total 768k Jul 2 11:30:19.570759 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 11:30:19.570765 kernel: fb0: VESA VGA frame buffer device Jul 2 11:30:19.570770 kernel: NET: Registered PF_INET6 protocol family Jul 2 11:30:19.570776 kernel: Segment Routing with IPv6 Jul 2 11:30:19.570782 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 11:30:19.570787 kernel: NET: Registered PF_PACKET protocol family Jul 2 11:30:19.570793 kernel: Key type dns_resolver registered Jul 2 11:30:19.570799 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Jul 2 11:30:19.570805 kernel: microcode: Microcode Update Driver: v2.2. Jul 2 11:30:19.570810 kernel: IPI shorthand broadcast: enabled Jul 2 11:30:19.570816 kernel: sched_clock: Marking stable (1735860041, 1339417957)->(4517827522, -1442549524) Jul 2 11:30:19.570822 kernel: registered taskstats version 1 Jul 2 11:30:19.570827 kernel: Loading compiled-in X.509 certificates Jul 2 11:30:19.570833 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 11:30:19.570838 kernel: Key type .fscrypt registered Jul 2 11:30:19.570844 kernel: Key type fscrypt-provisioning registered Jul 2 11:30:19.570850 kernel: pstore: Using crash dump compression: deflate Jul 2 11:30:19.570856 kernel: ima: Allocated hash algorithm: sha1 Jul 2 11:30:19.570862 kernel: ima: No architecture policies found Jul 2 11:30:19.570867 kernel: clk: Disabling unused clocks Jul 2 11:30:19.570873 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 11:30:19.570878 kernel: Write protecting the kernel read-only data: 28672k Jul 2 11:30:19.570884 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 11:30:19.570890 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 11:30:19.570895 kernel: Run /init as init process Jul 2 11:30:19.570902 kernel: with arguments: Jul 2 11:30:19.570908 kernel: /init Jul 2 11:30:19.570913 kernel: with environment: Jul 2 11:30:19.570918 kernel: HOME=/ Jul 2 11:30:19.570924 kernel: TERM=linux Jul 2 11:30:19.570929 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 11:30:19.570936 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 11:30:19.570943 systemd[1]: Detected architecture x86-64. 
Jul 2 11:30:19.570950 systemd[1]: Running in initrd. Jul 2 11:30:19.570956 systemd[1]: No hostname configured, using default hostname. Jul 2 11:30:19.570962 systemd[1]: Hostname set to . Jul 2 11:30:19.570967 systemd[1]: Initializing machine ID from random generator. Jul 2 11:30:19.570973 systemd[1]: Queued start job for default target initrd.target. Jul 2 11:30:19.570979 systemd[1]: Started systemd-ask-password-console.path. Jul 2 11:30:19.570985 systemd[1]: Reached target cryptsetup.target. Jul 2 11:30:19.570991 systemd[1]: Reached target paths.target. Jul 2 11:30:19.570997 systemd[1]: Reached target slices.target. Jul 2 11:30:19.571003 systemd[1]: Reached target swap.target. Jul 2 11:30:19.571009 systemd[1]: Reached target timers.target. Jul 2 11:30:19.571015 systemd[1]: Listening on iscsid.socket. Jul 2 11:30:19.571021 systemd[1]: Listening on iscsiuio.socket. Jul 2 11:30:19.571027 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 11:30:19.571033 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 11:30:19.571039 systemd[1]: Listening on systemd-journald.socket. Jul 2 11:30:19.571045 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Jul 2 11:30:19.571051 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Jul 2 11:30:19.571057 kernel: clocksource: Switched to clocksource tsc Jul 2 11:30:19.571062 systemd[1]: Listening on systemd-networkd.socket. Jul 2 11:30:19.571068 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 11:30:19.571074 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 11:30:19.571080 systemd[1]: Reached target sockets.target. Jul 2 11:30:19.571086 systemd[1]: Starting kmod-static-nodes.service... Jul 2 11:30:19.571093 systemd[1]: Finished network-cleanup.service. Jul 2 11:30:19.571099 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 11:30:19.571104 systemd[1]: Starting systemd-journald.service... Jul 2 11:30:19.571110 systemd[1]: Starting systemd-modules-load.service... Jul 2 11:30:19.571119 systemd-journald[267]: Journal started Jul 2 11:30:19.571144 systemd-journald[267]: Runtime Journal (/run/log/journal/63f93d77656640839aada6fd82b34028) is 8.0M, max 640.1M, 632.1M free. Jul 2 11:30:19.572377 systemd-modules-load[268]: Inserted module 'overlay' Jul 2 11:30:19.601596 kernel: audit: type=1334 audit(1719919819.578:2): prog-id=6 op=LOAD Jul 2 11:30:19.601606 systemd[1]: Starting systemd-resolved.service... Jul 2 11:30:19.578000 audit: BPF prog-id=6 op=LOAD Jul 2 11:30:19.645246 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 11:30:19.645262 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 11:30:19.679269 kernel: Bridge firewalling registered Jul 2 11:30:19.679285 systemd[1]: Started systemd-journald.service. Jul 2 11:30:19.693353 systemd-modules-load[268]: Inserted module 'br_netfilter' Jul 2 11:30:19.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:19.695674 systemd-resolved[270]: Positive Trust Anchors: Jul 2 11:30:19.816404 kernel: audit: type=1130 audit(1719919819.701:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:30:19.816417 kernel: SCSI subsystem initialized Jul 2 11:30:19.816425 kernel: audit: type=1130 audit(1719919819.754:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:19.816432 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 11:30:19.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:19.695680 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 11:30:19.916348 kernel: device-mapper: uevent: version 1.0.3 Jul 2 11:30:19.916359 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 11:30:19.916368 kernel: audit: type=1130 audit(1719919819.873:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:19.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:19.695700 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 11:30:20.014428 kernel: audit: type=1130 audit(1719919819.925:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:19.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:19.697265 systemd-resolved[270]: Defaulting to hostname 'linux'. Jul 2 11:30:20.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:19.701442 systemd[1]: Started systemd-resolved.service. Jul 2 11:30:20.128502 kernel: audit: type=1130 audit(1719919820.022:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:20.128518 kernel: audit: type=1130 audit(1719919820.075:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:20.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:30:19.754401 systemd[1]: Finished kmod-static-nodes.service. Jul 2 11:30:19.874091 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 11:30:19.916968 systemd-modules-load[268]: Inserted module 'dm_multipath' Jul 2 11:30:19.925530 systemd[1]: Finished systemd-modules-load.service. Jul 2 11:30:20.022604 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 11:30:20.075510 systemd[1]: Reached target nss-lookup.target. Jul 2 11:30:20.137913 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 11:30:20.144864 systemd[1]: Starting systemd-sysctl.service... Jul 2 11:30:20.158806 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 11:30:20.159518 systemd[1]: Finished systemd-sysctl.service. Jul 2 11:30:20.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:20.161475 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 11:30:20.279439 kernel: audit: type=1130 audit(1719919820.159:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:20.279452 kernel: audit: type=1130 audit(1719919820.221:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:20.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:20.221572 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 11:30:20.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:20.288810 systemd[1]: Starting dracut-cmdline.service... Jul 2 11:30:20.310343 dracut-cmdline[293]: dracut-dracut-053 Jul 2 11:30:20.310343 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Jul 2 11:30:20.310343 dracut-cmdline[293]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 11:30:20.382312 kernel: Loading iSCSI transport class v2.0-870. Jul 2 11:30:20.382327 kernel: iscsi: registered transport (tcp) Jul 2 11:30:20.437056 kernel: iscsi: registered transport (qla4xxx) Jul 2 11:30:20.437075 kernel: QLogic iSCSI HBA Driver Jul 2 11:30:20.453362 systemd[1]: Finished dracut-cmdline.service. Jul 2 11:30:20.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:20.461940 systemd[1]: Starting dracut-pre-udev.service... 
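The dracut-cmdline entries above simply echo the kernel command line back as key=value parameters. As a rough illustration of how such a string breaks down (a simplified sketch, not dracut's or the kernel's actual parser; it ignores quoting and repeated keys):

    # Illustrative only: split a kernel command line like the one above into
    # flag and key=value parameters.
    def parse_cmdline(cmdline: str) -> dict:
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            # Bare flags without '=' (e.g. flatcar.autologin) are stored as True.
            params[key] = value if sep else True
        return params

    args = parse_cmdline(
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "root=LABEL=ROOT flatcar.oem.id=packet flatcar.autologin"
    )
    print(args["flatcar.oem.id"])     # -> "packet"
    print(args["flatcar.autologin"])  # -> True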
Jul 2 11:30:20.518259 kernel: raid6: avx2x4 gen() 49327 MB/s Jul 2 11:30:20.553283 kernel: raid6: avx2x4 xor() 14928 MB/s Jul 2 11:30:20.588258 kernel: raid6: avx2x2 gen() 53890 MB/s Jul 2 11:30:20.623284 kernel: raid6: avx2x2 xor() 33411 MB/s Jul 2 11:30:20.658281 kernel: raid6: avx2x1 gen() 46404 MB/s Jul 2 11:30:20.691258 kernel: raid6: avx2x1 xor() 29089 MB/s Jul 2 11:30:20.725265 kernel: raid6: sse2x4 gen() 22280 MB/s Jul 2 11:30:20.759258 kernel: raid6: sse2x4 xor() 11976 MB/s Jul 2 11:30:20.793283 kernel: raid6: sse2x2 gen() 22542 MB/s Jul 2 11:30:20.827266 kernel: raid6: sse2x2 xor() 13998 MB/s Jul 2 11:30:20.861281 kernel: raid6: sse2x1 gen() 19072 MB/s Jul 2 11:30:20.912824 kernel: raid6: sse2x1 xor() 9284 MB/s Jul 2 11:30:20.912839 kernel: raid6: using algorithm avx2x2 gen() 53890 MB/s Jul 2 11:30:20.912847 kernel: raid6: .... xor() 33411 MB/s, rmw enabled Jul 2 11:30:20.930886 kernel: raid6: using avx2x2 recovery algorithm Jul 2 11:30:20.977230 kernel: xor: automatically using best checksumming function avx Jul 2 11:30:21.055277 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 11:30:21.060166 systemd[1]: Finished dracut-pre-udev.service. Jul 2 11:30:21.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:21.060000 audit: BPF prog-id=7 op=LOAD Jul 2 11:30:21.060000 audit: BPF prog-id=8 op=LOAD Jul 2 11:30:21.060951 systemd[1]: Starting systemd-udevd.service... Jul 2 11:30:21.068948 systemd-udevd[473]: Using default interface naming scheme 'v252'. Jul 2 11:30:21.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:21.081730 systemd[1]: Started systemd-udevd.service. Jul 2 11:30:21.100315 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 11:30:21.131627 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation Jul 2 11:30:21.148952 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 11:30:21.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:21.158877 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 11:30:21.208733 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 11:30:21.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:21.236270 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 11:30:21.260238 kernel: ACPI: bus type USB registered Jul 2 11:30:21.260277 kernel: libata version 3.00 loaded. Jul 2 11:30:21.260290 kernel: usbcore: registered new interface driver usbfs Jul 2 11:30:21.278234 kernel: usbcore: registered new interface driver hub Jul 2 11:30:21.278262 kernel: usbcore: registered new device driver usb Jul 2 11:30:21.280303 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jul 2 11:30:21.280338 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jul 2 11:30:21.386714 kernel: pps pps0: new PPS source ptp0 Jul 2 11:30:21.386835 kernel: AVX2 version of gcm_enc/dec engaged. 
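The raid6 lines above are the kernel benchmarking each gen()/xor() implementation and keeping the fastest one. A minimal sketch of that selection, using the throughput figures measured on this boot (the real driver also weighs instruction-set availability and the xor results):

    # Pick the algorithm with the highest benchmarked gen() rate, as logged above.
    gen_mbps = {
        "avx2x4": 49327,
        "avx2x2": 53890,
        "avx2x1": 46404,
        "sse2x4": 22280,
        "sse2x2": 22542,
        "sse2x1": 19072,
    }
    best = max(gen_mbps, key=gen_mbps.get)
    print(f"raid6: using algorithm {best} gen() {gen_mbps[best]} MB/s")
    # -> raid6: using algorithm avx2x2 gen() 53890 MB/s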
Jul 2 11:30:21.386845 kernel: igb 0000:03:00.0: added PHC on eth0 Jul 2 11:30:21.419792 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 2 11:30:21.419878 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:44 Jul 2 11:30:21.419943 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jul 2 11:30:21.453228 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jul 2 11:30:21.453302 kernel: AES CTR mode by8 optimization enabled Jul 2 11:30:21.505000 kernel: ahci 0000:00:17.0: version 3.0 Jul 2 11:30:21.505088 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jul 2 11:30:21.505151 kernel: pps pps1: new PPS source ptp1 Jul 2 11:30:21.505217 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jul 2 11:30:21.537630 kernel: igb 0000:04:00.0: added PHC on eth1 Jul 2 11:30:21.557232 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 2 11:30:21.557334 kernel: mlx5_core 0000:01:00.0: firmware version: 14.29.2002 Jul 2 11:30:21.557402 kernel: scsi host0: ahci Jul 2 11:30:21.557484 kernel: scsi host1: ahci Jul 2 11:30:21.557542 kernel: scsi host2: ahci Jul 2 11:30:21.557595 kernel: scsi host3: ahci Jul 2 11:30:21.557646 kernel: scsi host4: ahci Jul 2 11:30:21.557696 kernel: scsi host5: ahci Jul 2 11:30:21.557747 kernel: scsi host6: ahci Jul 2 11:30:21.557801 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 137 Jul 2 11:30:21.557809 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 137 Jul 2 11:30:21.557816 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 137 Jul 2 11:30:21.557822 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 137 Jul 2 11:30:21.557829 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 137 Jul 2 11:30:21.557836 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 137 Jul 2 11:30:21.557842 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 137 Jul 2 11:30:21.567822 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:45 Jul 2 11:30:21.594784 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 2 11:30:21.606093 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jul 2 11:30:21.809485 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jul 2 11:30:21.867296 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jul 2 11:30:21.882264 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 2 11:30:21.882283 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jul 2 11:30:21.895266 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Jul 2 11:30:21.895339 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 2 11:30:21.925270 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 2 11:30:21.938229 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 2 11:30:21.952268 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 2 11:30:21.965229 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 2 11:30:21.978229 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jul 2 11:30:21.993262 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jul 2 11:30:22.037815 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 2 11:30:22.037831 kernel: ata1.00: Features: NCQ-prio Jul 2 11:30:22.037838 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 2 11:30:22.064237 kernel: ata2.00: Features: NCQ-prio Jul 2 11:30:22.081230 kernel: ata1.00: configured for UDMA/133 Jul 2 11:30:22.081245 kernel: ata2.00: configured for UDMA/133 Jul 2 11:30:22.081254 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jul 2 11:30:22.109279 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jul 2 11:30:22.131230 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 2 11:30:22.131312 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jul 2 11:30:22.131370 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Jul 2 11:30:22.153202 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jul 2 11:30:22.218403 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jul 2 11:30:22.218519 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 2 11:30:22.218573 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jul 2 11:30:22.233848 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jul 2 11:30:22.260562 kernel: hub 1-0:1.0: USB hub found Jul 2 11:30:22.260731 kernel: hub 1-0:1.0: 16 ports detected Jul 2 11:30:22.274301 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 11:30:22.274316 kernel: hub 2-0:1.0: USB hub found Jul 2 11:30:22.274392 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jul 2 11:30:22.284615 kernel: ata2.00: Enabling discard_zeroes_data Jul 2 11:30:22.308527 kernel: hub 2-0:1.0: 10 ports detected Jul 2 11:30:22.308623 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 2 11:30:22.308713 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 2 11:30:22.308771 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 2 11:30:22.308837 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jul 2 11:30:22.308917 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 2 11:30:22.309000 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jul 2 11:30:22.309056 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jul 2 11:30:22.309110 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jul 2 11:30:22.309163 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support 
DPO or FUA Jul 2 11:30:22.309217 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 2 11:30:22.309286 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 11:30:22.468382 kernel: ata2.00: Enabling discard_zeroes_data Jul 2 11:30:22.495234 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 11:30:22.495251 kernel: mlx5_core 0000:01:00.1: firmware version: 14.29.2002 Jul 2 11:30:22.495322 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 2 11:30:22.497255 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 11:30:22.497271 kernel: GPT:9289727 != 937703087 Jul 2 11:30:22.497279 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 11:30:22.497286 kernel: GPT:9289727 != 937703087 Jul 2 11:30:22.497295 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 11:30:22.497301 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jul 2 11:30:22.497308 kernel: ata2.00: Enabling discard_zeroes_data Jul 2 11:30:22.497314 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jul 2 11:30:22.522833 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 2 11:30:22.591231 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jul 2 11:30:22.618858 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 11:30:22.679605 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (534) Jul 2 11:30:22.700344 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 11:30:22.703105 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 11:30:22.741261 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 11:30:22.743994 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 11:30:22.782928 kernel: hub 1-14:1.0: USB hub found Jul 2 11:30:22.783045 kernel: hub 1-14:1.0: 4 ports detected Jul 2 11:30:22.791735 systemd[1]: Starting disk-uuid.service... Jul 2 11:30:22.833073 kernel: ata2.00: Enabling discard_zeroes_data Jul 2 11:30:22.833090 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jul 2 11:30:22.833102 kernel: ata2.00: Enabling discard_zeroes_data Jul 2 11:30:22.833165 disk-uuid[678]: Primary Header is updated. Jul 2 11:30:22.833165 disk-uuid[678]: Secondary Entries is updated. Jul 2 11:30:22.833165 disk-uuid[678]: Secondary Header is updated. 
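The GPT warnings above ("GPT:9289727 != 937703087") arise because the flashed image's backup GPT header still sits at the last LBA of the much smaller image it was built from, not at the end of this 480 GB disk. A small worked-arithmetic sketch of those two numbers (illustrative only, not kernel code):

    # Where the backup GPT header should be vs. where the image says it is.
    SECTOR = 512
    disk_sectors = 937703088              # sd[ab]: 937703088 512-byte logical blocks
    expected_alt_lba = disk_sectors - 1   # 937703087, the disk's last LBA
    recorded_alt_lba = 9289727            # value carried over from the original image

    image_size_gib = (recorded_alt_lba + 1) * SECTOR / 2**30
    print(expected_alt_lba, recorded_alt_lba, round(image_size_gib, 1))
    # -> 937703087 9289727 4.4   (the source image was roughly 4.4 GiB)

As the kernel message itself suggests, sgdisk or GNU Parted can move the backup structures to the disk's actual end.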
Jul 2 11:30:22.918276 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jul 2 11:30:22.918290 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jul 2 11:30:22.918373 kernel: port_module: 9 callbacks suppressed Jul 2 11:30:22.918382 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jul 2 11:30:22.944234 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Jul 2 11:30:23.092266 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jul 2 11:30:23.161300 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Jul 2 11:30:23.192230 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jul 2 11:30:23.192331 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 11:30:23.224231 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jul 2 11:30:23.254452 kernel: usbcore: registered new interface driver usbhid Jul 2 11:30:23.254500 kernel: usbhid: USB HID core driver Jul 2 11:30:23.287300 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jul 2 11:30:23.404458 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jul 2 11:30:23.404600 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jul 2 11:30:23.404609 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jul 2 11:30:23.874325 kernel: ata2.00: Enabling discard_zeroes_data Jul 2 11:30:23.893122 disk-uuid[679]: The operation has completed successfully. Jul 2 11:30:23.901328 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jul 2 11:30:23.931763 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 11:30:24.029480 kernel: audit: type=1130 audit(1719919823.938:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:24.029495 kernel: audit: type=1131 audit(1719919823.938:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:23.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:23.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:23.931806 systemd[1]: Finished disk-uuid.service. Jul 2 11:30:24.060265 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 11:30:23.940430 systemd[1]: Starting verity-setup.service... Jul 2 11:30:24.091472 systemd[1]: Found device dev-mapper-usr.device. Jul 2 11:30:24.100236 systemd[1]: Mounting sysusr-usr.mount... Jul 2 11:30:24.106450 systemd[1]: Finished verity-setup.service. Jul 2 11:30:24.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:30:24.166232 kernel: audit: type=1130 audit(1719919824.117:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:24.222726 systemd[1]: Mounted sysusr-usr.mount. Jul 2 11:30:24.236331 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 11:30:24.229515 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 11:30:24.321328 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jul 2 11:30:24.321343 kernel: BTRFS info (device sdb6): using free space tree Jul 2 11:30:24.321351 kernel: BTRFS info (device sdb6): has skinny extents Jul 2 11:30:24.321358 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jul 2 11:30:24.229921 systemd[1]: Starting ignition-setup.service... Jul 2 11:30:24.251681 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 11:30:24.394230 kernel: audit: type=1130 audit(1719919824.345:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:24.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:24.329765 systemd[1]: Finished ignition-setup.service. Jul 2 11:30:24.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:24.345575 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 11:30:24.485184 kernel: audit: type=1130 audit(1719919824.402:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:24.485199 kernel: audit: type=1334 audit(1719919824.462:24): prog-id=9 op=LOAD Jul 2 11:30:24.462000 audit: BPF prog-id=9 op=LOAD Jul 2 11:30:24.402902 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 11:30:24.463179 systemd[1]: Starting systemd-networkd.service... Jul 2 11:30:24.561421 kernel: audit: type=1130 audit(1719919824.509:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:24.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:24.550729 ignition[866]: Ignition 2.14.0 Jul 2 11:30:24.500391 systemd-networkd[871]: lo: Link UP Jul 2 11:30:24.550733 ignition[866]: Stage: fetch-offline Jul 2 11:30:24.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:24.500394 systemd-networkd[871]: lo: Gained carrier Jul 2 11:30:24.732757 kernel: audit: type=1130 audit(1719919824.596:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:30:24.732772 kernel: audit: type=1130 audit(1719919824.657:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:24.732780 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 2 11:30:24.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:24.550759 ignition[866]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 11:30:24.758644 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Jul 2 11:30:24.500676 systemd-networkd[871]: Enumeration completed Jul 2 11:30:24.550772 ignition[866]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Jul 2 11:30:24.500743 systemd[1]: Started systemd-networkd.service. Jul 2 11:30:24.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:24.553406 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 2 11:30:24.501311 systemd-networkd[871]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 11:30:24.810364 iscsid[897]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 11:30:24.810364 iscsid[897]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 11:30:24.810364 iscsid[897]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 11:30:24.810364 iscsid[897]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 11:30:24.810364 iscsid[897]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 11:30:24.810364 iscsid[897]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 11:30:24.810364 iscsid[897]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 11:30:24.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:24.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:24.553470 ignition[866]: parsed url from cmdline: "" Jul 2 11:30:24.509359 systemd[1]: Reached target network.target.
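The iscsid warning above only means no /etc/iscsi/initiatorname.iscsi exists in the initrd; it is harmless unless software iSCSI is actually in use. Purely as an illustration of the InitiatorName format the message asks for (the domain and identifier below are made-up examples, not values from this machine):

    # Hypothetical helper that builds an IQN of the form
    # iqn.yyyy-mm.<reversed domain name>[:identifier].
    import datetime

    def make_iqn(domain: str, identifier: str, year_month: str = "") -> str:
        ym = year_month or datetime.date.today().strftime("%Y-%m")
        reversed_domain = ".".join(reversed(domain.split(".")))
        return f"iqn.{ym}.{reversed_domain}:{identifier}"

    print("InitiatorName=" + make_iqn("example.com", "host0", "2024-07"))
    # -> InitiatorName=iqn.2024-07.com.example:host0
    # Writing such a line to /etc/iscsi/initiatorname.iscsi silences the warning
    # when software iSCSI is actually used.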
Jul 2 11:30:24.553472 ignition[866]: no config URL provided Jul 2 11:30:24.564283 unknown[866]: fetched base config from "system" Jul 2 11:30:25.013415 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jul 2 11:30:24.553475 ignition[866]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 11:30:24.564287 unknown[866]: fetched user config from "system" Jul 2 11:30:24.553496 ignition[866]: parsing config with SHA512: 152e95dbe6c8adff3318ca48ef23bc3024aba5f83ecdf1120804656f214ba8191d67fdfd4f21f2ef32df4e2ceb7b404262b3b3f8dfcfc5dafb1d76b3d3acf1a6 Jul 2 11:30:24.575985 systemd[1]: Starting iscsiuio.service... Jul 2 11:30:24.564557 ignition[866]: fetch-offline: fetch-offline passed Jul 2 11:30:24.588579 systemd[1]: Started iscsiuio.service. Jul 2 11:30:24.564560 ignition[866]: POST message to Packet Timeline Jul 2 11:30:24.596652 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 11:30:24.564565 ignition[866]: POST Status error: resource requires networking Jul 2 11:30:24.657471 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 11:30:24.564599 ignition[866]: Ignition finished successfully Jul 2 11:30:24.657913 systemd[1]: Starting ignition-kargs.service... Jul 2 11:30:24.737217 ignition[888]: Ignition 2.14.0 Jul 2 11:30:24.734400 systemd-networkd[871]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 11:30:24.737221 ignition[888]: Stage: kargs Jul 2 11:30:24.746834 systemd[1]: Starting iscsid.service... Jul 2 11:30:24.737313 ignition[888]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 11:30:24.772542 systemd[1]: Started iscsid.service. Jul 2 11:30:24.737323 ignition[888]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Jul 2 11:30:24.786737 systemd[1]: Starting dracut-initqueue.service... Jul 2 11:30:24.739160 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 2 11:30:24.800420 systemd[1]: Finished dracut-initqueue.service. Jul 2 11:30:24.739765 ignition[888]: kargs: kargs passed Jul 2 11:30:24.828356 systemd[1]: Reached target remote-fs-pre.target. Jul 2 11:30:24.739768 ignition[888]: POST message to Packet Timeline Jul 2 11:30:24.848300 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 11:30:24.739778 ignition[888]: GET https://metadata.packet.net/metadata: attempt #1 Jul 2 11:30:24.848337 systemd[1]: Reached target remote-fs.target. Jul 2 11:30:24.742074 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42247->[::1]:53: read: connection refused Jul 2 11:30:24.883125 systemd[1]: Starting dracut-pre-mount.service... Jul 2 11:30:24.942602 ignition[888]: GET https://metadata.packet.net/metadata: attempt #2 Jul 2 11:30:24.909808 systemd[1]: Finished dracut-pre-mount.service. Jul 2 11:30:24.942855 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51485->[::1]:53: read: connection refused Jul 2 11:30:25.007998 systemd-networkd[871]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 11:30:25.036776 systemd-networkd[871]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
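The repeated metadata GET failures above occur while DNS and the links are still coming up; the attempts are spaced roughly 0.2 s, 0.4 s, 0.8 s, 1.6 s, and 3.2 s apart, i.e. an exponential backoff until metadata.packet.net finally resolves. A hedged sketch of that retry pattern (illustrative, not Ignition's actual fetch code):

    # Retry a metadata fetch with doubling delays, as the log's spacing suggests.
    import time
    import urllib.request

    def fetch_with_backoff(url: str, first_delay: float = 0.2, max_attempts: int = 10) -> bytes:
        delay = first_delay
        for attempt in range(1, max_attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except OSError as err:  # DNS failures and refused connections land here
                print(f"GET {url}: attempt #{attempt} failed: {err}")
                time.sleep(delay)
                delay *= 2
        raise RuntimeError(f"giving up on {url}")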
Jul 2 11:30:25.066019 systemd-networkd[871]: enp1s0f1np1: Link UP Jul 2 11:30:25.066287 systemd-networkd[871]: enp1s0f1np1: Gained carrier Jul 2 11:30:25.080732 systemd-networkd[871]: enp1s0f0np0: Link UP Jul 2 11:30:25.081104 systemd-networkd[871]: eno2: Link UP Jul 2 11:30:25.081461 systemd-networkd[871]: eno1: Link UP Jul 2 11:30:25.343915 ignition[888]: GET https://metadata.packet.net/metadata: attempt #3 Jul 2 11:30:25.345037 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59228->[::1]:53: read: connection refused Jul 2 11:30:25.799565 systemd-networkd[871]: enp1s0f0np0: Gained carrier Jul 2 11:30:25.809477 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Jul 2 11:30:25.838406 systemd-networkd[871]: enp1s0f0np0: DHCPv4 address 139.178.91.9/31, gateway 139.178.91.8 acquired from 145.40.83.140 Jul 2 11:30:26.145510 ignition[888]: GET https://metadata.packet.net/metadata: attempt #4 Jul 2 11:30:26.146670 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36362->[::1]:53: read: connection refused Jul 2 11:30:26.287833 systemd-networkd[871]: enp1s0f1np1: Gained IPv6LL Jul 2 11:30:27.748666 ignition[888]: GET https://metadata.packet.net/metadata: attempt #5 Jul 2 11:30:27.749782 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42989->[::1]:53: read: connection refused Jul 2 11:30:27.823872 systemd-networkd[871]: enp1s0f0np0: Gained IPv6LL Jul 2 11:30:30.953342 ignition[888]: GET https://metadata.packet.net/metadata: attempt #6 Jul 2 11:30:30.996182 ignition[888]: GET result: OK Jul 2 11:30:31.182673 ignition[888]: Ignition finished successfully Jul 2 11:30:31.186727 systemd[1]: Finished ignition-kargs.service. Jul 2 11:30:31.275894 kernel: kauditd_printk_skb: 3 callbacks suppressed Jul 2 11:30:31.275911 kernel: audit: type=1130 audit(1719919831.197:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:31.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:31.206924 ignition[914]: Ignition 2.14.0 Jul 2 11:30:31.199624 systemd[1]: Starting ignition-disks.service... Jul 2 11:30:31.206927 ignition[914]: Stage: disks Jul 2 11:30:31.207003 ignition[914]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 11:30:31.207014 ignition[914]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Jul 2 11:30:31.209134 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 2 11:30:31.209740 ignition[914]: disks: disks passed Jul 2 11:30:31.209743 ignition[914]: POST message to Packet Timeline Jul 2 11:30:31.209753 ignition[914]: GET https://metadata.packet.net/metadata: attempt #1 Jul 2 11:30:31.299724 ignition[914]: GET result: OK Jul 2 11:30:31.617441 ignition[914]: Ignition finished successfully Jul 2 11:30:31.620456 systemd[1]: Finished ignition-disks.service. 
Jul 2 11:30:31.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:31.634783 systemd[1]: Reached target initrd-root-device.target. Jul 2 11:30:31.714496 kernel: audit: type=1130 audit(1719919831.634:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:31.699437 systemd[1]: Reached target local-fs-pre.target. Jul 2 11:30:31.699473 systemd[1]: Reached target local-fs.target. Jul 2 11:30:31.723457 systemd[1]: Reached target sysinit.target. Jul 2 11:30:31.737396 systemd[1]: Reached target basic.target. Jul 2 11:30:31.737992 systemd[1]: Starting systemd-fsck-root.service... Jul 2 11:30:31.770606 systemd-fsck[930]: ROOT: clean, 614/553520 files, 56020/553472 blocks Jul 2 11:30:31.784433 systemd[1]: Finished systemd-fsck-root.service. Jul 2 11:30:31.879127 kernel: audit: type=1130 audit(1719919831.793:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:31.879143 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 11:30:31.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:31.799203 systemd[1]: Mounting sysroot.mount... Jul 2 11:30:31.887862 systemd[1]: Mounted sysroot.mount. Jul 2 11:30:31.902486 systemd[1]: Reached target initrd-root-fs.target. Jul 2 11:30:31.918128 systemd[1]: Mounting sysroot-usr.mount... Jul 2 11:30:31.933319 systemd[1]: Starting flatcar-metadata-hostname.service... Jul 2 11:30:31.948952 systemd[1]: Starting flatcar-static-network.service... Jul 2 11:30:31.964401 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 11:30:31.964458 systemd[1]: Reached target ignition-diskful.target. Jul 2 11:30:31.983564 systemd[1]: Mounted sysroot-usr.mount. Jul 2 11:30:32.006315 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 11:30:32.081361 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (941) Jul 2 11:30:32.081384 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jul 2 11:30:32.018961 systemd[1]: Starting initrd-setup-root.service... Jul 2 11:30:32.157052 kernel: BTRFS info (device sdb6): using free space tree Jul 2 11:30:32.157067 kernel: BTRFS info (device sdb6): has skinny extents Jul 2 11:30:32.157076 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jul 2 11:30:32.081705 systemd[1]: Finished initrd-setup-root.service. Jul 2 11:30:32.220422 kernel: audit: type=1130 audit(1719919832.165:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:32.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:30:32.220460 coreos-metadata[937]: Jul 02 11:30:32.084 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 2 11:30:32.220460 coreos-metadata[937]: Jul 02 11:30:32.107 INFO Fetch successful Jul 2 11:30:32.220460 coreos-metadata[937]: Jul 02 11:30:32.126 INFO wrote hostname ci-3510.3.5-a-b7736b5df5 to /sysroot/etc/hostname Jul 2 11:30:32.433485 kernel: audit: type=1130 audit(1719919832.228:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:32.433502 kernel: audit: type=1130 audit(1719919832.297:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:32.433510 kernel: audit: type=1131 audit(1719919832.297:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:32.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:32.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:32.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:32.433575 coreos-metadata[938]: Jul 02 11:30:32.084 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 2 11:30:32.433575 coreos-metadata[938]: Jul 02 11:30:32.108 INFO Fetch successful Jul 2 11:30:32.452355 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 11:30:32.166598 systemd[1]: Finished flatcar-metadata-hostname.service. Jul 2 11:30:32.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:32.514534 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory Jul 2 11:30:32.551441 kernel: audit: type=1130 audit(1719919832.485:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:32.228549 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jul 2 11:30:32.561469 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 11:30:32.228589 systemd[1]: Finished flatcar-static-network.service. Jul 2 11:30:32.579457 initrd-setup-root[973]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 11:30:32.297672 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Jul 2 11:30:32.597450 ignition[1016]: INFO : Ignition 2.14.0 Jul 2 11:30:32.597450 ignition[1016]: INFO : Stage: mount Jul 2 11:30:32.597450 ignition[1016]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 11:30:32.597450 ignition[1016]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Jul 2 11:30:32.597450 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 2 11:30:32.597450 ignition[1016]: INFO : mount: mount passed Jul 2 11:30:32.597450 ignition[1016]: INFO : POST message to Packet Timeline Jul 2 11:30:32.597450 ignition[1016]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 2 11:30:32.597450 ignition[1016]: INFO : GET result: OK Jul 2 11:30:32.420935 systemd[1]: Starting ignition-mount.service... Jul 2 11:30:32.440840 systemd[1]: Starting sysroot-boot.service... Jul 2 11:30:32.467288 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 2 11:30:32.467336 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 2 11:30:32.468063 systemd[1]: Finished sysroot-boot.service. Jul 2 11:30:32.746110 ignition[1016]: INFO : Ignition finished successfully Jul 2 11:30:32.748868 systemd[1]: Finished ignition-mount.service. Jul 2 11:30:32.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:32.765609 systemd[1]: Starting ignition-files.service... Jul 2 11:30:32.838423 kernel: audit: type=1130 audit(1719919832.763:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:32.832182 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 11:30:32.894723 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1030) Jul 2 11:30:32.894741 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jul 2 11:30:32.894749 kernel: BTRFS info (device sdb6): using free space tree Jul 2 11:30:32.917928 kernel: BTRFS info (device sdb6): has skinny extents Jul 2 11:30:32.966279 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jul 2 11:30:32.968099 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
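Each Ignition stage above logs the same SHA512 digest for the built-in base config /usr/lib/ignition/base.d/base.ign. Reproducing such a digest takes nothing more than hashlib; a small sketch, assuming the path is readable on the host being inspected:

    # Hash a file in chunks and print the hex digest, as Ignition logs it.
    import hashlib

    def sha512_of(path: str) -> str:
        h = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(64 * 1024), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha512_of("/usr/lib/ignition/base.d/base.ign"))
    # The log above reports 0131bd50... for this file on the build that produced it.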
Jul 2 11:30:32.984358 ignition[1049]: INFO : Ignition 2.14.0 Jul 2 11:30:32.984358 ignition[1049]: INFO : Stage: files Jul 2 11:30:32.984358 ignition[1049]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 11:30:32.984358 ignition[1049]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Jul 2 11:30:32.984358 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 2 11:30:32.987918 unknown[1049]: wrote ssh authorized keys file for user: core Jul 2 11:30:33.053443 ignition[1049]: DEBUG : files: compiled without relabeling support, skipping Jul 2 11:30:33.053443 ignition[1049]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 11:30:33.053443 ignition[1049]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 11:30:33.053443 ignition[1049]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 11:30:33.053443 ignition[1049]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 11:30:33.053443 ignition[1049]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 11:30:33.053443 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 11:30:33.053443 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 11:30:33.158268 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 11:30:33.187751 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 11:30:33.204389 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 11:30:33.204389 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 2 11:30:33.749004 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 11:30:33.790635 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 11:30:33.790635 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 2 11:30:33.840501 kernel: BTRFS info: devid 1 device path /dev/sdb6 changed to /dev/disk/by-label/OEM scanned by ignition (1052) Jul 2 11:30:33.840524 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 11:30:33.840524 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 11:30:33.840524 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 11:30:33.840524 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 11:30:33.840524 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Jul 2 11:30:33.840524 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 11:30:33.840524 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 11:30:33.840524 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 11:30:33.840524 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 11:30:33.840524 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 11:30:33.840524 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 11:30:33.840524 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Jul 2 11:30:33.840524 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 11:30:33.840524 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2272234695" Jul 2 11:30:33.840524 ignition[1049]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2272234695": device or resource busy Jul 2 11:30:34.105571 ignition[1049]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2272234695", trying btrfs: device or resource busy Jul 2 11:30:34.105571 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2272234695" Jul 2 11:30:34.105571 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2272234695" Jul 2 11:30:34.105571 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem2272234695" Jul 2 11:30:34.105571 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem2272234695" Jul 2 11:30:34.105571 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Jul 2 11:30:34.105571 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 11:30:34.105571 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jul 2 11:30:34.312560 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Jul 2 11:30:35.791398 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 11:30:35.791398 
ignition[1049]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 2 11:30:35.791398 ignition[1049]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 2 11:30:35.791398 ignition[1049]: INFO : files: op(11): [started] processing unit "packet-phone-home.service" Jul 2 11:30:35.791398 ignition[1049]: INFO : files: op(11): [finished] processing unit "packet-phone-home.service" Jul 2 11:30:35.870395 ignition[1049]: INFO : files: op(12): [started] processing unit "prepare-helm.service" Jul 2 11:30:35.870395 ignition[1049]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 11:30:35.870395 ignition[1049]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 11:30:35.870395 ignition[1049]: INFO : files: op(12): [finished] processing unit "prepare-helm.service" Jul 2 11:30:35.870395 ignition[1049]: INFO : files: op(14): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 11:30:35.870395 ignition[1049]: INFO : files: op(14): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 11:30:35.870395 ignition[1049]: INFO : files: op(15): [started] setting preset to enabled for "packet-phone-home.service" Jul 2 11:30:35.870395 ignition[1049]: INFO : files: op(15): [finished] setting preset to enabled for "packet-phone-home.service" Jul 2 11:30:35.870395 ignition[1049]: INFO : files: op(16): [started] setting preset to enabled for "prepare-helm.service" Jul 2 11:30:35.870395 ignition[1049]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 11:30:35.870395 ignition[1049]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 11:30:35.870395 ignition[1049]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 11:30:35.870395 ignition[1049]: INFO : files: files passed Jul 2 11:30:35.870395 ignition[1049]: INFO : POST message to Packet Timeline Jul 2 11:30:35.870395 ignition[1049]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 2 11:30:35.870395 ignition[1049]: INFO : GET result: OK Jul 2 11:30:36.077542 ignition[1049]: INFO : Ignition finished successfully Jul 2 11:30:36.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.076349 systemd[1]: Finished ignition-files.service. Jul 2 11:30:36.160497 kernel: audit: type=1130 audit(1719919836.085:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.092316 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 11:30:36.153522 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 11:30:36.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:30:36.206415 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 11:30:36.349637 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 11:30:36.349652 kernel: audit: type=1130 audit(1719919836.214:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.349660 kernel: audit: type=1131 audit(1719919836.214:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.153962 systemd[1]: Starting ignition-quench.service... Jul 2 11:30:36.168623 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 11:30:36.188671 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 11:30:36.495402 kernel: audit: type=1130 audit(1719919836.380:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.495416 kernel: audit: type=1131 audit(1719919836.380:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.188730 systemd[1]: Finished ignition-quench.service. Jul 2 11:30:36.214500 systemd[1]: Reached target ignition-complete.target. Jul 2 11:30:36.358886 systemd[1]: Starting initrd-parse-etc.service... Jul 2 11:30:36.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.371651 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 11:30:36.616447 kernel: audit: type=1130 audit(1719919836.543:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.371690 systemd[1]: Finished initrd-parse-etc.service. Jul 2 11:30:36.380626 systemd[1]: Reached target initrd-fs.target. Jul 2 11:30:36.504460 systemd[1]: Reached target initrd.target. Jul 2 11:30:36.504593 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Jul 2 11:30:36.732340 kernel: audit: type=1131 audit(1719919836.673:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.504937 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 11:30:36.525650 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 11:30:36.544113 systemd[1]: Starting initrd-cleanup.service... Jul 2 11:30:36.611500 systemd[1]: Stopped target nss-lookup.target. Jul 2 11:30:36.625574 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 11:30:36.641613 systemd[1]: Stopped target timers.target. Jul 2 11:30:36.656600 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 11:30:36.656703 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 11:30:36.673739 systemd[1]: Stopped target initrd.target. Jul 2 11:30:36.739614 systemd[1]: Stopped target basic.target. Jul 2 11:30:36.755598 systemd[1]: Stopped target ignition-complete.target. Jul 2 11:30:36.771582 systemd[1]: Stopped target ignition-diskful.target. Jul 2 11:30:36.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.787593 systemd[1]: Stopped target initrd-root-device.target. Jul 2 11:30:37.000467 kernel: audit: type=1131 audit(1719919836.916:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.803618 systemd[1]: Stopped target remote-fs.target. Jul 2 11:30:37.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.821726 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 11:30:37.137004 kernel: audit: type=1131 audit(1719919837.008:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.137020 kernel: audit: type=1131 audit(1719919837.076:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.837979 systemd[1]: Stopped target sysinit.target. Jul 2 11:30:36.855935 systemd[1]: Stopped target local-fs.target. Jul 2 11:30:36.871901 systemd[1]: Stopped target local-fs-pre.target. Jul 2 11:30:36.886905 systemd[1]: Stopped target swap.target. Jul 2 11:30:36.900718 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 11:30:37.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.901073 systemd[1]: Stopped dracut-pre-mount.service. 
Jul 2 11:30:37.281470 kernel: audit: type=1131 audit(1719919837.204:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.917125 systemd[1]: Stopped target cryptsetup.target. Jul 2 11:30:37.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.992581 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 11:30:37.313427 ignition[1100]: INFO : Ignition 2.14.0 Jul 2 11:30:37.313427 ignition[1100]: INFO : Stage: umount Jul 2 11:30:37.313427 ignition[1100]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 11:30:37.313427 ignition[1100]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Jul 2 11:30:37.313427 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 2 11:30:37.313427 ignition[1100]: INFO : umount: umount passed Jul 2 11:30:37.313427 ignition[1100]: INFO : POST message to Packet Timeline Jul 2 11:30:37.313427 ignition[1100]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 2 11:30:37.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.453959 iscsid[897]: iscsid shutting down. Jul 2 11:30:37.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:36.992667 systemd[1]: Stopped dracut-initqueue.service. Jul 2 11:30:37.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.482886 ignition[1100]: INFO : GET result: OK Jul 2 11:30:37.008657 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 11:30:37.008735 systemd[1]: Stopped ignition-fetch-offline.service. 
Jul 2 11:30:37.529512 ignition[1100]: INFO : Ignition finished successfully Jul 2 11:30:37.076591 systemd[1]: Stopped target paths.target. Jul 2 11:30:37.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.144307 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 11:30:37.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.146456 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 11:30:37.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.587000 audit: BPF prog-id=6 op=UNLOAD Jul 2 11:30:37.160581 systemd[1]: Stopped target slices.target. Jul 2 11:30:37.174523 systemd[1]: Stopped target sockets.target. Jul 2 11:30:37.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.181571 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 11:30:37.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.181655 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 11:30:37.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.204619 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 11:30:37.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.204731 systemd[1]: Stopped ignition-files.service. Jul 2 11:30:37.274652 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 2 11:30:37.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.274723 systemd[1]: Stopped flatcar-metadata-hostname.service. Jul 2 11:30:37.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.291146 systemd[1]: Stopping ignition-mount.service... Jul 2 11:30:37.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.304391 systemd[1]: Stopping iscsid.service... Jul 2 11:30:37.320890 systemd[1]: Stopping sysroot-boot.service... Jul 2 11:30:37.328394 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jul 2 11:30:37.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.328520 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 11:30:37.341612 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 11:30:37.341722 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 11:30:37.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.365086 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 11:30:37.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.365462 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 11:30:37.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.365507 systemd[1]: Stopped iscsid.service. Jul 2 11:30:37.386857 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 11:30:37.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.386921 systemd[1]: Stopped sysroot-boot.service. Jul 2 11:30:37.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.404949 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 11:30:37.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.405040 systemd[1]: Closed iscsid.socket. Jul 2 11:30:37.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.426698 systemd[1]: Stopping iscsiuio.service... Jul 2 11:30:37.444034 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 11:30:37.444291 systemd[1]: Stopped iscsiuio.service. Jul 2 11:30:37.461187 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 11:30:37.461416 systemd[1]: Finished initrd-cleanup.service. Jul 2 11:30:37.477409 systemd[1]: Stopped target network.target. Jul 2 11:30:37.490505 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 11:30:37.490605 systemd[1]: Closed iscsiuio.socket. Jul 2 11:30:37.504857 systemd[1]: Stopping systemd-networkd.service... Jul 2 11:30:37.515426 systemd-networkd[871]: enp1s0f0np0: DHCPv6 lease lost Jul 2 11:30:37.521703 systemd[1]: Stopping systemd-resolved.service... 
Jul 2 11:30:37.527389 systemd-networkd[871]: enp1s0f1np1: DHCPv6 lease lost Jul 2 11:30:38.031000 audit: BPF prog-id=9 op=UNLOAD Jul 2 11:30:37.537048 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 11:30:37.537298 systemd[1]: Stopped systemd-resolved.service. Jul 2 11:30:38.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:37.554912 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 11:30:37.555160 systemd[1]: Stopped systemd-networkd.service. Jul 2 11:30:37.569057 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 11:30:37.569317 systemd[1]: Stopped ignition-mount.service. Jul 2 11:30:37.587923 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 11:30:37.588020 systemd[1]: Closed systemd-networkd.socket. Jul 2 11:30:37.605690 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 11:30:37.605843 systemd[1]: Stopped ignition-disks.service. Jul 2 11:30:37.622702 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 11:30:37.622851 systemd[1]: Stopped ignition-kargs.service. Jul 2 11:30:37.638725 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 11:30:37.638879 systemd[1]: Stopped ignition-setup.service. Jul 2 11:30:37.656732 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 11:30:37.656882 systemd[1]: Stopped initrd-setup-root.service. Jul 2 11:30:37.675527 systemd[1]: Stopping network-cleanup.service... Jul 2 11:30:37.688436 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 11:30:37.688683 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 11:30:37.703675 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 11:30:37.703808 systemd[1]: Stopped systemd-sysctl.service. Jul 2 11:30:37.721947 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 11:30:37.722100 systemd[1]: Stopped systemd-modules-load.service. Jul 2 11:30:37.738820 systemd[1]: Stopping systemd-udevd.service... Jul 2 11:30:37.757341 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 11:30:37.758015 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 11:30:37.758078 systemd[1]: Stopped systemd-udevd.service. Jul 2 11:30:37.774700 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 11:30:37.774759 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 11:30:37.793502 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 11:30:37.793539 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 11:30:37.809441 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 11:30:37.809468 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 11:30:37.827387 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 11:30:37.827432 systemd[1]: Stopped dracut-cmdline.service. Jul 2 11:30:37.842396 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 11:30:37.842452 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 11:30:37.859413 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 11:30:37.874377 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 11:30:37.874557 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 2 11:30:37.890824 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jul 2 11:30:37.890941 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 11:30:37.906459 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 11:30:37.906571 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 11:30:37.924044 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 2 11:30:37.925287 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 11:30:37.925490 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 11:30:38.044750 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 11:30:38.044958 systemd[1]: Stopped network-cleanup.service. Jul 2 11:30:38.057777 systemd[1]: Reached target initrd-switch-root.target. Jul 2 11:30:38.074080 systemd[1]: Starting initrd-switch-root.service... Jul 2 11:30:38.111836 systemd[1]: Switching root. Jul 2 11:30:38.167187 systemd-journald[267]: Journal stopped Jul 2 11:30:42.114658 systemd-journald[267]: Received SIGTERM from PID 1 (n/a). Jul 2 11:30:42.114673 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 11:30:42.114682 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 11:30:42.114688 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 11:30:42.114693 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 11:30:42.114698 kernel: SELinux: policy capability open_perms=1 Jul 2 11:30:42.114704 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 11:30:42.114710 kernel: SELinux: policy capability always_check_network=0 Jul 2 11:30:42.114715 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 11:30:42.114721 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 11:30:42.114727 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 11:30:42.114732 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 11:30:42.114738 systemd[1]: Successfully loaded SELinux policy in 325.836ms. Jul 2 11:30:42.114744 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.326ms. Jul 2 11:30:42.114753 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 11:30:42.114759 systemd[1]: Detected architecture x86-64. Jul 2 11:30:42.114765 systemd[1]: Detected first boot. Jul 2 11:30:42.114771 systemd[1]: Hostname set to . Jul 2 11:30:42.114778 systemd[1]: Initializing machine ID from random generator. Jul 2 11:30:42.114784 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 11:30:42.114789 systemd[1]: Populated /etc with preset unit settings. Jul 2 11:30:42.114797 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 11:30:42.114803 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 11:30:42.114810 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
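systemd reports the SELinux policy load (325.836ms) and the relabel of /dev, /dev/shm, /run and /sys/fs/cgroup (6.326ms) as two separate figures. A small Python sketch, using only the two durations quoted above, that totals them:

    import re

    journal_text = """
    systemd[1]: Successfully loaded SELinux policy in 325.836ms.
    systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.326ms.
    """

    # Pull every "... in <N>ms" duration systemd reports and total it.
    durations_ms = [float(v) for v in re.findall(r'in ([\d.]+)ms', journal_text)]
    print(durations_ms)                  # [325.836, 6.326]
    print(round(sum(durations_ms), 3))   # 332.162 ms spent on SELinux setup for this first boot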
Jul 2 11:30:42.114816 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 11:30:42.114822 systemd[1]: Stopped initrd-switch-root.service. Jul 2 11:30:42.114828 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 11:30:42.114836 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 11:30:42.114842 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 11:30:42.114849 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 2 11:30:42.114855 systemd[1]: Created slice system-getty.slice. Jul 2 11:30:42.114861 systemd[1]: Created slice system-modprobe.slice. Jul 2 11:30:42.114867 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 11:30:42.114873 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 11:30:42.114879 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 11:30:42.114885 systemd[1]: Created slice user.slice. Jul 2 11:30:42.114892 systemd[1]: Started systemd-ask-password-console.path. Jul 2 11:30:42.114898 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 11:30:42.114904 systemd[1]: Set up automount boot.automount. Jul 2 11:30:42.114910 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 11:30:42.114918 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 11:30:42.114925 systemd[1]: Stopped target initrd-fs.target. Jul 2 11:30:42.114931 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 11:30:42.114937 systemd[1]: Reached target integritysetup.target. Jul 2 11:30:42.114945 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 11:30:42.114951 systemd[1]: Reached target remote-fs.target. Jul 2 11:30:42.114957 systemd[1]: Reached target slices.target. Jul 2 11:30:42.114964 systemd[1]: Reached target swap.target. Jul 2 11:30:42.114970 systemd[1]: Reached target torcx.target. Jul 2 11:30:42.114976 systemd[1]: Reached target veritysetup.target. Jul 2 11:30:42.114983 systemd[1]: Listening on systemd-coredump.socket. Jul 2 11:30:42.114989 systemd[1]: Listening on systemd-initctl.socket. Jul 2 11:30:42.114995 systemd[1]: Listening on systemd-networkd.socket. Jul 2 11:30:42.115003 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 11:30:42.115010 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 11:30:42.115017 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 11:30:42.115024 systemd[1]: Mounting dev-hugepages.mount... Jul 2 11:30:42.115031 systemd[1]: Mounting dev-mqueue.mount... Jul 2 11:30:42.115038 systemd[1]: Mounting media.mount... Jul 2 11:30:42.115044 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 11:30:42.115051 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 11:30:42.115057 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 11:30:42.115064 systemd[1]: Mounting tmp.mount... Jul 2 11:30:42.115070 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 11:30:42.115077 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 11:30:42.115083 systemd[1]: Starting kmod-static-nodes.service... Jul 2 11:30:42.115090 systemd[1]: Starting modprobe@configfs.service... Jul 2 11:30:42.115097 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 11:30:42.115104 systemd[1]: Starting modprobe@drm.service... Jul 2 11:30:42.115110 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 11:30:42.115117 systemd[1]: Starting modprobe@fuse.service... 
Jul 2 11:30:42.115124 kernel: fuse: init (API version 7.34) Jul 2 11:30:42.115130 systemd[1]: Starting modprobe@loop.service... Jul 2 11:30:42.115136 kernel: loop: module loaded Jul 2 11:30:42.115142 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 11:30:42.115150 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 11:30:42.115157 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 11:30:42.115163 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 11:30:42.115170 kernel: kauditd_printk_skb: 70 callbacks suppressed Jul 2 11:30:42.115176 kernel: audit: type=1131 audit(1719919841.756:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.115182 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 11:30:42.115188 kernel: audit: type=1131 audit(1719919841.844:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.115195 systemd[1]: Stopped systemd-journald.service. Jul 2 11:30:42.115202 kernel: audit: type=1130 audit(1719919841.908:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.115208 kernel: audit: type=1131 audit(1719919841.908:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.115214 kernel: audit: type=1334 audit(1719919841.994:119): prog-id=21 op=LOAD Jul 2 11:30:42.115220 kernel: audit: type=1334 audit(1719919842.012:120): prog-id=22 op=LOAD Jul 2 11:30:42.115228 kernel: audit: type=1334 audit(1719919842.030:121): prog-id=23 op=LOAD Jul 2 11:30:42.115234 kernel: audit: type=1334 audit(1719919842.048:122): prog-id=19 op=UNLOAD Jul 2 11:30:42.115240 systemd[1]: Starting systemd-journald.service... Jul 2 11:30:42.115247 kernel: audit: type=1334 audit(1719919842.048:123): prog-id=20 op=UNLOAD Jul 2 11:30:42.115253 systemd[1]: Starting systemd-modules-load.service... Jul 2 11:30:42.115260 kernel: audit: type=1305 audit(1719919842.112:124): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 11:30:42.115269 systemd-journald[1251]: Journal started Jul 2 11:30:42.115295 systemd-journald[1251]: Runtime Journal (/run/log/journal/01e5048e35d8447885896579d5b566cf) is 8.0M, max 640.1M, 632.1M free. 
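The Runtime Journal line above packs current usage, the configured cap, and the remaining headroom into one string. A Python sketch that parses the three figures and sanity-checks that used plus free equals the cap, within journald's one-decimal rounding:

    import re

    line = ("systemd-journald[1251]: Runtime Journal "
            "(/run/log/journal/01e5048e35d8447885896579d5b566cf) "
            "is 8.0M, max 640.1M, 632.1M free.")

    used, cap, free = (float(v) for v in re.findall(r'([\d.]+)M', line))
    assert abs(cap - (used + free)) < 0.05   # 8.0 + 632.1 == 640.1, within rounding
    print(f"runtime journal: {used}M used of {cap}M ({free}M free)")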
Jul 2 11:30:38.562000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 11:30:38.833000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 11:30:38.835000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 11:30:38.835000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 11:30:38.835000 audit: BPF prog-id=10 op=LOAD Jul 2 11:30:38.835000 audit: BPF prog-id=10 op=UNLOAD Jul 2 11:30:38.835000 audit: BPF prog-id=11 op=LOAD Jul 2 11:30:38.835000 audit: BPF prog-id=11 op=UNLOAD Jul 2 11:30:38.936000 audit[1141]: AVC avc: denied { associate } for pid=1141 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 11:30:38.936000 audit[1141]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a78e2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1124 pid=1141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 11:30:38.936000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 11:30:38.963000 audit[1141]: AVC avc: denied { associate } for pid=1141 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 11:30:38.963000 audit[1141]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a79b9 a2=1ed a3=0 items=2 ppid=1124 pid=1141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 11:30:38.963000 audit: CWD cwd="/" Jul 2 11:30:38.963000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:38.963000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:38.963000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 11:30:40.487000 audit: BPF prog-id=12 op=LOAD Jul 2 11:30:40.487000 audit: BPF prog-id=3 op=UNLOAD Jul 2 11:30:40.487000 audit: BPF prog-id=13 op=LOAD Jul 2 11:30:40.487000 audit: BPF prog-id=14 op=LOAD Jul 2 
11:30:40.487000 audit: BPF prog-id=4 op=UNLOAD Jul 2 11:30:40.487000 audit: BPF prog-id=5 op=UNLOAD Jul 2 11:30:40.488000 audit: BPF prog-id=15 op=LOAD Jul 2 11:30:40.488000 audit: BPF prog-id=12 op=UNLOAD Jul 2 11:30:40.488000 audit: BPF prog-id=16 op=LOAD Jul 2 11:30:40.488000 audit: BPF prog-id=17 op=LOAD Jul 2 11:30:40.488000 audit: BPF prog-id=13 op=UNLOAD Jul 2 11:30:40.488000 audit: BPF prog-id=14 op=UNLOAD Jul 2 11:30:40.489000 audit: BPF prog-id=18 op=LOAD Jul 2 11:30:40.489000 audit: BPF prog-id=15 op=UNLOAD Jul 2 11:30:40.489000 audit: BPF prog-id=19 op=LOAD Jul 2 11:30:40.489000 audit: BPF prog-id=20 op=LOAD Jul 2 11:30:40.489000 audit: BPF prog-id=16 op=UNLOAD Jul 2 11:30:40.489000 audit: BPF prog-id=17 op=UNLOAD Jul 2 11:30:40.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:40.537000 audit: BPF prog-id=18 op=UNLOAD Jul 2 11:30:40.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:40.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:41.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:41.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:41.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:41.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:41.994000 audit: BPF prog-id=21 op=LOAD Jul 2 11:30:42.012000 audit: BPF prog-id=22 op=LOAD Jul 2 11:30:42.030000 audit: BPF prog-id=23 op=LOAD Jul 2 11:30:42.048000 audit: BPF prog-id=19 op=UNLOAD Jul 2 11:30:42.048000 audit: BPF prog-id=20 op=UNLOAD Jul 2 11:30:42.112000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 11:30:38.933703 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 11:30:40.486052 systemd[1]: Queued start job for default target multi-user.target. 
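The audit stream above is dense with BPF prog-id LOAD/UNLOAD pairs as systemd reloads units across the switch-root. A throwaway Python sketch, keyed to the audit line format shown, that replays those events and reports which program IDs are still loaded at any point:

    import re

    BPF_RE = re.compile(r'audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)')

    def loaded_prog_ids(audit_lines):
        """Replay LOAD/UNLOAD events and return the prog-ids still live."""
        live = set()
        for line in audit_lines:
            m = BPF_RE.search(line)
            if not m:
                continue
            prog_id, op = int(m.group(1)), m.group(2)
            if op == 'LOAD':
                live.add(prog_id)
            else:
                live.discard(prog_id)
        return sorted(live)

    sample = [
        'audit: BPF prog-id=12 op=LOAD',
        'audit: BPF prog-id=13 op=LOAD',
        'audit: BPF prog-id=12 op=UNLOAD',
    ]
    print(loaded_prog_ids(sample))   # [13]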
Jul 2 11:30:38.934221 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 11:30:40.489981 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 11:30:38.934261 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 11:30:38.934299 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:38Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 11:30:38.934313 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:38Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 11:30:38.934350 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:38Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 11:30:38.934365 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:38Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 11:30:38.934596 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:38Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 11:30:38.934645 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 11:30:38.934662 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 11:30:38.935881 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 11:30:38.935926 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 11:30:38.935950 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 11:30:38.935969 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 11:30:38.935990 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 11:30:38.936007 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 11:30:40.134764 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:40Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl 
Jul 2 11:30:40.134909 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:40Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 11:30:40.135177 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:40Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 11:30:40.135353 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:40Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 11:30:40.135386 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:40Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 11:30:40.135424 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:30:40Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 11:30:42.112000 audit[1251]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffec82bdbb0 a2=4000 a3=7ffec82bdc4c items=0 ppid=1 pid=1251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 11:30:42.112000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 11:30:42.193439 systemd[1]: Starting systemd-network-generator.service... Jul 2 11:30:42.220235 systemd[1]: Starting systemd-remount-fs.service... Jul 2 11:30:42.247268 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 11:30:42.290045 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 11:30:42.290081 systemd[1]: Stopped verity-setup.service. Jul 2 11:30:42.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.355626 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 11:30:42.355655 systemd[1]: Started systemd-journald.service. Jul 2 11:30:42.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.365759 systemd[1]: Mounted dev-hugepages.mount. Jul 2 11:30:42.374477 systemd[1]: Mounted dev-mqueue.mount. Jul 2 11:30:42.381472 systemd[1]: Mounted media.mount. Jul 2 11:30:42.388495 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 11:30:42.397472 systemd[1]: Mounted sys-kernel-tracing.mount. 
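torcx-generator above walks a fixed list of store paths (logged in its "common configuration parsed" line) and skips the ones that do not exist before unpacking the docker image. A rough Python approximation of that probing step, using the exact paths from the log; this illustrates the behaviour visible in the "store skipped" lines, not torcx's actual implementation:

    import os

    # Store search order as logged by torcx-generator for this boot.
    STORE_PATHS = [
        "/usr/share/torcx/store",
        "/usr/share/oem/torcx/store/3510.3.5",
        "/usr/share/oem/torcx/store",
        "/var/lib/torcx/store/3510.3.5",
        "/var/lib/torcx/store",
    ]

    def usable_stores(paths=STORE_PATHS):
        """Keep only store directories that exist, mirroring the 'store skipped' lines."""
        kept, skipped = [], []
        for p in paths:
            (kept if os.path.isdir(p) else skipped).append(p)
        return kept, skipped

    kept, skipped = usable_stores()
    print("kept:   ", kept)
    print("skipped:", skipped)   # on the machine above, only /usr/share/torcx/store is kept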
Jul 2 11:30:42.405478 systemd[1]: Mounted tmp.mount. Jul 2 11:30:42.412520 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 11:30:42.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.420564 systemd[1]: Finished kmod-static-nodes.service. Jul 2 11:30:42.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.428584 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 11:30:42.428716 systemd[1]: Finished modprobe@configfs.service. Jul 2 11:30:42.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.437675 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 11:30:42.437853 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 11:30:42.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.446682 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 11:30:42.446829 systemd[1]: Finished modprobe@drm.service. Jul 2 11:30:42.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.455930 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 11:30:42.456149 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 11:30:42.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.465085 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 11:30:42.465431 systemd[1]: Finished modprobe@fuse.service. Jul 2 11:30:42.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 2 11:30:42.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.474098 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 11:30:42.474437 systemd[1]: Finished modprobe@loop.service. Jul 2 11:30:42.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.483200 systemd[1]: Finished systemd-modules-load.service. Jul 2 11:30:42.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.492062 systemd[1]: Finished systemd-network-generator.service. Jul 2 11:30:42.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.501039 systemd[1]: Finished systemd-remount-fs.service. Jul 2 11:30:42.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.510050 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 11:30:42.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.519644 systemd[1]: Reached target network-pre.target. Jul 2 11:30:42.530977 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 11:30:42.539922 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 11:30:42.547440 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 11:30:42.548323 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 11:30:42.556905 systemd[1]: Starting systemd-journal-flush.service... Jul 2 11:30:42.560625 systemd-journald[1251]: Time spent on flushing to /var/log/journal/01e5048e35d8447885896579d5b566cf is 15.936ms for 1601 entries. Jul 2 11:30:42.560625 systemd-journald[1251]: System Journal (/var/log/journal/01e5048e35d8447885896579d5b566cf) is 8.0M, max 195.6M, 187.6M free. Jul 2 11:30:42.607159 systemd-journald[1251]: Received client request to flush runtime journal. Jul 2 11:30:42.573332 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 11:30:42.573906 systemd[1]: Starting systemd-random-seed.service... Jul 2 11:30:42.589330 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 11:30:42.590029 systemd[1]: Starting systemd-sysctl.service... Jul 2 11:30:42.597092 systemd[1]: Starting systemd-sysusers.service... 
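journald reports 15.936ms spent flushing 1601 entries from the runtime journal to /var/log/journal. A two-line Python check of the average cost per entry implied by those figures:

    # Average flush cost per journal entry for the numbers reported above.
    flush_ms, entries = 15.936, 1601
    print(f"{flush_ms * 1000 / entries:.2f} us per entry")   # ~9.95 us per entry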
Jul 2 11:30:42.604844 systemd[1]: Starting systemd-udev-settle.service... Jul 2 11:30:42.612486 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 11:30:42.620389 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 11:30:42.628454 systemd[1]: Finished systemd-journal-flush.service. Jul 2 11:30:42.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.636491 systemd[1]: Finished systemd-random-seed.service. Jul 2 11:30:42.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.644501 systemd[1]: Finished systemd-sysctl.service. Jul 2 11:30:42.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.652499 systemd[1]: Finished systemd-sysusers.service. Jul 2 11:30:42.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.661454 systemd[1]: Reached target first-boot-complete.target. Jul 2 11:30:42.669970 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 11:30:42.679199 udevadm[1267]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 11:30:42.689206 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 11:30:42.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.860870 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 11:30:42.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.869000 audit: BPF prog-id=24 op=LOAD Jul 2 11:30:42.869000 audit: BPF prog-id=25 op=LOAD Jul 2 11:30:42.869000 audit: BPF prog-id=7 op=UNLOAD Jul 2 11:30:42.869000 audit: BPF prog-id=8 op=UNLOAD Jul 2 11:30:42.870468 systemd[1]: Starting systemd-udevd.service... Jul 2 11:30:42.881711 systemd-udevd[1270]: Using default interface naming scheme 'v252'. Jul 2 11:30:42.900410 systemd[1]: Started systemd-udevd.service. Jul 2 11:30:42.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:42.910494 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Jul 2 11:30:42.911000 audit: BPF prog-id=26 op=LOAD Jul 2 11:30:42.911824 systemd[1]: Starting systemd-networkd.service... 
Jul 2 11:30:42.936000 audit: BPF prog-id=27 op=LOAD Jul 2 11:30:42.954247 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jul 2 11:30:42.954362 kernel: ACPI: button: Sleep Button [SLPB] Jul 2 11:30:42.954000 audit: BPF prog-id=28 op=LOAD Jul 2 11:30:42.954000 audit: BPF prog-id=29 op=LOAD Jul 2 11:30:42.954938 systemd[1]: Starting systemd-userdbd.service... Jul 2 11:30:42.975554 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 2 11:30:42.975592 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 11:30:42.976235 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sdb6 scanned by (udev-worker) (1333) Jul 2 11:30:42.980233 kernel: ACPI: button: Power Button [PWRF] Jul 2 11:30:42.956000 audit[1340]: AVC avc: denied { confidentiality } for pid=1340 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 11:30:43.059287 kernel: IPMI message handler: version 39.2 Jul 2 11:30:43.068077 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 11:30:43.081793 systemd[1]: Started systemd-userdbd.service. Jul 2 11:30:42.956000 audit[1340]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=560f992edd00 a1=4d8bc a2=7f648e5b0bc5 a3=5 items=42 ppid=1270 pid=1340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 11:30:42.956000 audit: CWD cwd="/" Jul 2 11:30:42.956000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=1 name=(null) inode=12571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=2 name=(null) inode=12571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=3 name=(null) inode=12572 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=4 name=(null) inode=12571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=5 name=(null) inode=12573 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=6 name=(null) inode=12571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=7 name=(null) inode=12574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=8 name=(null) inode=12574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=9 name=(null) inode=12575 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=10 name=(null) inode=12574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=11 name=(null) inode=12576 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=12 name=(null) inode=12574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=13 name=(null) inode=12577 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=14 name=(null) inode=12574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=15 name=(null) inode=12578 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=16 name=(null) inode=12574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=17 name=(null) inode=12579 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=18 name=(null) inode=12571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=19 name=(null) inode=12580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=20 name=(null) inode=12580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=21 name=(null) inode=12581 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=22 name=(null) inode=12580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=23 name=(null) inode=12582 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=24 name=(null) inode=12580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=25 name=(null) inode=12583 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=26 name=(null) inode=12580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=27 name=(null) inode=12584 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=28 name=(null) inode=12580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=29 name=(null) inode=12585 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=30 name=(null) inode=12571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=31 name=(null) inode=12586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=32 name=(null) inode=12586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=33 name=(null) inode=12587 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=34 name=(null) inode=12586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=35 name=(null) inode=12588 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=36 name=(null) inode=12586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=37 name=(null) inode=12589 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=38 name=(null) inode=12586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=39 name=(null) inode=12590 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=40 name=(null) inode=12586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PATH item=41 name=(null) inode=12591 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:30:42.956000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 11:30:43.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:43.122237 kernel: ipmi device interface Jul 2 11:30:43.123233 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jul 2 11:30:43.123398 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jul 2 11:30:43.123506 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jul 2 11:30:43.123593 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jul 2 11:30:43.124239 kernel: i2c i2c-0: 1/4 memory slots populated (from DMI) Jul 2 11:30:43.265240 kernel: iTCO_vendor_support: vendor-support=0 Jul 2 11:30:43.265323 kernel: ipmi_si: IPMI System Interface driver Jul 2 11:30:43.302570 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jul 2 11:30:43.302819 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jul 2 11:30:43.342934 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jul 2 11:30:43.343117 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jul 2 11:30:43.383321 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jul 2 11:30:43.407236 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Jul 2 11:30:43.407385 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jul 2 11:30:43.407453 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Jul 2 11:30:43.423790 systemd-networkd[1307]: bond0: netdev ready Jul 2 11:30:43.425947 systemd-networkd[1307]: lo: Link UP Jul 2 11:30:43.425950 systemd-networkd[1307]: lo: Gained carrier Jul 2 11:30:43.426433 systemd-networkd[1307]: Enumeration completed Jul 2 11:30:43.426514 systemd[1]: Started systemd-networkd.service. Jul 2 11:30:43.426717 systemd-networkd[1307]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jul 2 11:30:43.429400 systemd-networkd[1307]: enp1s0f1np1: Configuring with /etc/systemd/network/10-b8:59:9f:de:85:2d.network. Jul 2 11:30:43.448343 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jul 2 11:30:43.448375 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jul 2 11:30:43.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:43.554067 kernel: intel_rapl_common: Found RAPL domain package Jul 2 11:30:43.554103 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Jul 2 11:30:43.554213 kernel: intel_rapl_common: Found RAPL domain core Jul 2 11:30:43.593087 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Jul 2 11:30:43.593183 kernel: intel_rapl_common: Found RAPL domain dram Jul 2 11:30:43.696230 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jul 2 11:30:43.716276 kernel: ipmi_ssif: IPMI SSIF Interface driver Jul 2 11:30:43.719504 systemd[1]: Finished systemd-udev-settle.service. 
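The long run of `audit: PATH` records above belongs to a single SYSCALL event (arch=c000003e, syscall=175, which is init_module on x86_64, issued by a udev worker via /usr/bin/udevadm): the module being loaded registers a batch of tracefs entries, all labelled tracefs_t, which is also what the "use of tracefs" lockdown AVC refers to (permissive=1, so it is only logged). Runs like this are easier to read once reduced to counts per nametype; the following is a minimal Python sketch, assuming the journal has been exported to a plain-text file (the file name given on the command line is only an example), and is not part of the boot log itself.

```python
#!/usr/bin/env python3
"""Reduce a run of audit PATH records to per-nametype counts.

A minimal sketch: it assumes the journal was exported to plain text
(e.g. `journalctl -o short > boot.log`, a hypothetical file name) and that
PATH records look like the ones above.
"""
import re
import sys
from collections import Counter

PATH_RE = re.compile(
    r"audit: PATH item=(?P<item>\d+) .*?"
    r"inode=(?P<inode>\d+) dev=(?P<dev>\S+) mode=(?P<mode>\d+) .*?"
    r"nametype=(?P<nametype>\w+)"
)

def summarize(lines):
    """Count PATH records by nametype (PARENT, CREATE, ...)."""
    counts = Counter()
    for line in lines:
        for match in PATH_RE.finditer(line):
            counts[match.group("nametype")] += 1
    return counts

if __name__ == "__main__":
    with open(sys.argv[1]) as fh:
        counts = summarize(fh)
    # Fed just the lines of the udev-worker event above, this yields
    # PARENT=21 CREATE=21, matching items=42 in the SYSCALL record.
    print(" ".join(f"{name}={n}" for name, n in counts.most_common()))
```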
Jul 2 11:30:43.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:43.727963 systemd[1]: Starting lvm2-activation-early.service... Jul 2 11:30:43.743790 lvm[1375]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 11:30:43.770639 systemd[1]: Finished lvm2-activation-early.service. Jul 2 11:30:43.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:43.779362 systemd[1]: Reached target cryptsetup.target. Jul 2 11:30:43.787860 systemd[1]: Starting lvm2-activation.service... Jul 2 11:30:43.790221 lvm[1376]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 11:30:43.818680 systemd[1]: Finished lvm2-activation.service. Jul 2 11:30:43.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:43.827367 systemd[1]: Reached target local-fs-pre.target. Jul 2 11:30:43.835307 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 11:30:43.835322 systemd[1]: Reached target local-fs.target. Jul 2 11:30:43.843316 systemd[1]: Reached target machines.target. Jul 2 11:30:43.851874 systemd[1]: Starting ldconfig.service... Jul 2 11:30:43.858633 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 11:30:43.858659 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 11:30:43.859150 systemd[1]: Starting systemd-boot-update.service... Jul 2 11:30:43.867735 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 11:30:43.878889 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 11:30:43.879540 systemd[1]: Starting systemd-sysext.service... Jul 2 11:30:43.879851 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1378 (bootctl) Jul 2 11:30:43.880634 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 11:30:43.888710 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 11:30:43.899659 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 11:30:43.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:43.900669 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 11:30:43.900802 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 11:30:43.944250 kernel: loop0: detected capacity change from 0 to 211296 Jul 2 11:30:44.008790 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 11:30:44.009128 systemd[1]: Finished systemd-machine-id-commit.service. 
Jul 2 11:30:44.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:44.038230 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 2 11:30:44.038354 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 11:30:44.055743 systemd-fsck[1387]: fsck.fat 4.2 (2021-01-31) Jul 2 11:30:44.055743 systemd-fsck[1387]: /dev/sdb1: 789 files, 119238/258078 clusters Jul 2 11:30:44.056270 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Jul 2 11:30:44.056300 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 2 11:30:44.064026 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 11:30:44.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:44.114097 systemd[1]: Mounting boot.mount... Jul 2 11:30:44.115980 systemd-networkd[1307]: enp1s0f0np0: Configuring with /etc/systemd/network/10-b8:59:9f:de:85:2c.network. Jul 2 11:30:44.116229 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Jul 2 11:30:44.125359 systemd[1]: Mounted boot.mount. Jul 2 11:30:44.143418 systemd[1]: Finished systemd-boot-update.service. Jul 2 11:30:44.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:44.163233 kernel: loop1: detected capacity change from 0 to 211296 Jul 2 11:30:44.175425 (sd-sysext)[1391]: Using extensions 'kubernetes'. Jul 2 11:30:44.175617 (sd-sysext)[1391]: Merged extensions into '/usr'. Jul 2 11:30:44.184823 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 11:30:44.185546 systemd[1]: Mounting usr-share-oem.mount... Jul 2 11:30:44.192433 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 11:30:44.193031 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 11:30:44.208747 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 11:30:44.221230 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 2 11:30:44.227815 systemd[1]: Starting modprobe@loop.service... Jul 2 11:30:44.234341 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 11:30:44.234408 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 11:30:44.234473 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 11:30:44.236038 systemd[1]: Mounted usr-share-oem.mount. Jul 2 11:30:44.242791 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 11:30:44.242856 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 2 11:30:44.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:44.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:44.252486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 11:30:44.252798 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 11:30:44.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:44.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:44.262412 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 11:30:44.262724 systemd[1]: Finished modprobe@loop.service. Jul 2 11:30:44.292293 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jul 2 11:30:44.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:44.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:44.309191 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 11:30:44.309412 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 11:30:44.311019 systemd[1]: Finished systemd-sysext.service. Jul 2 11:30:44.318866 ldconfig[1377]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 11:30:44.324298 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Jul 2 11:30:44.326025 systemd-networkd[1307]: bond0: Link UP Jul 2 11:30:44.326535 systemd-networkd[1307]: enp1s0f1np1: Link UP Jul 2 11:30:44.326888 systemd-networkd[1307]: enp1s0f1np1: Gained carrier Jul 2 11:30:44.329547 systemd-networkd[1307]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:59:9f:de:85:2c.network. Jul 2 11:30:44.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:44.340534 systemd[1]: Finished ldconfig.service. Jul 2 11:30:44.350274 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Jul 2 11:30:44.350309 kernel: bond0: active interface up! Jul 2 11:30:44.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:44.380940 systemd[1]: Starting ensure-sysext.service... 
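systemd-networkd is assembling an LACP bond in the messages above: bond0 is configured from /etc/systemd/network/05-bond0.network, the two 25G ports are matched by the per-MAC 10-b8:59:9f:de:85:2d/2c.network units, and the repeated kernel warning about missing 802.3ad responses indicates the bond is running in 802.3ad mode before the link partner answers. The unit files themselves are not captured in the journal, so the sketch below is a hypothetical reconstruction, including the 05-bond0.netdev file name and the DHCP placeholder, and not the host's real configuration.

```python
#!/usr/bin/env python3
"""Hypothetical reconstruction of the networkd units named in the log above.

Only the file names 05-bond0.network and 10-b8:59:9f:de:85:2d/2c.network
appear in the journal; everything below (the .netdev unit, the bond mode,
the addressing) is inferred or assumed, not copied from the host.
"""
from pathlib import Path

UNITS = {
    # The bond device itself is declared in a .netdev unit (file name assumed).
    "05-bond0.netdev": """\
[NetDev]
Name=bond0
Kind=bond

[Bond]
# The kernel's "No 802.3ad response from the link partner" warnings imply LACP.
Mode=802.3ad
""",
    # Each physical port is matched by MAC address and enslaved to the bond.
    "10-b8:59:9f:de:85:2c.network": """\
[Match]
MACAddress=b8:59:9f:de:85:2c

[Network]
Bond=bond0
""",
    "05-bond0.network": """\
[Match]
Name=bond0

[Network]
# Addressing is not visible in the log; DHCP here is purely a placeholder.
DHCP=yes
""",
}

def write_units(root: Path = Path("/etc/systemd/network")) -> None:
    """Install the sketched units (not called below; shown for completeness)."""
    for name, body in UNITS.items():
        (root / name).write_text(body)

if __name__ == "__main__":
    for name, body in UNITS.items():
        print(f"# {name}\n{body}")
```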
Jul 2 11:30:44.386258 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex Jul 2 11:30:44.392879 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 11:30:44.398808 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 11:30:44.399369 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 11:30:44.400438 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 11:30:44.403419 systemd[1]: Reloading. Jul 2 11:30:44.425890 /usr/lib/systemd/system-generators/torcx-generator[1417]: time="2024-07-02T11:30:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 11:30:44.425913 /usr/lib/systemd/system-generators/torcx-generator[1417]: time="2024-07-02T11:30:44Z" level=info msg="torcx already run" Jul 2 11:30:44.454242 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 2 11:30:44.477465 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 11:30:44.477473 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 11:30:44.486672 systemd-networkd[1307]: bond0: Gained carrier Jul 2 11:30:44.486826 systemd-networkd[1307]: enp1s0f0np0: Link UP Jul 2 11:30:44.486962 systemd-networkd[1307]: enp1s0f0np0: Gained carrier Jul 2 11:30:44.488595 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 11:30:44.525978 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:30:44.526005 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Jul 2 11:30:44.529000 audit: BPF prog-id=30 op=LOAD Jul 2 11:30:44.529000 audit: BPF prog-id=31 op=LOAD Jul 2 11:30:44.529000 audit: BPF prog-id=24 op=UNLOAD Jul 2 11:30:44.529000 audit: BPF prog-id=25 op=UNLOAD Jul 2 11:30:44.530000 audit: BPF prog-id=32 op=LOAD Jul 2 11:30:44.530000 audit: BPF prog-id=26 op=UNLOAD Jul 2 11:30:44.531000 audit: BPF prog-id=33 op=LOAD Jul 2 11:30:44.531000 audit: BPF prog-id=27 op=UNLOAD Jul 2 11:30:44.531000 audit: BPF prog-id=34 op=LOAD Jul 2 11:30:44.531000 audit: BPF prog-id=35 op=LOAD Jul 2 11:30:44.531000 audit: BPF prog-id=28 op=UNLOAD Jul 2 11:30:44.531000 audit: BPF prog-id=29 op=UNLOAD Jul 2 11:30:44.531000 audit: BPF prog-id=36 op=LOAD Jul 2 11:30:44.531000 audit: BPF prog-id=21 op=UNLOAD Jul 2 11:30:44.531000 audit: BPF prog-id=37 op=LOAD Jul 2 11:30:44.532000 audit: BPF prog-id=38 op=LOAD Jul 2 11:30:44.532000 audit: BPF prog-id=22 op=UNLOAD Jul 2 11:30:44.532000 audit: BPF prog-id=23 op=UNLOAD Jul 2 11:30:44.532480 systemd-networkd[1307]: enp1s0f1np1: Link DOWN Jul 2 11:30:44.532483 systemd-networkd[1307]: enp1s0f1np1: Lost carrier Jul 2 11:30:44.533827 systemd[1]: Finished systemd-tmpfiles-setup.service. 
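The reload above also surfaces three concrete clean-ups systemd is asking for: CPUShares= should become CPUWeight= and MemoryLimit= should become MemoryMax= in locksmithd.service, and docker.socket should listen on /run/docker.sock rather than /var/run/docker.sock. The small sketch below only maps those three directives; it is a throwaway helper, not something run on this host, and note that CPUShares= (default 1024) and CPUWeight= (default 100) use different value scales, so numeric values cannot simply be carried over.

```python
#!/usr/bin/env python3
"""Suggest replacements for the deprecated directives warned about above.

A helper sketch limited to the three deprecations actually logged; it prints
suggestions instead of editing anything under /usr/lib/systemd/system.
"""
# CPUShares= (2..262144, default 1024) and CPUWeight= (1..10000, default 100)
# are on different scales, so the value itself has to be rethought.
SUGGESTIONS = {
    "CPUShares=": "CPUWeight=",
    "MemoryLimit=": "MemoryMax=",
    "ListenStream=/var/run/docker.sock": "ListenStream=/run/docker.sock",
}

def suggest(line: str) -> str | None:
    """Return a suggested replacement line, or None if nothing is deprecated."""
    for old, new in SUGGESTIONS.items():
        if line.startswith(old):
            return new + line[len(old):]
    return None

if __name__ == "__main__":
    for line in ("CPUShares=1024", "MemoryLimit=512M",
                 "ListenStream=/var/run/docker.sock"):
        print(f"{line!r:45} -> {suggest(line)!r}")
```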
Jul 2 11:30:44.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:30:44.544321 systemd[1]: Starting audit-rules.service... Jul 2 11:30:44.552942 systemd[1]: Starting clean-ca-certificates.service... Jul 2 11:30:44.560000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 11:30:44.560000 audit[1495]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff75590930 a2=420 a3=0 items=0 ppid=1479 pid=1495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 11:30:44.560000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 11:30:44.561193 augenrules[1495]: No rules Jul 2 11:30:44.561941 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 11:30:44.571280 systemd[1]: Starting systemd-resolved.service... Jul 2 11:30:44.579212 systemd[1]: Starting systemd-timesyncd.service... Jul 2 11:30:44.586840 systemd[1]: Starting systemd-update-utmp.service... Jul 2 11:30:44.593619 systemd[1]: Finished audit-rules.service. Jul 2 11:30:44.600444 systemd[1]: Finished clean-ca-certificates.service. Jul 2 11:30:44.608436 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 11:30:44.621513 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 11:30:44.622209 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 11:30:44.629890 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 11:30:44.636859 systemd[1]: Starting modprobe@loop.service... Jul 2 11:30:44.643292 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 11:30:44.643394 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 11:30:44.644140 systemd[1]: Starting systemd-update-done.service... Jul 2 11:30:44.649663 systemd-resolved[1501]: Positive Trust Anchors: Jul 2 11:30:44.649670 systemd-resolved[1501]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 11:30:44.649689 systemd-resolved[1501]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 11:30:44.651319 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 11:30:44.652114 systemd[1]: Started systemd-timesyncd.service. Jul 2 11:30:44.653860 systemd-resolved[1501]: Using system hostname 'ci-3510.3.5-a-b7736b5df5'. Jul 2 11:30:44.660694 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 11:30:44.660765 systemd[1]: Finished modprobe@dm_mod.service. 
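augenrules reports "No rules", and the matching audit event records the auditctl invocation only as a hex-encoded PROCTITLE field. That field is simply the process's argv with NUL separators, hex-encoded because it contains non-printable bytes; the short Python sketch below decodes the exact value logged above.

```python
#!/usr/bin/env python3
"""Decode an audit PROCTITLE field like the one logged above.

PROCTITLE is the process's argv encoded as hex with NUL bytes between
arguments; the value below is copied verbatim from the audit record above.
"""
PROCTITLE = (
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
)

def decode_proctitle(hex_value: str) -> list[str]:
    """Split the hex-encoded argv on NUL bytes and decode each argument."""
    raw = bytes.fromhex(hex_value)
    return [arg.decode() for arg in raw.split(b"\x00") if arg]

if __name__ == "__main__":
    # Prints: ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
    print(decode_proctitle(PROCTITLE))
```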
Jul 2 11:30:44.669573 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 11:30:44.669638 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 11:30:44.682581 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 11:30:44.682644 systemd[1]: Finished modprobe@loop.service. Jul 2 11:30:44.694229 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 2 11:30:44.708552 systemd[1]: Finished systemd-update-done.service. Jul 2 11:30:44.715228 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Jul 2 11:30:44.716719 systemd-networkd[1307]: enp1s0f1np1: Link UP Jul 2 11:30:44.716884 systemd-networkd[1307]: enp1s0f1np1: Gained carrier Jul 2 11:30:44.723501 systemd[1]: Started systemd-resolved.service. Jul 2 11:30:44.731630 systemd[1]: Reached target network.target. Jul 2 11:30:44.746387 systemd[1]: Reached target nss-lookup.target. Jul 2 11:30:44.756228 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Jul 2 11:30:44.772400 systemd[1]: Reached target time-set.target. Jul 2 11:30:44.777227 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Jul 2 11:30:44.785341 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 11:30:44.785399 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 11:30:44.785686 systemd[1]: Finished systemd-update-utmp.service. Jul 2 11:30:44.795549 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 11:30:44.796232 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 11:30:44.803853 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 11:30:44.810810 systemd[1]: Starting modprobe@loop.service... Jul 2 11:30:44.817329 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 11:30:44.817397 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 11:30:44.817454 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 11:30:44.817911 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 11:30:44.817974 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 11:30:44.826497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 11:30:44.826554 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 11:30:44.834485 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 11:30:44.834542 systemd[1]: Finished modprobe@loop.service. Jul 2 11:30:44.842468 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 11:30:44.842538 systemd[1]: Reached target sysinit.target. Jul 2 11:30:44.850390 systemd[1]: Started motdgen.path. Jul 2 11:30:44.857373 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 11:30:44.867425 systemd[1]: Started logrotate.timer. Jul 2 11:30:44.874400 systemd[1]: Started mdadm.timer. Jul 2 11:30:44.881349 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 11:30:44.889323 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Jul 2 11:30:44.889383 systemd[1]: Reached target paths.target. Jul 2 11:30:44.896341 systemd[1]: Reached target timers.target. Jul 2 11:30:44.903519 systemd[1]: Listening on dbus.socket. Jul 2 11:30:44.910845 systemd[1]: Starting docker.socket... Jul 2 11:30:44.918800 systemd[1]: Listening on sshd.socket. Jul 2 11:30:44.925443 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 11:30:44.925507 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 11:30:44.927064 systemd[1]: Listening on docker.socket. Jul 2 11:30:44.935151 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 11:30:44.935209 systemd[1]: Reached target sockets.target. Jul 2 11:30:44.943343 systemd[1]: Reached target basic.target. Jul 2 11:30:44.950338 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 11:30:44.950386 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 11:30:44.950437 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 11:30:44.950987 systemd[1]: Starting containerd.service... Jul 2 11:30:44.957803 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 2 11:30:44.966889 systemd[1]: Starting coreos-metadata.service... Jul 2 11:30:44.973869 systemd[1]: Starting dbus.service... Jul 2 11:30:44.979862 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 11:30:44.984588 jq[1523]: false Jul 2 11:30:44.986878 systemd[1]: Starting extend-filesystems.service... Jul 2 11:30:44.987951 coreos-metadata[1516]: Jul 02 11:30:44.987 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 2 11:30:44.992132 dbus-daemon[1522]: [system] SELinux support is enabled Jul 2 11:30:44.994330 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 11:30:44.995063 systemd[1]: Starting modprobe@drm.service... Jul 2 11:30:44.995707 extend-filesystems[1524]: Found loop1 Jul 2 11:30:45.017355 extend-filesystems[1524]: Found sda Jul 2 11:30:45.017355 extend-filesystems[1524]: Found sdb Jul 2 11:30:45.017355 extend-filesystems[1524]: Found sdb1 Jul 2 11:30:45.017355 extend-filesystems[1524]: Found sdb2 Jul 2 11:30:45.017355 extend-filesystems[1524]: Found sdb3 Jul 2 11:30:45.017355 extend-filesystems[1524]: Found usr Jul 2 11:30:45.017355 extend-filesystems[1524]: Found sdb4 Jul 2 11:30:45.017355 extend-filesystems[1524]: Found sdb6 Jul 2 11:30:45.017355 extend-filesystems[1524]: Found sdb7 Jul 2 11:30:45.017355 extend-filesystems[1524]: Found sdb9 Jul 2 11:30:45.017355 extend-filesystems[1524]: Checking size of /dev/sdb9 Jul 2 11:30:45.017355 extend-filesystems[1524]: Resized partition /dev/sdb9 Jul 2 11:30:45.151273 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Jul 2 11:30:45.151309 coreos-metadata[1519]: Jul 02 11:30:44.997 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 2 11:30:45.003028 systemd[1]: Starting motdgen.service... Jul 2 11:30:45.151554 extend-filesystems[1534]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 11:30:45.025080 systemd[1]: Starting prepare-helm.service... 
Jul 2 11:30:45.045140 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 11:30:45.060914 systemd[1]: Starting sshd-keygen.service... Jul 2 11:30:45.068943 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 11:30:45.092255 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 11:30:45.093002 systemd[1]: Starting tcsd.service... Jul 2 11:30:45.124465 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 11:30:45.167779 jq[1555]: true Jul 2 11:30:45.124870 systemd[1]: Starting update-engine.service... Jul 2 11:30:45.143813 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 11:30:45.159252 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 11:30:45.160367 systemd[1]: Started dbus.service. Jul 2 11:30:45.176068 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 11:30:45.176159 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 11:30:45.176413 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 11:30:45.176458 update_engine[1554]: I0702 11:30:45.175977 1554 main.cc:92] Flatcar Update Engine starting Jul 2 11:30:45.176479 systemd[1]: Finished modprobe@drm.service. Jul 2 11:30:45.179175 update_engine[1554]: I0702 11:30:45.179164 1554 update_check_scheduler.cc:74] Next update check in 7m29s Jul 2 11:30:45.184461 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 11:30:45.184530 systemd[1]: Finished motdgen.service. Jul 2 11:30:45.191824 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 11:30:45.191903 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 11:30:45.202979 jq[1559]: true Jul 2 11:30:45.203703 systemd[1]: Finished ensure-sysext.service. Jul 2 11:30:45.211796 env[1560]: time="2024-07-02T11:30:45.211743766Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 11:30:45.217496 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jul 2 11:30:45.217592 systemd[1]: Condition check resulted in tcsd.service being skipped. Jul 2 11:30:45.217776 tar[1557]: linux-amd64/helm Jul 2 11:30:45.220630 env[1560]: time="2024-07-02T11:30:45.220583226Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 11:30:45.220665 env[1560]: time="2024-07-02T11:30:45.220655401Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 11:30:45.220981 systemd[1]: Started update-engine.service. Jul 2 11:30:45.221375 env[1560]: time="2024-07-02T11:30:45.221324120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 11:30:45.221375 env[1560]: time="2024-07-02T11:30:45.221345750Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 11:30:45.221496 env[1560]: time="2024-07-02T11:30:45.221457239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 11:30:45.221496 env[1560]: time="2024-07-02T11:30:45.221467832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 11:30:45.221496 env[1560]: time="2024-07-02T11:30:45.221476107Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 11:30:45.221496 env[1560]: time="2024-07-02T11:30:45.221481486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 11:30:45.221579 env[1560]: time="2024-07-02T11:30:45.221522611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 11:30:45.221669 env[1560]: time="2024-07-02T11:30:45.221636933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 11:30:45.221740 env[1560]: time="2024-07-02T11:30:45.221703618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 11:30:45.221740 env[1560]: time="2024-07-02T11:30:45.221712796Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 11:30:45.221776 env[1560]: time="2024-07-02T11:30:45.221737848Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 11:30:45.221776 env[1560]: time="2024-07-02T11:30:45.221745570Z" level=info msg="metadata content store policy set" policy=shared Jul 2 11:30:45.232890 systemd[1]: Started locksmithd.service. Jul 2 11:30:45.235743 env[1560]: time="2024-07-02T11:30:45.235698615Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 11:30:45.235743 env[1560]: time="2024-07-02T11:30:45.235719655Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 11:30:45.235743 env[1560]: time="2024-07-02T11:30:45.235728129Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 11:30:45.235808 env[1560]: time="2024-07-02T11:30:45.235746266Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 11:30:45.235808 env[1560]: time="2024-07-02T11:30:45.235757161Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 11:30:45.235808 env[1560]: time="2024-07-02T11:30:45.235765237Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 11:30:45.235808 env[1560]: time="2024-07-02T11:30:45.235771653Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 11:30:45.235808 env[1560]: time="2024-07-02T11:30:45.235779152Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 11:30:45.235808 env[1560]: time="2024-07-02T11:30:45.235788409Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Jul 2 11:30:45.235808 env[1560]: time="2024-07-02T11:30:45.235796179Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 11:30:45.235808 env[1560]: time="2024-07-02T11:30:45.235802651Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 11:30:45.237309 env[1560]: time="2024-07-02T11:30:45.235809607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 11:30:45.237309 env[1560]: time="2024-07-02T11:30:45.237239874Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 11:30:45.237309 env[1560]: time="2024-07-02T11:30:45.237287971Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 11:30:45.237456 env[1560]: time="2024-07-02T11:30:45.237419397Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 11:30:45.237456 env[1560]: time="2024-07-02T11:30:45.237436196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 11:30:45.237456 env[1560]: time="2024-07-02T11:30:45.237443729Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 11:30:45.237516 env[1560]: time="2024-07-02T11:30:45.237471766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 11:30:45.237516 env[1560]: time="2024-07-02T11:30:45.237479921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 11:30:45.237516 env[1560]: time="2024-07-02T11:30:45.237486434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 11:30:45.237516 env[1560]: time="2024-07-02T11:30:45.237492830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 11:30:45.237516 env[1560]: time="2024-07-02T11:30:45.237499599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 11:30:45.237516 env[1560]: time="2024-07-02T11:30:45.237506543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 11:30:45.237516 env[1560]: time="2024-07-02T11:30:45.237513819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 11:30:45.237622 env[1560]: time="2024-07-02T11:30:45.237520191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 11:30:45.237622 env[1560]: time="2024-07-02T11:30:45.237528251Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 11:30:45.237674 env[1560]: time="2024-07-02T11:30:45.237631899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 11:30:45.237674 env[1560]: time="2024-07-02T11:30:45.237640894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 11:30:45.237674 env[1560]: time="2024-07-02T11:30:45.237648264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 2 11:30:45.237674 env[1560]: time="2024-07-02T11:30:45.237654853Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 11:30:45.237674 env[1560]: time="2024-07-02T11:30:45.237662776Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 11:30:45.237674 env[1560]: time="2024-07-02T11:30:45.237669720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 11:30:45.237766 env[1560]: time="2024-07-02T11:30:45.237679509Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 11:30:45.237766 env[1560]: time="2024-07-02T11:30:45.237701097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 11:30:45.237838 env[1560]: time="2024-07-02T11:30:45.237814157Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 11:30:45.245351 env[1560]: time="2024-07-02T11:30:45.237845516Z" level=info msg="Connect containerd service" Jul 2 11:30:45.245351 env[1560]: time="2024-07-02T11:30:45.237864415Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 11:30:45.245351 env[1560]: time="2024-07-02T11:30:45.238126785Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network 
for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 11:30:45.245351 env[1560]: time="2024-07-02T11:30:45.238220035Z" level=info msg="Start subscribing containerd event" Jul 2 11:30:45.245351 env[1560]: time="2024-07-02T11:30:45.238252423Z" level=info msg="Start recovering state" Jul 2 11:30:45.245351 env[1560]: time="2024-07-02T11:30:45.238251457Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 11:30:45.245351 env[1560]: time="2024-07-02T11:30:45.238275948Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 11:30:45.245351 env[1560]: time="2024-07-02T11:30:45.238284810Z" level=info msg="Start event monitor" Jul 2 11:30:45.245351 env[1560]: time="2024-07-02T11:30:45.238291481Z" level=info msg="Start snapshots syncer" Jul 2 11:30:45.245351 env[1560]: time="2024-07-02T11:30:45.238296917Z" level=info msg="Start cni network conf syncer for default" Jul 2 11:30:45.245351 env[1560]: time="2024-07-02T11:30:45.238297654Z" level=info msg="containerd successfully booted in 0.026906s" Jul 2 11:30:45.245351 env[1560]: time="2024-07-02T11:30:45.238301128Z" level=info msg="Start streaming server" Jul 2 11:30:45.245546 bash[1589]: Updated "/home/core/.ssh/authorized_keys" Jul 2 11:30:45.239313 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 11:30:45.239329 systemd[1]: Reached target system-config.target. Jul 2 11:30:45.248623 systemd[1]: Starting systemd-logind.service... Jul 2 11:30:45.255325 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 11:30:45.255344 systemd[1]: Reached target user-config.target. Jul 2 11:30:45.263409 systemd[1]: Started containerd.service. Jul 2 11:30:45.270480 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 11:30:45.272774 systemd-logind[1596]: Watching system buttons on /dev/input/event3 (Power Button) Jul 2 11:30:45.272785 systemd-logind[1596]: Watching system buttons on /dev/input/event2 (Sleep Button) Jul 2 11:30:45.272795 systemd-logind[1596]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jul 2 11:30:45.272906 systemd-logind[1596]: New seat seat0. Jul 2 11:30:45.280532 systemd[1]: Started systemd-logind.service. Jul 2 11:30:45.294109 locksmithd[1592]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 11:30:45.467089 tar[1557]: linux-amd64/LICENSE Jul 2 11:30:45.467164 tar[1557]: linux-amd64/README.md Jul 2 11:30:45.469754 systemd[1]: Finished prepare-helm.service. Jul 2 11:30:45.536232 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Jul 2 11:30:45.564050 extend-filesystems[1534]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Jul 2 11:30:45.564050 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 56 Jul 2 11:30:45.564050 extend-filesystems[1534]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Jul 2 11:30:45.590398 extend-filesystems[1524]: Resized filesystem in /dev/sdb9 Jul 2 11:30:45.564406 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 11:30:45.616408 sshd_keygen[1551]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 11:30:45.564486 systemd[1]: Finished extend-filesystems.service. 
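The resize2fs and EXT4 messages above pin down the actual growth of the root filesystem: 553472 to 116605649 blocks of 4 KiB, i.e. roughly 2.1 GiB grown to about 444.8 GiB. The old_desc_blocks=1 / new_desc_blocks=56 figures also follow from ext4's layout, with 32768 blocks per block group and, assuming the 64bit feature's 64-byte group descriptors, 64 descriptors per 4 KiB block; the short sketch below reproduces all four numbers.

```python
#!/usr/bin/env python3
"""Back-of-the-envelope check of the resize2fs numbers logged above.

Assumes the ext4 parameters implied by the log: 4 KiB blocks, 32768 blocks
per block group (one 4 KiB block bitmap), and 64-byte group descriptors
(the 64bit feature), which is what reproduces old/new_desc_blocks = 1/56.
"""
import math

BLOCK_SIZE = 4096                    # "116605649 (4k) blocks"
BLOCKS_PER_GROUP = 8 * BLOCK_SIZE    # one block-bitmap block covers 32768 blocks
DESC_PER_BLOCK = BLOCK_SIZE // 64    # 64-byte descriptors with the 64bit feature

def describe(blocks: int) -> str:
    size_gib = blocks * BLOCK_SIZE / 2**30
    groups = math.ceil(blocks / BLOCKS_PER_GROUP)
    desc_blocks = math.ceil(groups / DESC_PER_BLOCK)
    return f"{blocks} blocks = {size_gib:.1f} GiB, {groups} groups, {desc_blocks} descriptor block(s)"

if __name__ == "__main__":
    print("before:", describe(553_472))      # ~2.1 GiB, 1 descriptor block
    print("after: ", describe(116_605_649))  # ~444.8 GiB, 56 descriptor blocks
```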
Jul 2 11:30:45.577326 systemd[1]: Finished sshd-keygen.service. Jul 2 11:30:45.608151 systemd[1]: Starting issuegen.service... Jul 2 11:30:45.623539 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 11:30:45.623616 systemd[1]: Finished issuegen.service. Jul 2 11:30:45.631075 systemd[1]: Starting systemd-user-sessions.service... Jul 2 11:30:45.639630 systemd[1]: Finished systemd-user-sessions.service. Jul 2 11:30:45.649047 systemd[1]: Started getty@tty1.service. Jul 2 11:30:45.657128 systemd[1]: Started serial-getty@ttyS1.service. Jul 2 11:30:45.665555 systemd[1]: Reached target getty.target. Jul 2 11:30:46.000329 systemd-networkd[1307]: bond0: Gained IPv6LL Jul 2 11:30:46.512742 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 11:30:46.523559 systemd[1]: Reached target network-online.target. Jul 2 11:30:46.532475 systemd[1]: Starting kubelet.service... Jul 2 11:30:47.139233 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Jul 2 11:30:47.185759 systemd[1]: Started kubelet.service. Jul 2 11:30:47.723453 kubelet[1624]: E0702 11:30:47.723354 1624 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 11:30:47.724698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 11:30:47.724771 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 11:30:50.676452 login[1618]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 11:30:50.685701 systemd-logind[1596]: New session 1 of user core. Jul 2 11:30:50.686207 systemd[1]: Created slice user-500.slice. Jul 2 11:30:50.686488 login[1617]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 11:30:50.686892 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 11:30:50.689238 systemd-logind[1596]: New session 2 of user core. Jul 2 11:30:50.692562 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 11:30:50.693227 systemd[1]: Starting user@500.service... Jul 2 11:30:50.695549 (systemd)[1645]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:30:50.782421 systemd[1645]: Queued start job for default target default.target. Jul 2 11:30:50.782669 systemd[1645]: Reached target paths.target. Jul 2 11:30:50.782681 systemd[1645]: Reached target sockets.target. Jul 2 11:30:50.782689 systemd[1645]: Reached target timers.target. Jul 2 11:30:50.782696 systemd[1645]: Reached target basic.target. Jul 2 11:30:50.782717 systemd[1645]: Reached target default.target. Jul 2 11:30:50.782731 systemd[1645]: Startup finished in 83ms. Jul 2 11:30:50.782777 systemd[1]: Started user@500.service. Jul 2 11:30:50.783366 systemd[1]: Started session-1.scope. Jul 2 11:30:50.783719 systemd[1]: Started session-2.scope. 
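The kubelet failure above is the usual bootstrap ordering on kubeadm-style nodes: /var/lib/kubelet/config.yaml is normally written by `kubeadm init` or `kubeadm join`, so kubelet exits and systemd keeps scheduling restarts until that happens. The sketch below only checks for the file and shows what the start of such a configuration typically looks like; the YAML stub is a hypothetical example, not recovered from this host.

```python
#!/usr/bin/env python3
"""Illustrate the file the failing kubelet.service above is looking for.

On kubeadm-based setups /var/lib/kubelet/config.yaml is produced by
`kubeadm init`/`kubeadm join`; the stub below is a minimal hypothetical
KubeletConfiguration, not the file this host eventually used.
"""
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

STUB = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# cgroupDriver must match the container runtime; the containerd config dump
# above shows SystemdCgroup:true for runc, so systemd is the matching choice.
cgroupDriver: systemd
"""

if __name__ == "__main__":
    if KUBELET_CONFIG.exists():
        print(f"{KUBELET_CONFIG} already exists; leaving it alone")
    else:
        print(f"{KUBELET_CONFIG} is missing, which is what kubelet is logging")
        print("a kubeadm-generated file would start like:\n" + STUB)
```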
Jul 2 11:30:50.800729 coreos-metadata[1516]: Jul 02 11:30:50.800 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Jul 2 11:30:50.800885 coreos-metadata[1519]: Jul 02 11:30:50.800 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Jul 2 11:30:51.801009 coreos-metadata[1516]: Jul 02 11:30:51.800 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 2 11:30:51.801816 coreos-metadata[1519]: Jul 02 11:30:51.800 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 2 11:30:52.381492 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Jul 2 11:30:52.381649 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Jul 2 11:30:52.875515 coreos-metadata[1516]: Jul 02 11:30:52.875 INFO Fetch successful Jul 2 11:30:52.875727 coreos-metadata[1519]: Jul 02 11:30:52.875 INFO Fetch successful Jul 2 11:30:52.893150 systemd[1]: Created slice system-sshd.slice. Jul 2 11:30:52.893887 systemd[1]: Started sshd@0-139.178.91.9:22-139.178.68.195:42096.service. Jul 2 11:30:52.912233 systemd[1]: Finished coreos-metadata.service. Jul 2 11:30:52.913178 systemd[1]: Started packet-phone-home.service. Jul 2 11:30:52.913607 unknown[1516]: wrote ssh authorized keys file for user: core Jul 2 11:30:52.918717 curl[1670]: % Total % Received % Xferd Average Speed Time Time Time Current Jul 2 11:30:52.918877 curl[1670]: Dload Upload Total Spent Left Speed Jul 2 11:30:52.924718 update-ssh-keys[1671]: Updated "/home/core/.ssh/authorized_keys" Jul 2 11:30:52.924921 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 2 11:30:52.925106 systemd[1]: Reached target multi-user.target. Jul 2 11:30:52.925797 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 11:30:52.930002 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 11:30:52.930074 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 11:30:52.930158 systemd[1]: Startup finished in 1.916s (kernel) + 19.377s (initrd) + 14.711s (userspace) = 36.004s. Jul 2 11:30:52.938792 sshd[1666]: Accepted publickey for core from 139.178.68.195 port 42096 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:30:52.939572 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:30:52.942094 systemd-logind[1596]: New session 3 of user core. Jul 2 11:30:52.942554 systemd[1]: Started session-3.scope. Jul 2 11:30:52.993902 systemd[1]: Started sshd@1-139.178.91.9:22-139.178.68.195:53324.service. Jul 2 11:30:53.028350 sshd[1677]: Accepted publickey for core from 139.178.68.195 port 53324 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:30:53.029111 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:30:53.031229 systemd-logind[1596]: New session 4 of user core. Jul 2 11:30:53.031743 systemd[1]: Started session-4.scope. Jul 2 11:30:53.082066 sshd[1677]: pam_unix(sshd:session): session closed for user core Jul 2 11:30:53.084551 systemd[1]: sshd@1-139.178.91.9:22-139.178.68.195:53324.service: Deactivated successfully. Jul 2 11:30:53.085123 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 11:30:53.085832 systemd-logind[1596]: Session 4 logged out. Waiting for processes to exit. 
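Both coreos-metadata fetchers fail their first attempt with a DNS resolution error because they race the bond and resolver setup, then retry and succeed once the network is genuinely online. The same fetch-with-retry pattern, reduced to a minimal Python sketch, follows; the URL is the one from the log, while the attempt count and backoff values are arbitrary choices, and running this anywhere but a Packet/Equinix Metal host will simply keep failing.

```python
#!/usr/bin/env python3
"""Mimic the fetch-and-retry behaviour coreos-metadata shows above.

A minimal sketch only: attempt #1 in the log fails on DNS, attempt #2
succeeds once the network is up. The retry/backoff parameters are assumptions.
"""
import time
import urllib.request

URL = "https://metadata.packet.net/metadata"

def fetch(url: str, attempts: int = 5, delay: float = 1.0) -> bytes:
    """Fetch url, retrying with exponential backoff on any network error."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except OSError as exc:          # covers DNS errors, timeouts, resets
            print(f"attempt #{attempt} failed: {exc}")
            time.sleep(delay)
            delay *= 2
    raise RuntimeError(f"giving up on {url}")

if __name__ == "__main__":
    print(fetch(URL)[:200])
```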
Jul 2 11:30:53.086833 systemd[1]: Started sshd@2-139.178.91.9:22-139.178.68.195:53330.service. Jul 2 11:30:53.087718 systemd-logind[1596]: Removed session 4. Jul 2 11:30:53.102986 curl[1670]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Jul 2 11:30:53.104027 systemd[1]: packet-phone-home.service: Deactivated successfully. Jul 2 11:30:53.131309 sshd[1683]: Accepted publickey for core from 139.178.68.195 port 53330 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:30:53.132094 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:30:53.134827 systemd-logind[1596]: New session 5 of user core. Jul 2 11:30:53.135321 systemd[1]: Started session-5.scope. Jul 2 11:30:53.189146 sshd[1683]: pam_unix(sshd:session): session closed for user core Jul 2 11:30:53.196131 systemd[1]: sshd@2-139.178.91.9:22-139.178.68.195:53330.service: Deactivated successfully. Jul 2 11:30:53.197761 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 11:30:53.199532 systemd-logind[1596]: Session 5 logged out. Waiting for processes to exit. Jul 2 11:30:53.202155 systemd[1]: Started sshd@3-139.178.91.9:22-139.178.68.195:53346.service. Jul 2 11:30:53.205042 systemd-logind[1596]: Removed session 5. Jul 2 11:30:52.781349 systemd-resolved[1501]: Clock change detected. Flushing caches. Jul 2 11:30:52.818774 systemd-journald[1251]: Time jumped backwards, rotating. Jul 2 11:30:52.781529 systemd-timesyncd[1502]: Contacted time server 209.38.132.42:123 (0.flatcar.pool.ntp.org). Jul 2 11:30:52.781656 systemd-timesyncd[1502]: Initial clock synchronization to Tue 2024-07-02 11:30:52.781194 UTC. Jul 2 11:30:52.838166 sshd[1689]: Accepted publickey for core from 139.178.68.195 port 53346 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:30:52.838876 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:30:52.841177 systemd-logind[1596]: New session 6 of user core. Jul 2 11:30:52.841652 systemd[1]: Started session-6.scope. Jul 2 11:30:52.893095 sshd[1689]: pam_unix(sshd:session): session closed for user core Jul 2 11:30:52.896119 systemd[1]: sshd@3-139.178.91.9:22-139.178.68.195:53346.service: Deactivated successfully. Jul 2 11:30:52.896860 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 11:30:52.897659 systemd-logind[1596]: Session 6 logged out. Waiting for processes to exit. Jul 2 11:30:52.899021 systemd[1]: Started sshd@4-139.178.91.9:22-139.178.68.195:53356.service. Jul 2 11:30:52.900164 systemd-logind[1596]: Removed session 6. Jul 2 11:30:52.948267 sshd[1696]: Accepted publickey for core from 139.178.68.195 port 53356 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:30:52.948941 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:30:52.951236 systemd-logind[1596]: New session 7 of user core. Jul 2 11:30:52.951695 systemd[1]: Started session-7.scope. Jul 2 11:30:53.019340 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 11:30:53.020038 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 11:30:53.074645 systemd[1]: Starting docker.service... 
Jul 2 11:30:53.134471 env[1714]: time="2024-07-02T11:30:53.134335612Z" level=info msg="Starting up" Jul 2 11:30:53.136415 env[1714]: time="2024-07-02T11:30:53.136329559Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 11:30:53.136415 env[1714]: time="2024-07-02T11:30:53.136367164Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 11:30:53.136415 env[1714]: time="2024-07-02T11:30:53.136410827Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 11:30:53.136635 env[1714]: time="2024-07-02T11:30:53.136445185Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 11:30:53.139189 env[1714]: time="2024-07-02T11:30:53.139115964Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 11:30:53.139189 env[1714]: time="2024-07-02T11:30:53.139147517Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 11:30:53.139189 env[1714]: time="2024-07-02T11:30:53.139173726Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 11:30:53.139189 env[1714]: time="2024-07-02T11:30:53.139197480Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 11:30:53.233871 env[1714]: time="2024-07-02T11:30:53.233757928Z" level=info msg="Loading containers: start." Jul 2 11:30:53.400343 kernel: Initializing XFRM netlink socket Jul 2 11:30:53.461657 env[1714]: time="2024-07-02T11:30:53.461634420Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 11:30:53.568942 systemd-networkd[1307]: docker0: Link UP Jul 2 11:30:53.587921 env[1714]: time="2024-07-02T11:30:53.587830368Z" level=info msg="Loading containers: done." Jul 2 11:30:53.606914 env[1714]: time="2024-07-02T11:30:53.606805722Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 11:30:53.607237 env[1714]: time="2024-07-02T11:30:53.607188291Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 11:30:53.607539 env[1714]: time="2024-07-02T11:30:53.607458604Z" level=info msg="Daemon has completed initialization" Jul 2 11:30:53.615141 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck415510220-merged.mount: Deactivated successfully. Jul 2 11:30:53.633336 systemd[1]: Started docker.service. Jul 2 11:30:53.649159 env[1714]: time="2024-07-02T11:30:53.649010856Z" level=info msg="API listen on /run/docker.sock" Jul 2 11:30:54.776889 env[1560]: time="2024-07-02T11:30:54.776750579Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 11:30:55.606034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4016781591.mount: Deactivated successfully. Jul 2 11:30:57.551302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 11:30:57.551474 systemd[1]: Stopped kubelet.service. Jul 2 11:30:57.552303 systemd[1]: Starting kubelet.service... Jul 2 11:30:57.746950 systemd[1]: Started kubelet.service. 
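At this point dockerd reports "API listen on /run/docker.sock". The sketch below is one minimal way to talk to that socket with the Docker Engine Go SDK; nothing here is taken from the host beyond the socket the daemon says it is serving, and the printed fields are just for illustration.

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv falls back to unix:///var/run/docker.sock when DOCKER_HOST is
	// unset; on this host that resolves to the /run/docker.sock the daemon
	// reports, via the /var/run -> /run symlink noted later in the journal.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ver, err := cli.ServerVersion(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("docker %s (API %s) reachable over the local socket\n",
		ver.Version, ver.APIVersion)
}
```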
Jul 2 11:30:57.757892 env[1560]: time="2024-07-02T11:30:57.757864403Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:30:57.758631 env[1560]: time="2024-07-02T11:30:57.758611332Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:30:57.759626 env[1560]: time="2024-07-02T11:30:57.759613130Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:30:57.760549 env[1560]: time="2024-07-02T11:30:57.760533473Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:30:57.761036 env[1560]: time="2024-07-02T11:30:57.761022329Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jul 2 11:30:57.767820 env[1560]: time="2024-07-02T11:30:57.767766377Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 11:30:57.791789 kubelet[1879]: E0702 11:30:57.791723 1879 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 11:30:57.794135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 11:30:57.794213 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
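The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style bootstrap that file is written by kubeadm init/join, and systemd simply keeps restarting the unit until it appears. The sketch below generates a deliberately small KubeletConfiguration with the upstream Go types to show what the missing file contains; the field choices (static pod path, cgroup driver, containerd socket) are assumptions for illustration, not the configuration this node ends up with.

```go
package main

import (
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	// A deliberately small KubeletConfiguration. kubeadm normally renders this
	// file during init/join; the fields below are illustrative assumptions,
	// not a copy of what ends up on this node.
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		StaticPodPath:            "/etc/kubernetes/manifests",
		CgroupDriver:             "systemd",
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
	}

	out, err := yaml.Marshal(&cfg)
	if err != nil {
		panic(err)
	}
	// Writing this file is what stops the restart loop seen in the journal.
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", out, 0o644); err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```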
Jul 2 11:30:59.710945 env[1560]: time="2024-07-02T11:30:59.710890205Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:30:59.711576 env[1560]: time="2024-07-02T11:30:59.711516670Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:30:59.712657 env[1560]: time="2024-07-02T11:30:59.712618115Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:30:59.713588 env[1560]: time="2024-07-02T11:30:59.713547532Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:30:59.714020 env[1560]: time="2024-07-02T11:30:59.713970096Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jul 2 11:30:59.722443 env[1560]: time="2024-07-02T11:30:59.722382162Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 11:31:01.050795 env[1560]: time="2024-07-02T11:31:01.050745391Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:01.051648 env[1560]: time="2024-07-02T11:31:01.051610866Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:01.052667 env[1560]: time="2024-07-02T11:31:01.052628460Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:01.053623 env[1560]: time="2024-07-02T11:31:01.053582140Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:01.054101 env[1560]: time="2024-07-02T11:31:01.054057391Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jul 2 11:31:01.059695 env[1560]: time="2024-07-02T11:31:01.059665290Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 11:31:02.282791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount778916749.mount: Deactivated successfully. 
Jul 2 11:31:02.664902 env[1560]: time="2024-07-02T11:31:02.664820557Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:02.665401 env[1560]: time="2024-07-02T11:31:02.665361879Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:02.666049 env[1560]: time="2024-07-02T11:31:02.666034137Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:02.666876 env[1560]: time="2024-07-02T11:31:02.666829199Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:02.667483 env[1560]: time="2024-07-02T11:31:02.667439513Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jul 2 11:31:02.672978 env[1560]: time="2024-07-02T11:31:02.672962772Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 11:31:03.178026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount444995663.mount: Deactivated successfully. Jul 2 11:31:03.880064 env[1560]: time="2024-07-02T11:31:03.880013729Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:03.880616 env[1560]: time="2024-07-02T11:31:03.880582852Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:03.882852 env[1560]: time="2024-07-02T11:31:03.882796708Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:03.884032 env[1560]: time="2024-07-02T11:31:03.883980890Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:03.884582 env[1560]: time="2024-07-02T11:31:03.884537065Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 11:31:03.890166 env[1560]: time="2024-07-02T11:31:03.890145824Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 11:31:04.376249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount320141121.mount: Deactivated successfully. 
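The PullImage / ImageCreate pairs above are the CRI image service at work: each pull goes to containerd over a local gRPC unix socket (the same kind of connection the earlier "parsed scheme: unix" / "pick_first" lines describe) and returns a digest-pinned reference. A minimal sketch of that call follows; the containerd socket path is the common default and an assumption, while the pause:3.9 tag matches the pull that starts just above.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Plaintext gRPC over a local unix socket; the socket path is assumed.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	images := runtimeapi.NewImageServiceClient(conn)
	resp, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/pause:3.9"},
	})
	if err != nil {
		panic(err)
	}
	// On success containerd emits ImageCreate events like those in the journal
	// and hands back the resolved reference.
	fmt.Println("PullImage returned image reference", resp.ImageRef)
}
```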
Jul 2 11:31:04.377805 env[1560]: time="2024-07-02T11:31:04.377766910Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:04.378470 env[1560]: time="2024-07-02T11:31:04.378430748Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:04.379410 env[1560]: time="2024-07-02T11:31:04.379349243Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:04.380068 env[1560]: time="2024-07-02T11:31:04.380024845Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:04.380730 env[1560]: time="2024-07-02T11:31:04.380694305Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 11:31:04.386974 env[1560]: time="2024-07-02T11:31:04.386953576Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 11:31:04.932674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2864865398.mount: Deactivated successfully. Jul 2 11:31:07.450642 env[1560]: time="2024-07-02T11:31:07.450584747Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:07.452098 env[1560]: time="2024-07-02T11:31:07.452080538Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:07.453337 env[1560]: time="2024-07-02T11:31:07.453308532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:07.454409 env[1560]: time="2024-07-02T11:31:07.454381097Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:07.454899 env[1560]: time="2024-07-02T11:31:07.454885423Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 11:31:07.990172 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 11:31:07.990330 systemd[1]: Stopped kubelet.service. Jul 2 11:31:07.991211 systemd[1]: Starting kubelet.service... Jul 2 11:31:08.172738 systemd[1]: Started kubelet.service. 
Jul 2 11:31:08.213253 kubelet[2048]: E0702 11:31:08.213226 2048 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 11:31:08.214389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 11:31:08.214460 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 11:31:09.393336 systemd[1]: Stopped kubelet.service. Jul 2 11:31:09.394641 systemd[1]: Starting kubelet.service... Jul 2 11:31:09.404525 systemd[1]: Reloading. Jul 2 11:31:09.437897 /usr/lib/systemd/system-generators/torcx-generator[2091]: time="2024-07-02T11:31:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 11:31:09.437920 /usr/lib/systemd/system-generators/torcx-generator[2091]: time="2024-07-02T11:31:09Z" level=info msg="torcx already run" Jul 2 11:31:09.491711 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 11:31:09.491722 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 11:31:09.503474 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 11:31:09.562389 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 11:31:09.562439 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 11:31:09.562562 systemd[1]: Stopped kubelet.service. Jul 2 11:31:09.563498 systemd[1]: Starting kubelet.service... Jul 2 11:31:09.753485 systemd[1]: Started kubelet.service. Jul 2 11:31:09.791078 kubelet[2158]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 11:31:09.791078 kubelet[2158]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 11:31:09.791078 kubelet[2158]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 11:31:09.791394 kubelet[2158]: I0702 11:31:09.791097 2158 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 11:31:09.971309 kubelet[2158]: I0702 11:31:09.971274 2158 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 11:31:09.971309 kubelet[2158]: I0702 11:31:09.971290 2158 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 11:31:09.971465 kubelet[2158]: I0702 11:31:09.971427 2158 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 11:31:09.991783 kubelet[2158]: E0702 11:31:09.991735 2158 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.91.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.91.9:6443: connect: connection refused Jul 2 11:31:09.992759 kubelet[2158]: I0702 11:31:09.992733 2158 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 11:31:10.017271 kubelet[2158]: I0702 11:31:10.017205 2158 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 11:31:10.017351 kubelet[2158]: I0702 11:31:10.017344 2158 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 11:31:10.017451 kubelet[2158]: I0702 11:31:10.017441 2158 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 11:31:10.017543 kubelet[2158]: I0702 11:31:10.017457 2158 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 11:31:10.017543 kubelet[2158]: I0702 11:31:10.017463 2158 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 11:31:10.018141 kubelet[2158]: I0702 11:31:10.018110 2158 state_mem.go:36] "Initialized new in-memory state store" Jul 2 11:31:10.018171 kubelet[2158]: I0702 11:31:10.018165 2158 kubelet.go:396] "Attempting to sync node with API server" Jul 2 11:31:10.018188 kubelet[2158]: I0702 
11:31:10.018174 2158 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 11:31:10.018210 kubelet[2158]: I0702 11:31:10.018188 2158 kubelet.go:312] "Adding apiserver pod source" Jul 2 11:31:10.018210 kubelet[2158]: I0702 11:31:10.018195 2158 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 11:31:10.018488 kubelet[2158]: W0702 11:31:10.018463 2158 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://139.178.91.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.91.9:6443: connect: connection refused Jul 2 11:31:10.018530 kubelet[2158]: E0702 11:31:10.018498 2158 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.91.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.91.9:6443: connect: connection refused Jul 2 11:31:10.018530 kubelet[2158]: W0702 11:31:10.018489 2158 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://139.178.91.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-b7736b5df5&limit=500&resourceVersion=0": dial tcp 139.178.91.9:6443: connect: connection refused Jul 2 11:31:10.018530 kubelet[2158]: E0702 11:31:10.018524 2158 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.91.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-b7736b5df5&limit=500&resourceVersion=0": dial tcp 139.178.91.9:6443: connect: connection refused Jul 2 11:31:10.019319 kubelet[2158]: I0702 11:31:10.019310 2158 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 11:31:10.024936 kubelet[2158]: I0702 11:31:10.024926 2158 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 11:31:10.024978 kubelet[2158]: W0702 11:31:10.024960 2158 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 11:31:10.025242 kubelet[2158]: I0702 11:31:10.025234 2158 server.go:1256] "Started kubelet" Jul 2 11:31:10.025284 kubelet[2158]: I0702 11:31:10.025276 2158 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 11:31:10.025347 kubelet[2158]: I0702 11:31:10.025333 2158 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 11:31:10.025493 kubelet[2158]: I0702 11:31:10.025482 2158 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 11:31:10.034958 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Jul 2 11:31:10.035017 kubelet[2158]: I0702 11:31:10.035006 2158 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 11:31:10.035064 kubelet[2158]: I0702 11:31:10.035047 2158 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 11:31:10.035096 kubelet[2158]: I0702 11:31:10.035070 2158 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 11:31:10.035133 kubelet[2158]: I0702 11:31:10.035123 2158 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 11:31:10.035244 kubelet[2158]: E0702 11:31:10.035234 2158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.91.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-b7736b5df5?timeout=10s\": dial tcp 139.178.91.9:6443: connect: connection refused" interval="200ms" Jul 2 11:31:10.035348 kubelet[2158]: W0702 11:31:10.035316 2158 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://139.178.91.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.91.9:6443: connect: connection refused Jul 2 11:31:10.035416 kubelet[2158]: I0702 11:31:10.035349 2158 server.go:461] "Adding debug handlers to kubelet server" Jul 2 11:31:10.035416 kubelet[2158]: E0702 11:31:10.035358 2158 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.91.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.91.9:6443: connect: connection refused Jul 2 11:31:10.035543 kubelet[2158]: E0702 11:31:10.035507 2158 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 11:31:10.041745 kubelet[2158]: I0702 11:31:10.041734 2158 factory.go:221] Registration of the systemd container factory successfully Jul 2 11:31:10.041799 kubelet[2158]: I0702 11:31:10.041789 2158 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 11:31:10.041831 kubelet[2158]: E0702 11:31:10.041799 2158 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.91.9:6443/api/v1/namespaces/default/events\": dial tcp 139.178.91.9:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.5-a-b7736b5df5.17de62025efd4435 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.5-a-b7736b5df5,UID:ci-3510.3.5-a-b7736b5df5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.5-a-b7736b5df5,},FirstTimestamp:2024-07-02 11:31:10.025221173 +0000 UTC m=+0.268306885,LastTimestamp:2024-07-02 11:31:10.025221173 +0000 UTC m=+0.268306885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.5-a-b7736b5df5,}" Jul 2 11:31:10.043114 kubelet[2158]: I0702 11:31:10.043103 2158 factory.go:221] Registration of the containerd container factory successfully Jul 2 11:31:10.054430 kubelet[2158]: I0702 11:31:10.054389 2158 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 2 11:31:10.054925 kubelet[2158]: I0702 11:31:10.054917 2158 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 11:31:10.054969 kubelet[2158]: I0702 11:31:10.054933 2158 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 11:31:10.054969 kubelet[2158]: I0702 11:31:10.054945 2158 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 11:31:10.055026 kubelet[2158]: E0702 11:31:10.054979 2158 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 11:31:10.055187 kubelet[2158]: W0702 11:31:10.055164 2158 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://139.178.91.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.91.9:6443: connect: connection refused Jul 2 11:31:10.055217 kubelet[2158]: E0702 11:31:10.055195 2158 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.91.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.91.9:6443: connect: connection refused Jul 2 11:31:10.125556 kubelet[2158]: I0702 11:31:10.125506 2158 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 11:31:10.125556 kubelet[2158]: I0702 11:31:10.125545 2158 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 11:31:10.125766 kubelet[2158]: I0702 11:31:10.125591 2158 state_mem.go:36] "Initialized new in-memory state store" Jul 2 11:31:10.127415 kubelet[2158]: I0702 11:31:10.127351 2158 policy_none.go:49] "None policy: Start" Jul 2 11:31:10.128590 kubelet[2158]: I0702 11:31:10.128541 2158 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 11:31:10.128776 kubelet[2158]: I0702 11:31:10.128606 2158 state_mem.go:35] "Initializing new in-memory state store" Jul 2 11:31:10.139456 systemd[1]: Created slice kubepods.slice. Jul 2 11:31:10.139857 kubelet[2158]: I0702 11:31:10.139771 2158 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.140611 kubelet[2158]: E0702 11:31:10.140526 2158 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.91.9:6443/api/v1/nodes\": dial tcp 139.178.91.9:6443: connect: connection refused" node="ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.150096 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 11:31:10.155476 kubelet[2158]: E0702 11:31:10.155393 2158 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 11:31:10.157152 systemd[1]: Created slice kubepods-besteffort.slice. 
Jul 2 11:31:10.178285 kubelet[2158]: I0702 11:31:10.178188 2158 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 11:31:10.178818 kubelet[2158]: I0702 11:31:10.178741 2158 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 11:31:10.180776 kubelet[2158]: E0702 11:31:10.180735 2158 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.5-a-b7736b5df5\" not found" Jul 2 11:31:10.237011 kubelet[2158]: E0702 11:31:10.236942 2158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.91.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-b7736b5df5?timeout=10s\": dial tcp 139.178.91.9:6443: connect: connection refused" interval="400ms" Jul 2 11:31:10.345106 kubelet[2158]: I0702 11:31:10.345048 2158 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.345822 kubelet[2158]: E0702 11:31:10.345746 2158 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.91.9:6443/api/v1/nodes\": dial tcp 139.178.91.9:6443: connect: connection refused" node="ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.356016 kubelet[2158]: I0702 11:31:10.355963 2158 topology_manager.go:215] "Topology Admit Handler" podUID="02ed262fe84437cd81ebaedd272d74b5" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.359229 kubelet[2158]: I0702 11:31:10.359184 2158 topology_manager.go:215] "Topology Admit Handler" podUID="e04dca812dbd712352e221c81873c97e" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.362656 kubelet[2158]: I0702 11:31:10.362609 2158 topology_manager.go:215] "Topology Admit Handler" podUID="d817d27a869392afba8a091ff67b27ea" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.375028 systemd[1]: Created slice kubepods-burstable-pod02ed262fe84437cd81ebaedd272d74b5.slice. Jul 2 11:31:10.417940 systemd[1]: Created slice kubepods-burstable-pode04dca812dbd712352e221c81873c97e.slice. Jul 2 11:31:10.419847 systemd[1]: Created slice kubepods-burstable-podd817d27a869392afba8a091ff67b27ea.slice. 
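The "Failed to ensure lease exists, will retry" interval doubles from 200ms to 400ms here, and to 800ms further down, while the API server is still coming up. The sketch below reproduces that doubling with the apimachinery wait helpers; it is a stand-in for the kubelet's lease controller rather than its actual code path, and the step count and cap are assumptions.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// ensureLease stands in for the kubelet's "ensure node lease" call; here it
// simply fails until the (hypothetical) API server becomes reachable.
func ensureLease(apiServerUp func() bool) error {
	backoff := wait.Backoff{
		Duration: 200 * time.Millisecond, // first retry interval in the journal
		Factor:   2,                      // 200ms -> 400ms -> 800ms, as logged
		Steps:    7,                      // assumed cut-off for this sketch
		Cap:      7 * time.Second,
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		if !apiServerUp() {
			fmt.Println("Failed to ensure lease exists, will retry")
			return false, nil // retry after the next (doubled) interval
		}
		return true, nil // lease ensured
	})
}

func main() {
	start := time.Now()
	err := ensureLease(func() bool { return time.Since(start) > time.Second })
	fmt.Println("ensureLease finished, err =", err)
}
```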
Jul 2 11:31:10.437112 kubelet[2158]: I0702 11:31:10.437076 2158 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/02ed262fe84437cd81ebaedd272d74b5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-b7736b5df5\" (UID: \"02ed262fe84437cd81ebaedd272d74b5\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.437112 kubelet[2158]: I0702 11:31:10.437104 2158 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e04dca812dbd712352e221c81873c97e-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-b7736b5df5\" (UID: \"e04dca812dbd712352e221c81873c97e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.437196 kubelet[2158]: I0702 11:31:10.437121 2158 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e04dca812dbd712352e221c81873c97e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-b7736b5df5\" (UID: \"e04dca812dbd712352e221c81873c97e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.437196 kubelet[2158]: I0702 11:31:10.437135 2158 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e04dca812dbd712352e221c81873c97e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-b7736b5df5\" (UID: \"e04dca812dbd712352e221c81873c97e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.437196 kubelet[2158]: I0702 11:31:10.437152 2158 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e04dca812dbd712352e221c81873c97e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-b7736b5df5\" (UID: \"e04dca812dbd712352e221c81873c97e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.437196 kubelet[2158]: I0702 11:31:10.437190 2158 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/02ed262fe84437cd81ebaedd272d74b5-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-b7736b5df5\" (UID: \"02ed262fe84437cd81ebaedd272d74b5\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.437295 kubelet[2158]: I0702 11:31:10.437214 2158 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e04dca812dbd712352e221c81873c97e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-b7736b5df5\" (UID: \"e04dca812dbd712352e221c81873c97e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.437295 kubelet[2158]: I0702 11:31:10.437281 2158 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d817d27a869392afba8a091ff67b27ea-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-b7736b5df5\" (UID: \"d817d27a869392afba8a091ff67b27ea\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.437339 kubelet[2158]: I0702 11:31:10.437312 2158 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/02ed262fe84437cd81ebaedd272d74b5-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-b7736b5df5\" (UID: \"02ed262fe84437cd81ebaedd272d74b5\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.638305 kubelet[2158]: E0702 11:31:10.638061 2158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.91.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-b7736b5df5?timeout=10s\": dial tcp 139.178.91.9:6443: connect: connection refused" interval="800ms" Jul 2 11:31:10.718092 env[1560]: time="2024-07-02T11:31:10.717943785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-b7736b5df5,Uid:02ed262fe84437cd81ebaedd272d74b5,Namespace:kube-system,Attempt:0,}" Jul 2 11:31:10.720143 env[1560]: time="2024-07-02T11:31:10.720028577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-b7736b5df5,Uid:e04dca812dbd712352e221c81873c97e,Namespace:kube-system,Attempt:0,}" Jul 2 11:31:10.722198 env[1560]: time="2024-07-02T11:31:10.722081901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-b7736b5df5,Uid:d817d27a869392afba8a091ff67b27ea,Namespace:kube-system,Attempt:0,}" Jul 2 11:31:10.749613 kubelet[2158]: I0702 11:31:10.749552 2158 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.750329 kubelet[2158]: E0702 11:31:10.750241 2158 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.91.9:6443/api/v1/nodes\": dial tcp 139.178.91.9:6443: connect: connection refused" node="ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:10.871660 kubelet[2158]: W0702 11:31:10.871501 2158 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://139.178.91.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.91.9:6443: connect: connection refused Jul 2 11:31:10.871660 kubelet[2158]: E0702 11:31:10.871626 2158 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.91.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.91.9:6443: connect: connection refused Jul 2 11:31:11.009741 kubelet[2158]: W0702 11:31:11.009513 2158 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://139.178.91.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.91.9:6443: connect: connection refused Jul 2 11:31:11.009741 kubelet[2158]: E0702 11:31:11.009607 2158 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.91.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.91.9:6443: connect: connection refused Jul 2 11:31:11.187106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1961504329.mount: Deactivated successfully. 
Jul 2 11:31:11.188479 env[1560]: time="2024-07-02T11:31:11.188432193Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:11.189471 env[1560]: time="2024-07-02T11:31:11.189422019Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:11.190236 env[1560]: time="2024-07-02T11:31:11.190193669Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:11.190875 env[1560]: time="2024-07-02T11:31:11.190834093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:11.191737 env[1560]: time="2024-07-02T11:31:11.191696953Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:11.193041 env[1560]: time="2024-07-02T11:31:11.192997423Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:11.193485 env[1560]: time="2024-07-02T11:31:11.193444442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:11.194993 env[1560]: time="2024-07-02T11:31:11.194949902Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:11.195793 env[1560]: time="2024-07-02T11:31:11.195740743Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:11.196610 env[1560]: time="2024-07-02T11:31:11.196568873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:11.197019 env[1560]: time="2024-07-02T11:31:11.196972208Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:11.197390 env[1560]: time="2024-07-02T11:31:11.197350753Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:11.202742 env[1560]: time="2024-07-02T11:31:11.202693887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:31:11.202742 env[1560]: time="2024-07-02T11:31:11.202728981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:31:11.202856 env[1560]: time="2024-07-02T11:31:11.202741056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:31:11.202856 env[1560]: time="2024-07-02T11:31:11.202816380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:31:11.202856 env[1560]: time="2024-07-02T11:31:11.202835864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:31:11.202856 env[1560]: time="2024-07-02T11:31:11.202843522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:31:11.202972 env[1560]: time="2024-07-02T11:31:11.202890179Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/66b3191920df092f0808aba140b35d8d3454599558216fa359b0e21f1cbb2100 pid=2214 runtime=io.containerd.runc.v2 Jul 2 11:31:11.202972 env[1560]: time="2024-07-02T11:31:11.202906719Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7531794a75262b82229448b2c5040116aad613d7f3e8df2cc488aaac73653a5a pid=2213 runtime=io.containerd.runc.v2 Jul 2 11:31:11.206014 env[1560]: time="2024-07-02T11:31:11.205978966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:31:11.206014 env[1560]: time="2024-07-02T11:31:11.206001799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:31:11.206014 env[1560]: time="2024-07-02T11:31:11.206009360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:31:11.206121 env[1560]: time="2024-07-02T11:31:11.206077609Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/afd6cf5e8ea31e554f18725cc2c99512b1e6b794132c8bf3037e39458b8bce3a pid=2243 runtime=io.containerd.runc.v2 Jul 2 11:31:11.210872 systemd[1]: Started cri-containerd-66b3191920df092f0808aba140b35d8d3454599558216fa359b0e21f1cbb2100.scope. Jul 2 11:31:11.212331 systemd[1]: Started cri-containerd-7531794a75262b82229448b2c5040116aad613d7f3e8df2cc488aaac73653a5a.scope. Jul 2 11:31:11.216010 systemd[1]: Started cri-containerd-afd6cf5e8ea31e554f18725cc2c99512b1e6b794132c8bf3037e39458b8bce3a.scope. 
Jul 2 11:31:11.242238 env[1560]: time="2024-07-02T11:31:11.242203960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-b7736b5df5,Uid:d817d27a869392afba8a091ff67b27ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"afd6cf5e8ea31e554f18725cc2c99512b1e6b794132c8bf3037e39458b8bce3a\"" Jul 2 11:31:11.244136 env[1560]: time="2024-07-02T11:31:11.244102162Z" level=info msg="CreateContainer within sandbox \"afd6cf5e8ea31e554f18725cc2c99512b1e6b794132c8bf3037e39458b8bce3a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 11:31:11.244260 env[1560]: time="2024-07-02T11:31:11.244228866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-b7736b5df5,Uid:02ed262fe84437cd81ebaedd272d74b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7531794a75262b82229448b2c5040116aad613d7f3e8df2cc488aaac73653a5a\"" Jul 2 11:31:11.244367 env[1560]: time="2024-07-02T11:31:11.244350139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-b7736b5df5,Uid:e04dca812dbd712352e221c81873c97e,Namespace:kube-system,Attempt:0,} returns sandbox id \"66b3191920df092f0808aba140b35d8d3454599558216fa359b0e21f1cbb2100\"" Jul 2 11:31:11.245375 env[1560]: time="2024-07-02T11:31:11.245359850Z" level=info msg="CreateContainer within sandbox \"7531794a75262b82229448b2c5040116aad613d7f3e8df2cc488aaac73653a5a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 11:31:11.245425 env[1560]: time="2024-07-02T11:31:11.245413981Z" level=info msg="CreateContainer within sandbox \"66b3191920df092f0808aba140b35d8d3454599558216fa359b0e21f1cbb2100\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 11:31:11.251862 env[1560]: time="2024-07-02T11:31:11.251813347Z" level=info msg="CreateContainer within sandbox \"afd6cf5e8ea31e554f18725cc2c99512b1e6b794132c8bf3037e39458b8bce3a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a1a5c34582c4cb24358ae13ac21ac414826ea16353028194046757ab71fd9e27\"" Jul 2 11:31:11.252087 env[1560]: time="2024-07-02T11:31:11.252072868Z" level=info msg="StartContainer for \"a1a5c34582c4cb24358ae13ac21ac414826ea16353028194046757ab71fd9e27\"" Jul 2 11:31:11.252608 env[1560]: time="2024-07-02T11:31:11.252565752Z" level=info msg="CreateContainer within sandbox \"7531794a75262b82229448b2c5040116aad613d7f3e8df2cc488aaac73653a5a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"79270793b9e387a5be325649450d9a1e88eb3e2fa66ee87a3a3e7995f7c09d6a\"" Jul 2 11:31:11.252758 env[1560]: time="2024-07-02T11:31:11.252740084Z" level=info msg="StartContainer for \"79270793b9e387a5be325649450d9a1e88eb3e2fa66ee87a3a3e7995f7c09d6a\"" Jul 2 11:31:11.253512 env[1560]: time="2024-07-02T11:31:11.253495334Z" level=info msg="CreateContainer within sandbox \"66b3191920df092f0808aba140b35d8d3454599558216fa359b0e21f1cbb2100\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dc92ec7cd6c1f505120bfe25e5fb0ca4c93dacf752d973ab1f5700e8e9805381\"" Jul 2 11:31:11.253651 env[1560]: time="2024-07-02T11:31:11.253637218Z" level=info msg="StartContainer for \"dc92ec7cd6c1f505120bfe25e5fb0ca4c93dacf752d973ab1f5700e8e9805381\"" Jul 2 11:31:11.262001 systemd[1]: Started cri-containerd-79270793b9e387a5be325649450d9a1e88eb3e2fa66ee87a3a3e7995f7c09d6a.scope. 
Jul 2 11:31:11.262883 systemd[1]: Started cri-containerd-a1a5c34582c4cb24358ae13ac21ac414826ea16353028194046757ab71fd9e27.scope. Jul 2 11:31:11.263609 systemd[1]: Started cri-containerd-dc92ec7cd6c1f505120bfe25e5fb0ca4c93dacf752d973ab1f5700e8e9805381.scope. Jul 2 11:31:11.288453 env[1560]: time="2024-07-02T11:31:11.288424763Z" level=info msg="StartContainer for \"a1a5c34582c4cb24358ae13ac21ac414826ea16353028194046757ab71fd9e27\" returns successfully" Jul 2 11:31:11.288587 env[1560]: time="2024-07-02T11:31:11.288539867Z" level=info msg="StartContainer for \"dc92ec7cd6c1f505120bfe25e5fb0ca4c93dacf752d973ab1f5700e8e9805381\" returns successfully" Jul 2 11:31:11.290167 env[1560]: time="2024-07-02T11:31:11.290143412Z" level=info msg="StartContainer for \"79270793b9e387a5be325649450d9a1e88eb3e2fa66ee87a3a3e7995f7c09d6a\" returns successfully" Jul 2 11:31:11.551742 kubelet[2158]: I0702 11:31:11.551728 2158 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:11.871958 kubelet[2158]: E0702 11:31:11.871908 2158 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.5-a-b7736b5df5\" not found" node="ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:11.971833 kubelet[2158]: I0702 11:31:11.971809 2158 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:12.018806 kubelet[2158]: I0702 11:31:12.018787 2158 apiserver.go:52] "Watching apiserver" Jul 2 11:31:12.036103 kubelet[2158]: I0702 11:31:12.036093 2158 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 11:31:12.061755 kubelet[2158]: E0702 11:31:12.061740 2158 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.5-a-b7736b5df5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:12.061755 kubelet[2158]: E0702 11:31:12.061746 2158 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-a-b7736b5df5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:12.061852 kubelet[2158]: E0702 11:31:12.061740 2158 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.5-a-b7736b5df5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:13.069020 kubelet[2158]: W0702 11:31:13.068945 2158 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:31:14.905455 systemd[1]: Reloading. Jul 2 11:31:14.951200 /usr/lib/systemd/system-generators/torcx-generator[2491]: time="2024-07-02T11:31:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 11:31:14.951226 /usr/lib/systemd/system-generators/torcx-generator[2491]: time="2024-07-02T11:31:14Z" level=info msg="torcx already run" Jul 2 11:31:15.003963 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
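The mirror-pod failures logged a few lines up ("no PriorityClass with name system-node-critical was found") are transient: the API server registers its built-in priority classes shortly after it starts serving, after which the static pods' mirror pods are accepted. For reference, the sketch below builds that object with the scheduling/v1 Go types; 2000001000 is the well-known value of the built-in class, and the description text is illustrative.

```go
package main

import (
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// The rejection in the journal is transient: the API server's bootstrap
	// logic creates this built-in class itself shortly after startup, after
	// which the control-plane mirror pods can be created.
	pc := schedulingv1.PriorityClass{
		TypeMeta:      metav1.TypeMeta{APIVersion: "scheduling.k8s.io/v1", Kind: "PriorityClass"},
		ObjectMeta:    metav1.ObjectMeta{Name: "system-node-critical"},
		Value:         2000001000, // highest built-in priority, reserved for node-critical pods
		GlobalDefault: false,
		Description:   "Used for system critical pods that must not be moved from their current node.",
	}

	out, _ := yaml.Marshal(&pc)
	fmt.Print(string(out))
}
```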
Jul 2 11:31:15.003972 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 11:31:15.015850 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 11:31:15.082910 kubelet[2158]: I0702 11:31:15.082889 2158 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 11:31:15.083010 systemd[1]: Stopping kubelet.service... Jul 2 11:31:15.098662 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 11:31:15.098764 systemd[1]: Stopped kubelet.service. Jul 2 11:31:15.099620 systemd[1]: Starting kubelet.service... Jul 2 11:31:15.271205 systemd[1]: Started kubelet.service. Jul 2 11:31:15.295149 kubelet[2556]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 11:31:15.295149 kubelet[2556]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 11:31:15.295149 kubelet[2556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 11:31:15.295406 kubelet[2556]: I0702 11:31:15.295179 2556 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 11:31:15.297712 kubelet[2556]: I0702 11:31:15.297681 2556 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 11:31:15.297712 kubelet[2556]: I0702 11:31:15.297692 2556 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 11:31:15.297825 kubelet[2556]: I0702 11:31:15.297795 2556 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 11:31:15.298652 kubelet[2556]: I0702 11:31:15.298645 2556 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 11:31:15.299623 kubelet[2556]: I0702 11:31:15.299611 2556 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 11:31:15.318844 kubelet[2556]: I0702 11:31:15.318797 2556 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 11:31:15.318921 kubelet[2556]: I0702 11:31:15.318915 2556 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 11:31:15.319046 kubelet[2556]: I0702 11:31:15.319011 2556 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 11:31:15.319046 kubelet[2556]: I0702 11:31:15.319024 2556 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 11:31:15.319046 kubelet[2556]: I0702 11:31:15.319030 2556 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 11:31:15.319046 kubelet[2556]: I0702 11:31:15.319045 2556 state_mem.go:36] "Initialized new in-memory state store" Jul 2 11:31:15.319171 kubelet[2556]: I0702 11:31:15.319090 2556 kubelet.go:396] "Attempting to sync node with API server" Jul 2 11:31:15.319171 kubelet[2556]: I0702 11:31:15.319099 2556 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 11:31:15.319171 kubelet[2556]: I0702 11:31:15.319115 2556 kubelet.go:312] "Adding apiserver pod source" Jul 2 11:31:15.319171 kubelet[2556]: I0702 11:31:15.319125 2556 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 11:31:15.319497 kubelet[2556]: I0702 11:31:15.319443 2556 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 11:31:15.319582 kubelet[2556]: I0702 11:31:15.319541 2556 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 11:31:15.319749 kubelet[2556]: I0702 11:31:15.319743 2556 server.go:1256] "Started kubelet" Jul 2 11:31:15.319807 kubelet[2556]: I0702 11:31:15.319782 2556 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 11:31:15.319835 kubelet[2556]: I0702 11:31:15.319824 2556 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 11:31:15.320321 kubelet[2556]: I0702 11:31:15.320305 2556 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 11:31:15.321132 kubelet[2556]: I0702 
11:31:15.321116 2556 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 11:31:15.321191 kubelet[2556]: I0702 11:31:15.321148 2556 server.go:461] "Adding debug handlers to kubelet server" Jul 2 11:31:15.321323 kubelet[2556]: I0702 11:31:15.321306 2556 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 11:31:15.321438 kubelet[2556]: I0702 11:31:15.321423 2556 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 11:31:15.322007 kubelet[2556]: I0702 11:31:15.321992 2556 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 11:31:15.322155 kubelet[2556]: I0702 11:31:15.322144 2556 factory.go:221] Registration of the systemd container factory successfully Jul 2 11:31:15.322198 kubelet[2556]: E0702 11:31:15.322169 2556 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 11:31:15.322289 kubelet[2556]: I0702 11:31:15.322275 2556 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 11:31:15.324729 kubelet[2556]: I0702 11:31:15.324711 2556 factory.go:221] Registration of the containerd container factory successfully Jul 2 11:31:15.326926 kubelet[2556]: I0702 11:31:15.326908 2556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 11:31:15.327500 kubelet[2556]: I0702 11:31:15.327489 2556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 11:31:15.327568 kubelet[2556]: I0702 11:31:15.327509 2556 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 11:31:15.327568 kubelet[2556]: I0702 11:31:15.327532 2556 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 11:31:15.327606 kubelet[2556]: E0702 11:31:15.327586 2556 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 11:31:15.332553 sudo[2597]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 11:31:15.332749 sudo[2597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 11:31:15.338311 kubelet[2556]: I0702 11:31:15.338268 2556 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 11:31:15.338311 kubelet[2556]: I0702 11:31:15.338284 2556 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 11:31:15.338311 kubelet[2556]: I0702 11:31:15.338296 2556 state_mem.go:36] "Initialized new in-memory state store" Jul 2 11:31:15.338400 kubelet[2556]: I0702 11:31:15.338384 2556 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 11:31:15.338400 kubelet[2556]: I0702 11:31:15.338398 2556 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 11:31:15.338449 kubelet[2556]: I0702 11:31:15.338403 2556 policy_none.go:49] "None policy: Start" Jul 2 11:31:15.338639 kubelet[2556]: I0702 11:31:15.338627 2556 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 11:31:15.338639 kubelet[2556]: I0702 11:31:15.338638 2556 state_mem.go:35] "Initializing new in-memory state store" Jul 2 11:31:15.338764 kubelet[2556]: I0702 11:31:15.338724 2556 state_mem.go:75] "Updated machine memory state" Jul 2 11:31:15.340596 kubelet[2556]: I0702 11:31:15.340555 2556 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" 
err="checkpoint is not found" Jul 2 11:31:15.340702 kubelet[2556]: I0702 11:31:15.340671 2556 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 11:31:15.422982 kubelet[2556]: I0702 11:31:15.422934 2556 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:15.427862 kubelet[2556]: I0702 11:31:15.427817 2556 topology_manager.go:215] "Topology Admit Handler" podUID="02ed262fe84437cd81ebaedd272d74b5" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:15.427862 kubelet[2556]: I0702 11:31:15.427862 2556 topology_manager.go:215] "Topology Admit Handler" podUID="e04dca812dbd712352e221c81873c97e" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:15.427932 kubelet[2556]: I0702 11:31:15.427882 2556 topology_manager.go:215] "Topology Admit Handler" podUID="d817d27a869392afba8a091ff67b27ea" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:15.428014 kubelet[2556]: I0702 11:31:15.428004 2556 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:15.428049 kubelet[2556]: I0702 11:31:15.428042 2556 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:15.430009 kubelet[2556]: W0702 11:31:15.429999 2556 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:31:15.430937 kubelet[2556]: W0702 11:31:15.430919 2556 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:31:15.431486 kubelet[2556]: W0702 11:31:15.431479 2556 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:31:15.431525 kubelet[2556]: E0702 11:31:15.431515 2556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-a-b7736b5df5\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:15.523071 kubelet[2556]: I0702 11:31:15.522992 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e04dca812dbd712352e221c81873c97e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-b7736b5df5\" (UID: \"e04dca812dbd712352e221c81873c97e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:15.523071 kubelet[2556]: I0702 11:31:15.523017 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d817d27a869392afba8a091ff67b27ea-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-b7736b5df5\" (UID: \"d817d27a869392afba8a091ff67b27ea\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:15.523071 kubelet[2556]: I0702 11:31:15.523033 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/02ed262fe84437cd81ebaedd272d74b5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-b7736b5df5\" (UID: \"02ed262fe84437cd81ebaedd272d74b5\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-b7736b5df5" 
Jul 2 11:31:15.523071 kubelet[2556]: I0702 11:31:15.523045 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e04dca812dbd712352e221c81873c97e-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-b7736b5df5\" (UID: \"e04dca812dbd712352e221c81873c97e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:15.523071 kubelet[2556]: I0702 11:31:15.523064 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e04dca812dbd712352e221c81873c97e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-b7736b5df5\" (UID: \"e04dca812dbd712352e221c81873c97e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:15.523225 kubelet[2556]: I0702 11:31:15.523075 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e04dca812dbd712352e221c81873c97e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-b7736b5df5\" (UID: \"e04dca812dbd712352e221c81873c97e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:15.523225 kubelet[2556]: I0702 11:31:15.523087 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/02ed262fe84437cd81ebaedd272d74b5-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-b7736b5df5\" (UID: \"02ed262fe84437cd81ebaedd272d74b5\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:15.523225 kubelet[2556]: I0702 11:31:15.523098 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/02ed262fe84437cd81ebaedd272d74b5-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-b7736b5df5\" (UID: \"02ed262fe84437cd81ebaedd272d74b5\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:15.523225 kubelet[2556]: I0702 11:31:15.523109 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e04dca812dbd712352e221c81873c97e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-b7736b5df5\" (UID: \"e04dca812dbd712352e221c81873c97e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:15.662206 sudo[2597]: pam_unix(sudo:session): session closed for user root Jul 2 11:31:16.319630 kubelet[2556]: I0702 11:31:16.319519 2556 apiserver.go:52] "Watching apiserver" Jul 2 11:31:16.340681 kubelet[2556]: W0702 11:31:16.340633 2556 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:31:16.340681 kubelet[2556]: W0702 11:31:16.340674 2556 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:31:16.340967 kubelet[2556]: E0702 11:31:16.340763 2556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-a-b7736b5df5\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:16.340967 kubelet[2556]: E0702 11:31:16.340847 2556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods 
\"kube-controller-manager-ci-3510.3.5-a-b7736b5df5\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-b7736b5df5" Jul 2 11:31:16.373168 kubelet[2556]: I0702 11:31:16.373121 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.5-a-b7736b5df5" podStartSLOduration=1.373036383 podStartE2EDuration="1.373036383s" podCreationTimestamp="2024-07-02 11:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:31:16.372975506 +0000 UTC m=+1.099031396" watchObservedRunningTime="2024-07-02 11:31:16.373036383 +0000 UTC m=+1.099092280" Jul 2 11:31:16.391625 kubelet[2556]: I0702 11:31:16.391592 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.5-a-b7736b5df5" podStartSLOduration=3.391544289 podStartE2EDuration="3.391544289s" podCreationTimestamp="2024-07-02 11:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:31:16.383055456 +0000 UTC m=+1.109111331" watchObservedRunningTime="2024-07-02 11:31:16.391544289 +0000 UTC m=+1.117600196" Jul 2 11:31:16.401557 kubelet[2556]: I0702 11:31:16.401509 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-b7736b5df5" podStartSLOduration=1.40144209 podStartE2EDuration="1.40144209s" podCreationTimestamp="2024-07-02 11:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:31:16.391736672 +0000 UTC m=+1.117792549" watchObservedRunningTime="2024-07-02 11:31:16.40144209 +0000 UTC m=+1.127497958" Jul 2 11:31:16.422670 kubelet[2556]: I0702 11:31:16.422607 2556 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 11:31:17.322014 sudo[1699]: pam_unix(sudo:session): session closed for user root Jul 2 11:31:17.322995 sshd[1696]: pam_unix(sshd:session): session closed for user core Jul 2 11:31:17.324802 systemd[1]: sshd@4-139.178.91.9:22-139.178.68.195:53356.service: Deactivated successfully. Jul 2 11:31:17.325314 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 11:31:17.325418 systemd[1]: session-7.scope: Consumed 3.394s CPU time. Jul 2 11:31:17.325833 systemd-logind[1596]: Session 7 logged out. Waiting for processes to exit. Jul 2 11:31:17.326515 systemd-logind[1596]: Removed session 7. Jul 2 11:31:27.307146 kubelet[2556]: I0702 11:31:27.307085 2556 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 11:31:27.308228 env[1560]: time="2024-07-02T11:31:27.307951309Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 11:31:27.308962 kubelet[2556]: I0702 11:31:27.308381 2556 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 11:31:28.147747 kubelet[2556]: I0702 11:31:28.147679 2556 topology_manager.go:215] "Topology Admit Handler" podUID="7fc41ed7-812c-459e-8a0f-a0eba0ca8c29" podNamespace="kube-system" podName="kube-proxy-q4stp" Jul 2 11:31:28.153503 kubelet[2556]: I0702 11:31:28.153469 2556 topology_manager.go:215] "Topology Admit Handler" podUID="44048ba2-0631-4981-82ab-ae0664e621af" podNamespace="kube-system" podName="cilium-tjtnj" Jul 2 11:31:28.153957 systemd[1]: Created slice kubepods-besteffort-pod7fc41ed7_812c_459e_8a0f_a0eba0ca8c29.slice. Jul 2 11:31:28.168355 systemd[1]: Created slice kubepods-burstable-pod44048ba2_0631_4981_82ab_ae0664e621af.slice. Jul 2 11:31:28.202394 kubelet[2556]: I0702 11:31:28.202374 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fc41ed7-812c-459e-8a0f-a0eba0ca8c29-xtables-lock\") pod \"kube-proxy-q4stp\" (UID: \"7fc41ed7-812c-459e-8a0f-a0eba0ca8c29\") " pod="kube-system/kube-proxy-q4stp" Jul 2 11:31:28.202394 kubelet[2556]: I0702 11:31:28.202399 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44048ba2-0631-4981-82ab-ae0664e621af-cilium-config-path\") pod \"cilium-tjtnj\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " pod="kube-system/cilium-tjtnj" Jul 2 11:31:28.202521 kubelet[2556]: I0702 11:31:28.202412 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-host-proc-sys-net\") pod \"cilium-tjtnj\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " pod="kube-system/cilium-tjtnj" Jul 2 11:31:28.202521 kubelet[2556]: I0702 11:31:28.202425 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fc41ed7-812c-459e-8a0f-a0eba0ca8c29-lib-modules\") pod \"kube-proxy-q4stp\" (UID: \"7fc41ed7-812c-459e-8a0f-a0eba0ca8c29\") " pod="kube-system/kube-proxy-q4stp" Jul 2 11:31:28.202521 kubelet[2556]: I0702 11:31:28.202436 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-cilium-run\") pod \"cilium-tjtnj\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " pod="kube-system/cilium-tjtnj" Jul 2 11:31:28.202521 kubelet[2556]: I0702 11:31:28.202449 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7fc41ed7-812c-459e-8a0f-a0eba0ca8c29-kube-proxy\") pod \"kube-proxy-q4stp\" (UID: \"7fc41ed7-812c-459e-8a0f-a0eba0ca8c29\") " pod="kube-system/kube-proxy-q4stp" Jul 2 11:31:28.202521 kubelet[2556]: I0702 11:31:28.202460 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gffjz\" (UniqueName: \"kubernetes.io/projected/7fc41ed7-812c-459e-8a0f-a0eba0ca8c29-kube-api-access-gffjz\") pod \"kube-proxy-q4stp\" (UID: \"7fc41ed7-812c-459e-8a0f-a0eba0ca8c29\") " pod="kube-system/kube-proxy-q4stp" Jul 2 11:31:28.202521 kubelet[2556]: I0702 11:31:28.202472 2556 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-cni-path\") pod \"cilium-tjtnj\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " pod="kube-system/cilium-tjtnj" Jul 2 11:31:28.202650 kubelet[2556]: I0702 11:31:28.202513 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-hostproc\") pod \"cilium-tjtnj\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " pod="kube-system/cilium-tjtnj" Jul 2 11:31:28.202650 kubelet[2556]: I0702 11:31:28.202545 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-etc-cni-netd\") pod \"cilium-tjtnj\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " pod="kube-system/cilium-tjtnj" Jul 2 11:31:28.202650 kubelet[2556]: I0702 11:31:28.202571 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-lib-modules\") pod \"cilium-tjtnj\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " pod="kube-system/cilium-tjtnj" Jul 2 11:31:28.202650 kubelet[2556]: I0702 11:31:28.202591 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44048ba2-0631-4981-82ab-ae0664e621af-hubble-tls\") pod \"cilium-tjtnj\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " pod="kube-system/cilium-tjtnj" Jul 2 11:31:28.202650 kubelet[2556]: I0702 11:31:28.202605 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85rgm\" (UniqueName: \"kubernetes.io/projected/44048ba2-0631-4981-82ab-ae0664e621af-kube-api-access-85rgm\") pod \"cilium-tjtnj\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " pod="kube-system/cilium-tjtnj" Jul 2 11:31:28.202650 kubelet[2556]: I0702 11:31:28.202631 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-cilium-cgroup\") pod \"cilium-tjtnj\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " pod="kube-system/cilium-tjtnj" Jul 2 11:31:28.202754 kubelet[2556]: I0702 11:31:28.202649 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44048ba2-0631-4981-82ab-ae0664e621af-clustermesh-secrets\") pod \"cilium-tjtnj\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " pod="kube-system/cilium-tjtnj" Jul 2 11:31:28.202754 kubelet[2556]: I0702 11:31:28.202675 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-xtables-lock\") pod \"cilium-tjtnj\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " pod="kube-system/cilium-tjtnj" Jul 2 11:31:28.202754 kubelet[2556]: I0702 11:31:28.202693 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-host-proc-sys-kernel\") pod \"cilium-tjtnj\" 
(UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " pod="kube-system/cilium-tjtnj" Jul 2 11:31:28.202754 kubelet[2556]: I0702 11:31:28.202715 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-bpf-maps\") pod \"cilium-tjtnj\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " pod="kube-system/cilium-tjtnj" Jul 2 11:31:28.460808 kubelet[2556]: I0702 11:31:28.460609 2556 topology_manager.go:215] "Topology Admit Handler" podUID="ce505a09-3146-4c46-a4ca-2a1ea4651d76" podNamespace="kube-system" podName="cilium-operator-5cc964979-qnbb4" Jul 2 11:31:28.465613 env[1560]: time="2024-07-02T11:31:28.465431635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q4stp,Uid:7fc41ed7-812c-459e-8a0f-a0eba0ca8c29,Namespace:kube-system,Attempt:0,}" Jul 2 11:31:28.471410 env[1560]: time="2024-07-02T11:31:28.471339670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tjtnj,Uid:44048ba2-0631-4981-82ab-ae0664e621af,Namespace:kube-system,Attempt:0,}" Jul 2 11:31:28.472823 systemd[1]: Created slice kubepods-besteffort-podce505a09_3146_4c46_a4ca_2a1ea4651d76.slice. Jul 2 11:31:28.481647 env[1560]: time="2024-07-02T11:31:28.481569201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:31:28.481647 env[1560]: time="2024-07-02T11:31:28.481631959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:31:28.481815 env[1560]: time="2024-07-02T11:31:28.481659848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:31:28.481895 env[1560]: time="2024-07-02T11:31:28.481856120Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11c3b7e0733a560e2b8c0739b6b4f262f4307840c2d3562b191d634570c568cb pid=2710 runtime=io.containerd.runc.v2 Jul 2 11:31:28.482766 env[1560]: time="2024-07-02T11:31:28.482715826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:31:28.482766 env[1560]: time="2024-07-02T11:31:28.482746155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:31:28.482766 env[1560]: time="2024-07-02T11:31:28.482758448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:31:28.482959 env[1560]: time="2024-07-02T11:31:28.482869662Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513 pid=2718 runtime=io.containerd.runc.v2 Jul 2 11:31:28.493639 systemd[1]: Started cri-containerd-11c3b7e0733a560e2b8c0739b6b4f262f4307840c2d3562b191d634570c568cb.scope. Jul 2 11:31:28.494855 systemd[1]: Started cri-containerd-40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513.scope. 
Jul 2 11:31:28.505973 kubelet[2556]: I0702 11:31:28.505949 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpjqb\" (UniqueName: \"kubernetes.io/projected/ce505a09-3146-4c46-a4ca-2a1ea4651d76-kube-api-access-gpjqb\") pod \"cilium-operator-5cc964979-qnbb4\" (UID: \"ce505a09-3146-4c46-a4ca-2a1ea4651d76\") " pod="kube-system/cilium-operator-5cc964979-qnbb4" Jul 2 11:31:28.506135 kubelet[2556]: I0702 11:31:28.505999 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce505a09-3146-4c46-a4ca-2a1ea4651d76-cilium-config-path\") pod \"cilium-operator-5cc964979-qnbb4\" (UID: \"ce505a09-3146-4c46-a4ca-2a1ea4651d76\") " pod="kube-system/cilium-operator-5cc964979-qnbb4" Jul 2 11:31:28.506327 env[1560]: time="2024-07-02T11:31:28.506299737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q4stp,Uid:7fc41ed7-812c-459e-8a0f-a0eba0ca8c29,Namespace:kube-system,Attempt:0,} returns sandbox id \"11c3b7e0733a560e2b8c0739b6b4f262f4307840c2d3562b191d634570c568cb\"" Jul 2 11:31:28.506568 env[1560]: time="2024-07-02T11:31:28.506546256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tjtnj,Uid:44048ba2-0631-4981-82ab-ae0664e621af,Namespace:kube-system,Attempt:0,} returns sandbox id \"40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513\"" Jul 2 11:31:28.507283 env[1560]: time="2024-07-02T11:31:28.507267968Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 11:31:28.507767 env[1560]: time="2024-07-02T11:31:28.507753293Z" level=info msg="CreateContainer within sandbox \"11c3b7e0733a560e2b8c0739b6b4f262f4307840c2d3562b191d634570c568cb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 11:31:28.513668 env[1560]: time="2024-07-02T11:31:28.513616689Z" level=info msg="CreateContainer within sandbox \"11c3b7e0733a560e2b8c0739b6b4f262f4307840c2d3562b191d634570c568cb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d81aff68bccd0d83cd8fa9627f48f7ed344b7e96b9cdd9e325647926d0be7413\"" Jul 2 11:31:28.513907 env[1560]: time="2024-07-02T11:31:28.513883004Z" level=info msg="StartContainer for \"d81aff68bccd0d83cd8fa9627f48f7ed344b7e96b9cdd9e325647926d0be7413\"" Jul 2 11:31:28.522564 systemd[1]: Started cri-containerd-d81aff68bccd0d83cd8fa9627f48f7ed344b7e96b9cdd9e325647926d0be7413.scope. Jul 2 11:31:28.536408 env[1560]: time="2024-07-02T11:31:28.536378051Z" level=info msg="StartContainer for \"d81aff68bccd0d83cd8fa9627f48f7ed344b7e96b9cdd9e325647926d0be7413\" returns successfully" Jul 2 11:31:28.777120 env[1560]: time="2024-07-02T11:31:28.776899861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-qnbb4,Uid:ce505a09-3146-4c46-a4ca-2a1ea4651d76,Namespace:kube-system,Attempt:0,}" Jul 2 11:31:28.802113 env[1560]: time="2024-07-02T11:31:28.801893770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:31:28.802113 env[1560]: time="2024-07-02T11:31:28.802015133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:31:28.802113 env[1560]: time="2024-07-02T11:31:28.802061162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:31:28.802688 env[1560]: time="2024-07-02T11:31:28.802538399Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/540b26c0ab01b5c99eeb474e4e6d3a2bfe888e8002c71b5621fc7aec59d41693 pid=2901 runtime=io.containerd.runc.v2 Jul 2 11:31:28.833390 systemd[1]: Started cri-containerd-540b26c0ab01b5c99eeb474e4e6d3a2bfe888e8002c71b5621fc7aec59d41693.scope. Jul 2 11:31:28.871264 env[1560]: time="2024-07-02T11:31:28.871226307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-qnbb4,Uid:ce505a09-3146-4c46-a4ca-2a1ea4651d76,Namespace:kube-system,Attempt:0,} returns sandbox id \"540b26c0ab01b5c99eeb474e4e6d3a2bfe888e8002c71b5621fc7aec59d41693\"" Jul 2 11:31:29.366077 kubelet[2556]: I0702 11:31:29.366056 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-q4stp" podStartSLOduration=1.366031398 podStartE2EDuration="1.366031398s" podCreationTimestamp="2024-07-02 11:31:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:31:29.365757081 +0000 UTC m=+14.091812933" watchObservedRunningTime="2024-07-02 11:31:29.366031398 +0000 UTC m=+14.092087247" Jul 2 11:31:30.502355 update_engine[1554]: I0702 11:31:30.502333 1554 update_attempter.cc:509] Updating boot flags... Jul 2 11:31:32.043633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1735307298.mount: Deactivated successfully. Jul 2 11:31:33.741680 env[1560]: time="2024-07-02T11:31:33.741623683Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:33.742202 env[1560]: time="2024-07-02T11:31:33.742170124Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:33.743160 env[1560]: time="2024-07-02T11:31:33.743119914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:33.743551 env[1560]: time="2024-07-02T11:31:33.743510117Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 11:31:33.744140 env[1560]: time="2024-07-02T11:31:33.744076114Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 11:31:33.744806 env[1560]: time="2024-07-02T11:31:33.744789703Z" level=info msg="CreateContainer within sandbox \"40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 11:31:33.749140 env[1560]: time="2024-07-02T11:31:33.749119156Z" level=info msg="CreateContainer within sandbox \"40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df\"" Jul 2 11:31:33.749408 env[1560]: time="2024-07-02T11:31:33.749391783Z" level=info msg="StartContainer for \"228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df\"" Jul 2 11:31:33.750393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1586193546.mount: Deactivated successfully. Jul 2 11:31:33.759313 systemd[1]: Started cri-containerd-228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df.scope. Jul 2 11:31:33.770550 env[1560]: time="2024-07-02T11:31:33.770522457Z" level=info msg="StartContainer for \"228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df\" returns successfully" Jul 2 11:31:33.776072 systemd[1]: cri-containerd-228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df.scope: Deactivated successfully. Jul 2 11:31:34.752930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df-rootfs.mount: Deactivated successfully. Jul 2 11:31:34.878040 env[1560]: time="2024-07-02T11:31:34.877890880Z" level=info msg="shim disconnected" id=228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df Jul 2 11:31:34.878040 env[1560]: time="2024-07-02T11:31:34.877995727Z" level=warning msg="cleaning up after shim disconnected" id=228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df namespace=k8s.io Jul 2 11:31:34.878040 env[1560]: time="2024-07-02T11:31:34.878024915Z" level=info msg="cleaning up dead shim" Jul 2 11:31:34.893032 env[1560]: time="2024-07-02T11:31:34.892922330Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:31:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3053 runtime=io.containerd.runc.v2\n" Jul 2 11:31:35.386154 env[1560]: time="2024-07-02T11:31:35.386059719Z" level=info msg="CreateContainer within sandbox \"40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 11:31:35.400118 env[1560]: time="2024-07-02T11:31:35.400027972Z" level=info msg="CreateContainer within sandbox \"40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6\"" Jul 2 11:31:35.401008 env[1560]: time="2024-07-02T11:31:35.400906169Z" level=info msg="StartContainer for \"521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6\"" Jul 2 11:31:35.432867 systemd[1]: Started cri-containerd-521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6.scope. Jul 2 11:31:35.454235 env[1560]: time="2024-07-02T11:31:35.454185511Z" level=info msg="StartContainer for \"521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6\" returns successfully" Jul 2 11:31:35.465589 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 11:31:35.465782 systemd[1]: Stopped systemd-sysctl.service. Jul 2 11:31:35.465916 systemd[1]: Stopping systemd-sysctl.service... Jul 2 11:31:35.467081 systemd[1]: Starting systemd-sysctl.service... Jul 2 11:31:35.467978 systemd[1]: cri-containerd-521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6.scope: Deactivated successfully. Jul 2 11:31:35.472318 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 11:31:35.509623 env[1560]: time="2024-07-02T11:31:35.509585633Z" level=info msg="shim disconnected" id=521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6 Jul 2 11:31:35.509623 env[1560]: time="2024-07-02T11:31:35.509613444Z" level=warning msg="cleaning up after shim disconnected" id=521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6 namespace=k8s.io Jul 2 11:31:35.509623 env[1560]: time="2024-07-02T11:31:35.509619611Z" level=info msg="cleaning up dead shim" Jul 2 11:31:35.513365 env[1560]: time="2024-07-02T11:31:35.513317537Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:31:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3115 runtime=io.containerd.runc.v2\n" Jul 2 11:31:35.748938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6-rootfs.mount: Deactivated successfully. Jul 2 11:31:35.998305 env[1560]: time="2024-07-02T11:31:35.998247357Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:35.998901 env[1560]: time="2024-07-02T11:31:35.998858504Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:35.999606 env[1560]: time="2024-07-02T11:31:35.999516621Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:31:36.000110 env[1560]: time="2024-07-02T11:31:36.000068469Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 11:31:36.001187 env[1560]: time="2024-07-02T11:31:36.001172005Z" level=info msg="CreateContainer within sandbox \"540b26c0ab01b5c99eeb474e4e6d3a2bfe888e8002c71b5621fc7aec59d41693\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 11:31:36.005843 env[1560]: time="2024-07-02T11:31:36.005826346Z" level=info msg="CreateContainer within sandbox \"540b26c0ab01b5c99eeb474e4e6d3a2bfe888e8002c71b5621fc7aec59d41693\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba\"" Jul 2 11:31:36.006183 env[1560]: time="2024-07-02T11:31:36.006169682Z" level=info msg="StartContainer for \"3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba\"" Jul 2 11:31:36.016089 systemd[1]: Started cri-containerd-3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba.scope. 
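The PullImage entries resolve the cilium and operator-generic images by digest before the containers are created in their sandboxes. Purely as an illustration (not what the kubelet executes), the same pull can be reproduced with the containerd Go client; the socket path and the k8s.io namespace are assumptions about this node's default CRI setup.

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed containerd socket; CRI-managed images live in the "k8s.io" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Digest taken verbatim from the PullImage entry above.
	ref := "quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name(), "->", img.Target().Digest)
}
```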
Jul 2 11:31:36.027794 env[1560]: time="2024-07-02T11:31:36.027731788Z" level=info msg="StartContainer for \"3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba\" returns successfully" Jul 2 11:31:36.384970 env[1560]: time="2024-07-02T11:31:36.384938780Z" level=info msg="CreateContainer within sandbox \"40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 11:31:36.392001 env[1560]: time="2024-07-02T11:31:36.391948171Z" level=info msg="CreateContainer within sandbox \"40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6\"" Jul 2 11:31:36.392295 env[1560]: time="2024-07-02T11:31:36.392281393Z" level=info msg="StartContainer for \"b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6\"" Jul 2 11:31:36.400498 kubelet[2556]: I0702 11:31:36.400473 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-qnbb4" podStartSLOduration=1.272205337 podStartE2EDuration="8.400416804s" podCreationTimestamp="2024-07-02 11:31:28 +0000 UTC" firstStartedPulling="2024-07-02 11:31:28.871980233 +0000 UTC m=+13.598036091" lastFinishedPulling="2024-07-02 11:31:36.000191703 +0000 UTC m=+20.726247558" observedRunningTime="2024-07-02 11:31:36.400059757 +0000 UTC m=+21.126115612" watchObservedRunningTime="2024-07-02 11:31:36.400416804 +0000 UTC m=+21.126472655" Jul 2 11:31:36.402058 systemd[1]: Started cri-containerd-b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6.scope. Jul 2 11:31:36.416957 env[1560]: time="2024-07-02T11:31:36.416929583Z" level=info msg="StartContainer for \"b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6\" returns successfully" Jul 2 11:31:36.418624 systemd[1]: cri-containerd-b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6.scope: Deactivated successfully. 
Jul 2 11:31:36.580466 env[1560]: time="2024-07-02T11:31:36.580433877Z" level=info msg="shim disconnected" id=b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6 Jul 2 11:31:36.580466 env[1560]: time="2024-07-02T11:31:36.580462574Z" level=warning msg="cleaning up after shim disconnected" id=b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6 namespace=k8s.io Jul 2 11:31:36.580466 env[1560]: time="2024-07-02T11:31:36.580471379Z" level=info msg="cleaning up dead shim" Jul 2 11:31:36.584106 env[1560]: time="2024-07-02T11:31:36.584052978Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:31:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3222 runtime=io.containerd.runc.v2\n" Jul 2 11:31:37.397499 env[1560]: time="2024-07-02T11:31:37.397401389Z" level=info msg="CreateContainer within sandbox \"40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 11:31:37.415049 env[1560]: time="2024-07-02T11:31:37.414887337Z" level=info msg="CreateContainer within sandbox \"40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2\"" Jul 2 11:31:37.416041 env[1560]: time="2024-07-02T11:31:37.415931191Z" level=info msg="StartContainer for \"9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2\"" Jul 2 11:31:37.445724 systemd[1]: Started cri-containerd-9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2.scope. Jul 2 11:31:37.464667 systemd[1]: cri-containerd-9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2.scope: Deactivated successfully. Jul 2 11:31:37.468037 env[1560]: time="2024-07-02T11:31:37.467980130Z" level=info msg="StartContainer for \"9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2\" returns successfully" Jul 2 11:31:37.495811 env[1560]: time="2024-07-02T11:31:37.495742800Z" level=info msg="shim disconnected" id=9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2 Jul 2 11:31:37.495811 env[1560]: time="2024-07-02T11:31:37.495779971Z" level=warning msg="cleaning up after shim disconnected" id=9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2 namespace=k8s.io Jul 2 11:31:37.495811 env[1560]: time="2024-07-02T11:31:37.495787758Z" level=info msg="cleaning up dead shim" Jul 2 11:31:37.500589 env[1560]: time="2024-07-02T11:31:37.500540188Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:31:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3276 runtime=io.containerd.runc.v2\n" Jul 2 11:31:37.753431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2-rootfs.mount: Deactivated successfully. 
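At this point the cilium-tjtnj pod has run its init containers in order (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) and the main cilium-agent container is about to start. A hedged client-go sketch that reports the same sequence from the API server's side; the kubeconfig path is an assumption about this node.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed admin kubeconfig location; any kubeconfig with read access works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "cilium-tjtnj", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Init containers run strictly in order before the main container starts.
	for _, st := range pod.Status.InitContainerStatuses {
		fmt.Printf("init %-30s ready=%v restarts=%d\n", st.Name, st.Ready, st.RestartCount)
	}
	for _, st := range pod.Status.ContainerStatuses {
		fmt.Printf("main %-30s ready=%v restarts=%d\n", st.Name, st.Ready, st.RestartCount)
	}
}
```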
Jul 2 11:31:38.407844 env[1560]: time="2024-07-02T11:31:38.407750240Z" level=info msg="CreateContainer within sandbox \"40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 11:31:38.425974 env[1560]: time="2024-07-02T11:31:38.425851308Z" level=info msg="CreateContainer within sandbox \"40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e\"" Jul 2 11:31:38.426877 env[1560]: time="2024-07-02T11:31:38.426770369Z" level=info msg="StartContainer for \"7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e\"" Jul 2 11:31:38.451838 systemd[1]: Started cri-containerd-7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e.scope. Jul 2 11:31:38.476225 env[1560]: time="2024-07-02T11:31:38.476170376Z" level=info msg="StartContainer for \"7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e\" returns successfully" Jul 2 11:31:38.557330 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Jul 2 11:31:38.615756 kubelet[2556]: I0702 11:31:38.615740 2556 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 11:31:38.626490 kubelet[2556]: I0702 11:31:38.626471 2556 topology_manager.go:215] "Topology Admit Handler" podUID="ecb0f163-aaa0-4d2b-9e37-1b41ec921baf" podNamespace="kube-system" podName="coredns-76f75df574-wdmzx" Jul 2 11:31:38.627273 kubelet[2556]: I0702 11:31:38.627249 2556 topology_manager.go:215] "Topology Admit Handler" podUID="a3f188cb-f101-4b71-bc51-262a9d456d9a" podNamespace="kube-system" podName="coredns-76f75df574-wt5l2" Jul 2 11:31:38.629674 systemd[1]: Created slice kubepods-burstable-podecb0f163_aaa0_4d2b_9e37_1b41ec921baf.slice. Jul 2 11:31:38.632218 systemd[1]: Created slice kubepods-burstable-poda3f188cb_f101_4b71_bc51_262a9d456d9a.slice. Jul 2 11:31:38.708351 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
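The Spectre V2 warnings are the kernel noting that unprivileged eBPF is enabled on an eIBRS CPU; the setting behind the warning is the kernel.unprivileged_bpf_disabled sysctl. A minimal sketch that just reads the current value:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// kernel.unprivileged_bpf_disabled: 0 = unprivileged eBPF allowed,
	// 1 = disabled until reboot, 2 = disabled but changeable by an admin.
	raw, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
	if err != nil {
		panic(err)
	}
	val := strings.TrimSpace(string(raw))
	fmt.Println("kernel.unprivileged_bpf_disabled =", val)
	if val == "0" {
		fmt.Println("unprivileged eBPF is enabled, which is what the Spectre V2 warning above reports")
	}
}
```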
Jul 2 11:31:38.772632 kubelet[2556]: I0702 11:31:38.772579 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecb0f163-aaa0-4d2b-9e37-1b41ec921baf-config-volume\") pod \"coredns-76f75df574-wdmzx\" (UID: \"ecb0f163-aaa0-4d2b-9e37-1b41ec921baf\") " pod="kube-system/coredns-76f75df574-wdmzx" Jul 2 11:31:38.772632 kubelet[2556]: I0702 11:31:38.772615 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3f188cb-f101-4b71-bc51-262a9d456d9a-config-volume\") pod \"coredns-76f75df574-wt5l2\" (UID: \"a3f188cb-f101-4b71-bc51-262a9d456d9a\") " pod="kube-system/coredns-76f75df574-wt5l2" Jul 2 11:31:38.772632 kubelet[2556]: I0702 11:31:38.772634 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdbl2\" (UniqueName: \"kubernetes.io/projected/a3f188cb-f101-4b71-bc51-262a9d456d9a-kube-api-access-qdbl2\") pod \"coredns-76f75df574-wt5l2\" (UID: \"a3f188cb-f101-4b71-bc51-262a9d456d9a\") " pod="kube-system/coredns-76f75df574-wt5l2" Jul 2 11:31:38.772770 kubelet[2556]: I0702 11:31:38.772648 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtnkx\" (UniqueName: \"kubernetes.io/projected/ecb0f163-aaa0-4d2b-9e37-1b41ec921baf-kube-api-access-mtnkx\") pod \"coredns-76f75df574-wdmzx\" (UID: \"ecb0f163-aaa0-4d2b-9e37-1b41ec921baf\") " pod="kube-system/coredns-76f75df574-wdmzx" Jul 2 11:31:38.933119 env[1560]: time="2024-07-02T11:31:38.933018093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wdmzx,Uid:ecb0f163-aaa0-4d2b-9e37-1b41ec921baf,Namespace:kube-system,Attempt:0,}" Jul 2 11:31:38.935304 env[1560]: time="2024-07-02T11:31:38.935209842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wt5l2,Uid:a3f188cb-f101-4b71-bc51-262a9d456d9a,Namespace:kube-system,Attempt:0,}" Jul 2 11:31:39.444516 kubelet[2556]: I0702 11:31:39.444444 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-tjtnj" podStartSLOduration=6.207443393 podStartE2EDuration="11.444307031s" podCreationTimestamp="2024-07-02 11:31:28 +0000 UTC" firstStartedPulling="2024-07-02 11:31:28.507038773 +0000 UTC m=+13.233094622" lastFinishedPulling="2024-07-02 11:31:33.743902409 +0000 UTC m=+18.469958260" observedRunningTime="2024-07-02 11:31:39.443682522 +0000 UTC m=+24.169738458" watchObservedRunningTime="2024-07-02 11:31:39.444307031 +0000 UTC m=+24.170362947" Jul 2 11:31:40.302925 systemd-networkd[1307]: cilium_host: Link UP Jul 2 11:31:40.303015 systemd-networkd[1307]: cilium_net: Link UP Jul 2 11:31:40.317349 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 2 11:31:40.317591 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 11:31:40.317876 systemd-networkd[1307]: cilium_net: Gained carrier Jul 2 11:31:40.318118 systemd-networkd[1307]: cilium_host: Gained carrier Jul 2 11:31:40.365093 systemd-networkd[1307]: cilium_vxlan: Link UP Jul 2 11:31:40.365097 systemd-networkd[1307]: cilium_vxlan: Gained carrier Jul 2 11:31:40.499269 kernel: NET: Registered PF_ALG protocol family Jul 2 11:31:40.503315 systemd-networkd[1307]: cilium_host: Gained IPv6LL Jul 2 11:31:40.687340 systemd-networkd[1307]: cilium_net: Gained IPv6LL Jul 2 11:31:41.044699 systemd-networkd[1307]: 
lxc_health: Link UP Jul 2 11:31:41.069208 systemd-networkd[1307]: lxc_health: Gained carrier Jul 2 11:31:41.069332 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 11:31:41.484154 systemd-networkd[1307]: lxc423f910ae595: Link UP Jul 2 11:31:41.503275 kernel: eth0: renamed from tmp83282 Jul 2 11:31:41.527945 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 11:31:41.528040 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc423f910ae595: link becomes ready Jul 2 11:31:41.539266 kernel: eth0: renamed from tmp9914c Jul 2 11:31:41.566273 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc138d35390469: link becomes ready Jul 2 11:31:41.566414 systemd-networkd[1307]: lxc138d35390469: Link UP Jul 2 11:31:41.566524 systemd-networkd[1307]: lxc423f910ae595: Gained carrier Jul 2 11:31:41.566770 systemd-networkd[1307]: lxc138d35390469: Gained carrier Jul 2 11:31:41.959404 systemd-networkd[1307]: cilium_vxlan: Gained IPv6LL Jul 2 11:31:42.983504 systemd-networkd[1307]: lxc_health: Gained IPv6LL Jul 2 11:31:43.175454 systemd-networkd[1307]: lxc423f910ae595: Gained IPv6LL Jul 2 11:31:43.559430 systemd-networkd[1307]: lxc138d35390469: Gained IPv6LL Jul 2 11:31:43.831405 env[1560]: time="2024-07-02T11:31:43.831340398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:31:43.831405 env[1560]: time="2024-07-02T11:31:43.831362970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:31:43.831631 env[1560]: time="2024-07-02T11:31:43.831370664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:31:43.831631 env[1560]: time="2024-07-02T11:31:43.831475494Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9914cc5b62c69e53e0b95d4ca7e9836b00f113f2c3729d5b5edb3745c00eebd2 pid=3964 runtime=io.containerd.runc.v2 Jul 2 11:31:43.834206 env[1560]: time="2024-07-02T11:31:43.834176908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:31:43.834206 env[1560]: time="2024-07-02T11:31:43.834199175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:31:43.834300 env[1560]: time="2024-07-02T11:31:43.834208733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:31:43.834300 env[1560]: time="2024-07-02T11:31:43.834283533Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8328252ef4e4795908d04a24575e6b5c5665c831c3ddc25aa75119b4b36f1672 pid=3984 runtime=io.containerd.runc.v2 Jul 2 11:31:43.839495 systemd[1]: Started cri-containerd-9914cc5b62c69e53e0b95d4ca7e9836b00f113f2c3729d5b5edb3745c00eebd2.scope. Jul 2 11:31:43.842457 systemd[1]: Started cri-containerd-8328252ef4e4795908d04a24575e6b5c5665c831c3ddc25aa75119b4b36f1672.scope. 
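The systemd-networkd lines track the virtual devices cilium creates on the node: cilium_host/cilium_net as a veth pair, cilium_vxlan for the overlay, and one lxc* veth per endpoint (here lxc_health plus the two coredns pods). The same picture can be recovered from userspace by walking the interface list; a minimal stdlib sketch:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Only the devices seen in the log: cilium_host, cilium_net,
		// cilium_vxlan and the per-endpoint lxc* veths.
		if !strings.HasPrefix(ifc.Name, "cilium_") && !strings.HasPrefix(ifc.Name, "lxc") {
			continue
		}
		addrs, _ := ifc.Addrs()
		fmt.Printf("%-20s up=%v addrs=%v\n", ifc.Name, ifc.Flags&net.FlagUp != 0, addrs)
	}
}
```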
Jul 2 11:31:43.861668 env[1560]: time="2024-07-02T11:31:43.861611103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wt5l2,Uid:a3f188cb-f101-4b71-bc51-262a9d456d9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9914cc5b62c69e53e0b95d4ca7e9836b00f113f2c3729d5b5edb3745c00eebd2\"" Jul 2 11:31:43.862872 env[1560]: time="2024-07-02T11:31:43.862851702Z" level=info msg="CreateContainer within sandbox \"9914cc5b62c69e53e0b95d4ca7e9836b00f113f2c3729d5b5edb3745c00eebd2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 11:31:43.864571 env[1560]: time="2024-07-02T11:31:43.864547320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wdmzx,Uid:ecb0f163-aaa0-4d2b-9e37-1b41ec921baf,Namespace:kube-system,Attempt:0,} returns sandbox id \"8328252ef4e4795908d04a24575e6b5c5665c831c3ddc25aa75119b4b36f1672\"" Jul 2 11:31:43.865907 env[1560]: time="2024-07-02T11:31:43.865890701Z" level=info msg="CreateContainer within sandbox \"8328252ef4e4795908d04a24575e6b5c5665c831c3ddc25aa75119b4b36f1672\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 11:31:43.868226 env[1560]: time="2024-07-02T11:31:43.868209210Z" level=info msg="CreateContainer within sandbox \"9914cc5b62c69e53e0b95d4ca7e9836b00f113f2c3729d5b5edb3745c00eebd2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f10721be251f1dda609c320be458c0f4567224a70b2b913f73152b0222c78e71\"" Jul 2 11:31:43.868481 env[1560]: time="2024-07-02T11:31:43.868424941Z" level=info msg="StartContainer for \"f10721be251f1dda609c320be458c0f4567224a70b2b913f73152b0222c78e71\"" Jul 2 11:31:43.870992 env[1560]: time="2024-07-02T11:31:43.870967267Z" level=info msg="CreateContainer within sandbox \"8328252ef4e4795908d04a24575e6b5c5665c831c3ddc25aa75119b4b36f1672\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e01edb89422e5813f88832d2027162139a1bd4000e40464f3dbf45ba5a5c383b\"" Jul 2 11:31:43.871214 env[1560]: time="2024-07-02T11:31:43.871199030Z" level=info msg="StartContainer for \"e01edb89422e5813f88832d2027162139a1bd4000e40464f3dbf45ba5a5c383b\"" Jul 2 11:31:43.876582 systemd[1]: Started cri-containerd-f10721be251f1dda609c320be458c0f4567224a70b2b913f73152b0222c78e71.scope. Jul 2 11:31:43.889648 env[1560]: time="2024-07-02T11:31:43.889618522Z" level=info msg="StartContainer for \"f10721be251f1dda609c320be458c0f4567224a70b2b913f73152b0222c78e71\" returns successfully" Jul 2 11:31:43.892759 systemd[1]: Started cri-containerd-e01edb89422e5813f88832d2027162139a1bd4000e40464f3dbf45ba5a5c383b.scope. 
Jul 2 11:31:43.906835 env[1560]: time="2024-07-02T11:31:43.906800921Z" level=info msg="StartContainer for \"e01edb89422e5813f88832d2027162139a1bd4000e40464f3dbf45ba5a5c383b\" returns successfully" Jul 2 11:31:44.441396 kubelet[2556]: I0702 11:31:44.441341 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wt5l2" podStartSLOduration=16.441227008 podStartE2EDuration="16.441227008s" podCreationTimestamp="2024-07-02 11:31:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:31:44.440411124 +0000 UTC m=+29.166467060" watchObservedRunningTime="2024-07-02 11:31:44.441227008 +0000 UTC m=+29.167282901" Jul 2 11:31:44.459574 kubelet[2556]: I0702 11:31:44.459470 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wdmzx" podStartSLOduration=16.459380366 podStartE2EDuration="16.459380366s" podCreationTimestamp="2024-07-02 11:31:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:31:44.458674149 +0000 UTC m=+29.184730075" watchObservedRunningTime="2024-07-02 11:31:44.459380366 +0000 UTC m=+29.185436259" Jul 2 11:33:45.320759 systemd[1]: Started sshd@5-139.178.91.9:22-165.227.202.123:36348.service. Jul 2 11:33:45.607482 sshd[4161]: Invalid user weather from 165.227.202.123 port 36348 Jul 2 11:33:45.682029 sshd[4161]: pam_faillock(sshd:auth): User unknown Jul 2 11:33:45.682463 sshd[4161]: pam_unix(sshd:auth): check pass; user unknown Jul 2 11:33:45.682531 sshd[4161]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.227.202.123 Jul 2 11:33:45.682895 sshd[4161]: pam_faillock(sshd:auth): User unknown Jul 2 11:33:48.166810 sshd[4161]: Failed password for invalid user weather from 165.227.202.123 port 36348 ssh2 Jul 2 11:33:48.532659 sshd[4161]: Connection closed by invalid user weather 165.227.202.123 port 36348 [preauth] Jul 2 11:33:48.535231 systemd[1]: sshd@5-139.178.91.9:22-165.227.202.123:36348.service: Deactivated successfully. Jul 2 11:38:11.511042 systemd[1]: Started sshd@6-139.178.91.9:22-139.178.68.195:50110.service. Jul 2 11:38:11.567929 sshd[4195]: Accepted publickey for core from 139.178.68.195 port 50110 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:11.571536 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:11.583157 systemd-logind[1596]: New session 8 of user core. Jul 2 11:38:11.586445 systemd[1]: Started session-8.scope. Jul 2 11:38:11.733705 sshd[4195]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:11.739783 systemd[1]: sshd@6-139.178.91.9:22-139.178.68.195:50110.service: Deactivated successfully. Jul 2 11:38:11.741688 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 11:38:11.743352 systemd-logind[1596]: Session 8 logged out. Waiting for processes to exit. Jul 2 11:38:11.745691 systemd-logind[1596]: Removed session 8. 
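The sshd lines at 11:33 record a failed password attempt for an invalid user from 165.227.202.123 before the legitimate core sessions resume at 11:38. When auditing a journal dump like this one, a short scanner over the text is often enough; a minimal sketch that reads the log from stdin (the two patterns are assumptions about which sshd messages matter, not an exhaustive list):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Match the sshd messages seen above: invalid users and failed passwords.
	invalid := regexp.MustCompile(`Invalid user (\S+) from (\S+) port (\d+)`)
	failed := regexp.MustCompile(`Failed password for (?:invalid user )?(\S+) from (\S+) port (\d+)`)

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	counts := map[string]int{}
	for sc.Scan() {
		line := sc.Text()
		if m := invalid.FindStringSubmatch(line); m != nil {
			fmt.Printf("invalid user %q from %s\n", m[1], m[2])
		}
		if m := failed.FindStringSubmatch(line); m != nil {
			counts[m[2]]++
		}
	}
	for host, n := range counts {
		fmt.Printf("%s: %d failed password attempts\n", host, n)
	}
}
```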
Jul 2 11:38:14.504560 update_engine[1554]: I0702 11:38:14.504441 1554 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 2 11:38:14.504560 update_engine[1554]: I0702 11:38:14.504519 1554 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 2 11:38:14.517472 update_engine[1554]: I0702 11:38:14.517388 1554 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 2 11:38:14.518615 update_engine[1554]: I0702 11:38:14.518522 1554 omaha_request_params.cc:62] Current group set to lts Jul 2 11:38:14.518898 update_engine[1554]: I0702 11:38:14.518818 1554 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 2 11:38:14.518898 update_engine[1554]: I0702 11:38:14.518838 1554 update_attempter.cc:643] Scheduling an action processor start. Jul 2 11:38:14.518898 update_engine[1554]: I0702 11:38:14.518872 1554 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 11:38:14.519307 update_engine[1554]: I0702 11:38:14.518940 1554 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 2 11:38:14.519307 update_engine[1554]: I0702 11:38:14.519116 1554 omaha_request_action.cc:270] Posting an Omaha request to disabled Jul 2 11:38:14.519307 update_engine[1554]: I0702 11:38:14.519136 1554 omaha_request_action.cc:271] Request: Jul 2 11:38:14.519307 update_engine[1554]: Jul 2 11:38:14.519307 update_engine[1554]: Jul 2 11:38:14.519307 update_engine[1554]: Jul 2 11:38:14.519307 update_engine[1554]: Jul 2 11:38:14.519307 update_engine[1554]: Jul 2 11:38:14.519307 update_engine[1554]: Jul 2 11:38:14.519307 update_engine[1554]: Jul 2 11:38:14.519307 update_engine[1554]: Jul 2 11:38:14.519307 update_engine[1554]: I0702 11:38:14.519151 1554 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 11:38:14.520558 locksmithd[1592]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 2 11:38:14.522425 update_engine[1554]: I0702 11:38:14.522377 1554 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 11:38:14.522630 update_engine[1554]: E0702 11:38:14.522609 1554 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 11:38:14.522846 update_engine[1554]: I0702 11:38:14.522768 1554 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 2 11:38:16.745100 systemd[1]: Started sshd@7-139.178.91.9:22-139.178.68.195:38040.service. Jul 2 11:38:16.784392 sshd[4224]: Accepted publickey for core from 139.178.68.195 port 38040 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:16.785168 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:16.787835 systemd-logind[1596]: New session 9 of user core. Jul 2 11:38:16.788481 systemd[1]: Started session-9.scope. Jul 2 11:38:16.873934 sshd[4224]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:16.875388 systemd[1]: sshd@7-139.178.91.9:22-139.178.68.195:38040.service: Deactivated successfully. Jul 2 11:38:16.875831 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 11:38:16.876126 systemd-logind[1596]: Session 9 logged out. Waiting for processes to exit. Jul 2 11:38:16.876674 systemd-logind[1596]: Removed session 9. Jul 2 11:38:21.883654 systemd[1]: Started sshd@8-139.178.91.9:22-139.178.68.195:38046.service. 
Jul 2 11:38:21.918977 sshd[4253]: Accepted publickey for core from 139.178.68.195 port 38046 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:21.919819 sshd[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:21.922679 systemd-logind[1596]: New session 10 of user core. Jul 2 11:38:21.923413 systemd[1]: Started session-10.scope. Jul 2 11:38:22.011106 sshd[4253]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:22.012600 systemd[1]: sshd@8-139.178.91.9:22-139.178.68.195:38046.service: Deactivated successfully. Jul 2 11:38:22.013036 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 11:38:22.013376 systemd-logind[1596]: Session 10 logged out. Waiting for processes to exit. Jul 2 11:38:22.013983 systemd-logind[1596]: Removed session 10. Jul 2 11:38:24.504573 update_engine[1554]: I0702 11:38:24.504454 1554 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 11:38:24.505538 update_engine[1554]: I0702 11:38:24.505043 1554 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 11:38:24.505538 update_engine[1554]: E0702 11:38:24.505367 1554 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 11:38:24.505768 update_engine[1554]: I0702 11:38:24.505620 1554 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 2 11:38:27.015490 systemd[1]: Started sshd@9-139.178.91.9:22-139.178.68.195:46730.service. Jul 2 11:38:27.051026 sshd[4279]: Accepted publickey for core from 139.178.68.195 port 46730 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:27.051792 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:27.054331 systemd-logind[1596]: New session 11 of user core. Jul 2 11:38:27.055017 systemd[1]: Started session-11.scope. Jul 2 11:38:27.139985 sshd[4279]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:27.141996 systemd[1]: sshd@9-139.178.91.9:22-139.178.68.195:46730.service: Deactivated successfully. Jul 2 11:38:27.142376 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 11:38:27.142782 systemd-logind[1596]: Session 11 logged out. Waiting for processes to exit. Jul 2 11:38:27.143523 systemd[1]: Started sshd@10-139.178.91.9:22-139.178.68.195:46736.service. Jul 2 11:38:27.144017 systemd-logind[1596]: Removed session 11. Jul 2 11:38:27.179582 sshd[4304]: Accepted publickey for core from 139.178.68.195 port 46736 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:27.180441 sshd[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:27.183211 systemd-logind[1596]: New session 12 of user core. Jul 2 11:38:27.183850 systemd[1]: Started session-12.scope. Jul 2 11:38:27.329868 sshd[4304]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:27.331796 systemd[1]: sshd@10-139.178.91.9:22-139.178.68.195:46736.service: Deactivated successfully. Jul 2 11:38:27.332198 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 11:38:27.332639 systemd-logind[1596]: Session 12 logged out. Waiting for processes to exit. Jul 2 11:38:27.333279 systemd[1]: Started sshd@11-139.178.91.9:22-139.178.68.195:46748.service. Jul 2 11:38:27.333878 systemd-logind[1596]: Removed session 12. 
Jul 2 11:38:27.402325 sshd[4327]: Accepted publickey for core from 139.178.68.195 port 46748 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:27.404203 sshd[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:27.410093 systemd-logind[1596]: New session 13 of user core. Jul 2 11:38:27.411488 systemd[1]: Started session-13.scope. Jul 2 11:38:27.558795 sshd[4327]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:27.560161 systemd[1]: sshd@11-139.178.91.9:22-139.178.68.195:46748.service: Deactivated successfully. Jul 2 11:38:27.560596 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 11:38:27.561021 systemd-logind[1596]: Session 13 logged out. Waiting for processes to exit. Jul 2 11:38:27.561671 systemd-logind[1596]: Removed session 13. Jul 2 11:38:32.568880 systemd[1]: Started sshd@12-139.178.91.9:22-139.178.68.195:56012.service. Jul 2 11:38:32.604480 sshd[4356]: Accepted publickey for core from 139.178.68.195 port 56012 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:32.605439 sshd[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:32.608636 systemd-logind[1596]: New session 14 of user core. Jul 2 11:38:32.609322 systemd[1]: Started session-14.scope. Jul 2 11:38:32.698671 sshd[4356]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:32.700489 systemd[1]: sshd@12-139.178.91.9:22-139.178.68.195:56012.service: Deactivated successfully. Jul 2 11:38:32.700822 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 11:38:32.701135 systemd-logind[1596]: Session 14 logged out. Waiting for processes to exit. Jul 2 11:38:32.701738 systemd[1]: Started sshd@13-139.178.91.9:22-139.178.68.195:56016.service. Jul 2 11:38:32.702130 systemd-logind[1596]: Removed session 14. Jul 2 11:38:32.737501 sshd[4380]: Accepted publickey for core from 139.178.68.195 port 56016 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:32.738369 sshd[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:32.741268 systemd-logind[1596]: New session 15 of user core. Jul 2 11:38:32.742053 systemd[1]: Started session-15.scope. Jul 2 11:38:32.888102 sshd[4380]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:32.890157 systemd[1]: sshd@13-139.178.91.9:22-139.178.68.195:56016.service: Deactivated successfully. Jul 2 11:38:32.890506 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 11:38:32.890863 systemd-logind[1596]: Session 15 logged out. Waiting for processes to exit. Jul 2 11:38:32.891412 systemd[1]: Started sshd@14-139.178.91.9:22-139.178.68.195:56018.service. Jul 2 11:38:32.891830 systemd-logind[1596]: Removed session 15. Jul 2 11:38:32.927090 sshd[4403]: Accepted publickey for core from 139.178.68.195 port 56018 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:32.927814 sshd[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:32.930340 systemd-logind[1596]: New session 16 of user core. Jul 2 11:38:32.930833 systemd[1]: Started session-16.scope. Jul 2 11:38:34.026544 sshd[4403]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:34.034175 systemd[1]: sshd@14-139.178.91.9:22-139.178.68.195:56018.service: Deactivated successfully. Jul 2 11:38:34.035707 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 11:38:34.037005 systemd-logind[1596]: Session 16 logged out. 
Waiting for processes to exit. Jul 2 11:38:34.039165 systemd[1]: Started sshd@15-139.178.91.9:22-139.178.68.195:56028.service. Jul 2 11:38:34.040854 systemd-logind[1596]: Removed session 16. Jul 2 11:38:34.119103 sshd[4436]: Accepted publickey for core from 139.178.68.195 port 56028 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:34.120621 sshd[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:34.125280 systemd-logind[1596]: New session 17 of user core. Jul 2 11:38:34.126323 systemd[1]: Started session-17.scope. Jul 2 11:38:34.313941 sshd[4436]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:34.315707 systemd[1]: sshd@15-139.178.91.9:22-139.178.68.195:56028.service: Deactivated successfully. Jul 2 11:38:34.316031 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 11:38:34.316313 systemd-logind[1596]: Session 17 logged out. Waiting for processes to exit. Jul 2 11:38:34.316932 systemd[1]: Started sshd@16-139.178.91.9:22-139.178.68.195:56044.service. Jul 2 11:38:34.317364 systemd-logind[1596]: Removed session 17. Jul 2 11:38:34.352148 sshd[4461]: Accepted publickey for core from 139.178.68.195 port 56044 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:34.352942 sshd[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:34.355738 systemd-logind[1596]: New session 18 of user core. Jul 2 11:38:34.356226 systemd[1]: Started session-18.scope. Jul 2 11:38:34.484793 sshd[4461]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:34.486262 systemd[1]: sshd@16-139.178.91.9:22-139.178.68.195:56044.service: Deactivated successfully. Jul 2 11:38:34.486696 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 11:38:34.487011 systemd-logind[1596]: Session 18 logged out. Waiting for processes to exit. Jul 2 11:38:34.487515 systemd-logind[1596]: Removed session 18. Jul 2 11:38:34.501611 update_engine[1554]: I0702 11:38:34.501569 1554 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 11:38:34.501754 update_engine[1554]: I0702 11:38:34.501662 1554 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 11:38:34.501754 update_engine[1554]: E0702 11:38:34.501709 1554 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 11:38:34.501754 update_engine[1554]: I0702 11:38:34.501744 1554 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 2 11:38:39.494754 systemd[1]: Started sshd@17-139.178.91.9:22-139.178.68.195:56050.service. Jul 2 11:38:39.530118 sshd[4491]: Accepted publickey for core from 139.178.68.195 port 56050 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:39.530811 sshd[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:39.533219 systemd-logind[1596]: New session 19 of user core. Jul 2 11:38:39.533704 systemd[1]: Started session-19.scope. Jul 2 11:38:39.619933 sshd[4491]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:39.621389 systemd[1]: sshd@17-139.178.91.9:22-139.178.68.195:56050.service: Deactivated successfully. Jul 2 11:38:39.621807 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 11:38:39.622175 systemd-logind[1596]: Session 19 logged out. Waiting for processes to exit. Jul 2 11:38:39.622835 systemd-logind[1596]: Removed session 19. 
Jul 2 11:38:44.506809 update_engine[1554]: I0702 11:38:44.506683 1554 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 11:38:44.507740 update_engine[1554]: I0702 11:38:44.507194 1554 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 11:38:44.507740 update_engine[1554]: E0702 11:38:44.507436 1554 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 11:38:44.507740 update_engine[1554]: I0702 11:38:44.507598 1554 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 11:38:44.507740 update_engine[1554]: I0702 11:38:44.507615 1554 omaha_request_action.cc:621] Omaha request response: Jul 2 11:38:44.508111 update_engine[1554]: E0702 11:38:44.507757 1554 omaha_request_action.cc:640] Omaha request network transfer failed. Jul 2 11:38:44.508111 update_engine[1554]: I0702 11:38:44.507786 1554 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 2 11:38:44.508111 update_engine[1554]: I0702 11:38:44.507795 1554 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 11:38:44.508111 update_engine[1554]: I0702 11:38:44.507804 1554 update_attempter.cc:306] Processing Done. Jul 2 11:38:44.508111 update_engine[1554]: E0702 11:38:44.507833 1554 update_attempter.cc:619] Update failed. Jul 2 11:38:44.508111 update_engine[1554]: I0702 11:38:44.507843 1554 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 2 11:38:44.508111 update_engine[1554]: I0702 11:38:44.507853 1554 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 2 11:38:44.508111 update_engine[1554]: I0702 11:38:44.507862 1554 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 2 11:38:44.508111 update_engine[1554]: I0702 11:38:44.508013 1554 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 11:38:44.508111 update_engine[1554]: I0702 11:38:44.508064 1554 omaha_request_action.cc:270] Posting an Omaha request to disabled Jul 2 11:38:44.508111 update_engine[1554]: I0702 11:38:44.508072 1554 omaha_request_action.cc:271] Request: Jul 2 11:38:44.508111 update_engine[1554]: Jul 2 11:38:44.508111 update_engine[1554]: Jul 2 11:38:44.508111 update_engine[1554]: Jul 2 11:38:44.508111 update_engine[1554]: Jul 2 11:38:44.508111 update_engine[1554]: Jul 2 11:38:44.508111 update_engine[1554]: Jul 2 11:38:44.508111 update_engine[1554]: I0702 11:38:44.508082 1554 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 11:38:44.509679 update_engine[1554]: I0702 11:38:44.508472 1554 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 11:38:44.509679 update_engine[1554]: E0702 11:38:44.508647 1554 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 11:38:44.509679 update_engine[1554]: I0702 11:38:44.508782 1554 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 11:38:44.509679 update_engine[1554]: I0702 11:38:44.508797 1554 omaha_request_action.cc:621] Omaha request response: Jul 2 11:38:44.509679 update_engine[1554]: I0702 11:38:44.508808 1554 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 11:38:44.509679 update_engine[1554]: I0702 11:38:44.508816 1554 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 11:38:44.509679 update_engine[1554]: I0702 11:38:44.508824 1554 update_attempter.cc:306] Processing Done. Jul 2 11:38:44.509679 update_engine[1554]: I0702 11:38:44.508833 1554 update_attempter.cc:310] Error event sent. Jul 2 11:38:44.509679 update_engine[1554]: I0702 11:38:44.508854 1554 update_check_scheduler.cc:74] Next update check in 44m22s Jul 2 11:38:44.510473 locksmithd[1592]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 2 11:38:44.510473 locksmithd[1592]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 2 11:38:44.628954 systemd[1]: Started sshd@18-139.178.91.9:22-139.178.68.195:37666.service. Jul 2 11:38:44.664668 sshd[4513]: Accepted publickey for core from 139.178.68.195 port 37666 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:44.665719 sshd[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:44.669058 systemd-logind[1596]: New session 20 of user core. Jul 2 11:38:44.669687 systemd[1]: Started session-20.scope. Jul 2 11:38:44.755949 sshd[4513]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:44.757483 systemd[1]: sshd@18-139.178.91.9:22-139.178.68.195:37666.service: Deactivated successfully. Jul 2 11:38:44.757920 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 11:38:44.758226 systemd-logind[1596]: Session 20 logged out. Waiting for processes to exit. Jul 2 11:38:44.758901 systemd-logind[1596]: Removed session 20. Jul 2 11:38:49.765469 systemd[1]: Started sshd@19-139.178.91.9:22-139.178.68.195:37674.service. 
Jul 2 11:38:49.850730 sshd[4538]: Accepted publickey for core from 139.178.68.195 port 37674 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:49.852238 sshd[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:49.857112 systemd-logind[1596]: New session 21 of user core. Jul 2 11:38:49.858182 systemd[1]: Started session-21.scope. Jul 2 11:38:49.945970 sshd[4538]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:49.947899 systemd[1]: sshd@19-139.178.91.9:22-139.178.68.195:37674.service: Deactivated successfully. Jul 2 11:38:49.948271 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 11:38:49.948651 systemd-logind[1596]: Session 21 logged out. Waiting for processes to exit. Jul 2 11:38:49.949326 systemd[1]: Started sshd@20-139.178.91.9:22-139.178.68.195:37686.service. Jul 2 11:38:49.949829 systemd-logind[1596]: Removed session 21. Jul 2 11:38:49.984692 sshd[4561]: Accepted publickey for core from 139.178.68.195 port 37686 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:49.985435 sshd[4561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:49.987927 systemd-logind[1596]: New session 22 of user core. Jul 2 11:38:49.988541 systemd[1]: Started session-22.scope. Jul 2 11:38:50.140526 systemd[1]: Started sshd@21-139.178.91.9:22-165.227.202.123:40848.service. Jul 2 11:38:50.410933 sshd[4585]: Invalid user lab from 165.227.202.123 port 40848 Jul 2 11:38:50.482445 sshd[4585]: pam_faillock(sshd:auth): User unknown Jul 2 11:38:50.482818 sshd[4585]: pam_unix(sshd:auth): check pass; user unknown Jul 2 11:38:50.482840 sshd[4585]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.227.202.123 Jul 2 11:38:50.483054 sshd[4585]: pam_faillock(sshd:auth): User unknown Jul 2 11:38:51.364521 env[1560]: time="2024-07-02T11:38:51.364472279Z" level=info msg="StopContainer for \"3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba\" with timeout 30 (s)" Jul 2 11:38:51.364848 env[1560]: time="2024-07-02T11:38:51.364813561Z" level=info msg="Stop container \"3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba\" with signal terminated" Jul 2 11:38:51.369818 systemd[1]: cri-containerd-3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba.scope: Deactivated successfully. Jul 2 11:38:51.376332 env[1560]: time="2024-07-02T11:38:51.376298143Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 11:38:51.379110 env[1560]: time="2024-07-02T11:38:51.379093612Z" level=info msg="StopContainer for \"7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e\" with timeout 2 (s)" Jul 2 11:38:51.379197 env[1560]: time="2024-07-02T11:38:51.379185197Z" level=info msg="Stop container \"7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e\" with signal terminated" Jul 2 11:38:51.379285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba-rootfs.mount: Deactivated successfully. 
Jul 2 11:38:51.382355 systemd-networkd[1307]: lxc_health: Link DOWN Jul 2 11:38:51.382360 systemd-networkd[1307]: lxc_health: Lost carrier Jul 2 11:38:51.445765 env[1560]: time="2024-07-02T11:38:51.445640859Z" level=info msg="shim disconnected" id=3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba Jul 2 11:38:51.446239 env[1560]: time="2024-07-02T11:38:51.445766249Z" level=warning msg="cleaning up after shim disconnected" id=3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba namespace=k8s.io Jul 2 11:38:51.446239 env[1560]: time="2024-07-02T11:38:51.445814306Z" level=info msg="cleaning up dead shim" Jul 2 11:38:51.463654 env[1560]: time="2024-07-02T11:38:51.463558987Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:38:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4633 runtime=io.containerd.runc.v2\n" Jul 2 11:38:51.471206 systemd[1]: cri-containerd-7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e.scope: Deactivated successfully. Jul 2 11:38:51.471823 systemd[1]: cri-containerd-7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e.scope: Consumed 6.851s CPU time. Jul 2 11:38:51.482753 env[1560]: time="2024-07-02T11:38:51.482667406Z" level=info msg="StopContainer for \"3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba\" returns successfully" Jul 2 11:38:51.483961 env[1560]: time="2024-07-02T11:38:51.483897369Z" level=info msg="StopPodSandbox for \"540b26c0ab01b5c99eeb474e4e6d3a2bfe888e8002c71b5621fc7aec59d41693\"" Jul 2 11:38:51.484168 env[1560]: time="2024-07-02T11:38:51.484047246Z" level=info msg="Container to stop \"3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 11:38:51.490351 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-540b26c0ab01b5c99eeb474e4e6d3a2bfe888e8002c71b5621fc7aec59d41693-shm.mount: Deactivated successfully. Jul 2 11:38:51.500418 systemd[1]: cri-containerd-540b26c0ab01b5c99eeb474e4e6d3a2bfe888e8002c71b5621fc7aec59d41693.scope: Deactivated successfully. Jul 2 11:38:51.512337 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e-rootfs.mount: Deactivated successfully. Jul 2 11:38:51.521817 env[1560]: time="2024-07-02T11:38:51.521760146Z" level=info msg="shim disconnected" id=7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e Jul 2 11:38:51.522009 env[1560]: time="2024-07-02T11:38:51.521819562Z" level=warning msg="cleaning up after shim disconnected" id=7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e namespace=k8s.io Jul 2 11:38:51.522009 env[1560]: time="2024-07-02T11:38:51.521835903Z" level=info msg="cleaning up dead shim" Jul 2 11:38:51.527421 env[1560]: time="2024-07-02T11:38:51.527315272Z" level=info msg="shim disconnected" id=540b26c0ab01b5c99eeb474e4e6d3a2bfe888e8002c71b5621fc7aec59d41693 Jul 2 11:38:51.527421 env[1560]: time="2024-07-02T11:38:51.527397588Z" level=warning msg="cleaning up after shim disconnected" id=540b26c0ab01b5c99eeb474e4e6d3a2bfe888e8002c71b5621fc7aec59d41693 namespace=k8s.io Jul 2 11:38:51.527675 env[1560]: time="2024-07-02T11:38:51.527424558Z" level=info msg="cleaning up dead shim" Jul 2 11:38:51.529051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-540b26c0ab01b5c99eeb474e4e6d3a2bfe888e8002c71b5621fc7aec59d41693-rootfs.mount: Deactivated successfully. 
Jul 2 11:38:51.531514 env[1560]: time="2024-07-02T11:38:51.531473261Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:38:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4678 runtime=io.containerd.runc.v2\n" Jul 2 11:38:51.532984 env[1560]: time="2024-07-02T11:38:51.532914762Z" level=info msg="StopContainer for \"7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e\" returns successfully" Jul 2 11:38:51.533594 env[1560]: time="2024-07-02T11:38:51.533551644Z" level=info msg="StopPodSandbox for \"40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513\"" Jul 2 11:38:51.533731 env[1560]: time="2024-07-02T11:38:51.533638759Z" level=info msg="Container to stop \"228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 11:38:51.533731 env[1560]: time="2024-07-02T11:38:51.533663108Z" level=info msg="Container to stop \"521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 11:38:51.533731 env[1560]: time="2024-07-02T11:38:51.533682092Z" level=info msg="Container to stop \"b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 11:38:51.533731 env[1560]: time="2024-07-02T11:38:51.533698227Z" level=info msg="Container to stop \"9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 11:38:51.533731 env[1560]: time="2024-07-02T11:38:51.533713873Z" level=info msg="Container to stop \"7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 11:38:51.536130 env[1560]: time="2024-07-02T11:38:51.536060533Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:38:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4688 runtime=io.containerd.runc.v2\n" Jul 2 11:38:51.536518 env[1560]: time="2024-07-02T11:38:51.536449954Z" level=info msg="TearDown network for sandbox \"540b26c0ab01b5c99eeb474e4e6d3a2bfe888e8002c71b5621fc7aec59d41693\" successfully" Jul 2 11:38:51.536518 env[1560]: time="2024-07-02T11:38:51.536480417Z" level=info msg="StopPodSandbox for \"540b26c0ab01b5c99eeb474e4e6d3a2bfe888e8002c71b5621fc7aec59d41693\" returns successfully" Jul 2 11:38:51.540705 systemd[1]: cri-containerd-40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513.scope: Deactivated successfully. 
Jul 2 11:38:51.577749 env[1560]: time="2024-07-02T11:38:51.577680155Z" level=info msg="shim disconnected" id=40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513 Jul 2 11:38:51.577919 env[1560]: time="2024-07-02T11:38:51.577754896Z" level=warning msg="cleaning up after shim disconnected" id=40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513 namespace=k8s.io Jul 2 11:38:51.577919 env[1560]: time="2024-07-02T11:38:51.577778098Z" level=info msg="cleaning up dead shim" Jul 2 11:38:51.585449 env[1560]: time="2024-07-02T11:38:51.585382628Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:38:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4721 runtime=io.containerd.runc.v2\n" Jul 2 11:38:51.585742 env[1560]: time="2024-07-02T11:38:51.585686082Z" level=info msg="TearDown network for sandbox \"40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513\" successfully" Jul 2 11:38:51.585742 env[1560]: time="2024-07-02T11:38:51.585712572Z" level=info msg="StopPodSandbox for \"40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513\" returns successfully" Jul 2 11:38:51.652240 kubelet[2556]: I0702 11:38:51.652041 2556 scope.go:117] "RemoveContainer" containerID="3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba" Jul 2 11:38:51.655312 env[1560]: time="2024-07-02T11:38:51.655212592Z" level=info msg="RemoveContainer for \"3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba\"" Jul 2 11:38:51.660766 env[1560]: time="2024-07-02T11:38:51.660686568Z" level=info msg="RemoveContainer for \"3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba\" returns successfully" Jul 2 11:38:51.661249 kubelet[2556]: I0702 11:38:51.661201 2556 scope.go:117] "RemoveContainer" containerID="3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba" Jul 2 11:38:51.661829 env[1560]: time="2024-07-02T11:38:51.661634798Z" level=error msg="ContainerStatus for \"3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba\": not found" Jul 2 11:38:51.662164 kubelet[2556]: E0702 11:38:51.662103 2556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba\": not found" containerID="3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba" Jul 2 11:38:51.662364 kubelet[2556]: I0702 11:38:51.662333 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba"} err="failed to get container status \"3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"3213eba0e142ebd3eb927c7e7b765badcd94ab737c0a4e8d555415f74b3c81ba\": not found" Jul 2 11:38:51.662494 kubelet[2556]: I0702 11:38:51.662375 2556 scope.go:117] "RemoveContainer" containerID="7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e" Jul 2 11:38:51.664957 env[1560]: time="2024-07-02T11:38:51.664850580Z" level=info msg="RemoveContainer for \"7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e\"" Jul 2 11:38:51.669185 env[1560]: time="2024-07-02T11:38:51.669086816Z" level=info msg="RemoveContainer for 
\"7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e\" returns successfully" Jul 2 11:38:51.669555 kubelet[2556]: I0702 11:38:51.669465 2556 scope.go:117] "RemoveContainer" containerID="9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2" Jul 2 11:38:51.671908 env[1560]: time="2024-07-02T11:38:51.671805268Z" level=info msg="RemoveContainer for \"9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2\"" Jul 2 11:38:51.676774 env[1560]: time="2024-07-02T11:38:51.676667934Z" level=info msg="RemoveContainer for \"9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2\" returns successfully" Jul 2 11:38:51.677108 kubelet[2556]: I0702 11:38:51.677047 2556 scope.go:117] "RemoveContainer" containerID="b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6" Jul 2 11:38:51.679391 env[1560]: time="2024-07-02T11:38:51.679281297Z" level=info msg="RemoveContainer for \"b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6\"" Jul 2 11:38:51.683518 env[1560]: time="2024-07-02T11:38:51.683411811Z" level=info msg="RemoveContainer for \"b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6\" returns successfully" Jul 2 11:38:51.683858 kubelet[2556]: I0702 11:38:51.683775 2556 scope.go:117] "RemoveContainer" containerID="521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6" Jul 2 11:38:51.686328 env[1560]: time="2024-07-02T11:38:51.686205799Z" level=info msg="RemoveContainer for \"521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6\"" Jul 2 11:38:51.690644 env[1560]: time="2024-07-02T11:38:51.690531172Z" level=info msg="RemoveContainer for \"521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6\" returns successfully" Jul 2 11:38:51.690960 kubelet[2556]: I0702 11:38:51.690883 2556 scope.go:117] "RemoveContainer" containerID="228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df" Jul 2 11:38:51.693365 env[1560]: time="2024-07-02T11:38:51.693288956Z" level=info msg="RemoveContainer for \"228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df\"" Jul 2 11:38:51.697744 env[1560]: time="2024-07-02T11:38:51.697634689Z" level=info msg="RemoveContainer for \"228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df\" returns successfully" Jul 2 11:38:51.698082 kubelet[2556]: I0702 11:38:51.698019 2556 scope.go:117] "RemoveContainer" containerID="7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e" Jul 2 11:38:51.698639 env[1560]: time="2024-07-02T11:38:51.698466709Z" level=error msg="ContainerStatus for \"7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e\": not found" Jul 2 11:38:51.698902 kubelet[2556]: E0702 11:38:51.698855 2556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e\": not found" containerID="7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e" Jul 2 11:38:51.699058 kubelet[2556]: I0702 11:38:51.698938 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e"} err="failed to get container status \"7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e\": rpc error: code = 
NotFound desc = an error occurred when try to find container \"7ebf4fd87aaf47e3ee87d53f606524fb08fc4d87da7fed31f27425faae04493e\": not found" Jul 2 11:38:51.699058 kubelet[2556]: I0702 11:38:51.698977 2556 scope.go:117] "RemoveContainer" containerID="9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2" Jul 2 11:38:51.699625 env[1560]: time="2024-07-02T11:38:51.699415892Z" level=error msg="ContainerStatus for \"9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2\": not found" Jul 2 11:38:51.699860 kubelet[2556]: E0702 11:38:51.699822 2556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2\": not found" containerID="9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2" Jul 2 11:38:51.700001 kubelet[2556]: I0702 11:38:51.699904 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2"} err="failed to get container status \"9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"9469e125effa3f184c95ecaeb9bd8e43e7990ee13a83687e990f683befb6c0b2\": not found" Jul 2 11:38:51.700001 kubelet[2556]: I0702 11:38:51.699942 2556 scope.go:117] "RemoveContainer" containerID="b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6" Jul 2 11:38:51.700632 env[1560]: time="2024-07-02T11:38:51.700454127Z" level=error msg="ContainerStatus for \"b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6\": not found" Jul 2 11:38:51.700856 kubelet[2556]: E0702 11:38:51.700826 2556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6\": not found" containerID="b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6" Jul 2 11:38:51.700994 kubelet[2556]: I0702 11:38:51.700902 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6"} err="failed to get container status \"b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2557cdaa002106c5217b478144b06ccb628e048b77e98e70fbbae1e87510ba6\": not found" Jul 2 11:38:51.700994 kubelet[2556]: I0702 11:38:51.700935 2556 scope.go:117] "RemoveContainer" containerID="521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6" Jul 2 11:38:51.701600 kubelet[2556]: I0702 11:38:51.701522 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-host-proc-sys-kernel\") pod \"44048ba2-0631-4981-82ab-ae0664e621af\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " Jul 2 11:38:51.701600 kubelet[2556]: I0702 11:38:51.701600 2556 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-lib-modules\") pod \"44048ba2-0631-4981-82ab-ae0664e621af\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " Jul 2 11:38:51.702027 env[1560]: time="2024-07-02T11:38:51.701438618Z" level=error msg="ContainerStatus for \"521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6\": not found" Jul 2 11:38:51.702223 kubelet[2556]: I0702 11:38:51.701671 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpjqb\" (UniqueName: \"kubernetes.io/projected/ce505a09-3146-4c46-a4ca-2a1ea4651d76-kube-api-access-gpjqb\") pod \"ce505a09-3146-4c46-a4ca-2a1ea4651d76\" (UID: \"ce505a09-3146-4c46-a4ca-2a1ea4651d76\") " Jul 2 11:38:51.702223 kubelet[2556]: I0702 11:38:51.701669 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "44048ba2-0631-4981-82ab-ae0664e621af" (UID: "44048ba2-0631-4981-82ab-ae0664e621af"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:51.702223 kubelet[2556]: I0702 11:38:51.701730 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-host-proc-sys-net\") pod \"44048ba2-0631-4981-82ab-ae0664e621af\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " Jul 2 11:38:51.702223 kubelet[2556]: I0702 11:38:51.701742 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "44048ba2-0631-4981-82ab-ae0664e621af" (UID: "44048ba2-0631-4981-82ab-ae0664e621af"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:51.702223 kubelet[2556]: I0702 11:38:51.701785 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-bpf-maps\") pod \"44048ba2-0631-4981-82ab-ae0664e621af\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " Jul 2 11:38:51.703032 kubelet[2556]: I0702 11:38:51.701892 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44048ba2-0631-4981-82ab-ae0664e621af-hubble-tls\") pod \"44048ba2-0631-4981-82ab-ae0664e621af\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " Jul 2 11:38:51.703032 kubelet[2556]: I0702 11:38:51.701902 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "44048ba2-0631-4981-82ab-ae0664e621af" (UID: "44048ba2-0631-4981-82ab-ae0664e621af"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:51.703032 kubelet[2556]: E0702 11:38:51.701957 2556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6\": not found" containerID="521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6" Jul 2 11:38:51.703032 kubelet[2556]: I0702 11:38:51.702063 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6"} err="failed to get container status \"521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"521fe5d9ad916537fe0680910c8a459f4a953f6c9938ef95453590c335a6b9d6\": not found" Jul 2 11:38:51.703032 kubelet[2556]: I0702 11:38:51.701960 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44048ba2-0631-4981-82ab-ae0664e621af-cilium-config-path\") pod \"44048ba2-0631-4981-82ab-ae0664e621af\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " Jul 2 11:38:51.703032 kubelet[2556]: I0702 11:38:51.702097 2556 scope.go:117] "RemoveContainer" containerID="228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df" Jul 2 11:38:51.703956 env[1560]: time="2024-07-02T11:38:51.702668201Z" level=error msg="ContainerStatus for \"228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df\": not found" Jul 2 11:38:51.704081 kubelet[2556]: I0702 11:38:51.701894 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "44048ba2-0631-4981-82ab-ae0664e621af" (UID: "44048ba2-0631-4981-82ab-ae0664e621af"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:51.704081 kubelet[2556]: I0702 11:38:51.702204 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-xtables-lock\") pod \"44048ba2-0631-4981-82ab-ae0664e621af\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " Jul 2 11:38:51.704081 kubelet[2556]: I0702 11:38:51.702321 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-cilium-run\") pod \"44048ba2-0631-4981-82ab-ae0664e621af\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " Jul 2 11:38:51.704081 kubelet[2556]: I0702 11:38:51.702342 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "44048ba2-0631-4981-82ab-ae0664e621af" (UID: "44048ba2-0631-4981-82ab-ae0664e621af"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:51.704081 kubelet[2556]: I0702 11:38:51.702434 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44048ba2-0631-4981-82ab-ae0664e621af-clustermesh-secrets\") pod \"44048ba2-0631-4981-82ab-ae0664e621af\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " Jul 2 11:38:51.704631 kubelet[2556]: I0702 11:38:51.702417 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "44048ba2-0631-4981-82ab-ae0664e621af" (UID: "44048ba2-0631-4981-82ab-ae0664e621af"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:51.704631 kubelet[2556]: I0702 11:38:51.702517 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-cni-path\") pod \"44048ba2-0631-4981-82ab-ae0664e621af\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " Jul 2 11:38:51.704631 kubelet[2556]: I0702 11:38:51.702588 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-hostproc\") pod \"44048ba2-0631-4981-82ab-ae0664e621af\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " Jul 2 11:38:51.704631 kubelet[2556]: I0702 11:38:51.702594 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-cni-path" (OuterVolumeSpecName: "cni-path") pod "44048ba2-0631-4981-82ab-ae0664e621af" (UID: "44048ba2-0631-4981-82ab-ae0664e621af"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:51.704631 kubelet[2556]: I0702 11:38:51.702699 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85rgm\" (UniqueName: \"kubernetes.io/projected/44048ba2-0631-4981-82ab-ae0664e621af-kube-api-access-85rgm\") pod \"44048ba2-0631-4981-82ab-ae0664e621af\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " Jul 2 11:38:51.705120 kubelet[2556]: I0702 11:38:51.702689 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-hostproc" (OuterVolumeSpecName: "hostproc") pod "44048ba2-0631-4981-82ab-ae0664e621af" (UID: "44048ba2-0631-4981-82ab-ae0664e621af"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:51.705120 kubelet[2556]: I0702 11:38:51.702792 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-etc-cni-netd\") pod \"44048ba2-0631-4981-82ab-ae0664e621af\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " Jul 2 11:38:51.705120 kubelet[2556]: I0702 11:38:51.702881 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-cilium-cgroup\") pod \"44048ba2-0631-4981-82ab-ae0664e621af\" (UID: \"44048ba2-0631-4981-82ab-ae0664e621af\") " Jul 2 11:38:51.705120 kubelet[2556]: I0702 11:38:51.702884 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "44048ba2-0631-4981-82ab-ae0664e621af" (UID: "44048ba2-0631-4981-82ab-ae0664e621af"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:51.705120 kubelet[2556]: I0702 11:38:51.702997 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce505a09-3146-4c46-a4ca-2a1ea4651d76-cilium-config-path\") pod \"ce505a09-3146-4c46-a4ca-2a1ea4651d76\" (UID: \"ce505a09-3146-4c46-a4ca-2a1ea4651d76\") " Jul 2 11:38:51.705679 kubelet[2556]: I0702 11:38:51.702999 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "44048ba2-0631-4981-82ab-ae0664e621af" (UID: "44048ba2-0631-4981-82ab-ae0664e621af"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:51.705679 kubelet[2556]: I0702 11:38:51.703166 2556 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-host-proc-sys-kernel\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:51.705679 kubelet[2556]: E0702 11:38:51.703182 2556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df\": not found" containerID="228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df" Jul 2 11:38:51.705679 kubelet[2556]: I0702 11:38:51.703239 2556 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-lib-modules\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:51.705679 kubelet[2556]: I0702 11:38:51.703321 2556 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-bpf-maps\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:51.705679 kubelet[2556]: I0702 11:38:51.703319 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df"} err="failed to get container status \"228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df\": rpc error: code = NotFound desc = an error occurred when try to find container \"228f73c5aa94448c2b0e3214a4b95743ab4be8ddabda51dd38df054cfb6db4df\": not found" Jul 2 11:38:51.706280 kubelet[2556]: I0702 11:38:51.703384 2556 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-host-proc-sys-net\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:51.706280 kubelet[2556]: I0702 11:38:51.703424 2556 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-xtables-lock\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:51.706280 kubelet[2556]: I0702 11:38:51.703457 2556 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-cilium-run\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:51.706280 kubelet[2556]: I0702 11:38:51.703493 2556 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-hostproc\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:51.706280 kubelet[2556]: I0702 11:38:51.703543 2556 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-cni-path\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:51.706280 kubelet[2556]: I0702 11:38:51.703584 2556 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-cilium-cgroup\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:51.706280 kubelet[2556]: I0702 11:38:51.703618 2556 reconciler_common.go:300] 
"Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44048ba2-0631-4981-82ab-ae0664e621af-etc-cni-netd\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:51.708152 kubelet[2556]: I0702 11:38:51.708059 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44048ba2-0631-4981-82ab-ae0664e621af-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "44048ba2-0631-4981-82ab-ae0664e621af" (UID: "44048ba2-0631-4981-82ab-ae0664e621af"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 11:38:51.709099 kubelet[2556]: I0702 11:38:51.709003 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44048ba2-0631-4981-82ab-ae0664e621af-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "44048ba2-0631-4981-82ab-ae0664e621af" (UID: "44048ba2-0631-4981-82ab-ae0664e621af"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 11:38:51.709312 kubelet[2556]: I0702 11:38:51.709150 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce505a09-3146-4c46-a4ca-2a1ea4651d76-kube-api-access-gpjqb" (OuterVolumeSpecName: "kube-api-access-gpjqb") pod "ce505a09-3146-4c46-a4ca-2a1ea4651d76" (UID: "ce505a09-3146-4c46-a4ca-2a1ea4651d76"). InnerVolumeSpecName "kube-api-access-gpjqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 11:38:51.709312 kubelet[2556]: I0702 11:38:51.709177 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce505a09-3146-4c46-a4ca-2a1ea4651d76-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ce505a09-3146-4c46-a4ca-2a1ea4651d76" (UID: "ce505a09-3146-4c46-a4ca-2a1ea4651d76"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 11:38:51.709793 kubelet[2556]: I0702 11:38:51.709304 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44048ba2-0631-4981-82ab-ae0664e621af-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "44048ba2-0631-4981-82ab-ae0664e621af" (UID: "44048ba2-0631-4981-82ab-ae0664e621af"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 11:38:51.709793 kubelet[2556]: I0702 11:38:51.709533 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44048ba2-0631-4981-82ab-ae0664e621af-kube-api-access-85rgm" (OuterVolumeSpecName: "kube-api-access-85rgm") pod "44048ba2-0631-4981-82ab-ae0664e621af" (UID: "44048ba2-0631-4981-82ab-ae0664e621af"). InnerVolumeSpecName "kube-api-access-85rgm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 11:38:51.804401 kubelet[2556]: I0702 11:38:51.804323 2556 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44048ba2-0631-4981-82ab-ae0664e621af-cilium-config-path\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:51.804716 kubelet[2556]: I0702 11:38:51.804414 2556 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44048ba2-0631-4981-82ab-ae0664e621af-clustermesh-secrets\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:51.804716 kubelet[2556]: I0702 11:38:51.804498 2556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-85rgm\" (UniqueName: \"kubernetes.io/projected/44048ba2-0631-4981-82ab-ae0664e621af-kube-api-access-85rgm\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:51.804716 kubelet[2556]: I0702 11:38:51.804559 2556 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce505a09-3146-4c46-a4ca-2a1ea4651d76-cilium-config-path\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:51.804716 kubelet[2556]: I0702 11:38:51.804620 2556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gpjqb\" (UniqueName: \"kubernetes.io/projected/ce505a09-3146-4c46-a4ca-2a1ea4651d76-kube-api-access-gpjqb\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:51.804716 kubelet[2556]: I0702 11:38:51.804677 2556 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44048ba2-0631-4981-82ab-ae0664e621af-hubble-tls\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:51.963449 systemd[1]: Removed slice kubepods-besteffort-podce505a09_3146_4c46_a4ca_2a1ea4651d76.slice. Jul 2 11:38:51.963891 systemd[1]: kubepods-besteffort-podce505a09_3146_4c46_a4ca_2a1ea4651d76.slice: Consumed 1.013s CPU time. Jul 2 11:38:51.970636 systemd[1]: Removed slice kubepods-burstable-pod44048ba2_0631_4981_82ab_ae0664e621af.slice. Jul 2 11:38:51.970980 systemd[1]: kubepods-burstable-pod44048ba2_0631_4981_82ab_ae0664e621af.slice: Consumed 6.913s CPU time. Jul 2 11:38:52.169902 sshd[4585]: Failed password for invalid user lab from 165.227.202.123 port 40848 ssh2 Jul 2 11:38:52.372353 systemd[1]: var-lib-kubelet-pods-ce505a09\x2d3146\x2d4c46\x2da4ca\x2d2a1ea4651d76-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgpjqb.mount: Deactivated successfully. Jul 2 11:38:52.372410 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513-rootfs.mount: Deactivated successfully. Jul 2 11:38:52.372444 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-40c8b246690bfba68ac4d71e8e313f8fd65a7cad0ac63c4748e2efcc09b5d513-shm.mount: Deactivated successfully. Jul 2 11:38:52.372477 systemd[1]: var-lib-kubelet-pods-44048ba2\x2d0631\x2d4981\x2d82ab\x2dae0664e621af-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d85rgm.mount: Deactivated successfully. Jul 2 11:38:52.372508 systemd[1]: var-lib-kubelet-pods-44048ba2\x2d0631\x2d4981\x2d82ab\x2dae0664e621af-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 11:38:52.372538 systemd[1]: var-lib-kubelet-pods-44048ba2\x2d0631\x2d4981\x2d82ab\x2dae0664e621af-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 11:38:52.430660 sshd[4585]: Connection closed by invalid user lab 165.227.202.123 port 40848 [preauth] Jul 2 11:38:52.432154 systemd[1]: sshd@21-139.178.91.9:22-165.227.202.123:40848.service: Deactivated successfully. Jul 2 11:38:53.323478 sshd[4561]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:53.330790 systemd[1]: sshd@20-139.178.91.9:22-139.178.68.195:37686.service: Deactivated successfully. Jul 2 11:38:53.332530 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 11:38:53.334571 systemd-logind[1596]: Session 22 logged out. Waiting for processes to exit. Jul 2 11:38:53.335206 kubelet[2556]: I0702 11:38:53.335145 2556 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="44048ba2-0631-4981-82ab-ae0664e621af" path="/var/lib/kubelet/pods/44048ba2-0631-4981-82ab-ae0664e621af/volumes" Jul 2 11:38:53.337328 kubelet[2556]: I0702 11:38:53.337224 2556 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ce505a09-3146-4c46-a4ca-2a1ea4651d76" path="/var/lib/kubelet/pods/ce505a09-3146-4c46-a4ca-2a1ea4651d76/volumes" Jul 2 11:38:53.337623 systemd[1]: Started sshd@22-139.178.91.9:22-139.178.68.195:59746.service. Jul 2 11:38:53.340232 systemd-logind[1596]: Removed session 22. Jul 2 11:38:53.376514 sshd[4741]: Accepted publickey for core from 139.178.68.195 port 59746 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:53.377268 sshd[4741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:53.379795 systemd-logind[1596]: New session 23 of user core. Jul 2 11:38:53.380284 systemd[1]: Started session-23.scope. Jul 2 11:38:53.695910 sshd[4741]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:53.698363 systemd[1]: sshd@22-139.178.91.9:22-139.178.68.195:59746.service: Deactivated successfully. Jul 2 11:38:53.698735 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 11:38:53.699052 systemd-logind[1596]: Session 23 logged out. Waiting for processes to exit. Jul 2 11:38:53.700065 systemd[1]: Started sshd@23-139.178.91.9:22-139.178.68.195:59762.service. Jul 2 11:38:53.700725 systemd-logind[1596]: Removed session 23. 
Jul 2 11:38:53.702982 kubelet[2556]: I0702 11:38:53.702960 2556 topology_manager.go:215] "Topology Admit Handler" podUID="19a0435e-30ea-412e-b214-9d3f571d084c" podNamespace="kube-system" podName="cilium-dzsmt" Jul 2 11:38:53.703054 kubelet[2556]: E0702 11:38:53.702999 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44048ba2-0631-4981-82ab-ae0664e621af" containerName="apply-sysctl-overwrites" Jul 2 11:38:53.703054 kubelet[2556]: E0702 11:38:53.703007 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44048ba2-0631-4981-82ab-ae0664e621af" containerName="cilium-agent" Jul 2 11:38:53.703054 kubelet[2556]: E0702 11:38:53.703011 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44048ba2-0631-4981-82ab-ae0664e621af" containerName="mount-bpf-fs" Jul 2 11:38:53.703054 kubelet[2556]: E0702 11:38:53.703015 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44048ba2-0631-4981-82ab-ae0664e621af" containerName="clean-cilium-state" Jul 2 11:38:53.703054 kubelet[2556]: E0702 11:38:53.703023 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44048ba2-0631-4981-82ab-ae0664e621af" containerName="mount-cgroup" Jul 2 11:38:53.703054 kubelet[2556]: E0702 11:38:53.703027 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce505a09-3146-4c46-a4ca-2a1ea4651d76" containerName="cilium-operator" Jul 2 11:38:53.703054 kubelet[2556]: I0702 11:38:53.703042 2556 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce505a09-3146-4c46-a4ca-2a1ea4651d76" containerName="cilium-operator" Jul 2 11:38:53.703054 kubelet[2556]: I0702 11:38:53.703046 2556 memory_manager.go:354] "RemoveStaleState removing state" podUID="44048ba2-0631-4981-82ab-ae0664e621af" containerName="cilium-agent" Jul 2 11:38:53.707183 systemd[1]: Created slice kubepods-burstable-pod19a0435e_30ea_412e_b214_9d3f571d084c.slice. Jul 2 11:38:53.737166 sshd[4764]: Accepted publickey for core from 139.178.68.195 port 59762 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:53.738014 sshd[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:53.740487 systemd-logind[1596]: New session 24 of user core. Jul 2 11:38:53.741006 systemd[1]: Started session-24.scope. 
Jul 2 11:38:53.815912 kubelet[2556]: I0702 11:38:53.815841 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-cilium-cgroup\") pod \"cilium-dzsmt\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " pod="kube-system/cilium-dzsmt" Jul 2 11:38:53.816099 kubelet[2556]: I0702 11:38:53.815948 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-etc-cni-netd\") pod \"cilium-dzsmt\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " pod="kube-system/cilium-dzsmt" Jul 2 11:38:53.816099 kubelet[2556]: I0702 11:38:53.816022 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-host-proc-sys-kernel\") pod \"cilium-dzsmt\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " pod="kube-system/cilium-dzsmt" Jul 2 11:38:53.816099 kubelet[2556]: I0702 11:38:53.816059 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-cni-path\") pod \"cilium-dzsmt\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " pod="kube-system/cilium-dzsmt" Jul 2 11:38:53.816279 kubelet[2556]: I0702 11:38:53.816139 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-xtables-lock\") pod \"cilium-dzsmt\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " pod="kube-system/cilium-dzsmt" Jul 2 11:38:53.816279 kubelet[2556]: I0702 11:38:53.816209 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-hostproc\") pod \"cilium-dzsmt\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " pod="kube-system/cilium-dzsmt" Jul 2 11:38:53.816397 kubelet[2556]: I0702 11:38:53.816291 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/19a0435e-30ea-412e-b214-9d3f571d084c-cilium-ipsec-secrets\") pod \"cilium-dzsmt\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " pod="kube-system/cilium-dzsmt" Jul 2 11:38:53.816397 kubelet[2556]: I0702 11:38:53.816365 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19a0435e-30ea-412e-b214-9d3f571d084c-hubble-tls\") pod \"cilium-dzsmt\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " pod="kube-system/cilium-dzsmt" Jul 2 11:38:53.816507 kubelet[2556]: I0702 11:38:53.816414 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-cilium-run\") pod \"cilium-dzsmt\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " pod="kube-system/cilium-dzsmt" Jul 2 11:38:53.816507 kubelet[2556]: I0702 11:38:53.816476 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-lib-modules\") pod \"cilium-dzsmt\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " pod="kube-system/cilium-dzsmt" Jul 2 11:38:53.816607 kubelet[2556]: I0702 11:38:53.816544 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19a0435e-30ea-412e-b214-9d3f571d084c-clustermesh-secrets\") pod \"cilium-dzsmt\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " pod="kube-system/cilium-dzsmt" Jul 2 11:38:53.816607 kubelet[2556]: I0702 11:38:53.816596 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19a0435e-30ea-412e-b214-9d3f571d084c-cilium-config-path\") pod \"cilium-dzsmt\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " pod="kube-system/cilium-dzsmt" Jul 2 11:38:53.816719 kubelet[2556]: I0702 11:38:53.816663 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-bpf-maps\") pod \"cilium-dzsmt\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " pod="kube-system/cilium-dzsmt" Jul 2 11:38:53.816772 kubelet[2556]: I0702 11:38:53.816738 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-host-proc-sys-net\") pod \"cilium-dzsmt\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " pod="kube-system/cilium-dzsmt" Jul 2 11:38:53.816824 kubelet[2556]: I0702 11:38:53.816801 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhh6t\" (UniqueName: \"kubernetes.io/projected/19a0435e-30ea-412e-b214-9d3f571d084c-kube-api-access-xhh6t\") pod \"cilium-dzsmt\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " pod="kube-system/cilium-dzsmt" Jul 2 11:38:53.857243 sshd[4764]: pam_unix(sshd:session): session closed for user core Jul 2 11:38:53.859246 systemd[1]: sshd@23-139.178.91.9:22-139.178.68.195:59762.service: Deactivated successfully. Jul 2 11:38:53.859776 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 11:38:53.860135 systemd-logind[1596]: Session 24 logged out. Waiting for processes to exit. Jul 2 11:38:53.860837 systemd[1]: Started sshd@24-139.178.91.9:22-139.178.68.195:59768.service. Jul 2 11:38:53.861270 systemd-logind[1596]: Removed session 24. Jul 2 11:38:53.863806 kubelet[2556]: E0702 11:38:53.863789 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-xhh6t lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-dzsmt" podUID="19a0435e-30ea-412e-b214-9d3f571d084c" Jul 2 11:38:53.897055 sshd[4789]: Accepted publickey for core from 139.178.68.195 port 59768 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:38:53.897943 sshd[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:38:53.900797 systemd-logind[1596]: New session 25 of user core. Jul 2 11:38:53.901362 systemd[1]: Started session-25.scope. 
Jul 2 11:38:54.722608 kubelet[2556]: I0702 11:38:54.722506 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-cni-path\") pod \"19a0435e-30ea-412e-b214-9d3f571d084c\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " Jul 2 11:38:54.723470 kubelet[2556]: I0702 11:38:54.722616 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-cni-path" (OuterVolumeSpecName: "cni-path") pod "19a0435e-30ea-412e-b214-9d3f571d084c" (UID: "19a0435e-30ea-412e-b214-9d3f571d084c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:54.723470 kubelet[2556]: I0702 11:38:54.722663 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19a0435e-30ea-412e-b214-9d3f571d084c-clustermesh-secrets\") pod \"19a0435e-30ea-412e-b214-9d3f571d084c\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " Jul 2 11:38:54.723470 kubelet[2556]: I0702 11:38:54.722759 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-host-proc-sys-net\") pod \"19a0435e-30ea-412e-b214-9d3f571d084c\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " Jul 2 11:38:54.723470 kubelet[2556]: I0702 11:38:54.722858 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-bpf-maps\") pod \"19a0435e-30ea-412e-b214-9d3f571d084c\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " Jul 2 11:38:54.723470 kubelet[2556]: I0702 11:38:54.722863 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "19a0435e-30ea-412e-b214-9d3f571d084c" (UID: "19a0435e-30ea-412e-b214-9d3f571d084c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:54.724271 kubelet[2556]: I0702 11:38:54.722963 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-xtables-lock\") pod \"19a0435e-30ea-412e-b214-9d3f571d084c\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " Jul 2 11:38:54.724271 kubelet[2556]: I0702 11:38:54.723006 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "19a0435e-30ea-412e-b214-9d3f571d084c" (UID: "19a0435e-30ea-412e-b214-9d3f571d084c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:54.724271 kubelet[2556]: I0702 11:38:54.723034 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "19a0435e-30ea-412e-b214-9d3f571d084c" (UID: "19a0435e-30ea-412e-b214-9d3f571d084c"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:54.724271 kubelet[2556]: I0702 11:38:54.723078 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhh6t\" (UniqueName: \"kubernetes.io/projected/19a0435e-30ea-412e-b214-9d3f571d084c-kube-api-access-xhh6t\") pod \"19a0435e-30ea-412e-b214-9d3f571d084c\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " Jul 2 11:38:54.724271 kubelet[2556]: I0702 11:38:54.723180 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-cilium-run\") pod \"19a0435e-30ea-412e-b214-9d3f571d084c\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " Jul 2 11:38:54.724271 kubelet[2556]: I0702 11:38:54.723299 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-cilium-cgroup\") pod \"19a0435e-30ea-412e-b214-9d3f571d084c\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " Jul 2 11:38:54.724994 kubelet[2556]: I0702 11:38:54.723293 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "19a0435e-30ea-412e-b214-9d3f571d084c" (UID: "19a0435e-30ea-412e-b214-9d3f571d084c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:54.724994 kubelet[2556]: I0702 11:38:54.723399 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-host-proc-sys-kernel\") pod \"19a0435e-30ea-412e-b214-9d3f571d084c\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " Jul 2 11:38:54.724994 kubelet[2556]: I0702 11:38:54.723408 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "19a0435e-30ea-412e-b214-9d3f571d084c" (UID: "19a0435e-30ea-412e-b214-9d3f571d084c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:54.724994 kubelet[2556]: I0702 11:38:54.723509 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19a0435e-30ea-412e-b214-9d3f571d084c-cilium-config-path\") pod \"19a0435e-30ea-412e-b214-9d3f571d084c\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " Jul 2 11:38:54.724994 kubelet[2556]: I0702 11:38:54.723534 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "19a0435e-30ea-412e-b214-9d3f571d084c" (UID: "19a0435e-30ea-412e-b214-9d3f571d084c"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:54.725530 kubelet[2556]: I0702 11:38:54.723599 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-hostproc\") pod \"19a0435e-30ea-412e-b214-9d3f571d084c\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " Jul 2 11:38:54.725530 kubelet[2556]: I0702 11:38:54.723668 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/19a0435e-30ea-412e-b214-9d3f571d084c-cilium-ipsec-secrets\") pod \"19a0435e-30ea-412e-b214-9d3f571d084c\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " Jul 2 11:38:54.725530 kubelet[2556]: I0702 11:38:54.723728 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19a0435e-30ea-412e-b214-9d3f571d084c-hubble-tls\") pod \"19a0435e-30ea-412e-b214-9d3f571d084c\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " Jul 2 11:38:54.725530 kubelet[2556]: I0702 11:38:54.723713 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-hostproc" (OuterVolumeSpecName: "hostproc") pod "19a0435e-30ea-412e-b214-9d3f571d084c" (UID: "19a0435e-30ea-412e-b214-9d3f571d084c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:54.725530 kubelet[2556]: I0702 11:38:54.723799 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-etc-cni-netd\") pod \"19a0435e-30ea-412e-b214-9d3f571d084c\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " Jul 2 11:38:54.725530 kubelet[2556]: I0702 11:38:54.723865 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-lib-modules\") pod \"19a0435e-30ea-412e-b214-9d3f571d084c\" (UID: \"19a0435e-30ea-412e-b214-9d3f571d084c\") " Jul 2 11:38:54.726111 kubelet[2556]: I0702 11:38:54.723955 2556 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-host-proc-sys-net\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:54.726111 kubelet[2556]: I0702 11:38:54.723995 2556 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-xtables-lock\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:54.726111 kubelet[2556]: I0702 11:38:54.723959 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "19a0435e-30ea-412e-b214-9d3f571d084c" (UID: "19a0435e-30ea-412e-b214-9d3f571d084c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:54.726111 kubelet[2556]: I0702 11:38:54.724031 2556 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-bpf-maps\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:54.726111 kubelet[2556]: I0702 11:38:54.724025 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "19a0435e-30ea-412e-b214-9d3f571d084c" (UID: "19a0435e-30ea-412e-b214-9d3f571d084c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:38:54.726111 kubelet[2556]: I0702 11:38:54.724066 2556 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-cilium-run\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:54.726111 kubelet[2556]: I0702 11:38:54.724104 2556 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-cilium-cgroup\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:54.726790 kubelet[2556]: I0702 11:38:54.724138 2556 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-host-proc-sys-kernel\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:54.726790 kubelet[2556]: I0702 11:38:54.724171 2556 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-hostproc\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:54.726790 kubelet[2556]: I0702 11:38:54.724301 2556 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-cni-path\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:54.728446 kubelet[2556]: I0702 11:38:54.728341 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19a0435e-30ea-412e-b214-9d3f571d084c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "19a0435e-30ea-412e-b214-9d3f571d084c" (UID: "19a0435e-30ea-412e-b214-9d3f571d084c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 11:38:54.729504 kubelet[2556]: I0702 11:38:54.729438 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19a0435e-30ea-412e-b214-9d3f571d084c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "19a0435e-30ea-412e-b214-9d3f571d084c" (UID: "19a0435e-30ea-412e-b214-9d3f571d084c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 11:38:54.729667 kubelet[2556]: I0702 11:38:54.729653 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a0435e-30ea-412e-b214-9d3f571d084c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "19a0435e-30ea-412e-b214-9d3f571d084c" (UID: "19a0435e-30ea-412e-b214-9d3f571d084c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 11:38:54.729725 kubelet[2556]: I0702 11:38:54.729714 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19a0435e-30ea-412e-b214-9d3f571d084c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "19a0435e-30ea-412e-b214-9d3f571d084c" (UID: "19a0435e-30ea-412e-b214-9d3f571d084c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 11:38:54.729725 kubelet[2556]: I0702 11:38:54.729714 2556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a0435e-30ea-412e-b214-9d3f571d084c-kube-api-access-xhh6t" (OuterVolumeSpecName: "kube-api-access-xhh6t") pod "19a0435e-30ea-412e-b214-9d3f571d084c" (UID: "19a0435e-30ea-412e-b214-9d3f571d084c"). InnerVolumeSpecName "kube-api-access-xhh6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 11:38:54.730495 systemd[1]: var-lib-kubelet-pods-19a0435e\x2d30ea\x2d412e\x2db214\x2d9d3f571d084c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxhh6t.mount: Deactivated successfully. Jul 2 11:38:54.730555 systemd[1]: var-lib-kubelet-pods-19a0435e\x2d30ea\x2d412e\x2db214\x2d9d3f571d084c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 11:38:54.730591 systemd[1]: var-lib-kubelet-pods-19a0435e\x2d30ea\x2d412e\x2db214\x2d9d3f571d084c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 11:38:54.730624 systemd[1]: var-lib-kubelet-pods-19a0435e\x2d30ea\x2d412e\x2db214\x2d9d3f571d084c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 11:38:54.824943 kubelet[2556]: I0702 11:38:54.824837 2556 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19a0435e-30ea-412e-b214-9d3f571d084c-clustermesh-secrets\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:54.824943 kubelet[2556]: I0702 11:38:54.824911 2556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xhh6t\" (UniqueName: \"kubernetes.io/projected/19a0435e-30ea-412e-b214-9d3f571d084c-kube-api-access-xhh6t\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:54.824943 kubelet[2556]: I0702 11:38:54.824947 2556 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19a0435e-30ea-412e-b214-9d3f571d084c-cilium-config-path\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:54.825416 kubelet[2556]: I0702 11:38:54.824983 2556 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-etc-cni-netd\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:54.825416 kubelet[2556]: I0702 11:38:54.825021 2556 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/19a0435e-30ea-412e-b214-9d3f571d084c-cilium-ipsec-secrets\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:54.825416 kubelet[2556]: I0702 11:38:54.825054 2556 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19a0435e-30ea-412e-b214-9d3f571d084c-hubble-tls\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:54.825416 kubelet[2556]: I0702 11:38:54.825088 2556 
reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19a0435e-30ea-412e-b214-9d3f571d084c-lib-modules\") on node \"ci-3510.3.5-a-b7736b5df5\" DevicePath \"\"" Jul 2 11:38:55.330973 systemd[1]: Removed slice kubepods-burstable-pod19a0435e_30ea_412e_b214_9d3f571d084c.slice. Jul 2 11:38:55.483894 kubelet[2556]: E0702 11:38:55.483823 2556 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 11:38:55.708404 kubelet[2556]: I0702 11:38:55.707764 2556 topology_manager.go:215] "Topology Admit Handler" podUID="8022c05a-1143-44ea-87f3-14aee1e2b256" podNamespace="kube-system" podName="cilium-hwkqn" Jul 2 11:38:55.716648 systemd[1]: Created slice kubepods-burstable-pod8022c05a_1143_44ea_87f3_14aee1e2b256.slice. Jul 2 11:38:55.731472 kubelet[2556]: I0702 11:38:55.731453 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8022c05a-1143-44ea-87f3-14aee1e2b256-cni-path\") pod \"cilium-hwkqn\" (UID: \"8022c05a-1143-44ea-87f3-14aee1e2b256\") " pod="kube-system/cilium-hwkqn" Jul 2 11:38:55.731689 kubelet[2556]: I0702 11:38:55.731480 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8022c05a-1143-44ea-87f3-14aee1e2b256-cilium-config-path\") pod \"cilium-hwkqn\" (UID: \"8022c05a-1143-44ea-87f3-14aee1e2b256\") " pod="kube-system/cilium-hwkqn" Jul 2 11:38:55.731689 kubelet[2556]: I0702 11:38:55.731494 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8022c05a-1143-44ea-87f3-14aee1e2b256-host-proc-sys-net\") pod \"cilium-hwkqn\" (UID: \"8022c05a-1143-44ea-87f3-14aee1e2b256\") " pod="kube-system/cilium-hwkqn" Jul 2 11:38:55.731689 kubelet[2556]: I0702 11:38:55.731534 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8022c05a-1143-44ea-87f3-14aee1e2b256-etc-cni-netd\") pod \"cilium-hwkqn\" (UID: \"8022c05a-1143-44ea-87f3-14aee1e2b256\") " pod="kube-system/cilium-hwkqn" Jul 2 11:38:55.731689 kubelet[2556]: I0702 11:38:55.731565 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8022c05a-1143-44ea-87f3-14aee1e2b256-clustermesh-secrets\") pod \"cilium-hwkqn\" (UID: \"8022c05a-1143-44ea-87f3-14aee1e2b256\") " pod="kube-system/cilium-hwkqn" Jul 2 11:38:55.731689 kubelet[2556]: I0702 11:38:55.731579 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8022c05a-1143-44ea-87f3-14aee1e2b256-host-proc-sys-kernel\") pod \"cilium-hwkqn\" (UID: \"8022c05a-1143-44ea-87f3-14aee1e2b256\") " pod="kube-system/cilium-hwkqn" Jul 2 11:38:55.731797 kubelet[2556]: I0702 11:38:55.731601 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8022c05a-1143-44ea-87f3-14aee1e2b256-bpf-maps\") pod \"cilium-hwkqn\" (UID: \"8022c05a-1143-44ea-87f3-14aee1e2b256\") " pod="kube-system/cilium-hwkqn" Jul 2 11:38:55.731797 
kubelet[2556]: I0702 11:38:55.731635 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8022c05a-1143-44ea-87f3-14aee1e2b256-cilium-ipsec-secrets\") pod \"cilium-hwkqn\" (UID: \"8022c05a-1143-44ea-87f3-14aee1e2b256\") " pod="kube-system/cilium-hwkqn" Jul 2 11:38:55.731797 kubelet[2556]: I0702 11:38:55.731674 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8022c05a-1143-44ea-87f3-14aee1e2b256-hubble-tls\") pod \"cilium-hwkqn\" (UID: \"8022c05a-1143-44ea-87f3-14aee1e2b256\") " pod="kube-system/cilium-hwkqn" Jul 2 11:38:55.731797 kubelet[2556]: I0702 11:38:55.731702 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8022c05a-1143-44ea-87f3-14aee1e2b256-hostproc\") pod \"cilium-hwkqn\" (UID: \"8022c05a-1143-44ea-87f3-14aee1e2b256\") " pod="kube-system/cilium-hwkqn" Jul 2 11:38:55.731797 kubelet[2556]: I0702 11:38:55.731716 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8022c05a-1143-44ea-87f3-14aee1e2b256-lib-modules\") pod \"cilium-hwkqn\" (UID: \"8022c05a-1143-44ea-87f3-14aee1e2b256\") " pod="kube-system/cilium-hwkqn" Jul 2 11:38:55.731797 kubelet[2556]: I0702 11:38:55.731742 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs9bp\" (UniqueName: \"kubernetes.io/projected/8022c05a-1143-44ea-87f3-14aee1e2b256-kube-api-access-hs9bp\") pod \"cilium-hwkqn\" (UID: \"8022c05a-1143-44ea-87f3-14aee1e2b256\") " pod="kube-system/cilium-hwkqn" Jul 2 11:38:55.731913 kubelet[2556]: I0702 11:38:55.731764 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8022c05a-1143-44ea-87f3-14aee1e2b256-cilium-run\") pod \"cilium-hwkqn\" (UID: \"8022c05a-1143-44ea-87f3-14aee1e2b256\") " pod="kube-system/cilium-hwkqn" Jul 2 11:38:55.731913 kubelet[2556]: I0702 11:38:55.731776 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8022c05a-1143-44ea-87f3-14aee1e2b256-xtables-lock\") pod \"cilium-hwkqn\" (UID: \"8022c05a-1143-44ea-87f3-14aee1e2b256\") " pod="kube-system/cilium-hwkqn" Jul 2 11:38:55.731913 kubelet[2556]: I0702 11:38:55.731787 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8022c05a-1143-44ea-87f3-14aee1e2b256-cilium-cgroup\") pod \"cilium-hwkqn\" (UID: \"8022c05a-1143-44ea-87f3-14aee1e2b256\") " pod="kube-system/cilium-hwkqn" Jul 2 11:38:56.023682 env[1560]: time="2024-07-02T11:38:56.023475038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hwkqn,Uid:8022c05a-1143-44ea-87f3-14aee1e2b256,Namespace:kube-system,Attempt:0,}" Jul 2 11:38:56.048083 env[1560]: time="2024-07-02T11:38:56.047877988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:38:56.048083 env[1560]: time="2024-07-02T11:38:56.047974966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:38:56.048083 env[1560]: time="2024-07-02T11:38:56.048012382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:38:56.048561 env[1560]: time="2024-07-02T11:38:56.048364416Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4f0a771ec59a5e1a302b625cb8620a7a15f9a3e1a540680d6a0e93630021232 pid=4830 runtime=io.containerd.runc.v2 Jul 2 11:38:56.081603 systemd[1]: Started cri-containerd-c4f0a771ec59a5e1a302b625cb8620a7a15f9a3e1a540680d6a0e93630021232.scope. Jul 2 11:38:56.101370 env[1560]: time="2024-07-02T11:38:56.101326779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hwkqn,Uid:8022c05a-1143-44ea-87f3-14aee1e2b256,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4f0a771ec59a5e1a302b625cb8620a7a15f9a3e1a540680d6a0e93630021232\"" Jul 2 11:38:56.103136 env[1560]: time="2024-07-02T11:38:56.103112627Z" level=info msg="CreateContainer within sandbox \"c4f0a771ec59a5e1a302b625cb8620a7a15f9a3e1a540680d6a0e93630021232\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 11:38:56.109732 env[1560]: time="2024-07-02T11:38:56.109603848Z" level=info msg="CreateContainer within sandbox \"c4f0a771ec59a5e1a302b625cb8620a7a15f9a3e1a540680d6a0e93630021232\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c2d4bcfa55b22f82b67919945238aa9cb79d6aea1ce0f2f993a9e1050d2a12b5\"" Jul 2 11:38:56.110754 env[1560]: time="2024-07-02T11:38:56.110668987Z" level=info msg="StartContainer for \"c2d4bcfa55b22f82b67919945238aa9cb79d6aea1ce0f2f993a9e1050d2a12b5\"" Jul 2 11:38:56.146773 systemd[1]: Started cri-containerd-c2d4bcfa55b22f82b67919945238aa9cb79d6aea1ce0f2f993a9e1050d2a12b5.scope. Jul 2 11:38:56.201337 env[1560]: time="2024-07-02T11:38:56.201214947Z" level=info msg="StartContainer for \"c2d4bcfa55b22f82b67919945238aa9cb79d6aea1ce0f2f993a9e1050d2a12b5\" returns successfully" Jul 2 11:38:56.225581 systemd[1]: cri-containerd-c2d4bcfa55b22f82b67919945238aa9cb79d6aea1ce0f2f993a9e1050d2a12b5.scope: Deactivated successfully. 
Jul 2 11:38:56.268966 env[1560]: time="2024-07-02T11:38:56.268897634Z" level=info msg="shim disconnected" id=c2d4bcfa55b22f82b67919945238aa9cb79d6aea1ce0f2f993a9e1050d2a12b5 Jul 2 11:38:56.269224 env[1560]: time="2024-07-02T11:38:56.268966853Z" level=warning msg="cleaning up after shim disconnected" id=c2d4bcfa55b22f82b67919945238aa9cb79d6aea1ce0f2f993a9e1050d2a12b5 namespace=k8s.io Jul 2 11:38:56.269224 env[1560]: time="2024-07-02T11:38:56.268990457Z" level=info msg="cleaning up dead shim" Jul 2 11:38:56.277725 env[1560]: time="2024-07-02T11:38:56.277618759Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:38:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4914 runtime=io.containerd.runc.v2\n" Jul 2 11:38:56.684208 env[1560]: time="2024-07-02T11:38:56.684091545Z" level=info msg="CreateContainer within sandbox \"c4f0a771ec59a5e1a302b625cb8620a7a15f9a3e1a540680d6a0e93630021232\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 11:38:56.693510 env[1560]: time="2024-07-02T11:38:56.693489019Z" level=info msg="CreateContainer within sandbox \"c4f0a771ec59a5e1a302b625cb8620a7a15f9a3e1a540680d6a0e93630021232\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b92f972a2c6cd49282504c9ef50bd5f53388dc80bd8b3722d19f8ddab1afa36f\"" Jul 2 11:38:56.693810 env[1560]: time="2024-07-02T11:38:56.693793453Z" level=info msg="StartContainer for \"b92f972a2c6cd49282504c9ef50bd5f53388dc80bd8b3722d19f8ddab1afa36f\"" Jul 2 11:38:56.702179 systemd[1]: Started cri-containerd-b92f972a2c6cd49282504c9ef50bd5f53388dc80bd8b3722d19f8ddab1afa36f.scope. Jul 2 11:38:56.715575 env[1560]: time="2024-07-02T11:38:56.715544024Z" level=info msg="StartContainer for \"b92f972a2c6cd49282504c9ef50bd5f53388dc80bd8b3722d19f8ddab1afa36f\" returns successfully" Jul 2 11:38:56.719811 systemd[1]: cri-containerd-b92f972a2c6cd49282504c9ef50bd5f53388dc80bd8b3722d19f8ddab1afa36f.scope: Deactivated successfully. 
Jul 2 11:38:56.730627 env[1560]: time="2024-07-02T11:38:56.730565202Z" level=info msg="shim disconnected" id=b92f972a2c6cd49282504c9ef50bd5f53388dc80bd8b3722d19f8ddab1afa36f Jul 2 11:38:56.730627 env[1560]: time="2024-07-02T11:38:56.730596994Z" level=warning msg="cleaning up after shim disconnected" id=b92f972a2c6cd49282504c9ef50bd5f53388dc80bd8b3722d19f8ddab1afa36f namespace=k8s.io Jul 2 11:38:56.730627 env[1560]: time="2024-07-02T11:38:56.730604300Z" level=info msg="cleaning up dead shim" Jul 2 11:38:56.734871 env[1560]: time="2024-07-02T11:38:56.734844250Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:38:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4974 runtime=io.containerd.runc.v2\n" Jul 2 11:38:57.334561 kubelet[2556]: I0702 11:38:57.334456 2556 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="19a0435e-30ea-412e-b214-9d3f571d084c" path="/var/lib/kubelet/pods/19a0435e-30ea-412e-b214-9d3f571d084c/volumes" Jul 2 11:38:57.694629 env[1560]: time="2024-07-02T11:38:57.694407201Z" level=info msg="CreateContainer within sandbox \"c4f0a771ec59a5e1a302b625cb8620a7a15f9a3e1a540680d6a0e93630021232\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 11:38:57.706824 env[1560]: time="2024-07-02T11:38:57.706782020Z" level=info msg="CreateContainer within sandbox \"c4f0a771ec59a5e1a302b625cb8620a7a15f9a3e1a540680d6a0e93630021232\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"78650d7eeb0a623039d761e4557d4623883834fa45d6d7e4e7430bcc5babc5f6\"" Jul 2 11:38:57.707345 env[1560]: time="2024-07-02T11:38:57.707313074Z" level=info msg="StartContainer for \"78650d7eeb0a623039d761e4557d4623883834fa45d6d7e4e7430bcc5babc5f6\"" Jul 2 11:38:57.717736 systemd[1]: Started cri-containerd-78650d7eeb0a623039d761e4557d4623883834fa45d6d7e4e7430bcc5babc5f6.scope. Jul 2 11:38:57.731281 env[1560]: time="2024-07-02T11:38:57.731188712Z" level=info msg="StartContainer for \"78650d7eeb0a623039d761e4557d4623883834fa45d6d7e4e7430bcc5babc5f6\" returns successfully" Jul 2 11:38:57.732827 systemd[1]: cri-containerd-78650d7eeb0a623039d761e4557d4623883834fa45d6d7e4e7430bcc5babc5f6.scope: Deactivated successfully. Jul 2 11:38:57.743530 env[1560]: time="2024-07-02T11:38:57.743475832Z" level=info msg="shim disconnected" id=78650d7eeb0a623039d761e4557d4623883834fa45d6d7e4e7430bcc5babc5f6 Jul 2 11:38:57.743530 env[1560]: time="2024-07-02T11:38:57.743502487Z" level=warning msg="cleaning up after shim disconnected" id=78650d7eeb0a623039d761e4557d4623883834fa45d6d7e4e7430bcc5babc5f6 namespace=k8s.io Jul 2 11:38:57.743530 env[1560]: time="2024-07-02T11:38:57.743508482Z" level=info msg="cleaning up dead shim" Jul 2 11:38:57.746959 env[1560]: time="2024-07-02T11:38:57.746941757Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:38:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5029 runtime=io.containerd.runc.v2\n" Jul 2 11:38:57.846458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78650d7eeb0a623039d761e4557d4623883834fa45d6d7e4e7430bcc5babc5f6-rootfs.mount: Deactivated successfully. 
Jul 2 11:38:58.702796 env[1560]: time="2024-07-02T11:38:58.702669878Z" level=info msg="CreateContainer within sandbox \"c4f0a771ec59a5e1a302b625cb8620a7a15f9a3e1a540680d6a0e93630021232\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 11:38:58.714146 env[1560]: time="2024-07-02T11:38:58.714099287Z" level=info msg="CreateContainer within sandbox \"c4f0a771ec59a5e1a302b625cb8620a7a15f9a3e1a540680d6a0e93630021232\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1a764715eccff1719b9221dbe5ea6da193222fff6650e157edef7cf40b6058a0\"" Jul 2 11:38:58.714493 env[1560]: time="2024-07-02T11:38:58.714480234Z" level=info msg="StartContainer for \"1a764715eccff1719b9221dbe5ea6da193222fff6650e157edef7cf40b6058a0\"" Jul 2 11:38:58.723860 systemd[1]: Started cri-containerd-1a764715eccff1719b9221dbe5ea6da193222fff6650e157edef7cf40b6058a0.scope. Jul 2 11:38:58.735140 env[1560]: time="2024-07-02T11:38:58.735115623Z" level=info msg="StartContainer for \"1a764715eccff1719b9221dbe5ea6da193222fff6650e157edef7cf40b6058a0\" returns successfully" Jul 2 11:38:58.735485 systemd[1]: cri-containerd-1a764715eccff1719b9221dbe5ea6da193222fff6650e157edef7cf40b6058a0.scope: Deactivated successfully. Jul 2 11:38:58.744564 env[1560]: time="2024-07-02T11:38:58.744504873Z" level=info msg="shim disconnected" id=1a764715eccff1719b9221dbe5ea6da193222fff6650e157edef7cf40b6058a0 Jul 2 11:38:58.744564 env[1560]: time="2024-07-02T11:38:58.744533691Z" level=warning msg="cleaning up after shim disconnected" id=1a764715eccff1719b9221dbe5ea6da193222fff6650e157edef7cf40b6058a0 namespace=k8s.io Jul 2 11:38:58.744564 env[1560]: time="2024-07-02T11:38:58.744540671Z" level=info msg="cleaning up dead shim" Jul 2 11:38:58.748080 env[1560]: time="2024-07-02T11:38:58.748059795Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:38:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5085 runtime=io.containerd.runc.v2\n" Jul 2 11:38:58.847700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a764715eccff1719b9221dbe5ea6da193222fff6650e157edef7cf40b6058a0-rootfs.mount: Deactivated successfully. Jul 2 11:38:59.713154 env[1560]: time="2024-07-02T11:38:59.713065249Z" level=info msg="CreateContainer within sandbox \"c4f0a771ec59a5e1a302b625cb8620a7a15f9a3e1a540680d6a0e93630021232\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 11:38:59.724372 env[1560]: time="2024-07-02T11:38:59.724346180Z" level=info msg="CreateContainer within sandbox \"c4f0a771ec59a5e1a302b625cb8620a7a15f9a3e1a540680d6a0e93630021232\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3cdf64566127358a2de43378abea147020c53034799f0f3d178790f7060dba9a\"" Jul 2 11:38:59.724768 env[1560]: time="2024-07-02T11:38:59.724731471Z" level=info msg="StartContainer for \"3cdf64566127358a2de43378abea147020c53034799f0f3d178790f7060dba9a\"" Jul 2 11:38:59.725582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1368709995.mount: Deactivated successfully. Jul 2 11:38:59.734455 systemd[1]: Started cri-containerd-3cdf64566127358a2de43378abea147020c53034799f0f3d178790f7060dba9a.scope. 
Jul 2 11:38:59.748253 env[1560]: time="2024-07-02T11:38:59.748197727Z" level=info msg="StartContainer for \"3cdf64566127358a2de43378abea147020c53034799f0f3d178790f7060dba9a\" returns successfully" Jul 2 11:38:59.893269 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 11:39:00.734898 kubelet[2556]: I0702 11:39:00.734849 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-hwkqn" podStartSLOduration=5.734825891 podStartE2EDuration="5.734825891s" podCreationTimestamp="2024-07-02 11:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:39:00.734678429 +0000 UTC m=+465.460734280" watchObservedRunningTime="2024-07-02 11:39:00.734825891 +0000 UTC m=+465.460881738" Jul 2 11:39:02.760187 systemd-networkd[1307]: lxc_health: Link UP Jul 2 11:39:02.784193 systemd-networkd[1307]: lxc_health: Gained carrier Jul 2 11:39:02.784341 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 11:39:04.583404 systemd-networkd[1307]: lxc_health: Gained IPv6LL Jul 2 11:39:08.593084 sshd[4789]: pam_unix(sshd:session): session closed for user core Jul 2 11:39:08.595024 systemd[1]: sshd@24-139.178.91.9:22-139.178.68.195:59768.service: Deactivated successfully. Jul 2 11:39:08.595573 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 11:39:08.596071 systemd-logind[1596]: Session 25 logged out. Waiting for processes to exit. Jul 2 11:39:08.596878 systemd-logind[1596]: Removed session 25.